    Vulnerability Scanning

    By Jeremy Freeman
    September 15, 2022

    A mountain ridge. One side of the ridge is still covered in shadows, while the sunrise illuminates the other side.

    Define Your Terms

    A security vulnerability is a flaw or bug that could be exploited by a threat agent, also called a threat actor. According to CrowdStrike, “A threat actor, also known as a malicious actor, is any person or organization that intentionally causes harm in the digital sphere.” Once a bug or flaw is deemed a vulnerability, it is registered by the MITRE Corporation as a Common Vulnerabilities and Exposures (CVE) entry and stored in their CVE database. A CVE is given an identifying number by a CVE Numbering Authority (CNA), for example, Red Hat, Microsoft, or another designated authority. The National Institute of Standards and Technology (NIST) is a US federal agency that houses the National Vulnerability Database (NVD). Threat levels are quantified by assigning a Common Vulnerability Scoring System (CVSS) score from 0 to 10. CVSS is a free and open standard, maintained by the Forum of Incident Response and Security Teams (FIRST), for evaluating the level of threat a vulnerability poses to a business or organization. NIST provides a CVSS calculator.

    The Attackers

    Real live people spend a lot of time and money trying to break into specific high-value targets, as do bots that clever people have weaponized to attack more cheaply and broadly at all hours of the day.

    Main Security Vulnerability Categories

    The main information security vulnerability categories are:

    • Broken Authentication.

      When security credentials are stolen, attackers can usurp user identities and sessions as if they were the user.

    • SQL Injection.

      Attackers can hijack database content by injecting malicious code. It can allow attackers to acquire sensitive data, modify or delete data, impersonate identities, and conduct other nefarious activities.

    • Cross-site scripting (XSS).

      This type of attack inserts malicious code into a website. Its target is the website user, threatening sensitive user information.

    • Cross-Site Request Forgery (CSRF).

      This attack misleads the user into performing an action they would not take knowingly. For example, social engineering may hoodwink the user into clicking a link in an email or chat message, causing them to perform an action of the attacker’s choice, such as having their browser use JavaScript to submit a form that they don’t even know about.

    • Security Misconfiguration.

      A configuration error that can be exploited by attackers. For example:

      • Default passwords left in place
      • No password strength requirements, allowing users to set weak passwords that can easily be found in dictionary attacks
      • Web server directory listings left enabled, possibly exposing files that shouldn’t be seen
      • Unused software modules or plugins left enabled, increasing the attack surface
    • Software Vulnerabilities.

      Especially out-of-date software. The longer software goes without security updates, the more time attackers have to examine its code for bugs and flaws that could be exploited. Thus, a given piece of software that was secure at one time may be vulnerable six months later.

    For more on vulnerability categories, see our blog post OWASP Top 10. OWASP defines itself as “a nonprofit foundation that works to improve the security of software. Through community-led open-source software projects, hundreds of local chapters worldwide, tens of thousands of members, and leading educational and training conferences, the OWASP Foundation is the source for developers and technologists to secure the web.”

    Solutions

    Applications to scan for vulnerabilities are available. Some are free and/or open source, some are paid, and some have both a free version and a paid version. Vulnerability scanners are automated tools that look into a database of known CVEs for the specific software and library versions running on a given server. They also examine systems for flaw types that could be exploited. The NIST NVD database synchronizes with the MITRE CVE database; the latter does not include CVSS scores or solutions, which are found in the NVD. In general, the vulnerabilities fall into three classifications:

    • System/Network: The application scans for system misconfiguration and network CVEs.
    • Web: The application scans for SQL Injections, XSS, CSRF, etc.
    • Software Analysis: Static Application Security Testing (SAST). SAST analyzes source code to find security vulnerabilities. Many organizations and even individuals provide apps that perform SAST. They specify which languages they can scan, sometimes one or two, sometimes up to a dozen or more.

    In summary, vulnerability scanning is a vital process that spans many aspects of Information Technology (IT). Its solutions are constantly evolving to keep up with new threats. It is an endless cat-and-mouse game in which the cats must be ever adaptive to keep ahead of the ever-evolving malicious mice.


    security

    CI/CD with Azure DevOps

    By Dylan Wooters
    August 28, 2022

    Moonrise in Sibley Volcanic Park. The sun casts a shadow over everything but the tops of the trees and a brown, grassy hill. The moon has risen just above the hill, against the backdrop of a light blue cloudless evening sky.
    Photo by Dylan Wooters, 2022.

    A development process that includes manual builds, tests, and deployments can work for a small-scale project, but as your codebase and team grow, it can quickly become time-consuming and unwieldy. This can be particularly true if you’re a .NET developer. We all know the struggle of merging in the latest feature and clicking build in Visual Studio, only to have it fail, citing cryptic errors. “Maybe a clean will fix it?”

    If this sounds like your current situation, it’s likely time to consider building a Continuous Integration and Continuous Deployment pipeline, commonly known as “CI/CD”. A good CI/CD pipeline will help you automate the painful areas of building, testing, and deploying your code, as well as help to enforce best practices like pull requests and build verification.

    There are many great options to choose from when selecting a CI/CD tool. There are self-hosted options like Jenkins and TeamCity. There are also providers like GitHub and Azure DevOps, which offer CI/CD alongside cloud-hosted source control. All of these options have pros and cons, but if you’re looking for a large feature set, flexibility, and in particular good support for .NET solutions, you should consider Azure DevOps.

    In this post, I’ll show you how to set up an Azure CI/CD pipeline for a self-hosted .NET MVC web solution targeting two different environments.

    Creating Environments

    The first step is to sign up for a free account at Azure DevOps. Once you have your account created, you can then create your Environments. The Environments are the places where you want your app to be deployed.

    In the case of a recent project here at End Point Dev, we had three different environments—UAT, Stage, and Production—all running as IIS websites on self-hosted Windows VMs. The UAT environment was on the UAT server, and the Stage and Production environments were on the Production server. So you’ll want to plan out your environments accordingly.

    Once you do so, head to Azure DevOps and then click on Environments under the Pipelines section, then click Create Environment.

    Azure DevOps. A menu in the drawer on the left has Pipelines expanded, with Environments selected in its sub-menu. On the right, outside the menu, is a button that reads Create environment.

    Then enter a name for the environment, select Virtual Machines, and click Next.

    A dialog box called “New environment.” The Name field is filled out as “UAT,” and the Description field is filled out with “Testing environment.” The Resource field has three radio buttons, with Virtual machines selected. There is a “Next” button at the bottom of the dialog, which is highlighted.

    In the following window, select Generic Provider, and then select your OS, which in our case is Windows. You will see a registration script with a copy icon next to it. Click the icon to copy the (rather lengthy) PowerShell registration script to the clipboard, and then paste it into a text file.

    The same “New environment” dialog. Under the “Virtual machine resource” section, “Provider” has “Generic provider” selected, “Operating system” has “Windows” selected, and under “Registration script” is a copy icon with instructions to run it in PowerShell.

    Next, connect to the target environment. Open a PowerShell window as Administrator, copy and paste the registration script, and then press Enter. The PowerShell script will then do its magic and register the environment with Azure. It may take a minute to run, but afterwards you should see a success message.

    Now, if you head back to Azure DevOps and click on the Environment, you should see the server name of the machine that you ran the PowerShell script on. The server name is referred to as a Resource in Azure.

    The UAT Environment, Resources tab. It shows a server with a redacted name and a latest job ID with a green check mark.

    You will then want to complete the above steps for each of your additional target environments, for example, Staging and Production. In our case, Staging and Production are hosted on the same web server, so we only need to create one additional environment (Production) in Azure.

    The Production Environment, Resources tab.

    Creating the Pipelines

    Now it’s time to create the actual Pipeline. The Pipeline will be responsible for building and deploying your app to the Environments that you created in the previous step. To start, click on Pipelines in Azure DevOps, and then click the Create Pipeline button.

    Azure DevOps. A menu in the drawer on the left has Pipelines expanded, with Pipelines selected in its sub-menu. Outside the menu is a button that reads Create Pipeline.

    You’ll then be asked where your source code lives. In our case, the source code exists in a repo in Azure DevOps. The nice thing about Azure is that you can also target source code that exists on another provider, for example, GitHub, Bitbucket, or even a self-hosted Git repo (Other Git). Click on your hosting provider and follow the instructions.

    Once you’re connected to your source provider, then you can configure the Pipeline. The “Pipeline” is actually just a YAML file within your target repo. You can read all about the Azure YAML syntax here. We’ll choose a Starter Pipeline, which opens up a text editor and enables you to create your YAML file.

    Azure DevOps. A menu in the drawer on the left has Pipelines expanded. On the right a Configure tab is selected. A heading reads “Configure your pipeline,” under which are a couple of options. Highlighted is “Starter pipeline.”

    Delete the sample text that appears in the editor, and then enter the following YAML. This is a pre-baked YAML pipeline that restores and builds a .NET web project, and then publishes the build assets to your three different target environments. In the next section, we will take a deep dive into how the YAML works so that you can edit it to fit your needs.

    trigger:
    - uat
    - staging
    - main
    
    pool:
      vmImage: 'windows-latest'
    
    variables:
      projectPath: 'ci_tutorial/ci_tutorial_web.csproj'
      packageName: 'ci_tutorial_web.zip'
      solution: '**/*.sln'
      buildPlatform: 'AnyCPU'
      artifactName: 'AzureDrop'
      ${{ if eq(variables['Build.SourceBranchName'], 'uat') }}:
        websiteName: 'ci_tutorial_uat'
        buildConfiguration: 'UAT'
        environmentName: 'UAT'
        targetVM: 'UATWEBAPP1'
      ${{ if eq(variables['Build.SourceBranchName'], 'staging') }}:
        websiteName: 'ci_tutorial_staging'
        buildConfiguration: 'Staging'
        environmentName: 'Production'
        targetVM: 'WEBAPP1'
      ${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
        websiteName: 'ci_tutorial_prod'
        buildConfiguration: 'Release'
        environmentName: 'Production'
        targetVM: 'WEBAPP1'
    
    stages:
    - stage: build
      jobs:
      - job: RestoreAndBuild
        steps:
        - task: NuGetToolInstaller@1
        - task: NuGetCommand@2
          inputs:
            restoreSolution: '$(solution)'
        - task: VSBuild@1
          inputs:
            solution: '$(projectPath)'
            msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactStagingDirectory)"'
            platform: '$(buildPlatform)'
            configuration: '$(buildConfiguration)'
        #- task: VSTest@2
        #  inputs:
        #    platform: '$(buildPlatform)'
        #    configuration: '$(buildConfiguration)'
        - task: PublishBuildArtifacts@1
          inputs:
            PathtoPublish: '$(Build.ArtifactStagingDirectory)'
            ArtifactName: '$(artifactName)'
            publishLocation: 'Container'
    
    - stage: Deploy
      displayName: Deploy to IIS
      dependsOn: Build
      jobs:
      - deployment: DeploytoIIS
        displayName: Deploy the web application to dev environment
        environment:
          name: ${{ variables.environmentName }}
          resourceName: ${{ variables.targetVM }}
          resourceType: VirtualMachine
        strategy:
          runOnce:
            deploy:
              steps:
              - task: DownloadBuildArtifacts@0
                inputs:
                  buildType: 'current'
                  downloadType: 'specific'
                  downloadPath: '$(System.ArtifactsDirectory)'
              - task: IISWebAppManagementOnMachineGroup@0
                displayName: 'Create App Pool and Website'
                inputs:
                  WebsiteName: '$(websiteName)'
                  WebsitePhysicalPath: '%SystemDrive%\inetpub\wwwroot\$(websiteName)'
                  CreateOrUpdateAppPoolForWebsite: true
                  AppPoolNameForWebsite: '$(websiteName)'
              # For testing, to confirm target website
              - task: PowerShell@2
                displayName: Display target websitename
                inputs:
                  targetType: 'inline'
                  script: 'Write-Host "Target Website Name: $(websiteName)"'
              - task: IISWebAppDeploymentOnMachineGroup@0
                displayName: 'Deploy IIS Website'
                inputs:
                  WebSiteName: '$(websiteName)'
                  Package: '$(System.ArtifactsDirectory)\$(artifactName)\$(packageName)'
    

    Understanding the YAML

    The first section of the YAML, trigger, determines which branches will trigger the pipeline when they receive a push. In our case, we have three branches: uat, staging, and main.

    trigger:
    - uat
    - staging
    - main
    

    The variables section defines variables that are used later in the pipeline YAML. An important part of this section is the use of if statements to assign values to certain variables (e.g., environmentName) based on the source branch for the build. This mechanism allows the pipeline to deploy to several different environments using a single YAML file.

    Note that the if statements also switch the build configuration. If you have config transforms, this will be crucial for ensuring that the correct config files are deployed to each environment.

    ${{ if eq(variables['Build.SourceBranchName'], 'uat') }}:
      websiteName: 'ci_tutorial_uat'
      buildConfiguration: 'UAT'
      environmentName: 'UAT'
      targetVM: 'UATWEBAPP1'
    ${{ if eq(variables['Build.SourceBranchName'], 'staging') }}:
      websiteName: 'ci_tutorial_staging'
      buildConfiguration: 'Staging'
      environmentName: 'Production'
      targetVM: 'WEBAPP1'
    ${{ if eq(variables['Build.SourceBranchName'], 'main') }}:
      websiteName: 'ci_tutorial_prod'
      buildConfiguration: 'Release'
      environmentName: 'Production'
      targetVM: 'WEBAPP1'
    

    Next, we get into the actual actions, or stages, of the pipeline. This pipeline has two stages: build and deploy. The build stage runs a NuGet restore and then a VS build, and then publishes the build output/​artifacts to an Azure cloud storage location. The last bit is something that took me a while to wrap my head around: The build does not occur on the target environment via the runner—it actually occurs up in “Azure land”, on a Windows cloud VM.

    The VM on which the build occurs can be defined using the pool: vmImage section. In our case, we are running the build on windows-latest, which currently translates to the brand new Windows Server 2022.
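
    Here is the relevant snippet from the pipeline YAML above:

    pool:
      vmImage: 'windows-latest'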

    The deploy stage then downloads the build artifacts from Azure and deploys them to IIS on the target machine, using the runner that you installed as part of the environment setup above. Note that the variables defined in the if statements come into play here—this is how the pipeline knows which environment/VM to target.

    environment:
      name: ${{ variables.environmentName }}
      resourceName: ${{ variables.targetVM }}
      resourceType: VirtualMachine
    

    The IISWebAppManagementOnMachineGroup task will automatically create an app pool and website in IIS for the application. This feature can be toggled using the CreateOrUpdateAppPoolForWebsite setting. If a website already exists based on the WebsiteName, Azure will simply deploy to that existing app pool and website.
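
    For reference, here is the corresponding task from the pipeline YAML above:

    - task: IISWebAppManagementOnMachineGroup@0
      displayName: 'Create App Pool and Website'
      inputs:
        WebsiteName: '$(websiteName)'
        WebsitePhysicalPath: '%SystemDrive%\inetpub\wwwroot\$(websiteName)'
        CreateOrUpdateAppPoolForWebsite: true
        AppPoolNameForWebsite: '$(websiteName)'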

    Adding Branch Protection and Build Validation

    In order to ensure successful deployments to Production, you will likely want to add in branch protection for your main or master branch, and also consider build validation.

    Branch protection allows you to enforce reviews for pull requests into certain branches. Build validation is a feature that pre-compiles your .NET code to ensure that it builds prior to merging a pull request. Both features help to improve code quality and prevent broken deployments.

    To add branch protection, go to Repos → Branches, open the menu on the right side of your target branch’s row, and then select Branch Policies.

    Under the Branches header, the “main” branch has a menu expanded on the far right of its row. Highlighted is “Branch policies.”

    To enforce reviews, turn on “Require a minimum number of reviewers”, and adjust the number of reviewers and settings as required.

    A dialog box titled “Branch Policies.” A switch next to “Require a minimum number of reviewers” is turned on.

    For build validation, scroll down to the Build Validation section, and click the plus button to add a new policy. Select the target build pipeline, adjust the policy settings as necessary (see below), and then give it a display name. Click save, and the build validation policy will be applied to the branch.

    An important note here is that the build validation essentially runs the CI pipeline. In our case, that means it will run the build and the deployment. This is not ideal; we only want it to run the build (NuGet restore and VS build). So, it’s advisable to create a separate pipeline with only the build stage. This can be done easily by copying our example YAML and removing the “Deploy” stage, then saving it as a new pipeline. Then, choose that new pipeline as the target build pipeline in the validation policy.
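
    For illustration, a minimal “build-only” pipeline could look something like the following sketch. It reuses the build stage from the example pipeline above, drops the Deploy stage and the per-branch deployment variables, and disables CI triggers since the branch policy is what will run it. Adjust the variable values to match your own project:

    trigger: none

    pool:
      vmImage: 'windows-latest'

    variables:
      projectPath: 'ci_tutorial/ci_tutorial_web.csproj'
      solution: '**/*.sln'
      buildPlatform: 'AnyCPU'
      buildConfiguration: 'Release'

    stages:
    - stage: build
      jobs:
      - job: RestoreAndBuild
        steps:
        - task: NuGetToolInstaller@1
        - task: NuGetCommand@2
          inputs:
            restoreSolution: '$(solution)'
        - task: VSBuild@1
          inputs:
            solution: '$(projectPath)'
            platform: '$(buildPlatform)'
            configuration: '$(buildConfiguration)'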

    A header reads “Add build policy.” under that is a form, with “Build pipeline (required)” highlighted. Highlight text reads “Add your new ‘build-only’ pipeline here.” “ci-tutorial” is selected.

    Now when you open up a PR, it can only be merged once a review has been applied, and the build validation has run successfully.

    Pull request view, “Overview” tab. Next to green check marks are messages saying “Required check succeeded” with the successful build validation under it, and “1 reviewer approved.”

    Monitoring and Troubleshooting the Pipeline

    You can see the overall status of your pipeline by clicking on the Pipelines section in the left-side navigation. Green means good!

    To dive into more details, click on the pipeline name. This will show all the recent pipeline runs. Then click on a particular run. This will bring up the run details, including a flowchart for each stage.

    Pipeline flowchart, with a completed “build” leading into a completed “Deploy to IIS.”

    To dive further, click on a specific stage. You’ll then see all of the tasks in the stage, with actual output details on the right, similar to what you would see in the output window in Visual Studio.

    Stage details. On the left are the successful completed jobs from the “build” and “Deploy to IIS” sections. On the right is more verbose output for a selected job.

    If you encounter an error in your pipeline, the stage details page is a great place to troubleshoot. Azure will show you the task that generated the error by displaying a red error icon next to the task. You can then click on the task and trace the output on the right to determine the exact error message.

    Wrapping Up

    Hopefully, this post gives you a good overview of CI/CD options for a self-hosted .NET application. Azure is almost infinitely configurable, so there are many other options to explore that are not covered in this post, whether it be within the pipeline YAML, or through other settings like branch policies/​protection. If you have any questions, please feel free to leave a comment. Happy automating!


    dotnet devops cloud

    Kansas State University: One Year with VisionPort

    By Sanford Libo
    August 18, 2022

    A wide view of KSU’s campus from above, in Google Earth, showing what might be displayed on KSU’s VisionPort.

    It has been almost a year since Kansas State University brought the VisionPort platform into their Hale library. I recently had the pleasure of connecting with Jeff Sheldon, Associate Director of the Sunderland Foundation Innovation Lab, to discuss how the school has been using the platform.

    It’s no surprise to hear that the Architecture, Planning and Design (AP) students have taken to VisionPort immediately. Being originally designed around displaying geographic information system (GIS) data, the platform allows users to fly over and through city streets and see buildings in 3D, as well as travel around the world looking for areas of possible real estate development. Many of our clients also use VisionPort to give panoramic, three-dimensional tours of building interiors, to show future tenants properties right from their office and brainstorm design possibilities.

    In addition to serving the AP Design students, VisionPort is being used to immerse students in their education with an incredible National Geographic presentation that features 360° videos, including swimming with sharks and getting up close to sea lions and elephants in their natural habitats, as well as presentations about the moon landing and even the reconstruction of Hale Library after a fire in 2018.

    An extremely innovative application KSU has come up with is using the VisionPort system to create a quiet space for students. During exam weeks, the platform is used to create a calming environment, displaying relaxing scenes to help students alleviate stress and promote good mental health.

    “Ultimately, people don’t expect this sort of device in a library and VisionPort has helped change the longstanding narrative that a library is just about books.”

    It’s always great to hear about VisionPort being a catalyst for collaboration among students and faculty. Students are encouraged to work together to bring their projects to life on VisionPort and share tips and tricks they’ve learned by working with our content management system.

    Interview

    End Point Dev: How has VisionPort aided the evolution of the Hale Library?

    Jeff Sheldon: VisionPort, as a feature of the new Sunderland Foundation Innovation Lab in Hale Library, has helped evolve the University’s library space in several ways. As a visualization tool, VisionPort allows our students, faculty, and visitors to experience rich immersive presentations in a way they haven’t before. Featuring the same Google products we see in classroom instruction, at scale and in high resolution, also affords students an unexpectedly different perspective from what they experience more traditionally on a small screen or in washed-out projections. This has been especially valuable amidst the pandemic, which has transformed expectations to the virtual almost as a default.

    Ultimately, people don’t expect this sort of device in a library and VisionPort has helped change the longstanding narrative that a library is just about books.

    What types of presentations have students and faculty been creating with VisionPort?

    The presentations our students and faculty have been most excited about feature video across our entire 7-panel display and 360° integration for photos and videos. Some have experimented with data visualization and location-to-location journeys, but an overwhelming number have been drawn to the simple navigation controls to use locations and sites as visual aids during group presentations and as a complement to other examples they wheel in.

    What are some ways in which Google Earth is being utilized with VisionPort?

    Google Earth’s ability to roll onto the screen and zoom into specific locations has the effect of taking viewers on a journey. You can read that in the faces of those watching as a destination unfolds or the promise of a demonstration materializes in front of a tour group. We have quite a few guests visit their hometowns to reminisce and share stories with others, access remote sites to plan for travel, wander the streets of historic figures, and to experience the cultural and socioeconomic influences on a community.

    A photo of Kansas State University’s VisionPort in its library. Several modern chairs are set up facing the 7-screen VisionPort, displaying KSU’s campus on Google Earth.

    What content on the VisionPort have students been most excited about?

    That’s a tough one to narrow down. There have been a few unexpected moments during some of our featured 360° videos where a visitor will turn the camera view around and be charged by an elephant or sniffed by a lion. Seeing such an experience result in a yelp or jump is thrilling.

    What fields of study are being presented most frequently?

    Kansas State’s Architecture, Planning and Design (AP Design) students are regulars, but we’ve also fielded student presentations from our history and language departments. A number of ad-hoc classes have worked on presentations as well, but there are many curious about the act of how to visualize and present data to different audiences and who enjoy the process of exploration, but aren’t always looking to produce a specific outcome to share. We create accounts for those students to access the system and learn from their peers and the stock examples.

    How are panoramic images being used on the VisionPort?

    We’re very interested in panoramic and 360° images for the sake of storytelling. One example has been to tell the history of Hale Library in the aftermath of the 2018 fire it endured. Another has been to show the ruins of culturally significant sites some of our students have visited. We’ve also worked with faculty members to produce wellness videos during exam weeks, such as a calming aquarium or nature scene.

    Conclusion

    We are always happy to hear first-hand experiences universities are having with the VisionPort platform and we are honored to be able to contribute to the education experience.

    Thank you to Mr. Sheldon for the great meeting. We look forward to hearing how KSU innovates with VisionPort over the next year!

    For more information about VisionPort, email sales@visionport.com or visit www.visionport.com.


    visionport education

    Ansible tutorial with AWS EC2

    By Jeffry Johar
    August 11, 2022

    A ferris wheel lit up by red lights at night
    Photo by David Buchi

    Ansible is a tool to manage multiple remote systems from a single command center. In Ansible, the single command center is known as the control node and the remote systems to be managed are known as managed nodes. The following describes the two types of nodes:

    1. Control node:

      • The command center where Ansible is installed.
      • Supported systems are Unix and Unix-like (Linux, BSD, macOS).
      • Python and SSH are required.
      • Remote systems to be managed are listed in a YAML or INI file called an inventory.
      • Tasks to be executed are defined in a YAML file called a playbook.
    2. Managed node:

      • The remote systems to be managed.
      • Supported systems are Unix/Unix-like, Windows, and appliances (e.g., Cisco, NetApp).
      • Python and sshd are required for Unix/Unix-like.
      • PowerShell and WinRM are required for Windows.

    In this tutorial we will use Ansible to manage multiple EC2 instances. For simplicity, we are going to provision EC2 instances in the AWS web console. Then we will configure one EC2 as the control node that will be managing multiple EC2 instances as managed nodes.

    Prerequisites

    For this tutorial we will need the following from AWS:

    • An active AWS account.
    • EC2 instances with Amazon Linux 2 as the OS.
    • An AWS key pair for SSH access to the control node and managed nodes.
    • A security group which allows SSH and HTTP.
    • A decent editor such as Vim or Notepad++ to create the inventory and the playbook.

    EC2 Instances provisioning

    The following are the steps to provision EC2 instances with the AWS web console.

    1. Go to AWS Console → EC2 → Launch Instances.
    2. Select the Amazon Linux 2 AMI.
    3. Select a key pair. If there are no available key pairs, please create one according to Amazon’s instructions.
    4. Allow SSH and HTTP.
    5. Set Number of Instances to 4.
    6. Click Launch Instance.

    AWS web console, open to the “Instances” tab in the toolbar. This is circled and pointing to the table column starting with “Public IPv4…”

    Ansible nodes and SSH keys

    In this section we will gather the IP addresses of EC2 instances and set up the SSH keys.

    1. Go to AWS Console → EC2 → Instances.

    2. Get the Public IPv4 addresses.

    3. We will choose the first EC2 to be the Ansible control node and the rest to be the managed nodes:

      • control node: 13.215.159.65
      • managed nodes: 18.138.255.51, 13.229.198.36, 18.139.0.15

    AWS web console, again open to the Instances tab, with the Public IPv4 column circled. A green banner says that the EC2 instance was successfully started, followed by a long ID

    Log in to the control node using our key pair. For me, it is kaptenjeffry.pem.

    ssh -i kaptenjeffry.pem ec2-user@13.215.159.65
    

    Open another terminal and copy the key pair to the control node:

    scp -i kaptenjeffry.pem kaptenjeffry.pem ec2-user@13.215.159.65:~/.ssh
    

    Go back to the control node terminal. Try to log in from the control node to one of the managed nodes by using the key pair. This is to ensure the key pair is usable to access the managed nodes.

    ssh -i .ssh/kaptenjeffry.pem ec2-user@18.138.255.51
    

    Register the rest of the managed nodes as known hosts to the control node, in bulk:

    ssh-keyscan -t ecdsa-sha2-nistp256 13.229.198.36 18.139.0.15 >> .ssh/known_hosts
    

    Ansible Installation and Configuration

    In this section we will install Ansible in the control node and create the inventory file.

    1. In the control node, execute the following commands to install Ansible:

      sudo yum update
      sudo amazon-linux-extras install ansible2
      ansible --version
      

      Where:

      • yum update updates all installed packages using the yum package manager,
      • amazon-linux-extras install installs Ansible, and
      • ansible --version checks the installed version of Ansible.
    2. Create a file named myinventory.ini. Insert the IP addresses that we identified earlier to be the managed nodes in the following format:

    [mynginx]
    red ansible_host=18.138.255.51
    green ansible_host=13.229.198.36
    blue ansible_host=18.139.0.15
    

    Where:

    • [mynginx] is the group name of the managed nodes,
    • red, green, and blue are the aliases of each managed node, and
    • ansible_host=x.x.x.x sets the IP address of each managed node.

    myinventory.ini is a basic inventory file in INI format. An inventory file can be in either INI or YAML format. For more information on inventories, see the Ansible docs.
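
    For comparison, here is a rough sketch of what the same inventory could look like in YAML format (in a hypothetical file named myinventory.yaml; the group, aliases, and IP addresses are the same as above):

    mynginx:
      hosts:
        red:
          ansible_host: 18.138.255.51
        green:
          ansible_host: 13.229.198.36
        blue:
          ansible_host: 18.139.0.15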

    Ansible modules and Ansible ad hoc commands

    Ansible modules are scripts that perform a specific task at managed nodes. For example, there are modules to check availability, copy files, install applications, and lots more. To get the full list of modules, you can check the official Ansible modules page.

    A quick way to use Ansible modules is with an ad hoc command. Ad hoc commands use the ansible command-line interface to execute modules at the managed nodes. The usage is as follows:

    ansible <pattern> -m <module> -a "<module options>" -i <inventory>
    

    Where:

    • <pattern> is the IP address, hostname, alias or group name,
    • -m <module> is the name of the module to be used,
    • -a "<module options>" sets options for the module, and
    • -i <inventory> is the inventory of the managed nodes.

    Ad hoc command examples

    The following are some examples of Ansible ad hoc commands:

    ping checks SSH connectivity and the Python interpreter at the managed node. To use the ping module against the mynginx group of servers (all 3 hosts: red, green, and blue), run:

    ansible mynginx -m ping -i myinventory.ini
    

    Sample output of ping. Several green blocks of JSON show successful ping responses

    copy copies files to a managed node. To copy a text file (/home/ec2-user/hello.txt in our test case) from the Control node to /tmp/ at all managed nodes in the mynginx group, run:

    ansible mynginx -m copy \
    -a 'src=/home/ec2-user/hello.txt dest=/tmp/hello.txt' \
    -i myinventory.ini
    

    shell executes shell commands at a managed node. To use the shell module to execute uptime at all managed nodes in the mynginx group, run:

    ansible mynginx -m shell -a 'uptime' -i myinventory.ini
    

    Ansible playbooks

    Ansible playbooks are configuration files in YAML format that tell Ansible what to do. A playbook executes its assigned tasks sequentially from top to bottom. Tasks in a playbook are grouped into blocks of instructions called plays. The following diagram shows the high-level structure of a playbook:

    An outer box labeled 'Playbook' contains two smaller boxes. The first is labeled 'Play 1', the second is labeled 'Play 2'. They contain stacked boxes similar to each other. The first box is a lighter color than the others, labeled 'Hosts 1 (or Hosts 2, for the 'Play 2' box)'. The others are labeled 'Task 1', 'Task 2', and after an ellipsis 'Task N'.

    Now we are going to use a playbook to install Nginx at our three managed nodes as depicted in the following diagram:

    At the left, a box representing a control node, with the Ansible logo inside. Pointing to the Ansible logo inside the control node box are flags reading “playbook” and “inventory”. The control node box points to three identical “managed node” boxes, each with the Nginx logo inside.

    Create the following YAML file and name it nginx-playbook.yaml. This is a playbook with one play that will install and configure Nginx service at the managed node.

    ---
    - name: Installing and Managing Nginx Server 
      hosts: mynginx   
      become: True
      vars:
        nginx_version: 1
        nginx_html: /usr/share/nginx/html
        user_home: /home/ec2-user
        index_html: index.html
      tasks:
        - name: Install the latest version of nginx
          command: amazon-linux-extras install nginx{{ nginx_version }}=latest -y
    
        - name: Start nginx service
          service:
            name: nginx
            state: started
    
        - name: Enable nginx service
          service:
             name: nginx
             enabled: yes
        - name: Copy index.html to managed nodes
          copy:
            src:  "{{ user_home }}/{{ index_html }}"
            dest: "{{ nginx_html }}"
    

    Where:

    • name (top most) is the name of this play,
    • hosts specifies the managed nodes for this play,
    • become says whether to use superuser privilege (sudo for Linux),
    • vars defines variables for this play,
    • tasks is the start of the task section,
    • name (under the tasks section) specifies the name of each task, and
    • service (within a task) is the module being used, and the name key under it specifies the service to manage.

    Let’s try to execute this playbook. First, we need to create the source index.html to be copied to the managed nodes.

    echo 'Hello World!' > index.html
    

    Execute ansible-playbook against our playbook. Just like the ad hoc command, we need to specify the inventory with the -i switch.

    ansible-playbook nginx-playbook.yaml -i myinventory.ini
    

    A shell with the results of the ansible-playbook command above

    Now we can curl our managed nodes to check on the Nginx service and the custom index.html.

    curl 18.138.255.51
    curl 13.229.198.36
    curl 18.139.0.15
    

    The output of each curl command above, with the responses being identical: 'Hello World!'

    Conclusion

    That’s all, folks. We have successfully managed EC2 instances with Ansible. This tutorial covered the fundamentals of Ansible to start managing remote servers.

    Ansible rises above its competitors due to the simplicity of its installation, configuration, and usage. To get further information about Ansible you may visit its official documentation.


    ansible aws linux sysadmin

    Implementing Backend Tasks in ASP.NET Core

    By Kevin Campusano
    August 8, 2022

    As we’ve already established, Ruby on Rails is great. The amount and quality of tools that Rails puts at our disposal when it comes to developing web applications is truly outstanding. One aspect of web application development that Rails makes particularly easy is that of creating backend tasks.

    These tasks can be anything from database maintenance, file system cleanup, overnight heavy computations, bulk email dispatch, etc. In general, functionality that is typically initiated by a sysadmin in the backend, or scheduled in a cron job, which has no GUI, but rather, is invoked via command line.

    By integrating with Rake, Rails allows us to very easily write such tasks as plain old Ruby scripts. These scripts have access to all the domain logic and data that the full-fledged Rails app has access to. The cherry on top is that the command-line interface to invoke such tasks is very straightforward. It looks something like this: bin/rails fulfillment:process_new_orders.

    All this is included right out of the box for new Rails projects.

    ASP.NET Core, which is also great, doesn’t support this out of the box like Rails does.

    However, I think we should be able to implement our own without too much hassle, and have a similar sysadmin experience. Let’s see if we can do it.

    There is a Table of contents at the end of this post.

    What we want to accomplish

    So, to put it in concrete terms, we want to create a backend task that has access to all the logic and data of an existing ASP.NET Core application. The task should be callable via command-line interface, so that it can be easily executed via the likes of cron or other scripts.

    In order to meet these requirements, we will create a new .NET console app that:

    1. References the existing ASP.NET Core project.
    2. Loads all the classes from it and makes instances of them available via dependency injection.
    3. Has a usable, Unix-like command-line interface that sysadmins would be familiar with.
    4. Is invokable via the .NET CLI.

    We will do all this within the context of an existing web application: one that I’ve been building upon throughout a few articles.

    It is a simple ASP.NET Web API backed by a Postgres database. It has a few endpoints for CRUDing automotive-related data and for calculating values of vehicles based on various aspects of them.

    You can find the code on GitHub. If you’d like to follow along, clone the repository and check out this commit: 9a078015ce. It represents the project as it was before applying all the changes from this article. The finished product can be found here.

    You can follow the instructions in the project’s README file if you want to get the app up and running.

    For our demo use case, we will try to develop a backend task that creates new user accounts for our existing application.

    Let’s get to it.

    Creating a new console app that references the existing web app as a library

    The codebase is structured as a solution, as given away by the vehicle-quotes.sln file located at the root of the repository. Within this solution, there are two projects: VehicleQuotes which is the web app itself, and VehicleQuotes.Tests which contains the app’s test suite. For this article, we only care about the web app.

    Like I said, the backend task that we will create is nothing fancy in itself. It’s a humble console app. So, we start by asking the dotnet CLI to create a new console app project for us.

    From the repository’s root directory, we can do so with this command:

    dotnet new console -o VehicleQuotes.CreateUser
    

    That should’ve resulted in a new VehicleQuotes.CreateUser directory being created, and within it (along with some other nuts and bolts), our new console app’s Program.cs (the code) and VehicleQuotes.CreateUser.csproj (the project definition) files. The name that we’ve chosen is straightforward: the name of the overall solution and the action that this console app is going to perform.

    There’s more info regarding the dotnet new command in the official docs.

    Now, since we’re using a solution file, let’s add our brand new console app project to it with:

    dotnet sln add VehicleQuotes.CreateUser
    

    OK, cool. That should’ve produced the following diff on vehicle-quotes.sln:

    diff --git a/vehicle-quotes.sln b/vehicle-quotes.sln
    index 537d864..5da277d 100644
    --- a/vehicle-quotes.sln
    +++ b/vehicle-quotes.sln
    @@ -7,6 +7,8 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "VehicleQuotes", "VehicleQuo
     EndProject
     Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "VehicleQuotes.Tests", "VehicleQuotes.Tests\VehicleQuotes.Tests.csproj", "{5F6470E4-12AB-4E30-8879-3664ABAA959D}"
     EndProject
    +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "VehicleQuotes.CreateUser", "VehicleQuotes.CreateUser\VehicleQuotes.CreateUser.csproj", "{EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}"
    +EndProject
     Global
            GlobalSection(SolutionConfigurationPlatforms) = preSolution
                    Debug|Any CPU = Debug|Any CPU
    @@ -44,5 +46,17 @@ Global
                    {5F6470E4-12AB-4E30-8879-3664ABAA959D}.Release|x64.Build.0 = Release|Any CPU
                    {5F6470E4-12AB-4E30-8879-3664ABAA959D}.Release|x86.ActiveCfg = Release|Any CPU
                    {5F6470E4-12AB-4E30-8879-3664ABAA959D}.Release|x86.Build.0 = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|Any CPU.Build.0 = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x64.ActiveCfg = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x64.Build.0 = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x86.ActiveCfg = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x86.Build.0 = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|Any CPU.ActiveCfg = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|Any CPU.Build.0 = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x64.ActiveCfg = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x64.Build.0 = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x86.ActiveCfg = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x86.Build.0 = Release|Any CPU
            EndGlobalSection
     EndGlobal
    

    This allows the .NET tooling to know that we’ve got some intentional organization going on in our code base: these projects each form part of a bigger whole.

    It’s also nice to add a .gitignore file for our new VehicleQuotes.CreateUser project to keep things manageable. dotnet new can help with that if we were to navigate into the VehicleQuotes.CreateUser directory and run:

    dotnet new gitignore
    

    You can learn more about how to work with solutions via the .NET CLI in the official docs.

    Now let’s modify our new project’s .csproj file so that it references the main web app project under VehicleQuotes. This will allow our console app to access all of the classes defined in the web app, as if it was a library or package.

    If we move to the VehicleQuotes.CreateUser directory, we can do that with the following command:

    dotnet add reference ../VehicleQuotes/VehicleQuotes.csproj
    

    The command itself is pretty self-explanatory. It just expects to be given the .csproj file of the project that we want to add as a reference in order to do its magic.

    Running that should’ve added the following snippet to VehicleQuotes.CreateUser/VehicleQuotes.CreateUser.csproj:

    <ItemGroup>
      <ProjectReference Include="..\VehicleQuotes\VehicleQuotes.csproj" />
    </ItemGroup>
    

    This way, .NET allows the code defined in the VehicleQuotes project to be used within the VehicleQuotes.CreateUser project.

    You can learn more about the add reference command in the official docs.

    Setting up dependency injection in the console app

    As a result of the previous steps, our new console app now has access to the classes defined within the web app. However, classes by themselves are no good if we can’t actually create instances of them that we can interact with. The premier method for making instances of classes available throughout a .NET application is via dependency injection. So, we need to set that up for our little console app.

    Dependency injection is something that comes out of the box for ASP.NET Core web apps. Luckily for us, .NET makes it fairly easy to set it up in console apps as well by leveraging the same components.

    For this app, we want to create user accounts. In the web app, user account management is done via ASP.NET Core Identity. Specifically, the UserManager class is used to create new user accounts. This console app will do the same.

    Take a look at VehicleQuotes/Controllers/UsersController.cs to see how the user accounts are created. If you’d like to know more about integrating ASP.NET Core Identity into an existing web app, I wrote an article about it.

    Before we do the dependency injection setup, let’s add a new class to our console app project that will encapsulate the logic of leveraging the UserManager for user account creation. This is the actual task that we want to perform. The new class will be defined in VehicleQuotes.CreateUser/UserCreator.cs and these will be its contents:

    using Microsoft.AspNetCore.Identity;
    
    namespace VehicleQuotes.CreateUser;
    
    class UserCreator
    {
        private readonly UserManager<IdentityUser> _userManager;
    
        public UserCreator(UserManager<IdentityUser> userManager) {
            _userManager = userManager;
        }
    
        public IdentityResult Run(string username, string email, string password)
        {
            var userCreateTask = _userManager.CreateAsync(
                new IdentityUser() { UserName = username, Email = email },
                password
            );
    
            var result = userCreateTask.Result;
    
            return result;
        }
    }
    

    This class is pretty lean. All it does is define a constructor that expects an instance of UserManager<IdentityUser>, which will be supplied via dependency injection; and a simple Run method that, when given a username, email, and password, asks the UserManager<IdentityUser> instance that it was given to create a user account.

    Moving on to setting up dependency injection now, we will do it in VehicleQuotes.CreateUser/Program.cs. Replace the contents of that file with this:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    using VehicleQuotes.CreateUser;
    
    IHost host = Host.CreateDefaultBuilder(args)
        .UseContentRoot(System.AppContext.BaseDirectory)
        .ConfigureServices((context, services) =>
        {
            var startup = new VehicleQuotes.Startup(context.Configuration);
            startup.ConfigureServices(services);
    
            services.AddTransient<UserCreator>();
        })
        .Build();
    
    var userCreator = host.Services.GetRequiredService<UserCreator>();
    userCreator.Run(args[0], args[1], args[2]);
    

    Let’s dissect this bit by bit.

    First off, we’ve got a few using statements that we need in order to access some classes and extension methods that we need down below.

    Next, we create and configure a new IHost instance. .NET Core introduced the concept of a “host” as an abstraction for programs; and packed in there a lot of functionality to help with things like configuration, logging and, most importantly for us, dependency injection. To put it simply, the simplest way of enabling dependency injection in a console app is to use a Host and all the goodies that come within.

    There’s much more information about hosts in .NET’s official documentation.

    Host.CreateDefaultBuilder(args) gives us an IHostBuilder instance that we can use to configure our host. In our case, we’ve chosen to call UseContentRoot(System.AppContext.BaseDirectory) on it, which makes it possible for the app to find assets (like appsettings.json files!) regardless of where it’s deployed and where it’s being called from.

    This is important for us because, as you will see later, we will install this console app as a .NET Tool. .NET Tools are installed in directories picked by .NET and can be run from anywhere in the system. So we need to make sure that our app can find its assets wherever it has been installed.

    After that, we call ConfigureServices where we do a nice trick in order to make sure our console app has all the same configuration as the web app as far as dependency injection goes.

    You see, in ASP.NET Core, all the service classes that are to be made available to the application via dependency injection are configured within the web app’s Startup class' ConfigureServices method. VehicleQuotes is no exception. So, in order for our console app to have access to all of the services (i.e. instances of classes) that the web app does, the console app needs to call that same code. And that’s exactly what’s happening in these two lines:

    var startup = new VehicleQuotes.Startup(context.Configuration);
    startup.ConfigureServices(services);
    

    We create a new instance of the web app’s Startup class and call its ConfigureServices method. That’s the key element that allows the console app to have access to all the logic that the web app does. Including the services/classes provided by ASP.NET Core Identity like UserManager<IdentityUser>, which UserCreator needs in order to function.

    Once that’s done, the rest is straightforward.

    We also add our new UserCreator to the dependency injection engine via:

    services.AddTransient<UserCreator>();
    

    Curious about what Transient means? The official .NET documentation has the answer.

    And that allows us to obtain an instance of it with:

    var userCreator = host.Services.GetRequiredService<UserCreator>();
    

    And then, it’s just a matter of calling its Run method like so, passing it the command-line arguments:

    userCreator.Run(args[0], args[1], args[2]);
    

    args is a special variable that contains an array with the arguments given by command line. That means that our console app can be called like this:

    dotnet run test_username test_email@email.com mysecretpassword
    

    Go ahead, you can try it out and see the app log what it’s doing. Once done, it will also have created a new record in the database.

    $ psql -h localhost -U vehicle_quotes
    psql (14.3 (Ubuntu 14.3-0ubuntu0.22.04.1), server 14.2 (Debian 14.2-1.pgdg110+1))
    Type "help" for help.
    
    vehicle_quotes=# select user_name, email from public."AspNetUsers";
       user_name   |           email            
    ---------------+----------------------------
     test_username | test_email@email.com
    (1 rows)
    

    Pretty neat, huh? At this point we have a console app that creates user accounts for our existing web app. It works, but it could be better. Let’s add a nice command-line interface experience now.

    Improving the CLI with CommandLineParser

    With help from CommandLineParser, we can develop a Unix-like command-line interface for our app. We can use it to add help text, examples, have strongly typed parameters and useful error messages when said parameters are not correctly provided. Let’s do that now.

    First, we need to install the package in our console app project by running the following command from within the project’s directory (VehicleQuotes.CreateUser):

    dotnet add package CommandLineParser
    

    After that’s done, a new section will have been added to VehicleQuotes.CreateUser/​VehicleQuotes.CreateUser.csproj that looks like this:

    <ItemGroup>
      <PackageReference Include="CommandLineParser" Version="2.9.1" />
    </ItemGroup>
    

    Now our console app can use the classes provided by the package.

    All specifications for CommandLineParser are done via a plain old C# class that we need to define. For this console app, which accepts three mandatory arguments, such a class could look like this:

    using CommandLine;
    using CommandLine.Text;
    
    namespace VehicleQuotes.CreateUser;
    
    class CliOptions
    {
        [Value(0, Required = true, MetaName = "username", HelpText = "The username of the new user account to create.")]
        public string Username { get; set; }
    
        [Value(1, Required = true, MetaName = "email", HelpText = "The email of the new user account to create.")]
        public string Email { get; set; }
    
        [Value(2, Required = true, MetaName = "password", HelpText = "The password of the new user account to create.")]
        public string Password { get; set; }
    
        [Usage(ApplicationAlias = "create_user")]
        public static IEnumerable<Example> Examples
        {
            get
            {
                return new List<Example> {
                    new (
                        "Create a new user account",
                        new CliOptions { Username = "name", Email = "email@domain.com", Password = "secret" }
                    )
                };
            }
        }
    }
    

    I’ve decided to name it CliOptions but really, it could have been anything. Go ahead and create it in VehicleQuotes.CreateUser/CliOptions.cs. There are a few interesting elements to note here.

    The key aspect is that we have a few properties: Username, Email, and Password. These represent our three command-line arguments. Thanks to the Value attributes that they have been annotated with, CommandLineParser will know that that’s their purpose. You can see how the attributes themselves also contain each argument’s specification like the order in which they should be supplied, as well as their name and help text.

    This class also defines an Examples getter which is used by CommandLineParser to print out usage examples into the console when our app’s help option is invoked.

    Other than that, the class itself is unremarkable. In summary, it’s a handful of properties annotated with attributes so that CommandLineParser knows what to do with them.

    In order to actually put it to work, we update our VehicleQuotes.CreateUser/Program.cs like so:

     using Microsoft.Extensions.DependencyInjection;
     using Microsoft.Extensions.Hosting;
    +using CommandLine;
     using VehicleQuotes.CreateUser;
     
    +void Run(CliOptions options)
    +{
         IHost host = Host.CreateDefaultBuilder(args)
             .UseContentRoot(System.AppContext.BaseDirectory)
             .ConfigureServices((context, services) =>
             {
                 var startup = new VehicleQuotes.Startup(context.Configuration);
                 startup.ConfigureServices(services);
     
                 services.AddTransient<UserCreator>();
             })
             .Build();
     
         var userCreator = host.Services.GetRequiredService<UserCreator>();
    -    userCreator.Run(args[0], args[1], args[2]);
    +    userCreator.Run(options.Username, options.Email, options.Password);
    +}
    +
    +Parser.Default
    +    .ParseArguments<CliOptions>(args)
    +    .WithParsed(options => Run(options));
    

    We’ve wrapped Program.cs’s original code into a method simply called Run.

    Also, we’ve added this snippet at the bottom of the file:

    Parser.Default
        .ParseArguments<CliOptions>(args)
        .WithParsed(options => Run(options));
    

    That’s how we ask CommandLineParser to parse the incoming CLI arguments as specified by CliOptions and, if parsing succeeds, execute the rest of the program by calling the Run method.

    Neatly, we also no longer have to use the args array directly to get the command-line arguments provided to the app; instead, we use the options object that CommandLineParser creates for us once it has done its parsing. You can see it in this line:

    userCreator.Run(options.Username, options.Email, options.Password);
    

    options is an instance of our very own CliOptions class, so we can access the properties that we defined within it. These contain the arguments that were passed to the program.

    If you were to try dotnet run right now, you’d see the following output:

    VehicleQuotes.CreateUser 1.0.0
    Copyright (C) 2022 VehicleQuotes.CreateUser
    
    ERROR(S):
      A required value not bound to option name is missing.
    USAGE:
    Create a new user account:
      create_user name email@domain.com secret
    
      --help               Display this help screen.
    
      --version            Display version information.
    
      username (pos. 0)    Required. The username of the new user account to create.
    
      email (pos. 1)       Required. The email of the new user account to create.
    
      password (pos. 2)    Required. The password of the new user account to create.
    

    As you can see, CommandLineParser detected that no arguments were given, so it printed out an error message along with the descriptions, help text, and example that we defined: basically, the instructions on how to use our console app.
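
    If you want to see that help screen without first triggering the error, you can ask for it explicitly. With dotnet run, anything after -- is forwarded to the app instead of being interpreted by the dotnet CLI, so, from within the VehicleQuotes.CreateUser directory, this should do it:

    dotnet run -- --help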

    Deploying the console app as a .NET tool

    OK, we now have a console app that does what it needs to do, with a decent interface. The final step is to make it even more accessible by deploying it as a .NET tool. If we do that, we’ll be able to invoke it with a command like this:

    dotnet create_user Kevin kevin@gmail.com secretpw
    

    .NET makes this easy for us. There’s a caveat that we’ll discuss later, but for now, let’s go through the basic setup.

    .NET tools are essentially just glorified NuGet packages. As such, we begin by adding some additional package-related configuration options to VehicleQuotes.CreateUser/VehicleQuotes.CreateUser.csproj. We add them as child elements of the <PropertyGroup>:

    <PackAsTool>true</PackAsTool>
    <PackageOutputPath>./nupkg</PackageOutputPath>
    <ToolCommandName>create_user</ToolCommandName>
    <VersionPrefix>1.0.0</VersionPrefix>
    

    With that, we tell .NET that we want the console app to be packed as a tool, where it should put the resulting package, and what the tool’s command name will be. That is, how it will be invoked from the console (remember, we want to be able to run dotnet create_user).

    Finally, we specify a version number. When dealing with NuGet packages, versioning them is very important, as that drives caching and downloading logic in NuGet. More on that later when we talk about the aforementioned caveats.

    Now, to build the package, we use:

    dotnet pack
    

    That will build the application and produce a VehicleQuotes.CreateUser/nupkg/VehicleQuotes.CreateUser.1.0.0.nupkg file.

    We won’t make the tool available for the entire system. Instead, we will make it available only from within our solution’s directory. We can make that happen by creating a tool manifest file in the source code’s root directory. That’s done with this command, run from that directory:

    dotnet new tool-manifest
    

    That should create a new file: .config/dotnet-tools.json.

    Now, also from the root directory, we can finally install our tool:

    dotnet tool install --add-source ./VehicleQuotes.CreateUser/nupkg VehicleQuotes.CreateUser
    

    This is the regular command to install any tools in .NET. The interesting part is that we use the --add-source option to point it to the path where our freshly built package is located.

    After that, .NET shows this output:

    $ dotnet tool install --add-source ./VehicleQuotes.CreateUser/nupkg VehicleQuotes.CreateUser
    You can invoke the tool from this directory using the following commands: 'dotnet tool run create_user' or 'dotnet create_user'.
    Tool 'vehiclequotes.createuser' (version '1.0.0') was successfully installed. Entry is added to the manifest file /path/to/solution/.config/dotnet-tools.json.
    

    It tells us all we need to know. Check out the .config/dotnet-tools.json to see how the tool has been added there. All this means that now we can run our console app as a .NET tool:

    $ dotnet create_user --help
    VehicleQuotes.CreateUser 1.0.0
    Copyright (C) 2022 VehicleQuotes.CreateUser
    USAGE:
    Create a new user account:
      create_user name email@domain.com secret
    
      --help               Display this help screen.
    
      --version            Display version information.
    
      username (pos. 0)    Required. The username of the new user account to create.
    
      email (pos. 1)       Required. The email of the new user account to create.
    
      password (pos. 2)    Required. The password of the new user account to create.
    

    Pretty sweet, huh? And yes, it has taken a lot more effort than it would’ve taken in Ruby on Rails, but hey, I think the end result is pretty fabulous, and we learned a new thing. Besides, once you’ve done it once, the skeleton can easily be reused for all kinds of different backend tasks.

    Now, before we wrap this up, there’s something we need to consider when actively developing these tools. That is, when making changes and re-installing constantly.

    The main aspect to understand is that tools are just NuGet packages, and as such are beholden to the NuGet package infrastructure, which includes caching. If you’re in the process of developing your tool and are quickly making and deploying changes, NuGet won’t update the cache unless you do one of two things:

    1. Manually clear it with a command like dotnet nuget locals all --clear.
    2. Bump up the version of the tool by updating the value of <VersionPrefix> (or adding a <VersionSuffix>) in the project (.csproj) file.

    This means that, unless you do one of these, the changes you make to the app between re-builds (with dotnet pack) and re-installs (with dotnet tool install) won’t ever make their way into the package that’s actually installed on your system. So be sure to keep that in mind.
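
    Putting that together, a typical edit-and-reinstall loop might look something like the sketch below. It assumes you’re working from the solution’s root directory; adjust paths to match your setup.

    # Clear the local NuGet caches (or bump the version in the .csproj instead)
    dotnet nuget locals all --clear

    # Rebuild the tool package
    dotnet pack ./VehicleQuotes.CreateUser

    # Swap out the locally installed tool for the freshly built one
    dotnet tool uninstall VehicleQuotes.CreateUser
    dotnet tool install --add-source ./VehicleQuotes.CreateUser/nupkg VehicleQuotes.CreateUser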



    csharp dotnet aspdotnet

    SSH Key Auth using KeeAgent with Git Bash and Windows CLI OpenSSH

    Ron Phipps

    By Ron Phipps
    August 8, 2022

    A leather couch in surprisingly good condition sits on a patch of grass between the sidewalk and the road. Harsh sunlight casts shadows of trees and buildings on the street and couch.

    In a previous blog post we showed how to configure KeePass and KeeAgent on Windows to provide SSH key agent forwarding with confirmation while using PuTTY and other PuTTY agent compatible programs. In this post we’ll expand on that by showing how to use the same key agent to provide SSH key auth when using Git Bash and the Windows command line OpenSSH.

    Git Bash support

    Open KeePass, click on Tools → Options, select the KeeAgent tab.

    Create C:\Temp if it does not exist.

    Check the two boxes in the Cygwin/MSYS Integration section.

    Directly after each box, fill in the path: C:\Temp\cygwin-ssh.socket for the Cygwin compatible socket file, and C:\Temp\msys-ssh.socket for the msysGit compatible socket file.

    KeePass options, open to the KeeAgent tab. Highlighted is the Cygwin/MSYS section, with two boxes checked. One reads “Create Cygwin compatible socket file (works with some versions of MSYS)”. The other reads “Create msysGit compatible socket file”. After each is the path described above.

    Click OK.

    Open Git Bash.

    Create the file ~/.bash_profile with the contents:

    test -f ~/.profile && . ~/.profile
    test -f ~/.bashrc && . ~/.bashrc
    

    Create the file ~/.bashrc with the contents:

    export SSH_AUTH_SOCK="C:\Temp\cygwin-ssh.socket"
    

    Close and reopen Git Bash.

    You should now be able to SSH with Git Bash using your loaded SSH key and a dialog box should appear to approve the use of the key.
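
    If you want to confirm that Git Bash is actually talking to KeeAgent before connecting anywhere, listing the keys exposed by the agent is a quick check. This assumes ssh-add is available in Git Bash, which it normally is:

    # Should print the fingerprint(s) of the key(s) loaded in KeePass/KeeAgent
    ssh-add -l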

    Git Bash running ssh to a redacted server, with a dialog box reading “(ssh) has requested to use the SSH key (redacted) with fingerprint (redacted). Do you want to allow this?” The dialog’s “No” button is selected by default.

    Windows command line OpenSSH support

    Open KeePass, click on Tools → Options, select the KeeAgent tab.

    Scroll down and click on the box next to “Enable agent for Windows OpenSSH (experimental).”

    KeePass options open to the KeeAgent tab. Inside a scrollable list is a checked checkbox reading “Enable agent for Windows OpenSSH (experimental)”.

    Click OK.

    Open a Windows Command Prompt.

    You should now be able to SSH with Windows CLI using your loaded SSH key and a dialog box should appear to approve the use of the key.

    Windows Command Prompt running SSH, with the same KeePass dialog box asking approval for using the loaded SSH key


    windows ssh

    Auburn University and VisionPort: How the World Gets Its Water

    Samuel Stern

    By Samuel Stern
    July 28, 2022

    A VisionPort presenting about the Central Arizona Project

    The IBT Water Project at Auburn University, headed by Associate Professor P.L. Chaney, has done outstanding work illustrating in a GIS format how cities around the world get their water. The Geoscience department has mapped how water is captured and distributed in Australia, Egypt, India, Mexico, Kazakhstan, and the western USA.

    The department chose the Central Arizona Project to turn into an interactive presentation on the VisionPort platform.

    GIS showing water pumping sites

    Starting at the Mark Wilmer Pumping Plant, water is pumped from the Colorado River towards over a dozen plants and lifted up over 2,000 feet in elevation across a series of “stair-steps” before it reaches its final destination near Tucson, where it is then distributed across the state to where it is most needed.

    This data displayed on their VisionPort, installed in a custom wood case in their library, allows students to see the entire journey in a 3D environment spanning seven 65-inch displays. The presenter can take them to each stop and explain the functions of the many plants, check gates, and turnouts along the way.

    A man giving a presentation with the VisionPort

    Numerous departments at Auburn University have had success turning their presentations into engaging experiences on the VisionPort platform, and I look forward to seeing and reporting on what their students and faculty do next.

    For more information about VisionPort, email sales@visionport.com or visit www.visionport.com.


    visionport gis education

    Running PostgreSQL on Docker

    Jeffry Johar

    By Jeffry Johar
    July 27, 2022

    An elephant in a jungle

    Introduction

    PostgreSQL, or Postgres, is an open-source relational database. It is officially supported on all the major operating systems: Windows, Linux, BSD, macOS, and others.

    Besides running as an executable binary on an operating system, Postgres is able to run as a containerized application on Docker! In this article we are going to walk through running Postgres on Docker.

    Prerequisites

    • Docker or Docker Desktop. Please refer to my previous article for help with Docker installation.
    • Internet access is required to pull or download the Postgres container image from the Docker Hub.
    • A decent text editor, such as Vim or Notepad++, to create the configuration YAML files.

    Get to know the official Postgres Image

    Go to Docker Hub and search for “postgres”.

    Docker Hub website search screen shot

    There are a lot of images for PostgreSQL at Docker Hub. If you don’t have any special requirements, it is best to select the official image. This is the image maintained by the Docker PostgreSQL Community.

    Docker Hub website search result for postgres

    The page that the search result links to describes the Postgres image: how it was made and how to use it. From this page we learn the image name and the required parameters, which is essential information for running the image as a container, as we will see in the following steps.
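
    Although docker run will download the image automatically the first time it’s needed, you can also pull it ahead of time, optionally pinning one of the version tags listed on that page:

    docker pull postgres
    docker pull postgres:14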

    Run the Postgres image as a basic Postgres container

    The following command is the bare minimum for running Postgres on Docker:

    docker run --name basic-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
    

    Where:

    • --name basic-postgres sets the container name to basic-postgres,
    • -e POSTGRES_PASSWORD=mysecretpassword sets the password of the default user postgres,
    • -d runs the container in detached mode or in other words in the background, and
    • postgres uses the postgres image. By default it will get the image from https://hub.docker.com.

    Execute docker ps to check on running Docker containers. We should see our basic-postgres container running. docker ps is to containers what ps -ef is to processes on Linux/Unix: it lists everything that’s currently running.

    Sample output:

    Screen shot of terminal showing docker ps output after postgres container was started
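
    If you have many containers running, you can also narrow the listing down by name, for example:

    docker ps --filter "name=basic-postgres"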

    Working with the Postgres container

    Just like Postgres running natively on an operating system, Postgres on Docker comes with psql, the front-end client for accessing the Postgres database. To access psql in the Postgres container, execute the following command:

    docker exec -it basic-postgres psql -U postgres
    

    Where:

    • exec -it executes something interactive (-i) with a TTY (-t),
    • basic-postgres specifies the container, and
    • psql -U postgres is the psql command with its switch to specify the Postgres user.

    Now we are able to execute any psql command.

    Let’s try a few Postgres commands and import the famous “dvdrental” sample database into our Postgres installation.

    List all available databases:

    \l
    

    Create a database named dvdrental:

    create database dvdrental;
    

    List all available databases. We should now see the created dvdrental database.

    \l
    

    Quit from psql:

    \q
    

    Download the dvdrental database backup from postgresqltutorial.com and after it succeeds, unzip it:

    curl -O https://www.postgresqltutorial.com/wp-content/uploads/2019/05/dvdrental.zip
    unzip dvdrental.zip
    

    Execute the following command to import the data. It will restore the dvdrental.tar backup to our Postgres database.

    docker exec -i basic-postgres pg_restore -U postgres -v -d dvdrental < dvdrental.tar
    

    Where:

    • exec -i runs a command with STDIN attached (no TTY this time, since we’re piping a file into it),
    • basic-postgres specifies which container,
    • pg_restore -U postgres -v -d dvdrental is the pg_restore command with its own arguments:
      • -U postgres says to connect as the postgres user,
      • -v enables verbose mode,
      • -d dvdrental specifies the database to connect to, and
    • < dvdrental.tar says which file’s data the outside shell should pass into the container to pg_restore.

    Log in to the dvdrental database:

    docker exec -it basic-postgres psql -U postgres -d dvdrental
    

    Where:

    • exec -it executes something interactive with a terminal,
    • basic-postgres specifies which container, and
    • psql -U postgres -d dvdrental is the psql command with the postgres user and the dvdrental database specified.

    List all tables by describing the tables in the dvdrental database:

    \dt
    

    List the first 10 actors from the actor table:

    select * from actor limit 10;
    

    Quit from psql:

    \q
    

    Gracefully stop the Docker container:

    docker stop basic-postgres
    

    If you don’t need it anymore you can delete the container:

    docker rm basic-postgres
    

    Sample output:

    Screen shot of terminal showing import of dvdrental sample database into Postgres

    And later:

    Screen shot of terminal showing psql investigation of dvdrental sample database

    Run Postgres image as a “real world” Postgres container

    The basic Postgres container is only good for learning or testing. It needs more features to be able to serve as a working database for a real-world application. We will add two more features to make it usable:

    • Persistent storage: By default, a container’s filesystem is ephemeral. This means that if the container is deleted and recreated, it gets an all-new, fresh filesystem and all previous data is wiped clean. That’s not acceptable for a database system, so we need to attach persistent storage to the container.
    • Port forwarding from host to container: The container network is isolated, making it inaccessible from the outside world. A database is of no use if it can’t be accessed. To make it accessible we need to forward a port on the host operating system to the container’s port.

    Let’s start building a “real world” Postgres container. Firstly we need to create the persistent storage. In Docker this is known as a volume.

    Execute the following command to create a volume named pg-data:

    docker volume create pg-data
    

    List all Docker volumes and ensure that pg-data was created:

    docker volume ls | grep pg-data
    

    Run a Postgres container with persistent storage and port forwarding:

    docker run --name real-postgres \
    -e POSTGRES_PASSWORD=mysecretpassword \
    -v pg-data:/var/lib/postgresql/data \
    -p 5432:5432 \
    -d \
    postgres
    

    Where:

    • --name real-postgres sets the container name,
    • -e POSTGRES_PASSWORD=mysecretpassword sets the password of the default user postgres,
    • -v pg-data:/var/lib/postgresql/data mounts the pg-data volume as the postgres data directory,
    • -p 5432:5432 forwards from port 5432 of host operating system to port 5432 of container,
    • -d runs the container in detached mode or, in other words, in the background, and
    • postgres uses the postgres image. By default it will get the image from https://hub.docker.com.

    Execute docker ps to check on running containers on Docker. Take note that the real-postgres container has port forwarding information.

    Now we are going to try to access the Postgres container with psql from the host operating system.

    psql -h localhost -p 5432 -U postgres
    

    Sample output:

    Screen shot of terminal showing access to Postgres in Docker with persistent storage

    Cleaning up the running container

    Stop the container:

    docker stop real-postgres
    

    Delete the container:

    docker rm real-postgres
    

    Delete the volume:

    docker volume rm pg-data
    

    Managing Postgres container with Docker Compose

    Managing a container with a long list of arguments to Docker is tedious and error-prone. Instead of the plain Docker CLI we can use Docker Compose, a tool for managing containers from a YAML manifest file.

    Create the following file named docker-compose.yaml:

    version: '3.1'
    services:
      db:
        container_name: real-postgres-2
        image: postgres
        restart: always
        ports:
          - "5432:5432"
        environment:
          POSTGRES_PASSWORD: mysecretpassword
        volumes:
          - pg-data-2:/var/lib/postgresql/data
    volumes:
      pg-data-2:
        external: false
    

    To start the Postgres container with Docker Compose, execute the following command in the same location as docker-compose.yaml:

    docker-compose up -d
    

    Where -d runs the container in detached mode.

    Execute docker ps to check on running Docker containers. Take note that real-postgres-2 was created by Docker Compose.
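
    Docker Compose also has its own status command, which lists only the services defined in the compose file:

    docker-compose ps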

    To stop the Postgres container with Docker Compose, execute the following command in the same location as docker-compose.yaml:

    docker-compose down
    

    Sample output:

    Screen shot of terminal showing Postgres container deployed by Docker Compose

    Conclusion

    That’s all, folks. We have successfully deployed PostgreSQL on Docker.

    Now we are able to reap the benefits of container technology for PostgreSQL, including portability, agility, and better management.


    docker postgres containers