    Our Blog

    Ongoing observations by End Point Dev people

    Ansible tutorial with AWS EC2


    By Jeffry Johar
    August 11, 2022

    A ferris wheel lit up by red lights at night
    Photo by David Buchi

    Ansible is a tool to manage multiple remote systems from a single command center. In Ansible, the single command center is known as the control node and the remote systems to be managed are known as managed nodes. The following describes these two types of nodes:

    1. Control node:

      • The command center where Ansible is installed.
      • Supported systems are Unix and Unix-like (Linux, BSD, macOS).
      • Python and an SSH client are required.
      • Remote systems to be managed are listed in a YAML or INI file called an inventory.
      • Tasks to be executed are defined in a YAML file called a playbook.
    2. Managed node:

      • The remote systems to be managed.
      • Supported systems are Unix/Unix-like, Windows, and appliances (e.g. Cisco, NetApp).
      • Python and sshd are required for Unix/Unix-like.
      • PowerShell and WinRM are required for Windows.

    In this tutorial we will use Ansible to manage multiple EC2 instances. For simplicity, we are going to provision EC2 instances in the AWS web console. Then we will configure one EC2 instance as the control node, which will manage the other EC2 instances as managed nodes.

    Prerequisites

    For this tutorial we will need the following from AWS:

    • An active AWS account.
    • EC2 instances with Amazon Linux 2 as the OS.
    • An AWS key pair for SSH access to the control node and managed nodes.
    • A security group which allows SSH and HTTP.
    • A decent editor such as Vim or Notepad++ to create the inventory and the playbook.

    EC2 instance provisioning

    The following are the steps to provision EC2 instances with the AWS web console.

    1. Go to AWS Console → EC2 → Launch Instances.
    2. Select the Amazon Linux 2 AMI.
    3. Select a key pair. If there are no available key pairs, please create one according to Amazon’s instructions.
    4. Allow SSH and HTTP.
    5. Set Number of Instances to 4.
    6. Click Launch Instance.

    AWS web console, open to the “Instances” tab in the toolbar. This is circled and pointing to the table column starting with “Public IPv4…

    Ansible nodes and SSH keys

    In this section we will gather the IP addresses of EC2 instances and set up the SSH keys.

    1. Go to AWS Console → EC2 → Instances.

    2. Get the Public IPv4 addresses.

    3. We will choose the first EC2 to be the Ansible control node and the rest to be the managed nodes:

      • control node: 13.215.159.65
      • managed nodes: 18.138.255.51, 13.229.198.36, 18.139.0.15

    AWS web console, again open to the Instances tab, with the Public IPv4 column circled. A green banner says that the EC2 instance was successfully started, followed by a long ID

    Log in to the control node using our key pair. For me, it is kaptenjeffry.pem.

    ssh -i kaptenjeffry.pem ec2-user@13.215.159.65
    

    Open another terminal and copy the key pair to the control node:

    scp -i kaptenjeffry.pem kaptenjeffry.pem ec2-user@13.215.159.65:~/.ssh
    

    Go back to the control node terminal. Try to log in from the control node to one of the managed nodes by using the key pair. This is to ensure the key pair is usable to access the managed nodes.

    ssh -i .ssh/kaptenjeffry.pem ec2-user@18.138.255.51
    

    Register the rest of the managed nodes as known hosts on the control node, in bulk:

    ssh-keyscan -t ecdsa-sha2-nistp256 13.229.198.36 18.139.0.15 >> .ssh/known_hosts
    

    Ansible Installation and Configuration

    In this section we will install Ansible in the control node and create the inventory file.

    1. In the control node, execute the following commands to install Ansible:

      sudo yum update
      sudo amazon-linux-extras install ansible2
      ansible --version
      

      Where:

      • yum update updates all installed packages using the yum package manager,
      • amazon-linux-extras install installs Ansible, and
      • ansible --version checks the installed version of Ansible.
    2. Create a file named myinventory.ini. Insert the IP addresses that we identified earlier to be the managed nodes in the following format:

    [mynginx]
    red ansible_host=18.138.255.51
    green ansible_host=13.229.198.36
    blue ansible_host=18.139.0.15
    

    Where:

    • [mynginx] is the group name of the managed nodes,
    • red, green, and blue are the aliases of each managed node, and
    • ansible_host=x.x.x.x sets the IP address of each managed node.

    myinventory.ini is a basic inventory file in INI format. An inventory file can be written in either INI or YAML format. For more information on inventories, see the Ansible docs.
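
    For illustration, the same inventory could also be written in YAML. The following is a rough sketch using the same group, aliases, and addresses as above (the file name myinventory.yaml is hypothetical):

    # myinventory.yaml -- YAML equivalent of myinventory.ini
    all:
      children:
        mynginx:
          hosts:
            red:
              ansible_host: 18.138.255.51
            green:
              ansible_host: 13.229.198.36
            blue:
              ansible_host: 18.139.0.15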

    Ansible modules and Ansible ad hoc commands

    Ansible modules are scripts to do a specific task at managed nodes. For example, there are modules to check availability, copy files, install applications, and lots more. To get the full list of modules, you can check the official Ansible modules page.

    A quick way to use Ansible modules is with an ad hoc command. Ad hoc commands use the ansible command-line interface to execute modules at the managed nodes. The usage is as follows:

    ansible <pattern> -m <module> -a "<module options>" -i <inventory>
    

    Where:

    • <pattern> is the IP address, hostname, alias, or group name,
    • -m <module> is the name of the module to be used,
    • -a "<module options>" sets options for the module, and
    • -i <inventory> is the inventory of the managed nodes.

    Ad hoc command examples

    The following are some examples of Ansible ad hoc commands:

    ping checks SSH connectivity and the Python interpreter at the managed node. To use the ping module against the mynginx group of servers (all 3 hosts: red, green, and blue), run:

    ansible mynginx -m ping -i myinventory.ini
    

    Sample output of ping. Several green blocks of JSON show successful ping responses
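
    Since <pattern> also accepts a single host or alias, the same module can target just one managed node. For example, to ping only the node we aliased as red:

    ansible red -m ping -i myinventory.ini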

    copy copies files to a managed node. To copy a text file (/home/ec2-user/hello.txt in our test case) from the control node to /tmp/ at all managed nodes in the mynginx group, run:

    ansible mynginx -m copy \
    -a 'src=/home/ec2-user/hello.txt dest=/tmp/hello.txt' \
    -i myinventory.ini
    

    shell executes a shell command at a managed node. To use the shell module to execute uptime at all managed nodes in the mynginx group, run:

    ansible mynginx -m shell -a 'uptime' -i myinventory.ini
    

    Ansible playbooks

    Ansible playbooks are configuration files in YAML format that tell Ansible what to do. A playbook executes its assigned tasks sequentially from top to bottom. Tasks in a playbook are grouped into blocks of instructions called plays. The following diagram shows the high-level structure of a playbook:

    An outer box labeled 'Playbook' contains two smaller boxes. The first is labeled 'Play 1', the second is labeled 'Play 2'. They contain stacked boxes similar to each other. The first box is a lighter color than the others, labeled 'Hosts 1 (or Hosts 2, for the 'Play 2' box)'. The others are labeled 'Task 1', 'Task 2', and after an ellipsis 'Task N'.

    Now we are going to use a playbook to install Nginx at our three managed nodes as depicted in the following diagram:

    At the left, a box representing a control node, with the Ansible logo inside. Pointing to the Ansible logo inside the control node box are flags reading “playbook” and “inventory”. The control node box points to three identical “managed node” boxes, each with the Nginx logo inside.

    Create the following YAML file and name it nginx-playbook.yaml. This is a playbook with one play that will install and configure the Nginx service at the managed nodes.

    ---
    - name: Installing and Managing Nginx Server 
      hosts: mynginx   
      become: True
      vars:
        nginx_version: 1
        nginx_html: /usr/share/nginx/html
        user_home: /home/ec2-user
        index_html: index.html
      tasks:
        - name: Install the latest version of nginx
          command: amazon-linux-extras install nginx{{ nginx_version }}=latest -y
    
        - name: Start nginx service
          service:
            name: nginx
            state: started
    
        - name: Enable nginx service
          service:
            name: nginx
            enabled: yes

        - name: Copy index.html to managed nodes
          copy:
            src: "{{ user_home }}/{{ index_html }}"
            dest: "{{ nginx_html }}"
    

    Where:

    • name (topmost) is the name of this play,
    • hosts specifies the managed nodes for this play,
    • become says whether to use superuser privileges (sudo for Linux),
    • vars defines variables for this play,
    • tasks is the start of the task section,
    • name (under the tasks section) specifies the name of each task, and
    • name (inside the service module) specifies the name of the service to manage.

    Let’s try to execute this playbook. First, we need to create the source index.html to be copied to the managed nodes.

    echo 'Hello World!' > index.html
    

    Execute ansible-playbook against our playbook. Just like the ad hoc command, we need to specify the inventory with the -i switch.

    ansible-playbook nginx-playbook.yaml -i myinventory.ini
    

    A shell with the results of the ansible-playbook command above
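
    As a side note, the values defined under vars in a play can be overridden at run time with ansible-playbook's standard -e/--extra-vars switch. A hedged example, reusing the playbook and inventory above:

    ansible-playbook nginx-playbook.yaml -i myinventory.ini -e nginx_version=1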

    Now we can curl our managed nodes to check on the Nginx service and the custom index.html.

    curl 18.138.255.51
    curl 13.229.198.36
    curl 18.139.0.15
    

    The output of each curl command above, with the responses being identical: 'Hello World!'

    Conclusion

    That’s all, folks. We have successfully managed EC2 instances with Ansible. This tutorial covered the fundamentals of Ansible to start managing remote servers.

    Ansible rises above its competitors due to the simplicity of its installation, configuration, and usage. For further information about Ansible, you may visit its official documentation.


    ansible aws linux sysadmin

    Implementing Backend Tasks in ASP.NET Core


    By Kevin Campusano
    August 8, 2022

    As we’ve already established, Ruby on Rails is great. The amount and quality of tools that Rails puts at our disposal when it comes to developing web applications is truly outstanding. One aspect of web application development that Rails makes particularly easy is that of creating backend tasks.

    These tasks can be anything from database maintenance and file system cleanup to overnight heavy computations and bulk email dispatch. In general, this is functionality that is typically initiated by a sysadmin in the backend or scheduled in a cron job; it has no GUI, but rather is invoked via the command line.

    By integrating with Rake, Rails allows us to very easily write such tasks as plain old Ruby scripts. These scripts have access to all the domain logic and data that the full-fledged Rails app has access to. The cherry on top is that the command-line interface to invoke such tasks is very straightforward. It looks something like this: bin/rails fulfillment:process_new_orders.

    All this is included right out of the box for new Rails projects.

    ASP.NET Core, which is also great, doesn’t support this out of the box like Rails does.

    However, I think we should be able to implement our own without too much hassle, and have a similar sysadmin experience. Let’s see if we can do it.

    There is a Table of contents at the end of this post.

    What we want to accomplish

    So, to put it in concrete terms, we want to create a backend task that has access to all the logic and data of an existing ASP.NET Core application. The task should be callable via command-line interface, so that it can be easily executed via the likes of cron or other scripts.

    In order to meet these requirements, we will create a new .NET console app that:

    1. References the existing ASP.NET Core project.
    2. Loads all the classes from it and makes instances of them available via dependency injection.
    3. Has a usable, Unix-like command-line interface that sysadmins would be familiar with.
    4. Is invokable via the .NET CLI.

    We will do all this within the context of an existing web application. One that I’ve been building upon throughout a few articles.

    It is a simple ASP.NET Web API backed by a Postgres database. It has a few endpoints for CRUDing automotive-related data and for calculating values of vehicles based on various aspects of them.

    You can find the code on GitHub. If you’d like to follow along, clone the repository and check out this commit: 9a078015ce. It represents the project as it was before applying all the changes from this article. The finished product can be found here.

    You can follow the instructions in the project’s README file if you want to get the app up and running.

    For our demo use case, we will try to develop a backend task that creates new user accounts for our existing application.

    Let’s get to it.

    Creating a new console app that references the existing web app as a library

    The codebase is structured as a solution, as given away by the vehicle-quotes.sln file located at the root of the repository. Within this solution, there are two projects: VehicleQuotes which is the web app itself, and VehicleQuotes.Tests which contains the app’s test suite. For this article, we only care about the web app.

    Like I said, the backend task that we will create is nothing fancy in itself. It’s a humble console app. So, we start by asking the dotnet CLI to create a new console app project for us.

    From the repository’s root directory, we can do so with this command:

    dotnet new console -o VehicleQuotes.CreateUser
    

    That should’ve resulted in a new VehicleQuotes.CreateUser directory being created, and within it (along with some other nuts and bolts) our new console app’s Program.cs (the code) and VehicleQuotes.CreateUser.csproj (the project definition) files. The name that we’ve chosen is straightforward: the name of the overall solution and the action that this console app is going to perform.

    There’s more info regarding the dotnet new command in the official docs.

    Now, since we’re using a solution file, let’s add our brand new console app project to it with:

    dotnet sln add VehicleQuotes.CreateUser
    

    OK, cool. That should’ve produced the following diff on vehicle-quotes.sln:

    diff --git a/vehicle-quotes.sln b/vehicle-quotes.sln
    index 537d864..5da277d 100644
    --- a/vehicle-quotes.sln
    +++ b/vehicle-quotes.sln
    @@ -7,6 +7,8 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "VehicleQuotes", "VehicleQuo
     EndProject
     Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "VehicleQuotes.Tests", "VehicleQuotes.Tests\VehicleQuotes.Tests.csproj", "{5F6470E4-12AB-4E30-8879-3664ABAA959D}"
     EndProject
    +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "VehicleQuotes.CreateUser", "VehicleQuotes.CreateUser\VehicleQuotes.CreateUser.csproj", "{EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}"
    +EndProject
     Global
            GlobalSection(SolutionConfigurationPlatforms) = preSolution
                    Debug|Any CPU = Debug|Any CPU
    @@ -44,5 +46,17 @@ Global
                    {5F6470E4-12AB-4E30-8879-3664ABAA959D}.Release|x64.Build.0 = Release|Any CPU
                    {5F6470E4-12AB-4E30-8879-3664ABAA959D}.Release|x86.ActiveCfg = Release|Any CPU
                    {5F6470E4-12AB-4E30-8879-3664ABAA959D}.Release|x86.Build.0 = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|Any CPU.Build.0 = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x64.ActiveCfg = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x64.Build.0 = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x86.ActiveCfg = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Debug|x86.Build.0 = Debug|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|Any CPU.ActiveCfg = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|Any CPU.Build.0 = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x64.ActiveCfg = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x64.Build.0 = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x86.ActiveCfg = Release|Any CPU
    +               {EDBB33E3-DCCE-4957-8A69-DC905D1BEAA4}.Release|x86.Build.0 = Release|Any CPU
            EndGlobalSection
     EndGlobal
    

    This allows the .NET tooling to know that we’ve got some intentional organization going on in our code base: these projects each form part of a bigger whole.

    It’s also nice to add a .gitignore file for our new VehicleQuotes.CreateUser project to keep things manageable. dotnet new can help with that if we were to navigate into the VehicleQuotes.CreateUser directory and run:

    dotnet new gitignore
    

    You can learn more about how to work with solutions via the .NET CLI in the official docs.

    Now let’s modify our new project’s .csproj file so that it references the main web app project under VehicleQuotes. This will allow our console app to access all of the classes defined in the web app, as if it was a library or package.

    If we move to the VehicleQuotes.CreateUser directory, we can do that with the following command:

    dotnet add reference ../VehicleQuotes/VehicleQuotes.csproj
    

    The command itself is pretty self-explanatory. It just expects to be given the .csproj file of the project that we want to add as a reference in order to do its magic.

    Running that should’ve added the following snippet to VehicleQuotes.CreateUser/VehicleQuotes.CreateUser.csproj:

    <ItemGroup>
      <ProjectReference Include="..\VehicleQuotes\VehicleQuotes.csproj" />
    </ItemGroup>
    

    This way, .NET allows the code defined in the VehicleQuotes project to be used within the VehicleQuotes.CreateUser project.

    You can learn more about the add reference command in the official docs.

    Setting up dependency injection in the console app

    As a result of the previous steps, our new console app now has access to the classes defined within the web app. However, classes by themselves are no good if we can’t actually create instances of them that we can interact with. The premier method for making instances of classes available throughout a .NET application is via dependency injection. So, we need to set that up for our little console app.

    Dependency injection is something that comes out of the box for ASP.NET Core web apps. Luckily for us, .NET makes it fairly easy to set it up in console apps as well by leveraging the same components.

    For this app, we want to create user accounts. In the web app, user account management is done via ASP.NET Core Identity. Specifically, the UserManager class is used to create new user accounts. This console app will do the same.

    Take a look at VehicleQuotes/Controllers/UsersController.cs to see how the user accounts are created. If you’d like to know more about integrating ASP.NET Core Identity into an existing web app, I wrote an article about it.

    Before we do the dependency injection setup, let’s add a new class to our console app project that will encapsulate the logic of leveraging the UserManager for user account creation. This is the actual task that we want to perform. The new class will be defined in VehicleQuotes.CreateUser/UserCreator.cs and these will be its contents:

    using Microsoft.AspNetCore.Identity;
    
    namespace VehicleQuotes.CreateUser;
    
    class UserCreator
    {
        private readonly UserManager<IdentityUser> _userManager;
    
        public UserCreator(UserManager<IdentityUser> userManager) {
            _userManager = userManager;
        }
    
        public IdentityResult Run(string username, string email, string password)
        {
            var userCreateTask = _userManager.CreateAsync(
                new IdentityUser() { UserName = username, Email = email },
                password
            );
    
            var result = userCreateTask.Result;
    
            return result;
        }
    }
    

    This class is pretty lean. All it does is define a constructor that expects an instance of UserManager<IdentityUser>, which will be supplied via dependency injection; and a simple Run method that, when given a username, email, and password, asks the UserManager<IdentityUser> instance that it was given to create a user account.

    Moving on to setting up dependency injection now, we will do it in VehicleQuotes.CreateUser/Program.cs. Replace the contents of that file with this:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    using VehicleQuotes.CreateUser;
    
    IHost host = Host.CreateDefaultBuilder(args)
        .UseContentRoot(System.AppContext.BaseDirectory)
        .ConfigureServices((context, services) =>
        {
            var startup = new VehicleQuotes.Startup(context.Configuration);
            startup.ConfigureServices(services);
    
            services.AddTransient<UserCreator>();
        })
        .Build();
    
    var userCreator = host.Services.GetRequiredService<UserCreator>();
    userCreator.Run(args[0], args[1], args[2]);
    

    Let’s dissect this bit by bit.

    First off, we’ve got a few using statements that give us access to the classes and extension methods we use down below.

    Next, we create and configure a new IHost instance. .NET Core introduced the concept of a “host” as an abstraction for programs, and packed into it a lot of functionality to help with things like configuration, logging and, most importantly for us, dependency injection. To put it simply, the simplest way of enabling dependency injection in a console app is to use a host and all the goodies that come with it.

    There’s much more information about hosts in .NET’s official documentation.

    Host.CreateDefaultBuilder(args) gives us an IHostBuilder instance that we can use to configure our host. In our case, we’ve chosen to call UseContentRoot(System.AppContext.BaseDirectory) on it, which makes it possible for the app to find assets (like appsettings.json files!) regardless of where it’s deployed and where it’s being called from.

    This is important for us because, as you will see later, we will install this console app as a .NET Tool. .NET Tools are installed in directories picked by .NET and can be run from anywhere in the system. So we need to make sure that our app can find its assets wherever it has been installed.

    After that, we call ConfigureServices where we do a nice trick in order to make sure our console app has all the same configuration as the web app as far as dependency injection goes.

    You see, in ASP.NET Core, all the service classes that are to be made available to the application via dependency injection are configured within the web app’s Startup class' ConfigureServices method. VehicleQuotes is no exception. So, in order for our console app to have access to all of the services (i.e. instances of classes) that the web app does, the console app needs to call that same code. And that’s exactly what’s happening in these two lines:

    var startup = new VehicleQuotes.Startup(context.Configuration);
    startup.ConfigureServices(services);
    

    We create a new instance of the web app’s Startup class and call its ConfigureServices method. That’s the key element that allows the console app to have access to all the logic that the web app does, including the services/classes provided by ASP.NET Core Identity like UserManager<IdentityUser>, which UserCreator needs in order to function.

    Once that’s done, the rest is straightforward.

    We also add our new UserCreator to the dependency injection engine via:

    services.AddTransient<UserCreator>();
    

    Curious about what Transient means? The official .NET documentation has the answer.

    And that allows us to obtain an instance of it with:

    var userCreator = host.Services.GetRequiredService<UserCreator>();
    

    And then, it’s just a matter of calling its Run method like so, passing it the command-line arguments:

    userCreator.Run(args[0], args[1], args[2]);
    

    args is a special variable that contains an array with the arguments given by command line. That means that our console app can be called like this:

    dotnet run test_username test_email@email.com mysecretpassword
    

    Go ahead, you can try it out and see the app log what it’s doing. Once done, it will also have created a new record in the database.

    $ psql -h localhost -U vehicle_quotes
    psql (14.3 (Ubuntu 14.3-0ubuntu0.22.04.1), server 14.2 (Debian 14.2-1.pgdg110+1))
    Type "help" for help.
    
    vehicle_quotes=# select user_name, email from public."AspNetUsers";
       user_name   |           email            
    ---------------+----------------------------
     test_username | test_email@email.com
    (1 row)
    

    Pretty neat, huh? At this point we have a console app that creates user accounts for our existing web app. It works, but it could be better. Let’s add a nice command-line interface experience now.

    Improving the CLI with CommandLineParser

    With help from CommandLineParser, we can develop a Unix-like command-line interface for our app. We can use it to add help text, examples, have strongly typed parameters and useful error messages when said parameters are not correctly provided. Let’s do that now.

    First, we need to install the package in our console app project by running the following command from within the project’s directory (VehicleQuotes.CreateUser):

    dotnet add package CommandLineParser
    

    After that’s done, a new section will have been added to VehicleQuotes.CreateUser/​VehicleQuotes.CreateUser.csproj that looks like this:

    <ItemGroup>
      <PackageReference Include="CommandLineParser" Version="2.9.1" />
    </ItemGroup>
    

    Now our console app can use the classes provided by the package.

    All specifications for CommandLineParser are done via a plain old C# class that we need to define. For this console app, which accepts three mandatory arguments, such a class could look like this:

    using CommandLine;
    using CommandLine.Text;
    
    namespace VehicleQuotes.CreateUser;
    
    class CliOptions
    {
        [Value(0, Required = true, MetaName = "username", HelpText = "The username of the new user account to create.")]
        public string Username { get; set; }
    
        [Value(1, Required = true, MetaName = "email", HelpText = "The email of the new user account to create.")]
        public string Email { get; set; }
    
        [Value(2, Required = true, MetaName = "password", HelpText = "The password of the new user account to create.")]
        public string Password { get; set; }
    
        [Usage(ApplicationAlias = "create_user")]
        public static IEnumerable<Example> Examples
        {
            get
            {
                return new List<Example> {
                    new (
                        "Create a new user account",
                        new CliOptions { Username = "name", Email = "email@domain.com", Password = "secret" }
                    )
                };
            }
        }
    }
    

    I’ve decided to name it CliOptions but really, it could have been anything. Go ahead and create it in VehicleQuotes.CreateUser/CliOptions.cs. There are a few interesting elements to note here.

    The key aspect is that we have a few properties: Username, Email, and Password. These represent our three command-line arguments. Thanks to the Value attributes that they have been annotated with, CommandLineParser will know that that’s their purpose. You can see how the attributes themselves also contain each argument’s specification like the order in which they should be supplied, as well as their name and help text.

    This class also defines an Examples getter which is used by CommandLineParser to print out usage examples into the console when our app’s help option is invoked.

    Other than that, the class itself is unremarkable. In summary, it’s a number of properties annotated with attributes so that CommandLineParser knows what to do with them.

    In order to actually put it to work, we update our VehicleQuotes.CreateUser/Program.cs like so:

     using Microsoft.Extensions.DependencyInjection;
     using Microsoft.Extensions.Hosting;
    +using CommandLine;
     using VehicleQuotes.CreateUser;
     
    +void Run(CliOptions options)
    +{
         IHost host = Host.CreateDefaultBuilder(args)
             .UseContentRoot(System.AppContext.BaseDirectory)
             .ConfigureServices((context, services) =>
             {
                 var startup = new VehicleQuotes.Startup(context.Configuration);
                 startup.ConfigureServices(services);
     
                 services.AddTransient<UserCreator>();
             })
             .Build();
     
         var userCreator = host.Services.GetRequiredService<UserCreator>();
    -    userCreator.Run(args[0], args[1], args[2]);
    +    userCreator.Run(options.Username, options.Email, options.Password);
    +}
    +
    +Parser.Default
    +    .ParseArguments<CliOptions>(args)
    +    .WithParsed(options => Run(options));
    

    We’ve wrapped Program.cs’s original code into a method simply called Run.

    Also, we’ve added this snippet at the bottom of the file:

    Parser.Default
        .ParseArguments<CliOptions>(args)
        .WithParsed(options => Run(options));
    

    That’s how we ask CommandLineParser to parse the incoming CLI arguments, as specified by CliOptions and, if it can be done successfully, then execute the rest of the program by calling the Run method.

    Neatly, we also no longer have to use the args array directly in order to get the command-line arguments provided to the app; instead, we use the options object that CommandLineParser creates for us once it has done its parsing. You can see it in this line:

    userCreator.Run(options.Username, options.Email, options.Password);
    

    options is an instance of our very own CliOptions class, so we can access the properties that we defined within it. These contain the arguments that were passed to the program.

    If you were to try dotnet run right now, you’d see the following output:

    VehicleQuotes.CreateUser 1.0.0
    Copyright (C) 2022 VehicleQuotes.CreateUser
    
    ERROR(S):
      A required value not bound to option name is missing.
    USAGE:
    Create a new user account:
      create_user name email@domain.com secret
    
      --help               Display this help screen.
    
      --version            Display version information.
    
      username (pos. 0)    Required. The username of the new user account to create.
    
      email (pos. 1)       Required. The email of the new user account to create.
    
      password (pos. 2)    Required. The password of the new user account to create.
    

    As you can see, CommandLineParser detected that no arguments were given, and as such, it printed out an error message, along with the descriptions, help text and example that we defined. Basically the instructions on how to use our console app.

    Deploying the console app as a .NET tool

    OK, we now have a console app that does what it needs to do, with a decent interface. The final step is to make it even more accessible by deploying it as a .NET tool. If we do that, we’d be able to invoke it with a command like this:

    dotnet create_user Kevin kevin@gmail.com secretpw
    

    .NET makes this easy for us. There’s a caveat that we’ll discuss later, but for now, let’s go through the basic setup.

    .NET tools are essentially just glorified NuGet packages. As such, we begin by adding some additional package-related configuration options to VehicleQuotes.CreateUser/VehicleQuotes.CreateUser.csproj. We add them as child elements of the <PropertyGroup>:

    <PackAsTool>true</PackAsTool>
    <PackageOutputPath>./nupkg</PackageOutputPath>
    <ToolCommandName>create_user</ToolCommandName>
    <VersionPrefix>1.0.0</VersionPrefix>
    

    With that, we signal to .NET that we want the console app to be packed as a tool, the path where it should put the package itself, and what its name will be. That is, how it will be invoked via the console (remember we want to be able to do dotnet create_user).

    Finally, we specify a version number. When dealing with NuGet packages, versioning them is very important, as that drives caching and downloading logic in NuGet. More on that later when we talk about the aforementioned caveats.

    Now, to build the package, we use:

    dotnet pack
    

    That will build the application and produce a VehicleQuotes.CreateUser/nupkg/VehicleQuotes.CreateUser.1.0.0.nupkg file.

    We won’t make the tool available for the entire system. Instead, we will make it available from within our solution’s directory only. We can make that happen if we create a tool manifest file in the source code’s root directory. That’s done with this command, run from the root directory:

    dotnet new tool-manifest
    

    That should create a new file: .config/dotnet-tools.json.

    Now, also from the root directory, we can finally install our tool:

    dotnet tool install --add-source ./VehicleQuotes.CreateUser/nupkg VehicleQuotes.CreateUser
    

    This is the regular command to install any tools in .NET. The interesting part is that we use the --add-source option to point it to the path where our freshly built package is located.

    After that, .NET shows this output:

    $ dotnet tool install --add-source ./VehicleQuotes.CreateUser/nupkg VehicleQuotes.CreateUser
    You can invoke the tool from this directory using the following commands: 'dotnet tool run create_user' or 'dotnet create_user'.
    Tool 'vehiclequotes.createuser' (version '1.0.0') was successfully installed. Entry is added to the manifest file /path/to/solution/.config/dotnet-tools.json.
    

    It tells us all we need to know. Check out the .config/dotnet-tools.json to see how the tool has been added there. All this means that now we can run our console app as a .NET tool:

    $ dotnet create_user --help
    VehicleQuotes.CreateUser 1.0.0
    Copyright (C) 2022 VehicleQuotes.CreateUser
    USAGE:
    Create a new user account:
      create_user name email@domain.com secret
    
      --help               Display this help screen.
    
      --version            Display version information.
    
      username (pos. 0)    Required. The username of the new user account to create.
    
      email (pos. 1)       Required. The email of the new user account to create.
    
      password (pos. 2)    Required. The password of the new user account to create.
    

    Pretty sweet, huh? And yes, it has taken a lot more effort than what it would’ve taken in Ruby on Rails, but hey, the end result is pretty fabulous I think, and we learned a new thing. Besides, once you’ve done it once, the skeleton can be easily reused for all kinds of different backend tasks.

    Now, before we wrap this up, there’s something we need to consider when actively developing these tools. That is, when making changes and re-installing constantly.

    The main aspect to understand is that tools are just NuGet packages, and as such are beholden to the NuGet package infrastructure. Which includes caching. If you’re in the process of developing your tool and are quickly making and deploying changes, NuGet won’t update the cache unless you do one of two things:

    1. Manually clear it with a command like dotnet nuget locals all --clear.
    2. Bump up the version of the tool by updating the value of <VersionPrefix> in the project (.csproj) file.

    This means that, unless you do one of these, the changes that you make to the app between re-builds (with dotnet pack) and re-installs (with dotnet tool install) won’t ever make their way to the package that’s actually installed in your system. So be sure to keep that in mind.
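
    For reference, a typical iteration using the first option might look like the following. This is just a sketch: dotnet tool uninstall is the standard counterpart to dotnet tool install, and the paths are the ones used earlier in this post.

    # from VehicleQuotes.CreateUser: rebuild the NuGet package
    dotnet pack

    # from the repository root: clear the NuGet cache and reinstall the local tool
    dotnet nuget locals all --clear
    dotnet tool uninstall VehicleQuotes.CreateUser
    dotnet tool install --add-source ./VehicleQuotes.CreateUser/nupkg VehicleQuotes.CreateUser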

    Table of contents


    csharp dotnet aspdotnet

    SSH Key Auth using KeeAgent with Git Bash and Windows CLI OpenSSH


    By Ron Phipps
    August 8, 2022

    A leather couch in surprisingly good condition sits on a patch of grass between the sidewalk and the road. Harsh sunlight casts shadows of trees and buildings on the street and couch.

    In a previous blog post we showed how to configure KeePass and KeeAgent on Windows to provide SSH key agent forwarding with confirmation while using PuTTY and other PuTTY agent compatible programs. In this post we’ll expand on that by showing how to use the same key agent to provide SSH key auth when using Git Bash and the Windows command line OpenSSH.

    Git Bash support

    Open KeePass, click on Tools → Options, select the KeeAgent tab.

    Create C:\Temp if it does not exist.

    Check the two boxes in the Cygwin/MSYS Integration section.

    Directly after each box, fill in the path: C:\Temp\cygwin-ssh.socket for the Cygwin compatible socket file, and C:\Temp\msys-ssh.socket for the msysGit compatible socket file.

    KeePass options, open to the KeeAgent tab. Highlighted is the Cygwin/MSYS section, with two boxes checked. One reads “Create Cygwin compatible socket file (works with some versions of MSYS)”. The other reads “Create msysGit compatible socket file”. After each is the path described above.

    Click OK.

    Open Git Bash.

    Create the file ~/.bash_profile with the contents:

    test -f ~/.profile && . ~/.profile
    test -f ~/.bashrc && . ~/.bashrc
    

    Create the file ~/.bashrc with the contents:

    export SSH_AUTH_SOCK="C:\Temp\cygwin-ssh.socket"
    

    Close and reopen Git Bash.

    You should now be able to SSH with Git Bash using your loaded SSH key and a dialog box should appear to approve the use of the key.

    Git Bash running ssh to a redacted server, with a dialog box reading “(ssh) has requested to use the SSH key (redacted) with fingerprint (redacted). Do you want to allow this?” The dialog’s “No” button is selected by default.
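
    As a quick sanity check, you can also ask the agent which keys it is serving over that socket. ssh-add is a standard OpenSSH command and honors SSH_AUTH_SOCK, so from Git Bash:

    # list the fingerprints of the keys currently loaded in KeeAgent
    ssh-add -l

    If the integration is working, this should print the fingerprint of the key you added to KeeAgent.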

    Windows command line OpenSSH support

    Open KeePass, click on Tools → Options, select the KeeAgent tab.

    Scroll down and click on the box next to “Enable agent for Windows OpenSSH (experimental).”

    KeePass options open to the KeeAgent tab. Inside a scrollable list is a checked checkbox reading “Enable agent for Windows OpenSSH (experimental)

    Click OK.

    Open a Windows Command Prompt.

    You should now be able to SSH with Windows CLI using your loaded SSH key and a dialog box should appear to approve the use of the key.

    Windows Command Prompt running SSH, with the same KeePass dialog box asking approval for using the loaded SSH key


    windows ssh

    Auburn University and VisionPort: How the World Gets Its Water


    By Samuel Stern
    July 28, 2022

    A VisionPort presenting about the Central Arizona Project

    The IBT Water Project at Auburn University, headed by Associate Professor P.L. Chaney, has done outstanding work illustrating in a GIS format how cities around the world get their water. The Geoscience department has mapped how water is captured and distributed in Australia, Egypt, India, Mexico, Kazakhstan, and the western USA.

    The department chose the Central Arizona Project to turn into an interactive presentation on the VisionPort platform.

    GIS showing water pumping sites

    Starting at the Mark Wilmer Pumping Plant, water is pumped from the Colorado River towards over a dozen plants and lifted up over 2,000 feet in elevation across a series of “stair-steps” before it reaches its final destination near Tucson, where it is then distributed across the state to where it is most needed.

    This data displayed on their VisionPort, installed in a custom wood case in their library, allows students to see the entire journey in a 3D environment spanning seven 65-inch displays. The presenter can take them to each stop and explain the functions of the many plants, check gates, and turnouts along the way.

    A man giving a presentation with the VisionPort

    Numerous departments at Auburn University have had success turning their presentations into engaging experiences on the VisionPort platform and I look forward to seeing and reporting on what their students and faculty do next.

    For more information about VisionPort, email sales@visionport.com or visit www.visionport.com.


    visionport gis education

    Running PostgreSQL on Docker


    By Jeffry Johar
    July 27, 2022

    An elephant in a jungle

    Introduction

    PostgreSQL, or Postgres, is an open-source relational database. It is officially supported on all the major operating systems: Windows, Linux, BSD, macOS, and others.

    Besides running as an executable binary on an operating system, Postgres is able to run as a containerized application on Docker! In this article we are going to walk through running Postgres on Docker.

    Prerequisites

    • Docker or Docker Desktop. Please refer to my previous article for help with Docker installation.
    • Internet access is required to pull or download the Postgres container image from the Docker Hub.
    • A decent text editor, such as Vim or Notepad++, to create the configuration YAML files.

    Get to know the official Postgres Image

    Go to Docker Hub and search for “postgres”.

    Docker Hub website search screen shot

    There are a lot of images for PostgreSQL at Docker Hub. If you don’t have any special requirements, it is best to select the official image. This is the image maintained by the Docker PostgreSQL Community.

    Docker Hub website search result for postgres

    The page that search result links to describes the Postgres image, how it was made and how to use it. From this page we know the image name and the required parameters. This is essential information for starting a Docker image, as we will see in the following steps.

    Run the Postgres image as a basic Postgres container

    The following command is the bare minimum for running Postgres on Docker:

    docker run --name basic-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
    

    Where:

    • --name basic-postgres sets the container name to basic-postgres,
    • -e POSTGRES_PASSWORD=mysecretpassword sets the password of the default user postgres,
    • -d runs the container in detached mode or in other words in the background, and
    • postgres uses the postgres image. By default it will get the image from https://hub.docker.com.

    Execute docker ps to check on running Docker containers. We should see our basic-postgres container running. docker ps is like ps -ef on Linux/Unix, which lists all running processes.

    Sample output:

    Screen shot of terminal showing docker ps output after postgres container was started

    Working with the Postgres container

    Just like Postgres running natively on an operating system, Postgres on Docker comes with the psql front-end client for accessing the Postgres database. To access psql in the Postgres container, execute the following command:

    docker exec -it basic-postgres psql -U postgres
    

    Where:

    • exec -it executes something interactive (-i) with a TTY (-t),
    • basic-postgres specifies the container, and
    • psql -U postgres is the psql command with its switch to specify the Postgres user.

    Now we are able to execute any psql command.

    Let’s try a few Postgres commands and import the famous “dvdrental” sample database to our Postgres installation.

    List all available databases:

    \l
    

    Create a database named dvdrental:

    create database dvdrental;
    

    List all available databases. We should now see the created dvdrental database.

    \l
    

    Quit from psql:

    \q
    

    Download the dvdrental database backup from postgresqltutorial.com and after it succeeds, unzip it:

    curl -O https://www.postgresqltutorial.com/wp-content/uploads/2019/05/dvdrental.zip
    unzip dvdrental.zip
    

    Execute the following command to import the data. It will restore the dvdrental.tar backup to our Postgres database.

    docker exec -i basic-postgres pg_restore -U postgres -v -d dvdrental < dvdrental.tar
    

    Where:

    • exec -i executes something interactive,
    • basic-postgres specifies which container,
    • pg_restore -U postgres -v -d dvdrental is the pg_restore command with its own arguments:
      • -U postgres says to connect as the postgres user,
      • -v enables verbose mode,
      • -d dvdrental specifies the database to connect to, and
    • < dvdrental.tar says which file’s data the outside shell should pass into the container to pg_restore.

    Log in to the dvdrental database:

    docker exec -it basic-postgres psql -U postgres -d dvdrental
    

    Where:

    • exec -it executes something interactive with a terminal,
    • basic-postgres specifies which container, and
    • psql -U postgres -d dvdrental is the psql command with the postgres user and the dvdrental database specified.

    List all tables by describing the tables in the dvdrental database:

    \dt
    

    List the first 10 actors from the actor table:

    select * from actor limit 10;
    

    Quit from psql:

    \q
    

    Gracefully stop the Docker container:

    docker stop basic-postgres
    

    If you don’t need it anymore you can delete the container:

    docker rm basic-postgres
    

    Sample output:

    Screen shot of terminal showing import of dvdrental sample database into Postgres

    And later:

    Screen shot of terminal showing psql investigation of dvdrental sample database

    Run the Postgres image as a “real world” Postgres container

    The basic Postgres container is only good for learning or testing. It requires more features to be able to serve as a working database for a real world application. We will add two more features to make it usable:

    • Persistent storage: By default the container filesystem is ephemeral. What this means is whenever we restart a terminated or deleted container, it will get an all-new, fresh filesystem and all previous data will be wiped clean. This is not suitable for database systems. To be a working database, we need to add a persistent filesystem to the container.
    • Port forwarding from host to container: The container network is isolated, making it inaccessible from the outside world. A database is no use if it can’t be accessed. To make it accessible we need to forward a host operating system port to the container port.

    Let’s start building a “real world” Postgres container. Firstly we need to create the persistent storage. In Docker this is known as a volume.

    Execute the following command to create a volume named pg-data:

    docker volume create pg-data
    

    List all Docker volumes and ensure that pg-data was created:

    docker volume ls | grep pg-data
    

    Run a Postgres container with persistent storage and port forwarding:

    docker run --name real-postgres \
    -e POSTGRES_PASSWORD=mysecretpassword \
    -v pg-data:/var/lib/postgresql/data \
    -p 5432:5432 \
    -d \
    postgres
    

    Where:

    • --name real-postgres sets the container name,
    • -e POSTGRES_PASSWORD=mysecretpassword sets the password of the default user postgres,
    • -v pg-data:/var/lib/postgresql/data mounts the pg-data volume as the Postgres data directory,
    • -p 5432:5432 forwards port 5432 of the host operating system to port 5432 of the container,
    • -d runs the container in detached mode or, in other words, in the background, and
    • postgres uses the postgres image. By default it will get the image from https://hub.docker.com.

    Execute docker ps to check on running containers on Docker. Take note that the real-postgres container has port forwarding information.

    Now we are going to try to access the Postgres container with psql from the host operating system.

    psql -h localhost -p 5432 -U postgres
    

    Sample output:

    Screen shot of terminal showing access to Postgres in Docker with persistent storage

    Cleaning up the running container

    Stop the container:

    docker stop real-postgres
    

    Delete the container:

    docker rm real-postgres
    

    Delete the volume:

    docker volume rm pg-data
    

    Managing Postgres container with Docker Compose

    Managing a container with a long list of arguments to Docker is tedious and error-prone. Instead of the Docker CLI commands, we can use Docker Compose, a tool for managing containers from a YAML manifest file.

    Create the following file named docker-compose.yaml:

    version: '3.1'
    services:
      db:
        container_name: real-postgres-2
        image: postgres
        restart: always
        ports:
          - "5432:5432"
        environment:
          POSTGRES_PASSWORD: mysecretpassword
        volumes:
          - pg-data-2:/var/lib/postgresql/data
    volumes:
      pg-data-2:
        external: false
    

    To start the Postgres container with Docker Compose, execute the following command in the same location as docker-compose.yaml:

    docker-compose up -d
    

    Where -d runs the container in detached mode.

    Execute docker ps to check on running Docker containers. Take note that real-postgres-2 was created by Docker Compose.
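
    If the container does not come up as expected, the service logs are the first place to look. A hedged example, using the db service name from the docker-compose.yaml above:

    # show the logs of the db service defined in docker-compose.yaml
    docker-compose logs db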

    To stop the Postgres container with Docker Compose, execute the following command in the same location as docker-compose.yaml:

    docker-compose down
    

    Sample output:

    Screen shot of terminal showing Postgres container deployed by Docker Compose

    Conclusion

    That’s all, folks. We have successfully deployed PostgreSQL on Docker.

    Now we are able to reap the benefits of container technology for PostgreSQL, including portability, agility, and better management.


    docker postgres containers

    VisionPort at University of Tokyo, New York office: An Exhibition for Peace on August 6th and 7th


    By Samuel Stern
    July 26, 2022

    3D visualization of Hiroshima with photos pinned throughout
    Ground Zero, Hiroshima, Japan – August 6th, 1945. Visualized by the lab of Professor Hidenori Watanave.

    Technology and education go hand in hand, and the VisionPort platform is being used every day to make that connection.

    We are extremely honored to be able to contribute to the first exhibition at the University of Tokyo’s New York office, “Convergence of Peace Activities: Connecting and Integrating by Technologies”.

    It is said that those who do not learn from history are condemned to repeat it, and in that vein, the exhibition, drawing from the work of Professor Hidenori Watanave, will be using the VisionPort platform to educate viewers on the realities of the bombings of Hiroshima and Nagasaki, on the date of the 77th anniversary of the first nuclear weapon used in war.

    Several women and men in a presentation on the VisionPort

    The team has been collecting and colorizing photographic material from the aftermath of the bombings for over 10 years. The exhibition will combine that work with interviews and writings from survivors on a GIS canvas to allow attendees to see what it looked like and to hear from those who were there.

    The lab will also be presenting the work they have been doing covering the ongoing conflict in Ukraine. Day by day they collect the latest images from the war, identify the locations of the events and use geospatial data to map and present it in an interactive, 3D environment.

    A 3D rendering of bombed buildings in Ukraine
    Invasion of Ukraine, provided by the labs of Professors Hidenori Watanave and Taichi Furuhashi.

    The exhibition will serve to show us where we have been and where we are now, in hopes of being a “convergence,” a place to connect and use all of our available information and technologies so that we may begin a new era of understanding and in turn, peace.

    The exhibition will also feature work done by Co-Op Peace Map, Mainichi Newspaper and the University of Kyoto.

    The educational work that the University of Tokyo is doing with the VisionPort platform is inspiring and we look forward to being there to see it in memoriam of that fateful day.

    Register here and join us for this viewing on August 6th and 7th at the University of Tokyo’s New York office located at 145 W. 57th St., 21st Floor, New York, NY 10019.

    For more information about VisionPort, email sales@visionport.com or visit visionport.com.

    Images and photography provided by the University of Tokyo, the department of Interfaculty Initiative in Information Studies, the lab of Professor Hidenori Watanave, and the lab of Professor Taichi Furuhashi.


    visionport event education

    Windows SSH key agent forwarding confirmation


    By Ron Phipps
    July 26, 2022

    A sunset with silhouetted construction equipment

    At End Point we use SSH keys extensively, primarily for authentication with servers for remote shell access as well as with Git services including GitHub, GitLab, and Bitbucket. Most of the time the servers we are attempting to reach are blocked from direct access and require that we go through an intermediate “jump server”.

    Because of this need to jump from server to server we utilize SSH key forwarding that allows us to use the private key stored on our local system to authenticate with each of the servers in the chain. When we reach our destination server we can use the same private key to authenticate with the Git hosting service and perform git commands without having to enter a password.

    One of the best practices when using SSH key forwarding is to use an option called key confirmation. When key confirmation is turned on, each time a request is made to use the private key that is loaded in the SSH agent, a prompt will appear on your local machine to approve the use of the key. This reduces the ability of an attacker to use your private key without approval.

    For the longest time SSH key confirmation was not available on Windows. One of the most popular SSH clients on Windows is PuTTY and its agent (pageant) does not support the option. Many other SSH key compatible Windows applications use PuTTY’s agent for SSH key caching and as a result these applications also lack the ability for key confirmation.

    KeePass and KeeAgent

    To use key confirmation on Windows we need to utilize two programs, KeePass and KeeAgent. KeePass is an open source password manager and KeeAgent is a plugin for KeePass that provides a PuTTY-compatible SSH key agent. It also appears that KeeAgent has support for integration with the Windows shell, Cygwin/MSYS, and Git Bash.

    The instructions below will assume that you already have an SSH key in PuTTY that you’d like to use with key confirmation and that you have previously used PuTTY with key forwarding.

    You should start by installing KeePass. Then install KeeAgent.

    Once both are installed create a new KeePass database, or use your existing database if you are already a KeePass user.

    Add a new entry to the database and name it SSH key. Enter your SSH key password into the Password and Repeat fields.

    KeePass’s Add Entry screen

    Then click on the KeeAgent tab and check ‘Allow KeeAgent to use this entry’.

    In the Private Key section select External File and point it at your PuTTY private key. If you have entered the correct password on the first tab you should see your key comment and fingerprint listed. Then press OK.

    The KeeAgent tab in Add Entry

    Verify that confirmation is enabled by clicking on Tools -> Options and selecting the KeeAgent tab.

    A checked box reading “Always require user confirmation when a client program requests to use a key”

    Press OK.

    Then go to File -> Save. Close KeePass and re-open it. You’ll be asked to enter your KeePass password and then you can verify that the agent is loaded with your key by clicking Tools -> KeeAgent.

    KeeAgent in Agent Mode

    Now when we use PuTTY or another PuTTY agent-compatible program we’ll be presented with a confirmation dialog. Clicking Yes will allow the key to be used.

    KeeAgent’s confirmation dialog

    Notice that the default selected option is No. This is different than the standard openssh-askpass on Linux, which defaults to Yes. If you’re typing along in a fury and the confirmation window pops up and you hit Enter or space, it will decline the use of your SSH key, rather than accepting it.

    If you have enabled SSH key forwarding in the PuTTY options for the connection you’ll be using you can then SSH to other servers using the same key and each time you do so the confirmation will be presented to you.

    If you close KeePass the key agent will be closed and unavailable for future connections. Re-opening KeePass will allow the key to be used again.

    If you use Windows and SSH agent forwarding but have never tried agent confirmation to protect against malicious use of your secret key, give KeePass and KeeAgent a try!


    windows ssh

    CSTE Conference EpiTrax retrospective


    By Steve Yoman
    June 29, 2022

    Banner photo of 4 End Pointers in our conference booth

    Last week we were in Louisville, Kentucky for the CSTE Conference. End Point staffed a conference booth to represent the EpiTrax public health surveillance system to a wonderful group of public health experts.

    You can read some background about the conference and CSTE in our earlier blog post announcing our plans.

    Photo of attendees at a CSTE conference session

    We really enjoyed meeting new friends in person after two years of canceled events due to the pandemic. We spoke with staff from health departments and disease surveillance teams from several state and local jurisdictions, as well as with experts from the CDC and other software and service vendors.

    One of the highlights was going around to meet other people staffing booths at the conference. It charged us up to see and hear about all of the interesting and innovative things going on in the public health space at a time when there is so much that needs to be done. We were particularly struck by the efforts being made in onboarding and distributing ELRs and eCRs, areas where the Electronic Message Staging Area (EMSA, which we deploy and support) can complement and enrich those activities.

    Photo of CSTE conference hall

    The open-source disease surveillance and reporting software EMSA and EpiTrax both enjoyed well-deserved attention as we demonstrated them to numerous groups seeking better solutions to serve people in their jurisdictions. People were very interested in the functionality EpiTrax has for case management, encounters, and NMI reporting.

    Of course there is a lot more we could say about EpiTrax and EMSA. So, if you didn’t find us at the conference or if you are interested in what EpiTrax and EMSA can do, contact us here online. We are happy to give you a demo!


    conference epitrax emsa