<h2>Developing Rails Apps in a Dev Container with VS Code</h2>
<p>Kevin Campusano · January 13, 2023 · <a href="https://www.endpointdev.com/blog/2023/01/developing-rails-apps-in-a-dev-container-with-vs-code/">endpointdev.com</a></p>
<p><img src="/blog/2023/01/developing-rails-apps-in-a-dev-container-with-vs-code/icy-cave.webp" alt="Icicles hang down from the opening of a cave, amid water falling into a pool lined with thick ice. Light from the cave’s opening illuminates the bottom corner of the image, opposite the icicles."></p>
<!-- Photo by Seth Jensen -->
<p>One of the great gifts from the advent of <a href="https://www.docker.com/">Docker</a> and <a href="https://www.docker.com/resources/what-container/">containers</a> is the ability to get a good development environment up and running very quickly. Regardless of programming language or tech stack, there is probably an image on <a href="https://hub.docker.com/">Docker Hub</a> or elsewhere that you can use to set up a container for development, either verbatim or as a basis for more complex setups.</p>
<p>Moreover, even if your development environment is complex, once you have containerized it, it’s easy to replicate for new team members.</p>
<p><a href="https://code.visualstudio.com/">VS Code</a>, one of the most popular editors/IDEs today, with help from the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers">Dev Containers</a> extension, makes the task of setting up a container for software development easier than ever.</p>
<p>To demonstrate that, we’re going to walk through setting up such an environment for developing Ruby on Rails applications.</p>
<h3 id="setting-up-a-ruby-dev-container">Setting up a Ruby Dev Container</h3>
<p>As I alluded to before, all we need is Docker, VS Code, and the extension. Once you have those <a href="https://www.docker.com/get-started/">installed</a>, we can easily create a new Docker container ready for Ruby on Rails development and have VS Code connect to it, resulting in a fully featured development environment.</p>
<h3 id="creating-the-configuration-file">Creating the configuration file</h3>
<p>To get started, create a new directory and open it in VS Code with something like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">$ mkdir ruby-dev-container
$ <span style="color:#038">cd</span> ruby-dev-container
$ code .
</code></pre></div><p>Now, in VS Code, bring up the <a href="https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette">Command Palette</a> with <code>Ctrl + Shift + P</code>. In it, run the “Dev Containers: Add Dev Container Configuration Files…” command. In the resulting menu, select “Show All Definitions…” option and look for Ruby.</p>
<p><img src="/blog/2023/01/developing-rails-apps-in-a-dev-container-with-vs-code/add-dev-container-config-files-1.png" alt="A pop-up reading “Add Dev Container Configuration Files”. The cursor is in a search box reading “Select a container configuration template”. Selected is the option reading “Show All Definitions…”"></p>
<p><img src="/blog/2023/01/developing-rails-apps-in-a-dev-container-with-vs-code/add-dev-container-config-files-2.png" alt="The Ruby option, selected in the Add Dev Container Configuration Files pop-up"></p>
<p>Next, the menu will ask you to select a version of Ruby and whether you want to include any additional features in the resulting dev container. At the time of writing, 3.1 was the latest Ruby version offered, so I selected that. We don’t need any additional features, so I selected none.</p>
<blockquote>
<p>There are many options here for different languages and tech stacks. For example, there’s Ruby, which we selected, but there are also ones that come out of the box with <a href="https://sinatrarb.com/">Sinatra</a>, Rails, and even <a href="https://www.postgresql.org/">Postgres</a>. Feel free to peruse! You can learn more about Dev Containers on <a href="https://containers.dev/">the official site</a> and on <a href="https://github.com/devcontainers">GitHub</a>.</p>
</blockquote>
<p>Anyway, after going through that menu and clicking the OK button, the extension will have produced a <code>.devcontainer/devcontainer.json</code> file with these contents:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
  <span style="color:#b06;font-weight:bold">"name"</span>: <span style="color:#d20;background-color:#fff0f0">"Ruby"</span>,
  <span style="color:#888">// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile</span>
  <span style="color:#b06;font-weight:bold">"image"</span>: <span style="color:#d20;background-color:#fff0f0">"mcr.microsoft.com/devcontainers/ruby:0-3.1"</span>
  <span style="color:#888">// ...</span>
}
</code></pre></div><h3 id="customizing-the-dev-container">Customizing the Dev Container</h3>
<p>As you can see, this is a JSON file that specifies what the dev container will look like. The most important part is the <code>image</code> field which defines the image that the container will be running. In this case, it’s the “Ruby 3.1” image provided by the <a href="https://mcr.microsoft.com/">Microsoft Artifact Registry</a>.</p>
<p>I like to add a few lines to this file to further configure the container. It ends up looking like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
  <span style="color:#b06;font-weight:bold">"name"</span>: <span style="color:#d20;background-color:#fff0f0">"Ruby"</span>,
  <span style="color:#888">// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile</span>
  <span style="color:#b06;font-weight:bold">"image"</span>: <span style="color:#d20;background-color:#fff0f0">"mcr.microsoft.com/devcontainers/ruby:0-3.1"</span>,
  <span style="color:#888">// Use 'forwardPorts' to make a list of ports inside the container available locally.</span>
  <span style="color:#b06;font-weight:bold">"forwardPorts"</span>: [<span style="color:#00d;font-weight:bold">3000</span>],
  <span style="color:#888">// Configure tool-specific properties.</span>
  <span style="color:#b06;font-weight:bold">"customizations"</span>: {
    <span style="color:#888">// Configure properties specific to VS Code.</span>
    <span style="color:#b06;font-weight:bold">"vscode"</span>: {
      <span style="color:#888">// Add the IDs of extensions you want installed when the container is created.</span>
      <span style="color:#b06;font-weight:bold">"extensions"</span>: [
        <span style="color:#d20;background-color:#fff0f0">"rebornix.Ruby"</span>
      ]
    }
  }
}
</code></pre></div><p><code>"forwardPorts": [3000]</code> makes the container’s port 3000 reachable from the local host machine’s browser. That means that whenever we do <code>bin/rails server</code> from inside the container, we will be able to navigate to the app from our browser.</p>
<p>The <code>customizations</code> section installs the <a href="https://marketplace.visualstudio.com/items?itemName=rebornix.Ruby">Ruby VS Code extension</a> in the container. This is optional, but it makes the development experience a little bit more fun. You can add any extensions you’d like here.</p>
<h3 id="running-the-container">Running the Container</h3>
<p>Now, to actually run a new container using the image and configs specified in the <code>.devcontainer/devcontainer.json</code> file and connect VS Code to it, bring up the Command Palette again with <code>Ctrl + Shift + P</code> and run the “Dev Containers: Reopen in Container” command.</p>
<p>With that, VS Code will invoke Docker to download the image and create a new container based on it. It will also run and configure the container based on what’s specified in <code>.devcontainer/devcontainer.json</code>, and finally, connect to it.</p>
<blockquote>
<p>When that’s done, you should be able to see the container running with <code>docker ps</code>.</p>
</blockquote>
<p>That will take a while, but once it’s done, you’ll be able to open VS Code’s integrated terminal (with <code>Ctrl + `</code>), which will bring up a bash session in the container. From here, we can finally run all our usual Ruby and Rails commands to set up our project.</p>
<blockquote>
<p>Feel free to explore the container’s environment. <code>ruby -v</code> for example will show that Ruby is ready to go in there.</p>
</blockquote>
<h3 id="creating-the-new-project">Creating the new project</h3>
<p>First, install the rails gem:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">$ gem install rails
</code></pre></div><p>Then, create the new project:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">$ rails new . --minimal
</code></pre></div><p>And finally, run the app:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">$ bin/rails server
</code></pre></div><p>Open a browser and navigate to <a href="http://127.0.0.1:3000/">http://127.0.0.1:3000/</a> to see the classic Rails hello world screen:</p>
<p><img src="/blog/2023/01/developing-rails-apps-in-a-dev-container-with-vs-code/hello-rails.png" alt="Hello Rails. A browser navigated to http://127.0.0.1:3000/, with the webpage displaying the Rails logo, and underneath reading “Rails version: 7.0.4”; “Ruby version: ruby 3.1.3p185 (2022-11-24 revision 1a6b16756e)[x86_64-linux]"></p>
<p>And that’s all! I use Ruby on Rails on a daily basis, so that’s what I’ve chosen to demonstrate here. However, Dev Containers support many more programming languages and technologies, and all of them share a very similar setup process.</p>
<p>And even if there isn’t an image optimized for development readily available in the Microsoft Artifact Registry, you can always author your own custom Dockerfile and use that for whatever use case you may have.</p>
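<p>For instance, a <code>devcontainer.json</code> can point to a Dockerfile instead of a prebuilt image via the <code>build</code> property. Here’s a minimal sketch, assuming a <code>Dockerfile</code> sitting next to it in the <code>.devcontainer</code> directory (the name and forwarded port are illustrative):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
  "name": "Ruby (custom)",
  // Build the dev container image from our own Dockerfile.
  "build": {
    "dockerfile": "Dockerfile"
  },
  "forwardPorts": [3000]
}
</code></pre></div>
<p>With that in place, “Dev Containers: Reopen in Container” builds the image from the Dockerfile before starting the container.</p>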
<h2>Kubernetes Volume definition defaults to EmptyDir type with wrong capitalization of hostPath</h2>
<p>Ron Phipps · October 26, 2022 · <a href="https://www.endpointdev.com/blog/2022/10/kubernetes-volume-definition-defaults-emptydir-type-wrong-capitalization-hostpath/">endpointdev.com</a></p>
<p><img src="/blog/2022/10/kubernetes-volume-definition-defaults-emptydir-type-wrong-capitalization-hostpath/20220411_112819.webp" alt="Cow with light red-brown fur and an inventory ear tag standing in a dry field with scattered desert grass and brush, in front of a fence">
Photo by Garrett Skinner</p>
<p>Kubernetes Host Path volume mounts allow accessing a host system directory inside of a pod, which is helpful when doing development, for example to access the frequently-changing source code of an application being actively developed. This allows a developer to edit the code with their normal set of tools without having to jump through a bunch of hoops to get the code into a pod.</p>
<p>We use this setup at End Point in development where the host system is running MicroK8s and there is a single pod for an application on a single node. In most other cases, host path volume mounts are not recommended. But here it means the developer can edit code on the host machine and the changes are immediately reflected within the pod without having to deploy a new image. If the application server running within the pod is also running in development mode with dynamic reloading, the changes can be viewed with a refresh of the browser accessing the application.</p>
<p>While working on a test environment to run EpiTrax within Kubernetes, the need arose to set up a Host Path volume mount so that the source code on the host machine would be available within the pod. I used this simple Deployment definition:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>apps/v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Deployment<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>epitrax<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">namespace</span>:<span style="color:#bbb"> </span>app<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>epitrax<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">securityContext</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">fsGroup</span>:<span style="color:#bbb"> </span>$USERID<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">containers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>shell<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>epitrax/epitrax<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"sh"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"-c"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"tail -f /dev/null"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">securityContext</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">runAsNonRoot</span>:<span style="color:#bbb"> </span><span style="color:#080;font-weight:bold">true</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">runAsUser</span>:<span style="color:#bbb"> </span>$USERID<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">runAsGroup</span>:<span style="color:#bbb"> </span>$USERID<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumeMounts</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">mountPath</span>:<span style="color:#bbb"> </span>/opt/jboss/epitrax<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>epitrax-source-directory<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>epitrax-source-directory<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">hostpath</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>Directory<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">path</span>:<span style="color:#bbb"> </span>$PWD/projects/epitrax<span style="color:#bbb">
</span></code></pre></div><p>After applying this deployment and shelling into the pod I found that <code>/opt/jboss/epitrax</code> was an empty directory and not a host path volume. Describing the pod showed the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">Volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">epitrax-source-directory</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">Type</span>:<span style="color:#bbb"> </span>EmptyDir (a temporary directory that shares a pod's lifetime)<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">Medium</span>:<span style="color:#bbb">
</span><span style="color:#bbb">  </span><span style="color:#b06;font-weight:bold">SizeLimit</span>:<span style="color:#bbb">  </span>&lt;unset&gt;<span style="color:#bbb">
</span></code></pre></div><p>I tried changing many different things, viewed the various logs, and searched the Internet for reports of the same problem, but could not figure out what was wrong.</p>
<p>Eventually I found <a href="https://github.com/kubernetes/kubernetes/issues/46950">a single GitHub issue on the Kubernetes project</a>, which did not explain the trouble but did explain that the volume type always defaults to EmptyDir to match Docker’s behavior.</p>
<p>That’s when I realized the problem: I had used <code>hostpath</code> (all lower case) instead of <code>hostPath</code>. Kubernetes could not find a valid volume type of <code>hostpath</code> so it defaulted to <code>EmptyDir</code>.</p>
<p>Updating the volumes section to the following resolved the issue:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># snip</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>epitrax-source-directory<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">hostPath</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>Directory<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">path</span>:<span style="color:#bbb"> </span>$PWD/projects/epitrax<span style="color:#bbb">
</span></code></pre></div><p>Be aware that Kubernetes will not warn or error out if there is an invalid volume type referenced in the volumes section—it will quietly default to EmptyDir!</p>
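<p>One way to catch this kind of typo before applying a manifest is <code>kubectl explain</code>, which prints the valid fields, with their exact capitalization, for a given resource path. A misspelled field like <code>hostpath</code> simply won’t appear in its output:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl explain deployment.spec.template.spec.volumes.hostPath
</code></pre></div>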
<h2>Knocking on Kubernetes’s Door (Ingress)</h2>
<p>Jeffry Johar · October 20, 2022 · <a href="https://www.endpointdev.com/blog/2022/10/knocking-on-kubernetes-door/">endpointdev.com</a></p>
<p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-alhambra.webp" alt="The door of Alhambra Palace, Spain. A still pool reflects grand doors, flanked on each side by arches and hedges."><br>
Photo by Alberto Capparelli</p>
<p>According to the Merriam-Webster dictionary, the meaning of ingress is the act of entering or entrance. In the context of Kubernetes, Ingress is a resource that enables clients or users to access the services which reside in a Kubernetes cluster. Thus Ingress is the entrance to a Kubernetes cluster! Let’s get to know more about it and test it out.</p>
<h3 id="prerequisites">Prerequisites</h3>
<p>We are going to deploy Nginx Ingress at Kubernetes on Docker Desktop. Thus the following are the requirements:</p>
<ul>
<li>Docker Desktop with Kubernetes enabled. If you are not sure how to do this, please <a href="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/">refer to my previous blog on Docker Desktop and Kubernetes</a>.</li>
<li>Internet access to download the required YAML and Docker Images.</li>
<li><code>git</code> command to clone a Git repository.</li>
<li>A decent editor such as Vim or Notepad++ to view and edit the YAML.</li>
</ul>
<h3 id="ingress-and-friends">Ingress and friends</h3>
<p>To understand why we need Ingress, we need to know 2 other resources and their shortcomings in exposing Kubernetes services: NodePort and LoadBalancer. Then we will go over the details of Ingress.</p>
<h4 id="nodeport">NodePort</h4>
<p>NodePort is a type of Kubernetes service which exposes the Kubernetes application at high-numbered ports. By default the range is from 30000–32767. Each of the worker nodes proxies the port. Thus, access to the service is by using the Kubernetes worker node IPs and the ports. In the following example the NodePort service is exposed at port 30000.</p>
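<p>A NodePort service is declared like any other service, with <code>type: NodePort</code> and, optionally, an explicit port from the 30000–32767 range. A minimal sketch, where the <code>web</code> name, selector label, and ports are illustrative:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml">apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port on the pods
      nodePort: 30000   # port proxied on every worker node
</code></pre></div>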
<!-- All diagrams by Jeffry Johar using draw.io -->
<p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-nodeport.webp" alt="A diagram of a Kubernetes Cluster. Within are 3 boxes, labeled as worker nodes, with the IP addresses: 192.168.1.1, 192.168.1.2, and 192.168.1.3. Each box contains several purple boxes labeled “Service Type: NodePort at port 30001”. They point to three blue boxes labeled “Pods”. Each worker node box points to a URL corresponding to its IP address: “http://192.168.1.1:30000” and so forth, with port 30000 for each."></p>
<p>To have a single universal access and a secured SSL connection, we need some external load balancer in front of the Kubernetes cluster to do the SSL termination and to load balance the exposed IPs and ports from the worker nodes. This is illustrated in the following diagram:</p>
<p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-nodeport-lb.webp" alt="The same diagram as above, but this time all three IP address have arrows pointing bidirectionally to a single circle, labeled Load Balancer + SSL Termination. This circle points to a single URL, “https://someurl.com”."></p>
<h4 id="loadbalancer">LoadBalancer</h4>
<p>LoadBalancer is another type of Kubernetes service which exposes Kubernetes services. Generally it is an OSI layer 4 load balancer which exposes a static IP address. The implementation of LoadBalancer depends on the cloud or infrastructure provider, so its capabilities vary.</p>
<p>In the following example a LoadBalancer is exposed with the static public IP address 13.215.159.65 provided by a cloud provider. The IP could also be registered in DNS to allow resolution by a host name.</p>
<p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-loadbalancer.webp" alt="A similar Kubernetes Cluster box, again with three boxes containing three blue Pods each, but this time every pod points to the same single box, encompassed only by the outer Kubernetes Cluster box. It is labeled “Service Type: LoadBalancer; Static IP: 13.215.159.65”. The outer box points to a URL: “http://13.215.159.65”, which in turn points to “http://someurl.com”."></p>
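<p>Declaring a LoadBalancer service looks almost identical to a NodePort one; only the <code>type</code> changes, and the provider assigns the external IP. A minimal sketch with illustrative names and ports:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml">apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer  # the cloud provider provisions the external IP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
</code></pre></div>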
<h4 id="ingress">Ingress</h4>
<p>Ingress is a Kubernetes resource that serves as an OSI layer 7 load balancer. Unlike NodePort and LoadBalancer, Ingress is not a Kubernetes service. It is another Kubernetes resource that sits in front of a Kubernetes service. It enables routing, SSL termination, and virtual hosting. This is like a full-fledged load balancer inside the Kubernetes cluster!</p>
<p>The following diagram shows Ingress routing the <code>someurl.com/web/</code> and <code>someurl.com/app/</code> endpoints to the intended applications in the Kubernetes cluster, while also terminating SSL and doing virtual hosting. Please note that as of this writing, Ingress only supports the HTTP and HTTPS protocols.</p>
<p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-ingress-controller.webp" alt="An outer Kubernetes Cluster box contains three boxes again. Each box again has three Pods, but they are split into three colors (green, yellow, and red), distributed randomly through the three boxes. The three Pods of each color point to a matching box, in the larger Kubernetes Cluster box, reading “Service Type: ClusterIP; Name: X”, where X is Web, App, and Blog. These three boxes point to another box labeled “Ingress: SSL Termination; Routing; Virtual Hosting. This box points to a URL, “http://13.215.159.65”, which in turn points to a cloud icon with three URLS: “https://someurl.com/web/”, “https://someurl.com/app/”, “https://someurl.com”"></p>
<p>In order to get Ingress in a Kubernetes cluster we need to deploy 2 main things:</p>
<ul>
<li><strong>Ingress Controller</strong> is the engine of the Ingress. It is responsible for providing the Ingress capability to Kubernetes. The Ingress Controller is a separate module from Kubernetes core components. There are multiple Ingress Controllers available to use such as Nginx, Istio, NSX, and many more. See a complete list at the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/">kubernetes.io page on Ingress controllers</a>.</li>
<li><strong>Ingress Resource</strong> is the configuration that manages the Ingress. It is made by applying the Ingress Resource YAML. This is a typical YAML file for Kubernetes resources which requires apiVersion, kind, metadata and spec. Go to <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">kubernetes.io documentation on Ingress</a> to learn more.</li>
</ul>
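<p>To give a taste of what an Ingress Resource looks like before we deploy one, here is a minimal sketch that routes two paths to two ClusterIP services; the host, service names, and ports are illustrative:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx  # handled by the Nginx Ingress Controller
  rules:
    - host: someurl.com
      http:
        paths:
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
</code></pre></div>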
<h3 id="how-to-deploy-and-use-ingress">How to deploy and use Ingress</h3>
<p>Now we are going to deploy the Nginx Ingress on Kubernetes in Docker Desktop. We will configure it to access an Nginx web server, a Tomcat web application server, and our old beloved Apache web server.</p>
<p>Start your Docker Desktop with Kubernetes enabled. Right-click the Docker Desktop icon at the top right area of your screen (near the left in this cropped screenshot) to see that both Docker and Kubernetes are running:</p>
<p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-docker-desktop.webp" alt="Docker Desktop running on MacOS, with macOS’s top bar Docker menu open. There are two green dots next to lines saying “Docker Desktop is running” and “Kubernetes is running”."></p>
<p>Clone my repository to get the required deployments YAML files:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">git clone https://github.com/aburayyanjeffry/nginx-ingress.git
</code></pre></div><p>Let’s go through the downloaded files.</p>
<ul>
<li><code>01-ingress-controller.yaml</code>: This is the main deployment YAML for the Ingress Controller. It creates a new namespace named “ingress-nginx”, then creates the required service account, role, rolebinding, clusterrole, clusterrolebinding, service, configmap, and deployments in that namespace. This YAML comes from the official ingress-nginx documentation; to learn more about this deployment see <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start">the docs</a>.</li>
<li><code>02-nginx-webserver.yaml</code>, <code>03-tomcat-webappserver.yaml</code>, <code>04-httpd-webserver.yaml</code>: These are the deployment YAML files for the sample applications. They are the typical Kubernetes configs which contain the services and deployments.</li>
<li><code>05-ingress-resouce.yaml</code>: This is the configuration of the Ingress. It uses the test domain <code>*.localdev.me</code>, whose names resolve to 127.0.0.1, so it can be used for testing without the need to edit the <code>/etc/hosts</code> file. The Ingress is configured to route as shown in the following diagram:</li>
</ul>
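<p>For reference, the Ingress resource in <code>05-ingress-resouce.yaml</code> looks roughly like the following sketch. The service names and backend ports here are illustrative assumptions based on the diagram and the sample deployments; the repository’s file is authoritative:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
spec:
  ingressClassName: nginx
  rules:
  - host: demo.localdev.me        # per the diagram
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mynginx         # assumed service name
            port:
              number: 80
      - path: /tomcat
        pathType: Prefix
        backend:
          service:
            name: mytomcat        # assumed service name
            port:
              number: 8080
  - host: httpd.localdev.me       # per the diagram
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myhttpd         # assumed service name
            port:
              number: 80
</code></pre></div>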
<p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-ingress.webp" alt="An icon of several people points to three URLS: “http://demo.localdev.me”, “http://demo.localdev.me/tomcat/”, and “http://httpd.localdev.me”. These three point through an Nginx Ingress box to three logos: Nginx, Tomcat, and Apache, respectively. The Nginx Ingress box and the logos all lie within a larger Kubernetes Cluster box."></p>
<p>Deploy the Ingress controller by executing the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl apply -f 01-ingress-controller.yaml
</code></pre></div><p>Execute the following to check on the deployment. The pod must be running and the Deployment must be ready:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl get all -n ingress-nginx
</code></pre></div><p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-kubectl-ns-ing.webp" alt="The results of the above command. Highlighted is a line giving the following values: name: “pod/ingress-nginx-controller-6bf7bc7f94-gfgdw”; ready: “1/1”; status: “Running”; restarts: “0”; age: “21s”. Two sections down, another line is highlighted, with the values: name: “deployment.apps/ingress-nginx-controller”; ready: “1/1”; up-to-date: “1”; available: “1”; age: “21s”"></p>
<p>Deploy the sample applications by executing the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl apply -f 02-nginx-webserver.yaml
kubectl apply -f 03-tomcat-webappserver.yaml
kubectl apply -f 04-httpd-webserver.yaml
</code></pre></div><p>Execute the following to check on the deployments. All pods must be running and all deployments must be ready:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl get all
</code></pre></div><p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-kubectl-ns.webp" alt="The output of the above command. Highlighted are lines from a table with the following values for name: “pod/myhttpd-xxxxxx”, “pod/mynginx-xxxxxx”, and “pod/mytomcat-xxxxxx”. They share values for ready, status, restarts, and age: “1/1”, “Running”, “0”, and “13s”, respectively. A later section is highlighted. The names are: “deployment.apps/myhttpd”, “deployment.apps/mynginx”, and “deployment.apps/mytomcat”. They share values for ready, up-to-date, available, and age: “1/1”, “1”, “1”, and “13s”, respectively."></p>
<p>Deploy the Ingress resource by executing the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl apply -f 05-ingress-resouce.yaml
</code></pre></div><p>Execute the following to check on the Ingress resources:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl get ing
</code></pre></div><p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-kubectl-ing.webp" alt="The output of the above command. The table only includes one line, with the following values: name: “myingress”; class: “nginx”; hosts: “demo.localdev.me,httpd.localdev.me”, address: blank; ports: “80”; age: “3s”"></p>
<p>Access the following URLs in your web browser. All URLs should bring you to the intended services:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">http://demo.localdev.me
http://demo.localdev.me/tomcat/
http://httpd.localdev.me
</code></pre></div><p><img src="/blog/2022/10/knocking-on-kubernetes-door/blog06-apps.webp" alt="Three browser windows displaying the above URLs. They display welcome pages for Nginx, Tomcat, and Apache, respectively."></p>
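<p>The same routes can also be checked from the terminal with curl. This assumes the Ingress controller’s service is reachable on port 80 of localhost, as in the Docker Desktop setup above:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">curl http://demo.localdev.me
curl http://demo.localdev.me/tomcat/
curl http://httpd.localdev.me
</code></pre></div>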
<h3 id="conclusion">Conclusion</h3>
<p>That’s all, folks. We have gone over the what, why, and how of Kubernetes Ingress.</p>
<p>It is a powerful OSI layer 7 load balancer ready to be used with a Kubernetes cluster. There are free and open source solutions as well as paid ones, <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/">all listed here</a>.</p>
Kubernetes From The Ground Up With AWS EC2https://www.endpointdev.com/blog/2022/10/kubernetes-from-the-ground-up-with-aws-ec2/2022-10-06T00:00:00+00:00Jeffry Johar
<p><img src="/blog/2022/10/kubernetes-from-the-ground-up-with-aws-ec2/ship.webp" alt="A docked fishing ship faces the camera. A man stands on a dinghy next to it."><br>
Photo by Darry Lin <!-- https://www.pexels.com/@darrylin/ --></p>
<p>One way to learn Kubernetes infrastructure is to build it from scratch. This way of learning was introduced by <a href="https://twitter.com/kelseyhightower">Kelsey Hightower</a>, one of the best-known figures in the Kubernetes community. The lesson is known as <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way">“Kubernetes The Hard Way”</a>.</p>
<p>For this blog entry I would like to take a less demanding approach than Kubernetes The Hard Way, while still being educational. I will highlight only the major steps in creating a Kubernetes cluster, which is also what the <a href="https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/">CKA (Certified Kubernetes Administrator) exam</a> covers. Thus we are going to use the <code>kubeadm</code> tool to build the Kubernetes cluster.</p>
<p>The steps of creating a Kubernetes cluster are hidden from you if you are using Kubernetes as a service, such as AWS EKS or GCP GKE, or an enterprise suite of Kubernetes, such as Red Hat OpenShift or VMware Tanzu. All of these products let you use Kubernetes without having to worry about creating it.</p>
<h3 id="prerequisites">Prerequisites</h3>
<p>For this tutorial we will need the following from AWS:</p>
<ul>
<li>An active AWS account</li>
<li>EC2 instances with Amazon Linux 2 as the OS</li>
<li>AWS Keys for SSH to access control node and managed nodes</li>
<li>Security group which allows SSH and HTTP</li>
<li>A decent editor such as Vim or Notepad++ to create the inventory and the playbook</li>
</ul>
<h3 id="ec2-instances-provisioning">EC2 Instances provisioning</h3>
<p>Provisioning of the control plane, a.k.a. the master node:</p>
<ol>
<li>Go to AWS Console → EC2 → Launch Instances.</li>
<li>Set the Name tag to <code>Master</code>.</li>
<li>Select the Amazon Linux 2 AMI.</li>
<li>Select a key pair. If there are no available key pairs, please create one according to Amazon’s instructions.</li>
<li>Allow SSH and 6443 TCP ports.</li>
<li>Set Number of Instances to 1.</li>
<li>Click Launch Instance.</li>
</ol>
<p>Provisioning of the worker nodes, a.k.a. the minions:</p>
<ol>
<li>Go to AWS Console → EC2 → Launch Instances.</li>
<li>Set the Name tag to <code>Node</code>.</li>
<li>Select the Amazon Linux 2 AMI.</li>
<li>Select a key pair. If there are no available key pairs, please create one according to Amazon’s instructions.</li>
<li>Allow SSH TCP port.</li>
<li>Set Number of Instances to 2.</li>
<li>Click Launch Instance.</li>
</ol>
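<p>If you prefer the AWS CLI over the console, the two launches above can be sketched as follows. The AMI ID, instance type, key pair name, and security group IDs below are placeholders you must replace with your own values:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain"># Master node: 1 instance; its security group must allow TCP 22 (SSH) and 6443
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t3.small \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx \
  --count 1 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Master}]'

# Worker nodes: 2 instances; their security group must allow TCP 22 (SSH)
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t3.small \
  --key-name my-key-pair \
  --security-group-ids sg-yyyyyyyy \
  --count 2 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Node}]'
</code></pre></div>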
<h3 id="installing-the-container-runtime">Installing the container runtime</h3>
<p>All Kubernetes nodes require some sort of container runtime engine. For these nodes we are going to use Docker. Log in to all EC2 instances and execute the following:</p>
<ol>
<li>
<p>Install Docker.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo usermod -a -G docker ec2-user
sudo service docker start
sudo systemctl enable docker.service
sudo su - ec2-user
</code></pre></div></li>
<li>
<p>Verify the Docker installation.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker ps
</code></pre></div><p>We should get an empty Docker status:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
</code></pre></div></li>
<li>
<p>Install <code>tc</code> (traffic control), which is required by the kubeadm tool.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">sudo yum install tc -y
</code></pre></div></li>
</ol>
<h3 id="kubernetes-control-plane-setup">Kubernetes control plane setup</h3>
<ol>
<li>
<p>Add the Kubernetes repository. Log in to the node and paste the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">cat <<'EOF' | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
</code></pre></div></li>
<li>
<p>Install the Kubernetes binaries for Control Plane (<code>kubelet</code>, <code>kubeadm</code>, <code>kubectl</code>) and enable it.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
</code></pre></div></li>
<li>
<p>Initialize the control plane. The <code>--ignore-preflight-errors</code> switch is required because we are using a system with fewer than 2 CPUs and less than 2 GB of RAM. The <code>--pod-network-cidr</code> value is the default value for flannel (a networking add-on).</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">sudo kubeadm init --ignore-preflight-errors=NumCPU,Mem --pod-network-cidr=10.244.0.0/16
</code></pre></div><p>There are 3 important points in the output of this command: the note that the cluster initialization succeeded, the kubeconfig setup commands, and the worker node join command. The following is a sample output:</p>
<p><img src="/blog/2022/10/kubernetes-from-the-ground-up-with-aws-ec2/kubeadm01.webp" alt="The output of kubeadm, with the three important points highlighted. They read: 1: “Your Kubernetes control-plane has been initialized successfully!”, 2: “mkdir -p $HOME/.kube;sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config;sudo chown $(id -u):$(id -g) $HOME/.kube/config”, 3: “kubeadm join 172.XX.XX.XX:6643 –token XXXX –discover-token-ca-cert-hash XX”"></p>
</li>
<li>
<p>Create the configuration file for kubectl, a.k.a. kubeconfig, to connect to the Kubernetes cluster. These commands come from the previous output:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre></div></li>
<li>
<p>Install the pod network add-on. We are going to use <a href="https://github.com/flannel-io/flannel">flannel</a>.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
</code></pre></div></li>
</ol>
<h3 id="kubernetes-worker-nodes-setup">Kubernetes worker nodes setup</h3>
<p>Execute the following in all worker nodes:</p>
<ol>
<li>
<p>Add the Kubernetes repository. Log in to the node and paste the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">cat <<'EOF' | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm
EOF
</code></pre></div></li>
<li>
<p>Install the Kubernetes binaries for worker nodes (kubelet, kubeadm) and enable kubelet.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">sudo yum install -y kubelet kubeadm --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
</code></pre></div></li>
<li>
<p>Execute the join command with <code>sudo</code>. This command is from step #3 in the Kubernetes Control Plane Setup section.</p>
<p><img src="/blog/2022/10/kubernetes-from-the-ground-up-with-aws-ec2/kubeadm-join.webp" alt="A command and its results: sudo kubeadm join 172.XX.XX.XX:6443 –token XXXX –discovery-token-ca-cert-hash sha256:4XXX"></p>
</li>
</ol>
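<p>The join command has the general shape below. The address, token, and hash are placeholders: copy the exact string printed by <code>kubeadm init</code>. If the token has since expired, a fresh join command can be generated on the control plane with <code>kubeadm token create --print-join-command</code>.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">sudo kubeadm join 172.XX.XX.XX:6443 \
  --token &lt;token&gt; \
  --discovery-token-ca-cert-hash sha256:&lt;hash&gt;
</code></pre></div>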
<h3 id="hello-kubernetes-">Hello, Kubernetes :)</h3>
<p>We have successfully created a Kubernetes cluster. Let’s check on the cluster and try to deploy some sample applications.</p>
<ol>
<li>Get the latest status of the nodes. You might need to wait a minute or more for all nodes to become <code>Ready</code>.</li>
</ol>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl get nodes
</code></pre></div><p>Sample output:</p>
<p><img src="/blog/2022/10/kubernetes-from-the-ground-up-with-aws-ec2/kubeadm02.webp" alt="Results of the kubectl get nodes. 3 nodes appear, each with the Ready status."></p>
<ol start="2">
<li>Deploy a sample Nginx web server.</li>
</ol>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl create deployment mynginx --image=nginx
</code></pre></div><ol start="3">
<li>Scale the Deployment to have 6 replicas and check on where the pods run. The pods should be assigned randomly to the available worker nodes.</li>
</ol>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kubectl scale --replicas=6 deployment/mynginx
kubectl get pods -o wide
</code></pre></div><p>Sample output:</p>
<p><img src="/blog/2022/10/kubernetes-from-the-ground-up-with-aws-ec2/kubeadm03.webp" alt="Results of the kubectl get nodes and kubectl get pods. The two worker nodes are highlighted, pointing to their coinciding output from the command “kubectl get pods -o wide”."></p>
<h3 id="conclusion">Conclusion</h3>
<p>That’s all, folks. I hope this blog entry has shed some light on what it takes to create a Kubernetes cluster. Have a nice day :)</p>
Running PostgreSQL on Dockerhttps://www.endpointdev.com/blog/2022/07/running-postgresql-on-docker/2022-07-27T00:00:00+00:00Jeffry Johar
<p><img src="/blog/2022/07/running-postgresql-on-docker/elephant.webp" alt="An elephant in a jungle"></p>
<!-- Photo licensed under CC0 (public domain) from https://pxhere.com/en/photo/1366104 -->
<h3 id="introduction">Introduction</h3>
<p>PostgreSQL, or Postgres, is an open-source relational database. It is officially supported on all the major operating systems: Windows, Linux, BSD, macOS, and others.</p>
<p>Besides running as an executable binary in an operating system, Postgres is able to run as a containerized application on Docker! In this article we are going to walk through the Postgres implementation on Docker.</p>
<h3 id="prerequisites">Prerequisites</h3>
<ul>
<li>Docker or Docker Desktop. Please refer to <a href="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/">my previous article</a> for help with Docker installation.</li>
<li>Internet access is required to pull or download the Postgres container image from the Docker Hub.</li>
<li>A decent text editor, such as Vim or Notepad++, to create the configuration YAML files.</li>
</ul>
<h3 id="get-to-know-the-official-postgres-image">Get to know the official Postgres Image</h3>
<p>Go to <a href="https://hub.docker.com">Docker Hub</a> and search for “postgres”.</p>
<p><img src="/blog/2022/07/running-postgresql-on-docker/docker01.webp" alt="Docker Hub website search screen shot"></p>
<p>There are a lot of images for PostgreSQL on Docker Hub. If you don’t have any special requirements, it is best to select the official image, which is maintained by the Docker PostgreSQL Community.</p>
<p><img src="/blog/2022/07/running-postgresql-on-docker/docker02.webp" alt="Docker Hub website search result for postgres"></p>
<p>The <a href="https://hub.docker.com/_/postgres">page that search result links to</a> describes the Postgres image, how it was made and how to use it. From this page we know the image name and the required parameters. This is essential information for starting a Docker image, as we will see in the following steps.</p>
<h3 id="run-the-postgres-image-as-a-basic-postgres-container">Run the Postgres image as a basic Postgres container</h3>
<p>The following command is the bare minimum for running Postgres on Docker:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker run --name basic-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
</code></pre></div><p>Where:</p>
<ul>
<li><code>--name basic-postgres</code> sets the container name to basic-postgres,</li>
<li><code>-e POSTGRES_PASSWORD=mysecretpassword</code> sets the password of the default user <code>postgres</code>,</li>
<li><code>-d</code> runs the container in detached mode, or in other words, in the background, and</li>
<li><code>postgres</code> uses the postgres image. By default it will get the image from <a href="https://hub.docker.com">https://hub.docker.com</a>.</li>
</ul>
<p>Execute <code>docker ps</code> to check on running Docker containers. We should see our basic-postgres container running. <code>docker ps</code> is like <code>ps -ef</code> on Linux/Unix, which lists all running applications.</p>
<p>Sample output:</p>
<p><img src="/blog/2022/07/running-postgresql-on-docker/term01.webp" alt="Screen shot of terminal showing docker ps output after postgres container was started"></p>
<h3 id="working-with-the-postgres-container">Working with the Postgres container</h3>
<p>Just like Postgres running natively on an operating system, Postgres on Docker comes with the psql front-end client for accessing the Postgres database. To access psql in the Postgres container, execute the following command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker exec -it basic-postgres psql -U postgres
</code></pre></div><p>Where:</p>
<ul>
<li><code>exec -it</code> executes something interactive (<code>-i</code>) with a TTY (<code>-t</code>),</li>
<li><code>basic-postgres</code> specifies the container, and</li>
<li><code>psql -U postgres</code> is the psql command with its switch to specify the Postgres user.</li>
</ul>
<p>Now we are able to execute any psql command.</p>
<p>Let’s try a few Postgres commands and import the famous “dvdrental” sample database to our Postgres installation.</p>
<p>List all available databases:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">\l
</code></pre></div><p>Create a database named <code>dvdrental</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">create database dvdrental;
</code></pre></div><p>List all available databases. We should now see the created <code>dvdrental</code> database.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">\l
</code></pre></div><p>Quit from psql:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">\q
</code></pre></div><p>Download the dvdrental database backup from <a href="https://www.postgresqltutorial.com/">postgresqltutorial.com</a> and after it succeeds, unzip it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">curl -O https://www.postgresqltutorial.com/wp-content/uploads/2019/05/dvdrental.zip
unzip dvdrental.zip
</code></pre></div><p>Execute the following command to import the data. It will restore the <code>dvdrental.tar</code> backup to our Postgres database.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker exec -i basic-postgres pg_restore -U postgres -v -d dvdrental < dvdrental.tar
</code></pre></div><p>Where:</p>
<ul>
<li><code>exec -i</code> executes something interactive,</li>
<li><code>basic-postgres</code> specifies which container,</li>
<li><code>pg_restore -U postgres -v -d dvdrental</code> is the pg_restore command with its own arguments:
<ul>
<li><code>-U postgres</code> says to connect as the postgres user,</li>
<li><code>-v</code> enables verbose mode,</li>
<li><code>-d dvdrental</code> specifies the database to connect to, and</li>
</ul>
</li>
<li><code>< dvdrental.tar</code> says which file’s data the outside shell should pass into the container to pg_restore.</li>
</ul>
<p>Log in to the dvdrental database:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker exec -it basic-postgres psql -U postgres -d dvdrental
</code></pre></div><p>Where:</p>
<ul>
<li><code>exec -it</code> executes something interactive with a terminal,</li>
<li><code>basic-postgres</code> specifies which container, and</li>
<li><code>psql -U postgres -d dvdrental</code> is the psql command with the postgres user and the dvdrental database specified.</li>
</ul>
<p>List all tables by describing the tables in the dvdrental database:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">\dt
</code></pre></div><p>List the first 10 actors from the actor table:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">select * from actor limit 10;
</code></pre></div><p>Quit from psql:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">\q
</code></pre></div><p>Gracefully stop the Docker container:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker stop basic-postgres
</code></pre></div><p>If you don’t need it anymore you can delete the container:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker rm basic-postgres
</code></pre></div><p>Sample output:</p>
<p><img src="/blog/2022/07/running-postgresql-on-docker/psql01.webp" alt="Screen shot of terminal showing import of dvdrental sample database into Postgres"></p>
<p>And later:</p>
<p><img src="/blog/2022/07/running-postgresql-on-docker/psql02.webp" alt="Screen shot of terminal showing psql investigation of dvdrental sample database"></p>
<h3 id="run-postgres-image-as-a-real-world-postgres-container">Run Postgres image as a “real world” Postgres container</h3>
<p>The basic Postgres container is only good for learning or testing. It needs more features to be able to serve as a working database for a real-world application. We will add 2 more features to make it usable:</p>
<ul>
<li><strong>Persistent storage:</strong> By default the container filesystem is ephemeral. What this means is whenever we restart a terminated or deleted container, it will get an all-new, fresh filesystem and all previous data will be wiped clean. This is not suitable for database systems. To be a working database, we need to add a persistent filesystem to the container.</li>
<li><strong>Port forwarding from host to container:</strong> The container network is isolated, making it inaccessible from the outside world. A database is no use if it can’t be accessed. To make it accessible we need to forward a host operating system port to the container port.</li>
</ul>
<p>Let’s start building a “real world” Postgres container. Firstly we need to create the persistent storage. In Docker this is known as a volume.</p>
<p>Execute the following command to create a volume named <code>pg-data</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker volume create pg-data
</code></pre></div><p>List all Docker volumes and ensure that <code>pg-data</code> was created:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker volume ls | grep pg-data
</code></pre></div><p>Run a Postgres container with persistent storage and port forwarding:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker run --name real-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
-v pg-data:/var/lib/postgresql/data \
-p 5432:5432 \
-d \
postgres
</code></pre></div><p>Where:</p>
<ul>
<li><code>--name real-postgres</code> sets the container name,</li>
<li><code>-e POSTGRES_PASSWORD=mysecretpassword</code> sets the password of the default user <code>postgres</code>,</li>
<li><code>-v pg-data:/var/lib/postgresql/data</code> mounts the pg-data volume as the postgres data directory,</li>
<li><code>-p 5432:5432</code> forwards from port 5432 of host operating system to port 5432 of container,</li>
<li><code>-d</code> runs the container in detached mode or, in other words, in the background, and</li>
<li><code>postgres</code> uses the postgres image. By default it will get the image from <a href="https://hub.docker.com">https://hub.docker.com</a>.</li>
</ul>
<p>Execute <code>docker ps</code> to check on running Docker containers. Note that the real-postgres container shows port forwarding information.</p>
<p>Now we are going to try to access the Postgres container with psql from the host operating system.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">psql -h localhost -p 5432 -U postgres
</code></pre></div><p>Sample output:</p>
<p><img src="/blog/2022/07/running-postgresql-on-docker/term02.webp" alt="Screen shot of terminal showing access to Postgres in Docker with persistent storage"></p>
<h3 id="cleaning-up-the-running-container">Cleaning up the running container</h3>
<p>Stop the container:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker stop real-postgres
</code></pre></div><p>Delete the container:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker rm real-postgres
</code></pre></div><p>Delete the volume:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker volume rm pg-data
</code></pre></div><h3 id="managing-postgres-container-with-docker-compose">Managing Postgres container with Docker Compose</h3>
<p>Managing a container with a long list of arguments to Docker is tedious and error-prone. Instead of the Docker CLI we can use Docker Compose, a tool for managing containers from a YAML manifest file.</p>
<p>Create the following file named <code>docker-compose.yaml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">version</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">'3.1'</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">services</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">db</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">container_name</span>:<span style="color:#bbb"> </span>real-postgres-2<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>postgres<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">restart</span>:<span style="color:#bbb"> </span>always<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#d20;background-color:#fff0f0">"5432:5432"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">environment</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">POSTGRES_PASSWORD</span>:<span style="color:#bbb"> </span>mysecretpassword<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- pg-data-2:/var/lib/postgresql/data<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">pg-data-2</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">external</span>:<span style="color:#bbb"> </span><span style="color:#080;font-weight:bold">false</span><span style="color:#bbb">
</span></code></pre></div><p>To start the Postgres container with Docker Compose, execute the following command in the same location as <code>docker-compose.yaml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker-compose up -d
</code></pre></div><p>The <code>-d</code> flag runs the container in detached mode, that is, in the background.</p>
<p>Execute <code>docker ps</code> to check on running Docker containers. Take note that <code>real-postgres-2</code> was created by Docker Compose.</p>
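<p>To connect to this instance from an application, the connection string follows directly from the compose file. A minimal sketch in Python, assuming the <code>postgres</code> image defaults (user and database both named <code>postgres</code>) plus the password and port set in <code>docker-compose.yaml</code> above:</p>

```python
# Build a PostgreSQL connection URL from the values in docker-compose.yaml.
# The user and database name are the postgres image defaults; the password
# and port come from the compose file above.
def postgres_url(user="postgres", password="mysecretpassword",
                 host="localhost", port=5432, dbname="postgres"):
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"

print(postgres_url())
# → postgresql://postgres:mysecretpassword@localhost:5432/postgres
```

<p>Any Postgres client (such as <code>psql</code> or a database driver) can use a URL of this shape to reach the container on the published port.</p>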
<p>To stop the Postgres container with Docker Compose, execute the following command in the same location as <code>docker-compose.yaml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker-compose down
</code></pre></div><p>Sample output:</p>
<p><img src="/blog/2022/07/running-postgresql-on-docker/term03.webp" alt="Screen shot of terminal showing Postgres container deployed by Docker Compose"></p>
<h3 id="conclusion">Conclusion</h3>
<p>That’s all, folks. We have successfully deployed PostgreSQL on Docker.</p>
<p>Now we are able to reap the benefits of container technology for PostgreSQL, including portability, agility, and better management.</p>
How to deploy a Django App with Aurora Serverless and AWS Copilothttps://www.endpointdev.com/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/2022-06-26T00:00:00+00:00Jeffry Johar
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/aurora-banner.webp" alt="Photo of an aurora"><br>
Photo by Виктор Куликов</p>
<!-- Photo licensed under Legal Simplicity (public domain) from https://www.pexels.com/photo/white-tent-on-green-grass-field-under-aurora-borealis-during-night-time-8601966/ -->
<p>AWS Copilot has the capability to provision an external database for its containerized workloads. The database options are DynamoDB (NoSQL), Aurora Serverless (SQL), and S3 Buckets. For this blog we are going to provision and use Aurora Serverless with a containerized Django app. Aurora Serverless comes with two options for its engine: MySQL or PostgreSQL.</p>
<p>Watch <a href="https://www.youtube.com/watch?v=FzxqIdIZ9wc">Amazon’s 2-minute introduction video</a> to get the basic idea of Aurora Serverless.</p>
<p>We are going to work with the same Django application from <a href="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/">my last article on AWS Copilot</a>.</p>
<p>In my last article, the Django application was deployed with SQLite as the database. The application’s data is stored in SQLite, which resides inside the container’s filesystem. The problem with this setup is that the data is not persistent: whenever we redeploy the application, the container gets a brand-new filesystem, and all the old data is lost.</p>
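<p>As a rough analogy (plain Python and SQLite here, not Copilot itself), data written inside a throwaway filesystem vanishes together with it:</p>

```python
import os
import sqlite3
import tempfile

# Simulate a container filesystem with a temporary directory: the SQLite
# file lives inside it and disappears when the "container" goes away.
with tempfile.TemporaryDirectory() as fs:
    db_path = os.path.join(fs, "db.sqlite3")
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE visitors (name TEXT)")
    conn.execute("INSERT INTO visitors VALUES ('jeffry')")
    conn.commit()
    conn.close()
    assert os.path.exists(db_path)  # the data exists while the "container" lives

# "Redeploy": the old filesystem is gone, and the database file with it.
print(os.path.exists(db_path))  # → False
```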
<p>Now we are going to move the application’s data out of the container so that the life of the data does not depend on the container. We will put the data in Aurora Serverless with PostgreSQL as the engine.</p>
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/django-sqlite.webp" alt="Diagram of Django app with SQLite database"></p>
<p style="text-align: center; font-weight: bold">Django with SQLite as the internal database</p>
<br>
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/django-aurora.webp" alt="Diagram of Django app with AWS Aurora database"></p>
<p style="text-align: center; font-weight: bold">Django with Aurora Serverless as the external database</p>
<br>
<h3 id="the-prerequisites">The Prerequisites</h3>
<p>Docker, AWS CLI, and AWS Copilot CLI are required. Please refer to <a href="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/">my last article</a> for how to install them.</p>
<h3 id="the-django-project">The Django Project</h3>
<p>Create a Django project by using a Python Docker image. You can clone my Git project to get the <code>Dockerfile</code>, <code>docker-compose.yaml</code>, and <code>requirements.txt</code> that I’m using:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ git clone https://github.com/aburayyanjeffry/django-copilot.git django-aurora
</code></pre></div><p>Go to the <code>django-aurora</code> directory and execute <code>docker-compose</code> to create a Django project named “mydjango”.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ cd django-aurora
$ docker-compose run web django-admin startproject mydjango
</code></pre></div><h3 id="the-deployment-with-aws-copilot">The Deployment with AWS Copilot</h3>
<p>Execute the following command to create an AWS Copilot application named “mydjango”, with a Load Balanced Web Service named “django-web” built from the Dockerfile in the current directory.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ copilot init \
-a mydjango \
-t "Load Balanced Web Service" -n django-web \
-d ./Dockerfile
</code></pre></div><p>Answer N to the following question. We want to defer the deployment until we have set up the database.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">All right, you're all set for local development.
Would you like to deploy a test environment? [? for help] (y/N) N
</code></pre></div><p>We need to create an environment for our application. Execute the following to create an environment named <code>test</code> for the “mydjango” application with the default configuration.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ copilot env init \
--name test \
--profile default \
--app mydjango \
--default-config
</code></pre></div><p>Now we are going to generate the configuration for our Aurora Serverless database. This is essentially the CloudFormation template that Copilot will use to provision Aurora Serverless.</p>
<p>Execute the following to generate the configuration for an Aurora cluster named “mydjango-db” that we will use for the “django-web” application. The Aurora cluster will be using the PostgreSQL engine and the database name will be “mydb”.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ copilot storage init \
-n mydjango-db \
-t Aurora -w \
django-web \
--engine PostgreSQL \
--initial-db mydb
</code></pre></div><p>Take note of the injected environment variable name. This is where the database info and credentials are stored, and we will use this variable in later steps.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">✔ Wrote CloudFormation template at copilot/django-web/addons/mydjango-db.yml
Recommended follow-up actions:
- Update django-web's code to leverage the injected environment variable MYDJANGODB_SECRET.
</code></pre></div><p>Edit <code>mydjango/settings.py</code> to include the following. We parse the injected environment variable from the previous step to get the database connection settings.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python"><span style="color:#080;font-weight:bold">from</span> <span style="color:#b06;font-weight:bold">pathlib</span> <span style="color:#080;font-weight:bold">import</span> Path
<span style="color:#080;font-weight:bold">import</span> <span style="color:#b06;font-weight:bold">os</span>
<span style="color:#080;font-weight:bold">import</span> <span style="color:#b06;font-weight:bold">json</span>
...
ALLOWED_HOSTS = [<span style="color:#d20;background-color:#fff0f0">'*'</span>]
...
DBINFO = json.loads(os.environ.get(<span style="color:#d20;background-color:#fff0f0">'MYDJANGODB_SECRET'</span>, <span style="color:#d20;background-color:#fff0f0">'</span><span style="color:#33b;background-color:#fff0f0">{}</span><span style="color:#d20;background-color:#fff0f0">'</span>))
DATABASES = {
<span style="color:#d20;background-color:#fff0f0">'default'</span>: {
<span style="color:#d20;background-color:#fff0f0">'ENGINE'</span>: <span style="color:#d20;background-color:#fff0f0">'django.db.backends.postgresql'</span>,
<span style="color:#d20;background-color:#fff0f0">'HOST'</span>: DBINFO[<span style="color:#d20;background-color:#fff0f0">'host'</span>],
<span style="color:#d20;background-color:#fff0f0">'PORT'</span>: DBINFO[<span style="color:#d20;background-color:#fff0f0">'port'</span>],
<span style="color:#d20;background-color:#fff0f0">'NAME'</span>: DBINFO[<span style="color:#d20;background-color:#fff0f0">'dbname'</span>],
<span style="color:#d20;background-color:#fff0f0">'USER'</span>: DBINFO[<span style="color:#d20;background-color:#fff0f0">'username'</span>],
<span style="color:#d20;background-color:#fff0f0">'PASSWORD'</span>: DBINFO[<span style="color:#d20;background-color:#fff0f0">'password'</span>],
}
}
</code></pre></div><p>Deploy the application:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ copilot deploy --name django-web
</code></pre></div><p>Open the terminal of the service:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ copilot svc exec
</code></pre></div><p>Execute the following commands to migrate the initial database and to create a superuser account:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ python manage.py migrate
$ python manage.py createsuperuser
</code></pre></div><p>Execute the following command to check on the environment variable. Take note of the <code>MYDJANGODB_SECRET</code> variable; it holds the database information.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ env | grep MYDJANGODB_SECRET
</code></pre></div><h3 id="how-to-query-aurora-serverless">How to query Aurora Serverless</h3>
<p>We can use the <a href="https://console.aws.amazon.com/rds/home">Query Editor in the AWS RDS Console</a> to query Aurora Serverless.</p>
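<p>The connection details you need here are the same ones stored in the injected secret. Its value is a JSON document along these lines; the field names are the ones <code>settings.py</code> reads, while the values shown below are made up for illustration:</p>

```python
import json

# A made-up example of the secret's JSON payload. Only the field names
# (host, port, dbname, username, password) are real; the values here
# are placeholders, not actual credentials.
sample_secret = """{
    "host": "mydjango-db.cluster-example.us-east-1.rds.amazonaws.com",
    "port": 5432,
    "dbname": "mydb",
    "username": "adminuser",
    "password": "s3cret"
}"""

dbinfo = json.loads(sample_secret)
for field in ("host", "port", "dbname", "username", "password"):
    print(field, "=", dbinfo[field])
```

<p>These are the values to copy into the Query Editor’s connection panel in the steps below.</p>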
<p>Select the database whose identifier matches the one from the injected environment variable, then click Modify.</p>
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/rds-01-modify.webp" alt="Screenshot of Amazon RDS main control panel"></p>
<p>Click the check box for Data API.</p>
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/rds-02-api.webp" alt="Screenshot of Amazon RDS Web Service Data API checkbox"></p>
<p>Select Apply Immediately.</p>
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/rds-03-immediately.webp" alt="Screenshot of Amazon RDS Apply Immediately option"></p>
<p>Click Query Editor and fill in the Database information from the injected environment variable.</p>
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/rds-04-dbinfo.webp" alt="Screenshot showing environment variable data extracted into the AWS RDS connection setup panel"></p>
<p>Now you may use the Query Editor to query the database. Execute the following query to list all tables in the database:</p>
<p><img src="/blog/2022/06/how-to-deploy-django-app-with-aurora-serverless-and-copilot/rds-05-query.webp" alt="Screenshot of Amazon RDS Query Editor and results"></p>
<h3 id="the-end">The End</h3>
<p>That’s all, folks. We have deployed a containerized Django application and an Aurora Serverless database with AWS Copilot. For further info on AWS Copilot, <a href="https://aws.github.io/copilot-cli/">visit its website</a>.</p>
How to deploy a containerized Django app with AWS Copilothttps://www.endpointdev.com/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/2022-06-21T00:00:00+00:00Jeffry Johar
<p><img src="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/pilots.webp" alt="Photo of 2 pilots in an airplane cockpit"></p>
<!-- Photo licensed under CC0 (public domain) from https://pxhere.com/en/photo/609377 -->
<p>Generally there are two major options at AWS when it comes to deploying containerized applications: you can go for either EKS or ECS.</p>
<p>EKS (Elastic Kubernetes Service) is the managed Kubernetes service by AWS. ECS (Elastic Container Service), on the other hand, is AWS’s own way to manage your containerized application. You can learn more about EKS and ECS <a href="https://aws.amazon.com/blogs/containers/amazon-ecs-vs-amazon-eks-making-sense-of-aws-container-services/">on the AWS website</a>.</p>
<p>For this post we will use ECS.</p>
<h3 id="the-chosen-one-and-the-sidekick">The chosen one and the sidekick</h3>
<p>With ECS chosen, you now need a preferably easy way to deploy your containerized application on it.</p>
<p>There are quite a number of resources from AWS that are needed for your application to live on ECS, such as VPC (Virtual Private Cloud), Security Group (firewall), EC2 (virtual machine), Load Balancer, and others. Creating these resources manually is cumbersome, so AWS came out with a tool that automates the creation of all of them. The tool is known as AWS Copilot, and we are going to learn how to use it.</p>
<h3 id="install-docker">Install Docker</h3>
<p>Docker or Docker Desktop is required for building the Docker image later. Please refer to my previous article on <a href="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/">how to install Docker Desktop on macOS</a>, or <a href="https://docs.docker.com/get-docker/">follow Docker’s instructions</a> for Linux and Windows.</p>
<h3 id="set-up-aws-cli">Set up AWS CLI</h3>
<p>We need to set up the AWS CLI (command-line interface) for authentication and authorization to AWS.</p>
<p>Execute the following command to install the AWS CLI on macOS:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ curl -O "https://awscli.amazonaws.com/AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
</code></pre></div><p>For other OSes see <a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">Amazon’s docs</a>.</p>
<p>Execute the following command and enter the <a href="https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html">AWS Account and Access Keys</a>.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ aws configure
</code></pre></div><h3 id="install-aws-copilot-cli">Install AWS Copilot CLI</h3>
<p>Now it’s time for the main character: AWS Copilot.</p>
<p>Install AWS Copilot with Homebrew for macOS:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ brew install aws/tap/copilot-cli
</code></pre></div><p>See <a href="https://aws.github.io/copilot-cli/docs/getting-started/install/">AWS Copilot Installation</a> for other platforms.</p>
<h3 id="the-django-project">The Django project</h3>
<p>Create a Django project by using a Python Docker image. You can clone my Git project to get the <code>Dockerfile</code>, <code>docker-compose.yaml</code>, and <code>requirements.txt</code> that I’m using:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ git clone https://github.com/aburayyanjeffry/django-copilot.git
</code></pre></div><p>Go to the <code>django-copilot</code> directory and execute <code>docker-compose</code> to create a Django project named “mydjango”.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ cd django-copilot
$ docker-compose run web django-admin startproject mydjango .
</code></pre></div><p>Edit <code>mydjango/settings.py</code> to allow all hostnames for its URL. This is required because by default AWS will generate a random URL for the application. Find the following variable and set the value as follows:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">ALLOWED_HOSTS = [<span style="color:#d20;background-color:#fff0f0">'*'</span>]
</code></pre></div><h3 id="the-deployment-with-aws-copilot">The Deployment with AWS Copilot</h3>
<p>Create an AWS Copilot “Application”. This is a grouping of services (such as a web app or database), environments (development, QA, production), and CI/CD pipelines. Execute the following command to create an Application named “mydjango”.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ copilot init -a mydjango
</code></pre></div><p>Select the workload type. Since this Django app is internet-facing, we will choose “Load Balanced Web Service”.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">Which workload type best represents your architecture? [Use arrows to move, type to filter, ? for more help]
Request-Driven Web Service (App Runner)
> Load Balanced Web Service (Internet to ECS on Fargate)
Backend Service (ECS on Fargate)
Worker Service (Events to SQS to ECS on Fargate)
Scheduled Job (Scheduled event to State Machine to Fargate)
</code></pre></div><p>Give the Workload a name. We are going to name it “mydjango-web”.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">Workload type: Load Balanced Web Service
What do you want to name this service? [? for help] mydjango-web
</code></pre></div><p>Select the Dockerfile in the current directory.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">Which Dockerfile would you like to use for mydjango-web? [Use arrows to move, type to filter, ? for more help]
> ./Dockerfile
Enter custom path for your Dockerfile
Use an existing image instead
</code></pre></div><p>Accept the offer to deploy a test environment.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">All right, you're all set for local development.
Would you like to deploy a test environment? [? for help] (y/N) y
</code></pre></div><p>Wait and see. At the end of the deployment you will get the URL of your application. Open it in a browser.</p>
<p><img src="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/sample.webp" alt="Sample output of AWS copilot init run"></p>
<p><img src="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/browser.webp" alt="Sample view from a browser of a Django app default debug page stating “The install worked successfully! Congratulations!""></p>
<p>Now let’s migrate some data, create a superuser, and try to log in. The Django app comes with a SQLite database. Execute the following command to get a terminal for the Django app:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ copilot svc exec
</code></pre></div><p>Once in the terminal, execute the following to migrate the initial data and to create the superuser:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ python manage.py migrate
$ python manage.py createsuperuser
</code></pre></div><p><img src="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/sample-db.webp" alt="Output from Django database migration run"></p>
<p>Now you may access the admin page and log in using the created credentials.</p>
<p><img src="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/login01.webp" alt="Django login page screenshot"></p>
<p>You should see the Django admin:</p>
<p><img src="/blog/2022/06/how-to-deploy-containerized-django-app-with-aws-copilot/login02.webp" alt="Django admin page screenshot after successful login"></p>
<h3 id="a-mini-cheat-sheet">A mini cheat sheet</h3>
<table>
<thead>
<tr>
<th>AWS Copilot commands</th>
<th>Remarks</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>copilot app ls</code></td>
<td>To list available Applications</td>
</tr>
<tr>
<td><code>copilot app show -n appname</code></td>
<td>To get the details of an Application</td>
</tr>
<tr>
<td><code>copilot app delete -n appname</code></td>
<td>To delete an Application</td>
</tr>
<tr>
<td><code>copilot svc ls</code></td>
<td>To list available Services</td>
</tr>
<tr>
<td><code>copilot svc show -n svcname</code></td>
<td>To get the details of a Service</td>
</tr>
<tr>
<td><code>copilot svc delete -n svcname</code></td>
<td>To delete a Service</td>
</tr>
</tbody>
</table>
<h3 id="the-end">The End</h3>
<p>That’s all, folks.</p>
<p>AWS Copilot is a tool to automate the deployment of AWS infrastructure for our containerized application needs. It takes away most of the worries about infrastructure and enables us to focus sooner on application development.</p>
<p>For further info on AWS Copilot <a href="https://aws.github.io/copilot-cli/">visit its website</a>.</p>
Getting started with Docker and Kubernetes on macOShttps://www.endpointdev.com/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/2022-06-20T00:00:00+00:00Jeffry Johar
<style>
img {
max-height: 70vh;
}
</style>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/shipping.webp" alt="Shipping infrastructure at a dock"></p>
<!-- Photo licensed under CC0 (public domain) from https://pxhere.com/en/photo/1222170 -->
<p>What is the best way to master American English? One school of thought says that the best way to learn a language is to live in its country of origin. For American English that would be the USA. Why is that so? Because we as learners get to talk to native speakers daily. By doing this, we get to know how the natives use the language and its grammar in the real world.</p>
<p>The same goes for learning Docker and Kubernetes. The best way to learn them is to get them onto our MacBooks, laptops, and PCs. This way we can try out what works and what doesn’t on our local host at any time, any day.</p>
<p>Lucky for us earthlings who enjoy GUIs, Docker now has Docker Desktop. As its name suggests, it is nicely built for the desktop. It comes with a GUI and a CLI to manage our Docker and Kubernetes needs. Please take note of the Docker Desktop license. It is free for personal use, education, and open source projects, and has <a href="https://www.docker.com/pricing/">a fee for enterprise usage</a>. With that out of the way, let’s get things started.</p>
<h3 id="docker-desktop-installation">Docker Desktop Installation</h3>
<p>The official Docker Desktop documentation can be found <a href="https://docs.docker.com/desktop/">on Docker’s website</a>. It covers installation for macOS, Linux, and Windows. For this post we are going to install Docker Desktop for macOS using Brew. Execute the following command to proceed with the Docker Desktop installation:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">brew install --cask docker
</code></pre></div><p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-01.webp" alt="Installing Docker Desktop with Brew"></p>
<p>Then run it from Finder ➝ Applications ➝ Docker.</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-02.webp" alt="Docker in the Finder list of Applications"></p>
<p>Upon a successful installation, Docker Desktop will appear as the following:</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-03.webp" alt="Screenshot of Docker Desktop"></p>
<p>Click the Docker icon at the top menu bar to ensure the Docker Desktop is running.</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-04.webp" alt="The Docker icon in the top menu bar of macOS"></p>
<h3 id="run-the-first-containerized-application">Run the first containerized application</h3>
<p>For the first application we are going to run the latest version of the official nginx image from hub.docker.com. Open the terminal and execute the following to run the nginx image as a background service on port 80:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">docker run -d -p 80:80 nginx:latest
</code></pre></div><p>Run the following command to check on the application. This is the Docker equivalent of the standard Unix <code>ps</code> command to list processes.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">docker ps
</code></pre></div><p>curl the application at localhost port 80:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">curl http://localhost
</code></pre></div><p>Sample output:</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-05.webp" alt="curl outputting the default nginx HTML"></p>
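<p>A quick note on the <code>-p 80:80</code> argument: it follows Docker’s <code>HOST:CONTAINER</code> port convention, the same one used in a Compose file’s <code>ports</code> list. A tiny Python sketch mirroring that convention:</p>

```python
# Parse a Docker-style port mapping "HOST:CONTAINER" into a pair of ints,
# e.g. "-p 80:80" publishes container port 80 on host port 80.
def parse_port_mapping(spec: str) -> tuple[int, int]:
    host, container = spec.split(":")
    return int(host), int(container)

print(parse_port_mapping("80:80"))      # → (80, 80)
print(parse_port_mapping("5432:5432"))  # → (5432, 5432)
```

<p>Only the host-side port needs to change if 80 is already taken locally; for example, <code>-p 8080:80</code> would publish nginx at <code>localhost:8080</code>.</p>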
<h3 id="stop-the-application">Stop the application</h3>
<p>Execute the following command to get the application information and take note of the container ID.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">docker ps
</code></pre></div><p>Execute the following to stop the application by its container ID, replacing <code>{container ID}</code> with the ID reported by <code>docker ps</code> (for example, <code>f7c19b95fcc2</code>).</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">docker stop {container ID}
</code></pre></div><p>Run <code>docker ps</code> again to ensure that the stopped application is no longer displayed as a running application.</p>
<p>Sample Output:</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-06.webp" alt="Docker ps output container list"></p>
<h3 id="enable-kubernetes">Enable Kubernetes</h3>
<p>Click the Docker icon at the top menu bar and click Preferences:</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-07.webp" alt="The preferences under Docker’s menu icon"></p>
<p>Enable Kubernetes and click Apply and Restart:</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-08.webp" alt="Kubernetes enable button in Docker Desktop preferences"></p>
<p>Click the Docker icon at the top menu bar and ensure that Kubernetes is running:</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-09.webp" alt="Docker icon menu, now showing that Kubernetes is running"></p>
<p>Open the terminal and check on the Kubernetes nodes. The status should be <code>Ready</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl get nodes
</code></pre></div><p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-10.webp" alt="Docker Desktop node running"></p>
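<p>If you later want to script this readiness check rather than eyeball it, the output of <code>kubectl get nodes</code> is easy to parse. A small sketch (the sample output below is illustrative; in practice you would capture the real output, e.g. via <code>subprocess</code>):</p>

```python
# Find nodes whose STATUS column reads "Ready" in `kubectl get nodes` output.
# SAMPLE is an illustrative stand-in for the command's real output.
SAMPLE = """NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   5d    v1.25.0"""

def ready_nodes(output: str) -> list[str]:
    nodes = []
    for line in output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "Ready":
            nodes.append(fields[0])
    return nodes

print(ready_nodes(SAMPLE))  # → ['docker-desktop']
```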
<h3 id="deploy-the-first-application-on-kubernetes">Deploy the first application on Kubernetes</h3>
<p>We are going to deploy the same official nginx image, this time on Kubernetes. Execute the following command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl run mynginx --image=nginx:latest
</code></pre></div><p>Execute the following command to check on the application. Its status should be <code>Running</code>. On a slow machine this may take some time, so we might need to run the command multiple times.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl get pod
</code></pre></div><p>Execute the following command to create the Kubernetes service resource for the application. A service in Kubernetes serves as an internal named load balancer.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl expose pod mynginx --port <span style="color:#00d;font-weight:bold">80</span>
</code></pre></div><p>Execute the following command to forward a local port to the service inside the Kubernetes network. By doing this we can curl or otherwise access the application at <code>localhost:{port}</code>. This is a foreground process, so it needs to be left running.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl port-forward service/mynginx 8080:80
</code></pre></div><p>Open another terminal to curl localhost:8080 or open the address in a web browser.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">curl http://localhost:8080
</code></pre></div><p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-11.webp" alt="Curl showing nginx’s default html output"></p>
<p>In a browser:</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-12.webp" alt="Nginx’s default output in a browser"></p>
<h3 id="clean-up-the-kubernetes-resources">Clean up the Kubernetes resources</h3>
<p>Press Ctrl-C to stop the port-forwarding process in the terminal, then list all the running Kubernetes resources. We should see our application’s pod and its service:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl get all
</code></pre></div><p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-13.webp" alt="kubectl with mynginx pod’s age and service highlighted as 21 minutes"></p>
<p>Now we need to delete these resources.</p>
<p>To delete the service:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl delete service/mynginx
</code></pre></div><p>To delete the application:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl delete pod/mynginx
</code></pre></div><p>Now list all the resources again. The mynginx-related resources should no longer be displayed.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">kubectl get all
</code></pre></div><p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-14.webp" alt="kubectl listing the kubernetes service"></p>
<h3 id="to-stop-docker-desktop">To stop Docker Desktop</h3>
<p>If we are done with Docker Desktop we can stop its services by going to the top menu bar and selecting the Quit Docker Desktop option. This will stop Docker and Kubernetes services.</p>
<p><img src="/blog/2022/06/getting-started-with-docker-and-kubernetes-on-macos/image-15.webp" alt="A Quit Docker Desktop button under the Docker icon menu in the top menu bar"></p>
<p>That’s all, folks. We now have the tools to learn and explore Docker and Kubernetes on our own local machine.</p>
<p>Now we may proceed with the official documentation and other tutorials to continue learning Docker and Kubernetes.</p>
Docker and containers boot camphttps://www.endpointdev.com/blog/2022/05/container-boot-camp/2022-05-16T00:00:00+00:00Phineas Jensen
<p><img src="/blog/2022/05/container-boot-camp/pexels-samuel-w%C3%B6lfl-1427541.webp" alt="Shipping containers stacked at a port">
<a href="https://www.pexels.com/photo/intermodal-container-stacked-on-port-1427541/">Photo by Samuel Wölfl</a></p>
<p>In the modern landscape of web development, it’s almost impossible to avoid seeing or using <em>containers</em>: isolated, virtualized user spaces for programs to run in. Containers make it easy to develop and deploy the various components of an application independent of the specific system and dependencies they run on.</p>
<p>If that’s confusing, worry not; this post and the tutorials in this boot camp aim to clarify things for new developers and experienced developers who haven’t gotten around to using containers yet.</p>
<blockquote>
<p>Linux containers are all based on the virtualization, isolation, and resource management mechanisms provided by the Linux kernel, notably Linux namespaces and cgroups.</p>
<p>—Wikipedia, <a href="https://en.wikipedia.org/wiki/OS-level_virtualization">OS-level virtualization</a></p>
</blockquote>
<h3 id="introduction">Introduction</h3>
<p>The terminology surrounding containers can get pretty confusing, but the basic idea is this: A <em>container</em> is just a sandboxed process which is limited by the operating system in its ability to see and interact with other processes and parts of the system. This can:</p>
<ul>
<li>provide security benefits (e.g. a container may only be given access to certain parts of the filesystem),</li>
<li>help with performance (e.g. by limiting the amount of RAM or CPU given to a container), and</li>
<li>help solve version and dependency problems (e.g. containers can be used to run multiple incompatible versions of one program on the same system).</li>
</ul>
<p>Contrast this with virtual machines, which virtualize hardware to run an entire operating system including its kernel. Unlike virtual machines, containers are isolated processes running on a single operating system. Because of this, containers are lighter weight and faster than VMs.</p>
<p>From a developer’s perspective, containers are usually run from <em>images</em>, which are special packages of files needed to run a program. For example, a container image might contain the libraries, binaries, and source necessary to run a Node.js web application server, while another image might contain everything needed to run the PostgreSQL 14 database.</p>
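<p>To make this concrete, here is a minimal sketch of an image definition (a <code>Dockerfile</code>) for a hypothetical Node.js web application; the base image tag, file names, and port are illustrative, not prescriptive:</p>

```dockerfile
# Start from an official base image that already contains Node.js.
FROM node:18

# Copy the application source into the image and install its dependencies.
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

# Document the port the server listens on and define the startup command.
EXPOSE 3000
CMD ["node", "server.js"]
```

<p>Building this file with <code>docker build</code> produces an image; running that image with <code>docker run</code> produces a container.</p>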
<h3 id="docker">Docker</h3>
<p>Docker is the most popular container management suite, and you’ll most likely end up using it at some point in your career if you haven’t already. As such, it’s the best place to start learning about containers. Start with the official <a href="https://docs.docker.com/get-started/">getting started</a> guide published by Docker Inc. which gives a great overview of containers, images, and container composition (i.e. using multiple containers together).</p>
<p>For our End Point staff, we point out a few things about this tutorial:</p>
<ul>
<li>It makes repeated reference to the non-free Docker Desktop tool, which can be useful but shouldn’t take priority in learning. For everything done in Docker Desktop, there are equivalent instructions for the CLI (command-line interface). Make sure to learn those!</li>
<li>It tells you to create a Docker Hub user for one part where you publish a “getting started” image that you create as part of the tutorial. Everything else is possible without creating an account or publishing an image, so feel free to just read this section without creating an account.</li>
</ul>
<p>After completing the tutorial, you should be familiar with the basics of creating and orchestrating Docker containers. Of course, there is a lot more to learn and other projects may require much more complicated setup. As you continue to use Docker, keep these reference pages handy:</p>
<ul>
<li><a href="https://docs.docker.com/engine/reference/commandline/cli/">Docker CLI</a></li>
<li><a href="https://docs.docker.com/engine/reference/builder/">Dockerfile reference</a></li>
<li><a href="https://docs.docker.com/compose/compose-file/">compose file specification</a></li>
<li><a href="https://docs.docker.com/compose/reference/">docker-compose CLI</a></li>
</ul>
<h3 id="container-standards-and-tools">Container standards and tools</h3>
<p>While Docker popularized containers, a number of other tools have since been created for building, managing, and deploying containers. These are some of the most enduring:</p>
<ul>
<li><a href="https://kubernetes.io/">Kubernetes</a> (also called K8s), a system for managing large-scale deployments of containers</li>
<li><a href="https://containerd.io/">containerd</a>, a container runtime (but <em>not</em> an image-building tool) which was created for Docker</li>
<li><a href="https://podman.io/">Podman</a>, a container engine that doesn’t require a running daemon like Docker does; it provides Docker compatibility as well as support for other systems</li>
</ul>
<p>These systems are based on open standards such as the <a href="https://github.com/opencontainers">Open Container Initiative</a>’s image and runtime specifications and the Container Runtime Interface (an API standard for working with containers designed for Kubernetes). If you’re interested in understanding more, see this post on the <a href="https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci/">differences between these tools and systems</a>.</p>
<h3 id="end-points-container-expertise">End Point’s container expertise</h3>
<p>Containers are one of our <a href="/expertise/containers-virtualization/">areas of expertise</a>, and we’ve written <a href="/blog/tags/containers/">a number of articles about containers</a>. Here are a few highlights:</p>
<ul>
<li><a href="/blog/2022/01/kubernetes-101/">Kubernetes 101: Deploying a web application and database</a> — an excellent, hands-on introduction to Kubernetes which clearly explains core concepts</li>
<li><a href="/blog/2020/08/containerizing-magento-with-docker-compose-elasticsearch-mysql-and-magento/">Containerizing Magento with Docker Compose: Elasticsearch, MySQL and Magento</a> — a hands-on guide to using multiple containers to run Magento, a complex full-stack application</li>
<li><a href="/blog/2020/06/linux-development-in-windows-10-docker-wsl-2/">Linux Development in Windows 10 with Docker and WSL 2</a> — an intro for Windows developers</li>
<li><a href="/blog/2016/02/creating-composite-docker-containers/">Creating Composite Docker Containers with Docker Compose</a> — a more complex, real-world example of docker-compose setup</li>
</ul>
<h3 id="try-it">Try it!</h3>
<p>If you are new to containers and read this far without actually working through the hands-on tutorial mentioned above, do it now!</p>
<p>Using Docker on your own computer is the best way to learn, so do the exercises described in the <a href="https://docs.docker.com/get-started/">Docker getting started guide</a> and gain your own experience with these tools.</p>
Using SSH tunnels to get around network limitationshttps://www.endpointdev.com/blog/2022/01/using-ssh-tunnels-network-limitations/2022-01-26T00:00:00+00:00Zed Jensen
<p><img src="/blog/2022/01/using-ssh-tunnels-network-limitations/banner.jpg" alt="Cliff dwelling in Arizona"></p>
<!-- Picture by Zed Jensen, 2021 -->
<p>SSH is an extremely useful way to use computers that you aren’t in front of physically.</p>
<p>It can also be used to overcome some unique networking challenges, particularly those where one computer needs to connect to another in an unorthodox way. Let me show you a couple of uses of SSH tunnels that have come in handy for me personally.</p>
<h3 id="serving-content-without-a-public-ip-address">Serving content without a public IP address</h3>
<p>In the past, I wrote about <a href="/blog/2020/07/automating-minecraft-server/">maintaining a Minecraft server</a> to play on with my friends. In that case I was dealing with being physically separated from the server hardware I intended to use, but once I got that machine back, I realized that I still had a problem: My ISP and their networking gear didn’t support port forwarding, meaning that I couldn’t connect to my server from outside my home network. But even if I could have, the public IP address I was assigned changed regularly.</p>
<p>One solution I found was to use a reverse SSH tunnel to forward traffic from a publicly-visible virtual server to my local server.</p>
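<p>Conceptually, the traffic flow looks like this (the hostnames are made up; <code>25565</code> is Minecraft’s default port):</p>

```plain
game clients ---> public-server:25565 ===(reverse SSH tunnel)===> home-server:25565
```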
<p>To set this up, you just need your local machine and a server with a publicly visible IP address. I used a virtual machine from <a href="https://upcloud.com/">UpCloud</a>, which costs $5 per month, but you could use any other server, as long as it has its own IP address. Setup is fairly simple.</p>
<p>First, on the local machine, we create a new SSH public & private key pair just for this connection. This serves as a way to authenticate without a password, but without using a personal SSH key that has access to many more places.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -q -N ""
</code></pre></div><p>Next, we create a new OS user <code>proxy</code> on the server. Creating a user specifically for this purpose lets us make sure that our password-free SSH key doesn’t give access to any sensitive data on the remote server. On Linux:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">useradd -m proxy
</code></pre></div><p>Then, we add the public key generated earlier to <code>authorized_keys</code> in the new proxy user’s <code>~/.ssh</code> folder.</p>
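<p>The entry is just the single line from the generated <code>.pub</code> file. Optionally, you can prefix it with restrictions so this key can only be used for forwarding and nothing else; a sketch (key material abbreviated, comment made up):</p>

```plain
# /home/proxy/.ssh/authorized_keys
restrict,port-forwarding ssh-ed25519 AAAAC3Nz...rest of key... zed@laptop
```

<p>The <code>restrict</code> option disables all key capabilities, and <code>port-forwarding</code> selectively re-enables the one we need.</p>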
<p>Finally, on the local machine, we run this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">ssh -fN -i ~/.ssh/id_ed25519 -R 0.0.0.0:25565:localhost:25565 proxy@myserver
</code></pre></div><p>Here’s what each part of that does:</p>
<ul>
<li><code>-f</code> tells ssh to run in the background.</li>
<li><code>-N</code> doesn’t run a command on the remote host, which is perfect since we’re just forwarding traffic.</li>
<li><code>-i</code> specifies the SSH key we’re using.</li>
<li><code>-R</code> is a bit more complicated. <code>0.0.0.0:25565</code> specifies which traffic to intercept on the remote host and <code>localhost:25565</code> where to forward it to. Note that <code>0.0.0.0</code> means to listen on all IP addresses, such as 127.0.0.1, your internal network NAT address, and any others. <code>25565</code> is a TCP port number (UDP isn’t supported by SSH out of the box) and will vary based on the application you’re using.</li>
<li><code>proxy@myserver</code> is the user and hostname of the remote server.</li>
</ul>
<p>And that’s it! The reverse tunnel will now forward traffic from our publicly visible server to the specified port to our machine.</p>
<p>Another good use for this setup is when you have an IPv4 address at the edge of a network but only IPv6 internally. This is becoming more common as IPv4 addresses become more scarce.</p>
<p>Also note that for more permanent situations, a VPN like WireGuard or OpenVPN might be a better choice than an SSH tunnel; however, for lower-volume traffic SSH works just fine, and it is often quicker to set up.</p>
<h3 id="splitting-a-multi-container-docker-app-between-multiple-machines">Splitting a multi-container Docker app between multiple machines</h3>
<p>There are many other uses for SSH tunnels. One of our clients has an application with a lot of different moving parts that all need to communicate with each other for the entire application to function. For instance, there’s a database container, a container for part of the backend that needs CUDA support, and several others, in addition to a React frontend. Applications like this can of course slow your computer down quite a bit. What if you only need to make modifications to the frontend or another light part of the application? I asked around and found that a couple of coworkers had already found a workaround.</p>
<p>The solution is simple but effective: If you have another computer available — in my case, a desktop computer that I don’t usually use for work — you can run the performance-intensive containers on that machine. This allows you to work on the lighter parts of the application without experiencing slowdown (or fan noise, or other things like that) on your laptop.</p>
<p>Because the different parts of the application were already using Docker, it wasn’t hard to run the different pieces on separate machines, but they needed to know to talk to each other across the network. SSH tunnels let us do that without modifying our Docker configuration!</p>
<p>For the rest of this example I’ll refer to the two computers in this scenario as “the laptop” and “the desktop”, running the frontend and backend portions respectively, but keep in mind that you could do this with other setups as well.</p>
<h4 id="1-collecting-some-info">1. Collecting some info</h4>
<p>Before we can set everything up, we need to know which ports our application is communicating on. Examining the Dockerfiles for the backend containers in this application, I found that it was using ports 9933, 9934, and 10004. Normally, our frontend application would be communicating with these containers by talking to <code>http://localhost:9933</code> and so on, but once we know which ports it’s going to use, we can set up SSH to forward the traffic to our desktop machine instead.</p>
<p>The other important piece of info we’ll need is the (private home network) IP address of our desktop machine. In this case, mine is <code>192.168.0.55</code>, so we’ll use that.</p>
<h4 id="2-setting-up-the-ssh-config">2. Setting up the SSH config</h4>
<p>The most important step to making the different machines talk to each other is going to be in our SSH config, <code>~/.ssh/config</code>.</p>
<p>We’ll start by coming up with an alias hostname for our desktop, like <code>backend</code>, and creating a section for it in our config file. We then add a <code>LocalForward</code> line for each port we want to forward like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">Host backend
  HostName 192.168.0.55
  User zed
  ForwardAgent yes
  LocalForward 9933 127.0.0.1:9933
  LocalForward 9934 127.0.0.1:9934
  LocalForward 10004 127.0.0.1:10004
</code></pre></div><h4 id="3-connecting-the-computers">3. Connecting the computers</h4>
<p>For the different parts of the application to talk to each other, there needs to be an active SSH connection. I usually open an SSH connection like normal (make sure to use the hostname we set in step 2!), start a tmux session, and then run the backend portions of the app in the tmux session:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">[zed@laptop ~]$ ssh backend
[zed@desktop ~]$ get-stuff-started
</code></pre></div><p>However, if you’d prefer, you can also open an SSH connection that will run in the background until killed:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">[zed@laptop ~]$ ssh -fN backend
[zed@laptop ~]$
</code></pre></div><h4 id="4-test-it">4. Test it!</h4>
<p>Now that we’ve got an SSH session running to forward traffic to the desktop machine, you can try spinning up the React app and see if it connects properly.</p>
<h3 id="conclusion">Conclusion</h3>
<p>There are many other uses for SSH tunnels. Feel free to let us know in the comments how you’ve used them!</p>
Database integration testing with .NEThttps://www.endpointdev.com/blog/2022/01/database-integration-testing-with-dotnet/2022-01-12T00:00:00+00:00Kevin Campusano
<p><img src="/blog/2022/01/database-integration-testing-with-dotnet/banner.jpg" alt="Sunset over lake in mountains"></p>
<!-- Image by Zed Jensen, 2021 -->
<p><a href="https://rubyonrails.org/">Ruby on Rails</a> is great. We use it at End Point for many projects with great success. One of Rails’ cool features is how easy it is to write database integration tests. Out of the box, Rails projects come with all the configuration necessary to set up a database that’s exclusive for the automated test suite. This database includes all the tables and other objects that exist within the regular database that the app uses during its normal execution. So, it is very easy to write automated tests that cover the application components that interact with the database.</p>
<p><a href="https://dotnet.microsoft.com/en-us/apps/aspnet">ASP.NET Core</a> is also great! However, it doesn’t have this feature out of the box. Let’s see if we can’t do it ourselves.</p>
<h3 id="the-sample-project">The sample project</h3>
<p>As a sample project we will use a REST API that I wrote for <a href="/blog/2021/07/dotnet-5-web-api/">another article</a>. Check it out if you want to learn more about the ins and outs of developing REST APIs with .NET. You can find the source code <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api">on GitHub</a>.</p>
<p>The API is very straightforward. It provides a few endpoints for <a href="https://developer.mozilla.org/en-US/docs/Glossary/CRUD">CRUDing</a> some database tables. It also provides an endpoint which, when given some vehicle information, will calculate a monetary value for that vehicle. That’s a feature that would be interesting for us to cover with some tests.</p>
<p>The logic for that feature is backed by a specific class and it depends heavily on database interactions. As such, that class is a great candidate for writing a few automated integration tests against. The class in question is <code>QuoteService</code>, which is defined in <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Services/QuoteService.cs">Services/QuoteService.cs</a>. The class provides features for fetching records from the database (the <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Services/QuoteService.cs#L23"><code>GetAllQuotes</code></a> method) as well as creating new records based on data from the incoming request and a set of rules stored in the database itself (the <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Services/QuoteService.cs#L53"><code>CalculateQuote</code></a> method).</p>
<p>In order to add automated tests, the first step is to organize our project so that it supports them. Let’s do that next.</p>
<h3 id="organizing-the-source-code-to-allow-for-automated-testing">Organizing the source code to allow for automated testing</h3>
<p>In general, the source code of most real world .NET applications is organized as one or more “projects” under one “solution”. A solution is a collection of related projects, and a project is something that produces a deployment artifact. An artifact is a library (i.e. a <code>*.dll</code> file) or something that can be executed like a console or web app.</p>
<p>Our sample app is a stand-alone <a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-web-api?view=aspnetcore-6.0&tabs=visual-studio-code">“webapi” project</a>, meaning that it’s not within a solution. For automated tests, however, we need to create a new project for tests, parallel to our main one. Now that we have two projects instead of one, we need to reorganize the sample app’s source code to comply with the “projects in a solution” structure I mentioned earlier.</p>
<p>Let’s start by moving all the files in the root directory into a new <code>VehicleQuotes</code> directory. That’s one project. Then, we create a new automated tests project by running the following, still from the root directory:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">dotnet new xunit -o VehicleQuotes.Tests
</code></pre></div><p>That creates a new automated tests project named <code>VehicleQuotes.Tests</code> (under a new aptly-named <code>VehicleQuotes.Tests</code> directory) which uses the <a href="https://xunit.net">xUnit.net</a> test framework. There are other options when it comes to test frameworks in .NET, such as <a href="https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-with-mstest">MSTest</a> and <a href="https://nunit.org/">NUnit</a>. We’re going to use xUnit.net, but the others should work just as well for our purposes.</p>
<p>Now, we need to create a new solution to contain those two projects. Solutions come in the form of <code>*.sln</code> files and we can create ours like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">dotnet new sln -o vehicle-quotes
</code></pre></div><p>That should’ve created a new <code>vehicle-quotes.sln</code> file for us. We should now have a file structure like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">.
├── vehicle-quotes.sln
├── VehicleQuotes
│ ├── VehicleQuotes.csproj
│ └── ...
└── VehicleQuotes.Tests
├── VehicleQuotes.Tests.csproj
└── ...
</code></pre></div><p>Like I said, the <code>*.sln</code> file indicates that this is a solution. The <code>*.csproj</code> files identify the individual projects that make up the solution.</p>
<p>Now, we need to tell dotnet that those two projects belong in the same solution. These commands do that:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">dotnet sln add ./VehicleQuotes/VehicleQuotes.csproj
dotnet sln add ./VehicleQuotes.Tests/VehicleQuotes.Tests.csproj
</code></pre></div><p>Finally, we update the <code>VehicleQuotes.Tests</code> project so that it references the <code>VehicleQuotes</code> project. That way, the test suite will have access to all the classes defined in the REST API. Here’s the command for that:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">dotnet add ./VehicleQuotes.Tests/VehicleQuotes.Tests.csproj reference ./VehicleQuotes/VehicleQuotes.csproj
</code></pre></div><p>With all that setup out of the way, we can now start writing some tests.</p>
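<p>As an aside, the <code>dotnet add ... reference</code> command from above works by recording the dependency in the test project’s <code>VehicleQuotes.Tests.csproj</code> file. The fragment it adds should look roughly like this:</p>

```xml
<ItemGroup>
  <ProjectReference Include="..\VehicleQuotes\VehicleQuotes.csproj" />
</ItemGroup>
```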
<blockquote>
<p>You can learn more about project organization in the <a href="https://docs.microsoft.com/en-us/dotnet/core/tutorials/testing-with-cli">official online documentation</a>.</p>
</blockquote>
<h3 id="creating-a-dbcontext-instance-to-talk-to-the-database">Creating a DbContext instance to talk to the database</h3>
<p>The <code>VehicleQuotes.Tests</code> automated tests project got created with a default test file named <code>UnitTest1.cs</code>. You can delete it or ignore it, since we will not use it.</p>
<p>In general, it’s a good idea for the test project to mimic the directory structure of the project that it will be testing. Also, we already decided that we would focus our test efforts on the <code>QuoteService</code> class from the <code>VehicleQuotes</code> project. That class is defined in <code>VehicleQuotes/Services/QuoteService.cs</code>, so let’s create a similarly located file within the test project which will contain the test cases for that class. Here: <code>VehicleQuotes.Tests/Services/QuoteServiceTests.cs</code>. These would be the contents:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp"><span style="color:#888">// VehicleQuotes.Tests/Services/QuoteServiceTests.cs
</span><span style="color:#888"></span>
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Xunit</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Tests.Services</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">QuoteServiceTests</span>
{
<span style="color:#369"> [Fact]</span>
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">void</span> GetAllQuotesReturnsEmptyWhenThereIsNoDataStored()
{
<span style="color:#888">// Given
</span><span style="color:#888"></span>
<span style="color:#888">// When
</span><span style="color:#888"></span>
<span style="color:#888">// Then
</span><span style="color:#888"></span> }
}
}
</code></pre></div><p>This is the basic structure for tests using xUnit.net. Any method annotated with a <code>[Fact]</code> attribute will be picked up and run by the test framework. In this case, I’ve created one such method called <code>GetAllQuotesReturnsEmptyWhenThereIsNoDataStored</code>, whose name should give away its intention. This test case will validate that <code>QuoteService</code>’s <code>GetAllQuotes</code> method returns an empty set when called with no data in the database.</p>
<p>Before we can write this test case, though, the suite needs access to the test database. Our app uses <a href="https://docs.microsoft.com/en-us/ef/core/">Entity Framework Core</a> for database interaction, which means that the database is accessed via a <code>DbContext</code> class. Looking at the source code of our sample app, we can see that the <code>DbContext</code> being used is <code>VehicleQuotesContext</code>, defined in <code>VehicleQuotes/Data/VehicleQuotesContext.cs</code>. Let’s add a utility method to the <code>QuoteServiceTests</code> class which can be used to create new instances of <code>VehicleQuotesContext</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp"><span style="color:#888">// VehicleQuotes.Tests/Services/QuoteServiceTests.cs
</span><span style="color:#888"></span>
<span style="color:#888">// ...
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.EntityFrameworkCore</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Services</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Tests.Services</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">QuoteServiceTests</span>
{
<span style="color:#080;font-weight:bold">private</span> VehicleQuotesContext CreateDbContext()
{
<span style="color:#888;font-weight:bold">var</span> options = <span style="color:#080;font-weight:bold">new</span> DbContextOptionsBuilder<VehicleQuotesContext>()
.UseNpgsql(<span style="color:#d20;background-color:#fff0f0">"Host=db;Database=vehicle_quotes_test;Username=vehicle_quotes;Password=password"</span>)
.UseSnakeCaseNamingConvention()
.Options;
<span style="color:#888;font-weight:bold">var</span> context = <span style="color:#080;font-weight:bold">new</span> VehicleQuotesContext(options);
context.Database.EnsureCreated();
<span style="color:#080;font-weight:bold">return</span> context;
}
<span style="color:#888">// ...
</span><span style="color:#888"></span> }
}
</code></pre></div><p>As you can see, we need to go through three steps to create the <code>VehicleQuotesContext</code> instance and get a database that’s ready for testing:</p>
<p>First, we create a <code>DbContextOptionsBuilder</code> and use that to obtain the <code>options</code> object that the <code>VehicleQuotesContext</code> needs as a constructor parameter. We needed to include the <code>Microsoft.EntityFrameworkCore</code> namespace in order to have access to the <code>DbContextOptionsBuilder</code>. For this, I just copied and slightly modified this statement from the <code>ConfigureServices</code> method in the REST API’s <code>VehicleQuotes/Startup.cs</code> file:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp"><span style="color:#888">// VehicleQuotes/Startup.cs
</span><span style="color:#888"></span>
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">void</span> ConfigureServices(IServiceCollection services)
{
<span style="color:#888">// ...
</span><span style="color:#888"></span>
services.AddDbContext<VehicleQuotesContext>(options =>
options
.UseNpgsql(Configuration.GetConnectionString(<span style="color:#d20;background-color:#fff0f0">"VehicleQuotesContext"</span>))
.UseSnakeCaseNamingConvention()
.UseLoggerFactory(LoggerFactory.Create(builder => builder.AddConsole()))
.EnableSensitiveDataLogging()
);
<span style="color:#888">// ...
</span><span style="color:#888"></span>}
</code></pre></div><p>This is a method that runs when the application is starting up to set up all the services that the app uses to work. Here, it’s setting up the <code>DbContext</code> to enable database interaction. For the test suite, I took this statement as a starting point and removed the logging configurations and specified a hardcoded connection string that specifically points to a new <code>vehicle_quotes_test</code> database that will be used for testing.</p>
<p>If you’re following along, then you need a PostgreSQL instance that you can use to run the tests. In my case, I have one running that is reachable via the connection string I specified: <code>Host=db;Database=vehicle_quotes_test;Username=vehicle_quotes;Password=password</code>.</p>
<p>If you have Docker, a quick way to get a Postgres database up and running is with this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh">docker run -d <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> --name vehicle-quotes-db <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -p 5432:5432 <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -e <span style="color:#369">POSTGRES_DB</span>=vehicle_quotes <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -e <span style="color:#369">POSTGRES_USER</span>=vehicle_quotes <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -e <span style="color:#369">POSTGRES_PASSWORD</span>=password <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> postgres
</code></pre></div><p>That’ll spin up a new Postgres instance that’s reachable via <code>localhost</code>. Note that my connection string says <code>Host=db</code> because, in my setup, the database container is reachable under that name; if yours is listening on <code>localhost</code>, adjust the <code>Host</code> portion of the connection string accordingly.</p>
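<p>If you want to double-check that the instance is up before running the tests, you can run <code>psql</code> inside the container. This is just a sanity check; the container name and credentials below are the ones from the <code>docker run</code> command above:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sh" data-lang="sh"># List the databases in the instance; vehicle_quotes should be among them.
docker exec -it vehicle-quotes-db psql -U vehicle_quotes -c '\l'
</code></pre></div>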
<p>Secondly, now that we have the options parameter ready, we can quite simply instantiate a new <code>VehicleQuotesContext</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp"><span style="color:#888;font-weight:bold">var</span> context = <span style="color:#080;font-weight:bold">new</span> VehicleQuotesContext(options);
</code></pre></div><p>Finally, we call the <code>EnsureCreated</code> method so that the database that we specified in the connection string is actually created.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp">context.Database.EnsureCreated();
</code></pre></div><p>This is the database that our test suite will use.</p>
<h3 id="defining-the-test-database-connection-string-in-the-appsettingsjson-file">Defining the test database connection string in the appsettings.json file</h3>
<p>One quick improvement we can make to the code we’ve written so far is to move the connection string for the test database into a separate configuration file instead of hardcoding it. Let’s do that next.</p>
<p>We need to create a new <code>appsettings.json</code> file under the <code>VehicleQuotes.Tests</code> directory. Then we have to add the connection string like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"ConnectionStrings"</span>: {
<span style="color:#b06;font-weight:bold">"VehicleQuotesContext"</span>: <span style="color:#d20;background-color:#fff0f0">"Host=db;Database=vehicle_quotes_test;Username=vehicle_quotes;Password=password"</span>
}
}
</code></pre></div><p>This is the standard way of configuring connection strings in .NET. Now, to actually fetch this value from within our test suite code, we make the following changes:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">// ...
<span style="color:#000;background-color:#dfd">+using Microsoft.Extensions.Hosting;
</span><span style="color:#000;background-color:#dfd">+using Microsoft.Extensions.Configuration;
</span><span style="color:#000;background-color:#dfd">+using Microsoft.Extensions.DependencyInjection;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.Tests.Services
{
public class QuoteServiceTests
{
private VehicleQuotesContext CreateDbContext()
{
<span style="color:#000;background-color:#dfd">+ var host = Host.CreateDefaultBuilder().Build();
</span><span style="color:#000;background-color:#dfd">+ var config = host.Services.GetRequiredService<IConfiguration>();
</span><span style="color:#000;background-color:#dfd"></span>
var options = new DbContextOptionsBuilder<VehicleQuotesContext>()
<span style="color:#000;background-color:#fdd">- .UseNpgsql("Host=db;Database=vehicle_quotes_test;Username=vehicle_quotes;Password=password")
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ .UseNpgsql(config.GetConnectionString("VehicleQuotesContext"))
</span><span style="color:#000;background-color:#dfd"></span> .UseSnakeCaseNamingConvention()
.Options;
var context = new VehicleQuotesContext(options);
context.Database.EnsureCreated();
return context;
}
// ...
}
}
</code></pre></div><p>First we add a few <code>using</code> statements. <code>Microsoft.Extensions.Hosting</code> gives us access to the <code>Host</code> class, through which we obtain the application’s execution context and, with it, the built-in configuration service. <code>Microsoft.Extensions.Configuration</code> gives us the <code>IConfiguration</code> interface, which is how we reference that configuration service and, through it, the <code>appsettings.json</code> config file. Finally, <code>Microsoft.Extensions.DependencyInjection</code> lets us tap into the built-in dependency injection mechanism; specifically, that namespace is where the <code>GetRequiredService</code> extension method lives.</p>
<p>All this translates into the few code changes that you see in the previous diff: first getting the app’s host, then getting the configuration service, then using that to fetch our connection string.</p>
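<p>As an aside, if spinning up a whole host just to read one settings file feels heavyweight, an <code>IConfiguration</code> can also be built directly with <code>ConfigurationBuilder</code>. The following is a sketch of that alternative, not what the article’s code does, and it assumes the <code>Microsoft.Extensions.Configuration.Json</code> package is available to the test project:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp">using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;

// Build the configuration straight from the JSON file, no host needed.
var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json")
    .Build();

// Same options setup as before, just sourced from the leaner config object.
var options = new DbContextOptionsBuilder<VehicleQuotesContext>()
    .UseNpgsql(config.GetConnectionString("VehicleQuotesContext"))
    .UseSnakeCaseNamingConvention()
    .Options;
</code></pre></div>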
<blockquote>
<p>You can refer to <a href="https://docs.microsoft.com/en-us/dotnet/core/extensions/configuration">the official documentation</a> to learn more about configuration in .NET.</p>
</blockquote>
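<p>One practical gotcha, in case the new settings don’t seem to get picked up: when the test suite runs, <code>appsettings.json</code> is read from the build output directory, not from the source tree. Making sure the file gets copied on build usually fixes that. Assuming the project file is <code>VehicleQuotes.Tests/VehicleQuotes.Tests.csproj</code>, an entry like this does the trick:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-xml" data-lang="xml"><!-- VehicleQuotes.Tests/VehicleQuotes.Tests.csproj -->
<ItemGroup>
  <None Update="appsettings.json" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>
</code></pre></div>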
<h3 id="writing-a-simple-test-case-that-fetches-data">Writing a simple test case that fetches data</h3>
<p>Now that we have a way to access the database from within the test suite, we can finally write an actual test case. Here’s the <code>GetAllQuotesReturnsEmptyWhenThereIsNoDataStored</code> one that I alluded to earlier:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp"><span style="color:#888">// ...
</span><span style="color:#888"></span>
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Tests.Services</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">QuoteServiceTests</span>
{
<span style="color:#888">// ...
</span><span style="color:#888"></span><span style="color:#369">
</span><span style="color:#369"> [Fact]</span>
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> <span style="color:#080;font-weight:bold">void</span> GetAllQuotesReturnsEmptyWhenThereIsNoDataStored()
{
<span style="color:#888">// Given
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> dbContext = CreateDbContext();
<span style="color:#888;font-weight:bold">var</span> service = <span style="color:#080;font-weight:bold">new</span> QuoteService(dbContext, <span style="color:#080;font-weight:bold">null</span>);
<span style="color:#888">// When
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> result = <span style="color:#080;font-weight:bold">await</span> service.GetAllQuotes();
<span style="color:#888">// Then
</span><span style="color:#888"></span> Assert.Empty(result);
}
}
}
</code></pre></div><p>This one is a very simple test. We obtain a new <code>VehicleQuotesContext</code> instance and pass it as a constructor parameter to the component that we want to test: the <code>QuoteService</code>. We then call the <code>GetAllQuotes</code> method and assert that it returns an empty set. The test database was just created, so it contains no data, hence the empty result.</p>
<p>To run this test, we do <code>dotnet test</code>. I personally prefer more verbose output, so I like to use this variant of the command: <code>dotnet test --logger "console;verbosity=detailed"</code>. Here’s what the output looks like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plaintext" data-lang="plaintext">$ dotnet test --logger "console;verbosity=detailed"
Determining projects to restore...
All projects are up-to-date for restore.
VehicleQuotes -> /app/VehicleQuotes/bin/Debug/net5.0/VehicleQuotes.dll
VehicleQuotes.Tests -> /app/VehicleQuotes.Tests/bin/Debug/net5.0/VehicleQuotes.Tests.dll
Test run for /app/VehicleQuotes.Tests/bin/Debug/net5.0/VehicleQuotes.Tests.dll (.NETCoreApp,Version=v5.0)
Microsoft (R) Test Execution Command Line Tool Version 16.11.0
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
/app/VehicleQuotes.Tests/bin/Debug/net5.0/VehicleQuotes.Tests.dll
[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.3+1b45f5407b (64-bit .NET 5.0.12)
[xUnit.net 00:00:01.03] Discovering: VehicleQuotes.Tests
[xUnit.net 00:00:01.06] Discovered: VehicleQuotes.Tests
[xUnit.net 00:00:01.06] Starting: VehicleQuotes.Tests
[xUnit.net 00:00:03.25] Finished: VehicleQuotes.Tests
Passed VehicleQuotes.Tests.Services.QuoteServiceTests.GetAllQuotesReturnsEmptyWhenThereIsNoDataStored [209 ms]
Test Run Successful.
Total tests: 1
Passed: 1
Total time: 3.7762 Seconds
</code></pre></div><h3 id="resetting-the-state-of-the-database-after-each-test">Resetting the state of the database after each test</h3>
<p>Now we need to write a test that actually writes data into the database. However, every test case needs to start with the database in its original state. In other words, the changes that one test case makes to the test database should not be seen by, affect, or be expected by any subsequent test. That keeps our test cases isolated and repeatable. It’s not possible with our current implementation, though.</p>
<blockquote>
<p>You can read more about the FIRST principles of testing <a href="https://medium.com/@tasdikrahman/f-i-r-s-t-principles-of-testing-1a497acda8d6">here</a>.</p>
</blockquote>
<p>Luckily, that’s a problem that’s easily solved with Entity Framework Core. All we need to do is call <code>EnsureDeleted</code> just before the call that ensures the database is created, so that it gets dropped and rebuilt from scratch every time. Here’s what it looks like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> private VehicleQuotesContext CreateDbContext()
{
var host = Host.CreateDefaultBuilder().Build();
var config = host.Services.GetRequiredService<IConfiguration>();
var options = new DbContextOptionsBuilder<VehicleQuotesContext>()
.UseNpgsql(config.GetConnectionString("VehicleQuotesContext"))
.UseSnakeCaseNamingConvention()
.Options;
var context = new VehicleQuotesContext(options);
<span style="color:#000;background-color:#dfd">+ context.Database.EnsureDeleted();
</span><span style="color:#000;background-color:#dfd"></span> context.Database.EnsureCreated();
return context;
}
</code></pre></div><p>And that’s all. Now every test case that calls <code>CreateDbContext</code> in order to obtain a <code>DbContext</code> instance will effectively trigger a database reset. Feel free to <code>dotnet test</code> again to validate that the test suite is still working.</p>
<p>Now, depending on the size of the database, deleting and recreating it for every test case can be quite expensive. Performance is not as big a concern for integration tests as it is for unit tests, though, since integration tests should be fewer in number and run less frequently.</p>
<p>We can make it better though. Instead of deleting and recreating the database before each test case, we’ll take a page out of Ruby on Rails’ book and run each test case within a database transaction which gets rolled back after the test is done. For now though, let’s write another test case: this time, one where we insert new records into the database.</p>
<blockquote>
<p>If you want to hear a more in-depth discussion about automated testing in general, I go into further detail on the topic in this article: <a href="/blog/2020/09/automated-testing-with-symfony/">An introduction to automated testing for web applications with Symfony</a>.</p>
</blockquote>
<h3 id="writing-another-simple-test-case-that-stores-data">Writing another simple test case that stores data</h3>
<p>Now let’s write another test that exercises <code>QuoteService</code>’s <code>GetAllQuotes</code> method. This time though, let’s add a new record to the database before calling it so that the method’s result is not empty. Here’s what the test looks like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp"><span style="color:#888">// ...
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Linq</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Tests.Services</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">QuoteServiceTests</span>
{
<span style="color:#888">// ...
</span><span style="color:#888"></span><span style="color:#369">
</span><span style="color:#369"> [Fact]</span>
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> <span style="color:#080;font-weight:bold">void</span> GetAllQuotesReturnsTheStoredData()
{
<span style="color:#888">// Given
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> dbContext = CreateDbContext();
<span style="color:#888;font-weight:bold">var</span> quote = <span style="color:#080;font-weight:bold">new</span> Quote
{
OfferedQuote = <span style="color:#00d;font-weight:bold">100</span>,
Message = <span style="color:#d20;background-color:#fff0f0">"test_quote_message"</span>,
Year = <span style="color:#d20;background-color:#fff0f0">"2000"</span>,
Make = <span style="color:#d20;background-color:#fff0f0">"Toyota"</span>,
Model = <span style="color:#d20;background-color:#fff0f0">"Corolla"</span>,
BodyTypeID = dbContext.BodyTypes.Single(bt => bt.Name == <span style="color:#d20;background-color:#fff0f0">"Sedan"</span>).ID,
SizeID = dbContext.Sizes.Single(s => s.Name == <span style="color:#d20;background-color:#fff0f0">"Compact"</span>).ID,
ItMoves = <span style="color:#080;font-weight:bold">true</span>,
HasAllWheels = <span style="color:#080;font-weight:bold">true</span>,
HasAlloyWheels = <span style="color:#080;font-weight:bold">true</span>,
HasAllTires = <span style="color:#080;font-weight:bold">true</span>,
HasKey = <span style="color:#080;font-weight:bold">true</span>,
HasTitle = <span style="color:#080;font-weight:bold">true</span>,
RequiresPickup = <span style="color:#080;font-weight:bold">true</span>,
HasEngine = <span style="color:#080;font-weight:bold">true</span>,
HasTransmission = <span style="color:#080;font-weight:bold">true</span>,
HasCompleteInterior = <span style="color:#080;font-weight:bold">true</span>,
CreatedAt = DateTime.Now
};
dbContext.Quotes.Add(quote);
dbContext.SaveChanges();
<span style="color:#888;font-weight:bold">var</span> service = <span style="color:#080;font-weight:bold">new</span> QuoteService(dbContext, <span style="color:#080;font-weight:bold">null</span>);
<span style="color:#888">// When
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> result = <span style="color:#080;font-weight:bold">await</span> service.GetAllQuotes();
<span style="color:#888">// Then
</span><span style="color:#888"></span> Assert.NotEmpty(result);
Assert.Single(result);
Assert.Equal(quote.ID, result.First().ID);
Assert.Equal(quote.OfferedQuote, result.First().OfferedQuote);
Assert.Equal(quote.Message, result.First().Message);
}
}
}
</code></pre></div><p>First we include the <code>VehicleQuotes.Models</code> namespace so that we can use the <code>Quote</code> model class. In our REST API, this is the class that represents the data from the <code>quotes</code> table, which is the main table that <code>GetAllQuotes</code> queries. We also include the <code>System.Linq</code> namespace, which gives us various collection extension methods (like <code>Single</code> and <code>First</code>) that we leverage throughout the test case to query lookup tables and assert on the test results.</p>
<p>Other than that, the test case itself is pretty self-explanatory. We start by obtaining an instance of <code>VehicleQuotesContext</code> via the <code>CreateDbContext</code> method. Remember that this also resets the whole database so that the test case can run over a clean slate. Then, we create a new <code>Quote</code> object and use our <code>VehicleQuotesContext</code> to insert it as a record into the database. We do this so that the later call to <code>QuoteService</code>’s <code>GetAllQuotes</code> method actually finds some data to return this time. Finally, the test case validates that the result contains a record and that its data is identical to what we set manually.</p>
<p>Neat! At this point we have what I think is the bare minimum infrastructure when it comes to serviceable and effective database integration tests, namely, access to a test database. We can take it one step further, though, and make things more reusable and a little bit better performing.</p>
<h3 id="refactoring-into-a-fixture-for-reusability">Refactoring into a fixture for reusability</h3>
<p>We can use the test fixture functionality offered by xUnit.net in order to make the database interactivity aspect of our test suite into a reusable component. That way, if we had other test classes focused on other components that interact with the database, we could just plug that code in. We can define a fixture by creating a new file called, for example, <code>VehicleQuotes.Tests/Fixtures/DatabaseFixture.cs</code> with these contents:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp"><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.EntityFrameworkCore</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.Extensions.Hosting</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.Extensions.Configuration</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.Extensions.DependencyInjection</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Tests.Fixtures</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">DatabaseFixture</span> : IDisposable
{
<span style="color:#080;font-weight:bold">public</span> VehicleQuotesContext DbContext { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">private</span> <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> DatabaseFixture()
{
DbContext = CreateDbContext();
}
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">void</span> Dispose()
{
DbContext.Dispose();
}
<span style="color:#080;font-weight:bold">private</span> VehicleQuotesContext CreateDbContext()
{
<span style="color:#888;font-weight:bold">var</span> host = Host.CreateDefaultBuilder().Build();
<span style="color:#888;font-weight:bold">var</span> config = host.Services.GetRequiredService<IConfiguration>();
<span style="color:#888;font-weight:bold">var</span> options = <span style="color:#080;font-weight:bold">new</span> DbContextOptionsBuilder<VehicleQuotesContext>()
.UseNpgsql(config.GetConnectionString(<span style="color:#d20;background-color:#fff0f0">"VehicleQuotesContext"</span>))
.UseSnakeCaseNamingConvention()
.Options;
<span style="color:#888;font-weight:bold">var</span> context = <span style="color:#080;font-weight:bold">new</span> VehicleQuotesContext(options);
context.Database.EnsureDeleted();
context.Database.EnsureCreated();
<span style="color:#080;font-weight:bold">return</span> context;
}
}
}
</code></pre></div><p>All this class does is define the <code>CreateDbContext</code> method that we’re already familiar with but puts it in a nice reusable package. Upon instantiation, as seen in the constructor, it stores a reference to the <code>VehicleQuotesContext</code> in its <code>DbContext</code> property.</p>
<p>With that, our <code>QuoteServiceTests</code> test class can use it if we make the following changes to it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> using System;
using Xunit;
<span style="color:#000;background-color:#fdd">-using Microsoft.EntityFrameworkCore;
</span><span style="color:#000;background-color:#fdd"></span> using VehicleQuotes.Services;
<span style="color:#000;background-color:#fdd">-using Microsoft.Extensions.Hosting;
</span><span style="color:#000;background-color:#fdd">-using Microsoft.Extensions.Configuration;
</span><span style="color:#000;background-color:#fdd">-using Microsoft.Extensions.DependencyInjection;
</span><span style="color:#000;background-color:#fdd"></span> using VehicleQuotes.Models;
using System.Linq;
<span style="color:#000;background-color:#dfd">+using VehicleQuotes.Tests.Fixtures;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.Tests.Services
{
<span style="color:#000;background-color:#fdd">- public class QuoteServiceTests
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ public class QuoteServiceTests : IClassFixture<DatabaseFixture>
</span><span style="color:#000;background-color:#dfd"></span> {
<span style="color:#000;background-color:#dfd">+ private VehicleQuotesContext dbContext;
</span><span style="color:#000;background-color:#dfd"></span>
<span style="color:#000;background-color:#dfd">+ public QuoteServiceTests(DatabaseFixture fixture)
</span><span style="color:#000;background-color:#dfd">+ {
</span><span style="color:#000;background-color:#dfd">+ dbContext = fixture.DbContext;
</span><span style="color:#000;background-color:#dfd">+ }
</span><span style="color:#000;background-color:#dfd"></span>
<span style="color:#000;background-color:#fdd">- private VehicleQuotesContext CreateDbContext()
</span><span style="color:#000;background-color:#fdd">- {
</span><span style="color:#000;background-color:#fdd">- var host = Host.CreateDefaultBuilder().Build();
</span><span style="color:#000;background-color:#fdd">- var config = host.Services.GetRequiredService<IConfiguration>();
</span><span style="color:#000;background-color:#fdd"></span>
<span style="color:#000;background-color:#fdd">- var options = new DbContextOptionsBuilder<VehicleQuotesContext>()
</span><span style="color:#000;background-color:#fdd">- .UseNpgsql(config.GetConnectionString("VehicleQuotesContext"))
</span><span style="color:#000;background-color:#fdd">- .UseSnakeCaseNamingConvention()
</span><span style="color:#000;background-color:#fdd">- .Options;
</span><span style="color:#000;background-color:#fdd"></span>
<span style="color:#000;background-color:#fdd">- var context = new VehicleQuotesContext(options);
</span><span style="color:#000;background-color:#fdd"></span>
<span style="color:#000;background-color:#fdd">- context.Database.EnsureDeleted();
</span><span style="color:#000;background-color:#fdd">- context.Database.EnsureCreated();
</span><span style="color:#000;background-color:#fdd"></span>
<span style="color:#000;background-color:#fdd">- return context;
</span><span style="color:#000;background-color:#fdd">- }
</span><span style="color:#000;background-color:#fdd"></span>
[Fact]
public async void GetAllQuotesReturnsEmptyWhenThereIsNoDataStored()
{
// Given
<span style="color:#000;background-color:#fdd">- var dbContext = CreateDbContext();
</span><span style="color:#000;background-color:#fdd"></span>
// ...
}
[Fact]
public async void GetAllQuotesReturnsTheStoredData()
{
// Given
<span style="color:#000;background-color:#fdd">- var dbContext = CreateDbContext();
</span><span style="color:#000;background-color:#fdd"></span>
// ...
}
}
}
</code></pre></div><p>Here we’ve updated the <code>QuoteServiceTests</code> class definition so that it inherits from <a href="https://github.com/xunit/xunit/blob/main/src/xunit.v3.core/IClassFixture.cs"><code>IClassFixture<DatabaseFixture></code></a>. This is how we tell xUnit.net that our tests use the new fixture that we created. Next, we define a constructor that receives a <code>DatabaseFixture</code> object as a parameter. That’s how xUnit.net gives our test class access to the capabilities provided by the fixture. In this case, we take the fixture’s <code>DbContext</code> instance and store it for later use in all of the test cases that need database interaction. We also removed the <code>CreateDbContext</code> method, since it’s now defined within the fixture, along with a few <code>using</code> statements that became unnecessary.</p>
<p>One important aspect to note about this fixture is that it is initialized once per test class, not once per test case (and since we only have one test class using it, that means once for the whole test suite run). Specifically, the code within the <code>DatabaseFixture</code>’s constructor gets executed once, before all of the test cases. Similarly, the code in <code>DatabaseFixture</code>’s <code>Dispose</code> method gets executed once at the end, after all test cases have been run.</p>
<p>This means that our test database deletion and recreation step now happens only once for the entire test suite. This is not good with our current implementation because that means that individual test cases no longer run with a fresh, empty database. This can be good for performance though, as long as we update our test cases to run within database transactions. Let’s do just that.</p>
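<p>Before moving on, a side note: xUnit.net also offers collection fixtures for sharing a single fixture instance across multiple test classes. We don’t need that here since we only have one test class, but if more database-backed test classes showed up later, a sketch (with an invented collection name and test class name) would look like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-csharp" data-lang="csharp">using Xunit;
using VehicleQuotes.Tests.Fixtures;

namespace VehicleQuotes.Tests
{
    // Empty marker class that binds the fixture to a named collection.
    [CollectionDefinition("Database collection")]
    public class DatabaseCollection : ICollectionFixture<DatabaseFixture> { }

    // Every test class tagged with the collection shares one DatabaseFixture.
    [Collection("Database collection")]
    public class SomeOtherDatabaseTests
    {
        private readonly VehicleQuotesContext dbContext;

        public SomeOtherDatabaseTests(DatabaseFixture fixture)
        {
            dbContext = fixture.DbContext;
        }
    }
}
</code></pre></div>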
<h3 id="using-transactions-to-reset-the-state-of-the-database">Using transactions to reset the state of the database</h3>
<p>Here’s how we update our test class so that each test case runs within a transaction:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> // ...
namespace VehicleQuotes.Tests.Services
{
<span style="color:#000;background-color:#fdd">- public class QuoteServiceTests : IClassFixture<DatabaseFixture>
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ public class QuoteServiceTests : IClassFixture<DatabaseFixture>, IDisposable
</span><span style="color:#000;background-color:#dfd"></span> {
private VehicleQuotesContext dbContext;
public QuoteServiceTests(DatabaseFixture fixture)
{
dbContext = fixture.DbContext;
<span style="color:#000;background-color:#dfd">+ dbContext.Database.BeginTransaction();
</span><span style="color:#000;background-color:#dfd"></span> }
<span style="color:#000;background-color:#dfd">+ public void Dispose()
</span><span style="color:#000;background-color:#dfd">+ {
</span><span style="color:#000;background-color:#dfd">+ dbContext.Database.RollbackTransaction();
</span><span style="color:#000;background-color:#dfd">+ }
</span><span style="color:#000;background-color:#dfd"></span>
        // ...
}
}
</code></pre></div><p>The first thing to note here is that we added a call to <code>BeginTransaction</code> in the test class constructor. xUnit.net creates a new instance of the test class for each test case. This means that this constructor is run before each and every test case. We use that opportunity to begin a database transaction.</p>
<p>The other interesting point is that we’ve updated the class to implement the <a href="https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/implementing-dispose"><code>IDisposable</code> interface’s <code>Dispose</code> method</a>. xUnit.net will run this code after each test case, so that’s where we roll back the transaction.</p>
<p>Put those two together and we’ve updated our test suite so that every test case runs within the context of its own database transaction. Try it out with <code>dotnet test</code> and see what happens.</p>
<blockquote>
<p>To learn more about database transactions with Entity Framework Core, you can look at <a href="https://docs.microsoft.com/en-us/ef/core/saving/transactions">the official docs</a>.</p>
<p>You can learn more about xUnit.net’s test class fixtures in <a href="https://github.com/xunit/samples.xunit/tree/main/ClassFixtureExample">the samples repository</a>.</p>
</blockquote>
<p>Alright, that’s all for now. It is great to see that implementing automated database integration tests is actually fairly straightforward using .NET, xUnit.net, and Entity Framework. Even if it isn’t quite as easy as it is in Rails, it is perfectly doable.</p>
Kubernetes 101: Deploying a web application and databasehttps://www.endpointdev.com/blog/2022/01/kubernetes-101/2022-01-08T00:00:00+00:00Kevin Campusano
<p><img src="/blog/2022/01/kubernetes-101/2021-11-26_092823-sm.webp" alt="Groups of birds on a telephone pole"></p>
<!-- Photo by Seth Jensen -->
<p>The DevOps world seems to have been taken over by <a href="https://kubernetes.io/">Kubernetes</a> during the past few years. And rightfully so, I believe, as it is a great piece of software that promises and delivers when it comes to managing deployments of complex systems.</p>
<p>Kubernetes is hard though. But that’s all good: I’m not a DevOps engineer, so as a software developer, I shouldn’t have to care about any of that. Or should I? Well… yes. I learned that very well after being thrown head first into a project that heavily involves Kubernetes, without knowing the first thing about it.</p>
<p>Even if I wasn’t in the role of a DevOps engineer, as a software developer, I had to work with it in order to set up dev environments, troubleshoot system issues, and make sound design and architectural decisions.</p>
<p>After a healthy amount of struggle, I eventually gained some understanding of the subject. In this blog post I’ll share what I learned. My hope is to put out there the things I wish I had known when I first encountered and had to work with Kubernetes.</p>
<p>So, I’m going to introduce the basic concepts and building blocks of Kubernetes. Then, I’m going to walk you through the process of containerizing a sample application, developing all the Kubernetes configuration files necessary for deploying it into a Kubernetes cluster, and actually deploying it into a local development cluster. We will end up with an application and its associated database running completely on and being managed by Kubernetes.</p>
<p>In short: If you know nothing about Kubernetes, and are interested in learning, read on. This post is for you.</p>
<h3 id="what-is-kubernetes">What is Kubernetes?</h3>
<p>Simply put, <a href="https://kubernetes.io/">Kubernetes</a> (or K8s) is software for managing <a href="https://en.wikipedia.org/wiki/Computer_cluster">computer clusters</a>. That is, groups of computers that are working together in order to process some workload or offer a service. Kubernetes does this by leveraging <a href="https://www.docker.com/resources/what-container">application containers</a>, helping you automate the deployment, scaling, and management of containerized applications.</p>
<p>Once you’ve designed an application’s complete execution environment and associated components, using Kubernetes you can specify all that declaratively via configuration files. Then, you’ll be able to deploy that application with a single command. Once deployed, Kubernetes will give you tools to check on the health of your application, recover from issues, keep it running, scale it, etc.</p>
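<p>Every object we define this way follows the same basic YAML shape. As a rough sketch (the values below are placeholders, not from any real deployment):</p>

```yaml
# The general shape shared by Kubernetes object definitions (placeholder values).
apiVersion: v1      # version of the API that defines this object kind
kind: Pod           # the type of object being declared
metadata:
  name: my-object   # the name we'll use to refer to it
  labels:           # optional key/value tags for categorizing it
    app: my-app
spec:
  # The desired state of the object; the fields here
  # depend entirely on the kind being defined.
```

<p>We’ll fill in real versions of each of these fields as we go.</p>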
<p>There are a few basic concepts that we need to be familiar with in order to effectively work with Kubernetes. I think the <a href="https://kubernetes.io/docs/concepts/">official documentation</a> does a great job in explaining this, but I’ll try to summarize.</p>
<h4 id="nodes-pods-and-containers">Nodes, pods, and containers</h4>
<p>First up are <a href="https://kubernetes.io/docs/concepts/containers/">containers</a>. If you’re interested in Kubernetes, chances are that you’ve already been exposed to some sort of container technology like <a href="https://www.docker.com/">Docker</a>. If not, no worries. For our purposes here, we can think of a container as an isolated process with its own resources and file system in which an application can run.</p>
<p>A container has all the software dependencies that an application needs to run, including the application itself. From the application’s perspective, the container is its execution environment: the “machine” in which it’s running. In more practical terms, a container is a form of packaging, delivering, and executing an application. What’s the advantage? Instead of installing the application and its dependencies directly into the machine that’s going to run it, having it containerized allows for a container runtime (like Docker) to just run it as a self-contained unit. This makes it possible for the application to run anywhere that has the container runtime installed, with minimal configuration.</p>
<p>Something very closely related to containers is the concept of <a href="https://kubernetes.io/docs/concepts/containers/images/">images</a>. You can think of images as the blueprint for containers. An image is the spec, and the container is the instance that’s actually running.</p>
<p>This is how Kubernetes runs the applications deployed into it: via containers. In other words, for Kubernetes to be able to run an application, it needs to be delivered within a container.</p>
<p>Next is the concept of a <a href="https://kubernetes.io/docs/concepts/architecture/">node</a>. This is very straightforward and not even specific to Kubernetes. A node is a computer within the cluster. That’s it. Like I said before, Kubernetes is built to manage computer clusters. A node is just one computer, either virtual or physical, within that cluster.</p>
<p>Then there are <a href="https://kubernetes.io/docs/concepts/workloads/pods/">pods</a>. Pods are the main executable units in Kubernetes. When we deploy an application or service into a Kubernetes cluster, it runs within a pod. Since Kubernetes works with containerized applications, it is the pods that take care of running those containers within them.</p>
<p>These three work very closely together within Kubernetes. To summarize: containers run within pods which in turn exist within nodes in the cluster.</p>
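<p>That relationship shows up directly in how pods are specified. We’ll usually create pods indirectly (more on that later), but a minimal, hypothetical bare pod definition makes the “containers run within pods” idea concrete:</p>

```yaml
# A minimal pod running a single NGINX container.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:               # the containers this pod will run
    - name: nginx
      image: nginx:1.14.2   # the image the container is created from
```
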
<p>There are other key components to talk about like <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">deployments</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/">services</a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">replica sets</a>, and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/">persistent volumes</a>. But I think that’s enough theory for now. We’ll learn more about all of these as we get our hands dirty working through our example, discovering and discussing them organically as we go.</p>
<h3 id="installing-and-setting-up-kubernetes">Installing and setting up Kubernetes</h3>
<p>The first thing we need is a Kubernetes environment. There are many Kubernetes implementations out there. <a href="https://cloud.google.com/kubernetes-engine">Google</a>, <a href="https://azure.microsoft.com/en-us/services/kubernetes-service/">Microsoft</a>, and <a href="https://aws.amazon.com/eks">Amazon</a> offer Kubernetes solutions on their respective cloud platforms, for example. There are also implementations that one can install and run on their own, like <a href="https://kind.sigs.k8s.io/docs/">kind</a>, <a href="https://minikube.sigs.k8s.io/docs/">minikube</a>, and <a href="https://microk8s.io/">MicroK8s</a>. We are going to use MicroK8s for our demo, for no particular reason other than “this is the one I know”.</p>
<p>Once installed, MicroK8s sets up a whole Kubernetes cluster, with your machine as its one and only node.</p>
<h4 id="installing-microk8s">Installing MicroK8s</h4>
<p>So, if you’re on Ubuntu and have <a href="https://snapcraft.io/docs/installing-snapd">snapd</a>, installing MicroK8s is easy. The <a href="https://microk8s.io/docs">official documentation</a> explains it best. You install it with a command like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ sudo snap install microk8s --classic --channel=1.21
</code></pre></div><p>MicroK8s creates a user group; it’s best to add your user account to it so you can execute commands that would otherwise require admin privileges. You can do so with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ sudo usermod -a -G microk8s $USER
$ sudo chown -f -R $USER ~/.kube
</code></pre></div><p>You may need to log out and back in (or run <code>newgrp microk8s</code>) for the new group membership to take effect. With that, our very own Kubernetes cluster, courtesy of MicroK8s, should be ready to go. Check its status with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ microk8s status --wait-ready
</code></pre></div><p>You should see a “MicroK8s is running” message along with some specifications of your cluster, including the available add-ons and which ones are enabled or disabled.</p>
<p>You can also shut down your cluster anytime with <code>microk8s stop</code>. Use <code>microk8s start</code> to bring it back up.</p>
<h4 id="introducing-kubectl">Introducing kubectl</h4>
<p>MicroK8s also comes with <a href="https://kubectl.docs.kubernetes.io/guides/introduction/kubectl/">kubectl</a>, our gateway into Kubernetes: the command line tool that we use to interact with the cluster. By default, MicroK8s makes it so we call it as <code>microk8s kubectl ...</code>. That is, namespaced. This is useful if you have multiple Kubernetes implementations running at the same time, or another, separate kubectl installation. I don’t, so I like to create an alias that lets me call it without the <code>microk8s</code> prefix. You can do that like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ sudo snap alias microk8s.kubectl kubectl
</code></pre></div><p>Now that all that’s done, we can start talking to our Kubernetes cluster. For example, we can ask it to list the nodes in the cluster with this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get nodes
</code></pre></div><p>That will result in something like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">NAME     STATUS   ROLES    AGE   VERSION
pop-os   Ready    <none>   67d   v1.21.4-3+e5758f73ed2a04
</code></pre></div><p>The only node in the cluster is your own machine. In my case, my machine is called “pop-os” so that’s what shows up. You can get more information out of this command by using <code>kubectl get nodes -o wide</code>.</p>
<h4 id="installing-add-ons">Installing add-ons</h4>
<p>MicroK8s supports many add-ons that we can use to enhance our Kubernetes installation. We are going to need a few of them so let’s install them now. They are:</p>
<ol>
<li>The <a href="https://microk8s.io/docs/addon-dashboard">dashboard</a>, which gives us a nice web GUI which serves as a window into our cluster. In there we can see everything that’s running, read logs, run commands, etc.</li>
<li><a href="https://microk8s.io/docs/addon-dns">dns</a>, which sets up DNS within the cluster. In general it’s a good idea to enable this one because other add-ons rely on it.</li>
<li>storage, which allows the cluster to access the host machine’s disk for storage. The application that we will deploy needs a persistent database, so we need this plugin to make it happen.</li>
<li>registry, which sets up a <a href="https://kubernetes.io/docs/concepts/containers/images/">container image</a> registry that Kubernetes can access. Kubernetes runs containerized applications, and containers are built from images. So, having this add-on allows us to define an image for our application and make it available to Kubernetes.</li>
</ol>
<p>To install these, just run the following commands:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ microk8s enable dashboard
$ microk8s enable dns
$ microk8s enable storage
$ microk8s enable registry
</code></pre></div><p>Those are all the add-ons that we’ll use.</p>
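<p>The registry add-on will matter later, once we build our own images. By default, MicroK8s exposes that registry on port 32000 of localhost, so images pushed to it are referenced with a <code>localhost:32000/</code> prefix. As a hypothetical excerpt (the image name here is made up):</p>

```yaml
# Excerpt from a pod/deployment spec: referencing an image
# stored in MicroK8s' built-in registry.
containers:
  - name: my-app
    image: localhost:32000/my-app:latest   # <registry host>:<port>/<image>:<tag>
```
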
<h4 id="introducing-the-dashboard">Introducing the Dashboard</h4>
<p>The dashboard is one we can play with right now. In order to access it, first run this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ microk8s dashboard-proxy
</code></pre></div><p>That will start up a proxy into the dashboard. The command will give you a URL and login token that you can use to access the dashboard. It results in an output like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">Checking if Dashboard is running.
Dashboard will be available at https://127.0.0.1:10443
Use the following token to login:
<YOUR LOGIN TOKEN>
</code></pre></div><p>Now you can navigate to that URL in your browser and you’ll find a screen like this:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-login.webp" alt="Dashboard login"></p>
<p>Make sure the “Token” option is selected, then take the login token generated by the <code>microk8s dashboard-proxy</code> command from before and paste it into the field on the page. Click the “Sign In” button and you’ll see the dashboard, which gives you access to many aspects of your cluster. It should look like this:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-home.webp" alt="Dashboard home"></p>
<p>Feel free to play around with it a little bit. You don’t have to understand everything yet. As we work through our example, we’ll see how the dashboard and the other add-ons come into play.</p>
<blockquote>
<p>There’s also a very useful command line tool called <a href="https://k9scli.io/">K9s</a>, which helps in interacting with our cluster. We will not be discussing it further in this article but feel free to explore it if you need or want a command line alternative to the built-in dashboard.</p>
</blockquote>
<h3 id="deploying-applications-into-a-kubernetes-cluster">Deploying applications into a Kubernetes cluster</h3>
<p>With all that setup out of the way, we can start using our K8s cluster for what it was designed for: running applications.</p>
<h4 id="deployments">Deployments</h4>
<p>Pods are very much the stars of the show when it comes to Kubernetes. However, most of the time we don’t create them directly. We usually do so through “<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">deployments</a>”. Deployments are a more abstract concept in Kubernetes. They basically control pods and make sure they behave as specified. You can think of them as wrappers for pods which make our lives easier than if we had to handle pods directly. Let’s go ahead and create a deployment so things will be clearer.</p>
<p>In Kubernetes, there are various ways of managing objects like deployments. For this post, I’m going to focus exclusively on the configuration-file-driven declarative approach, as that’s the one best suited for real-world scenarios.</p>
<blockquote>
<p>You can learn more about the different ways of interacting with Kubernetes objects in <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/">the official documentation</a>.</p>
</blockquote>
<p>So, simply put, if we want to create a deployment, then we need to author a file that defines it. A simple deployment specification looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># nginx-deployment.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>apps/v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Deployment<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>nginx-deployment<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">replicas</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">3</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">matchLabels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">containers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>nginx:1.14.2<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">containerPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">80</span><span style="color:#bbb">
</span></code></pre></div><blockquote>
<p>This example is taken straight from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">the official documentation</a>.</p>
</blockquote>
<p>Don’t worry if most of that doesn’t make sense at this point. I’ll explain it in detail later. First, let’s actually do something with it.</p>
<p>Save that in a new file. You can call it <code>nginx-deployment.yaml</code>. Once that’s done, you can actually create the deployment (and its associated objects) in your K8s cluster with this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -f nginx-deployment.yaml
</code></pre></div><p>Which should result in the following message:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">deployment.apps/nginx-deployment created
</code></pre></div><p>And that’s it for creating deployments! (Or any other type of object in Kubernetes for that matter.) We define the object in a file and then invoke <code>kubectl</code>’s <code>apply</code> command. Pretty simple.</p>
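<p>One nicety of this declarative workflow: a single YAML file can define multiple objects, separated by <code>---</code> lines, and one <code>kubectl apply -f</code> creates (or updates) them all. As a rough sketch (the specs are elided here; only the shape matters):</p>

```yaml
# Two objects in one file; a single `kubectl apply -f` handles both.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
# ... rest of the deployment spec ...
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
# ... rest of the service spec ...
```
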
<blockquote>
<p>If you want to delete the deployment, then this command will do it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx-deployment" deleted
</code></pre></div></blockquote>
<h4 id="using-kubectl-to-explore-a-deployment">Using kubectl to explore a deployment</h4>
<p>Now, let’s inspect our cluster to see what this command has done for us.</p>
<p>First, we can ask it directly for the deployment with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get deployments
</code></pre></div><p>Which outputs:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           2m54s
</code></pre></div><p>Now you can see that the deployment that we just created is right there with the name that we gave it.</p>
<p>As I said earlier, deployments are used to manage pods, and that’s what the <code>READY</code>, <code>UP-TO-DATE</code>, and <code>AVAILABLE</code> columns refer to with those values of <code>3</code>. This deployment has three pods because, in our YAML file, we asked for three replicas with the <code>replicas: 3</code> line. Each “replica” is a pod. For our example, that means we will have three instances of <a href="https://www.nginx.com/">NGINX</a> running side by side.</p>
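<p>Scaling the deployment is then just another configuration change: edit the <code>replicas</code> value and apply the same file again. For example:</p>

```yaml
# nginx-deployment.yaml (excerpt): scale from 3 to 5 pods, then run
# `kubectl apply -f nginx-deployment.yaml` again to make it happen.
spec:
  replicas: 5
```
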
<p>We can see the pods that have been created for us with this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get pods
</code></pre></div><p>Which gives us something like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-fs5rq   1/1     Running   0          55m
nginx-deployment-66b6c48dd5-xmnl2   1/1     Running   0          55m
nginx-deployment-66b6c48dd5-sfzxm   1/1     Running   0          55m
</code></pre></div><p>The exact names will vary, as the IDs are auto-generated. But as you can see, this command gives us some basic information about our pods. Remember that pods are the ones that actually run our workloads via containers. The <code>READY</code> field is particularly interesting in this sense, because it tells us how many containers are actually running in the pod versus how many are supposed to run. So, <code>1/1</code> means that the pod has one container ready out of one expected. In other words, the pod is fully ready.</p>
<h4 id="using-the-dashboard-to-explore-a-deployment">Using the dashboard to explore a deployment</h4>
<p>Like I said before, the dashboard offers us a window into our cluster. Let’s see how we can use it to see the information that we just saw via <code>kubectl</code>. Navigate into the dashboard via your browser and you should now see that some new things have appeared:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-home-with-deployment.webp" alt="Dashboard home with deployment"></p>
<p>We now have new “CPU Usage” and “Memory Usage” sections that give us insight into the utilization of our machine’s resources.</p>
<p>There’s also “Workload status” that has some nice graphs giving us a glance at the status of our deployments, pods, and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">replica sets</a>.</p>
<blockquote>
<p>Don’t worry too much about replica sets right now, as we seldom interact with them directly. Suffice it to say, replica sets are objects that deployments rely on to make sure that the number of specified replica pods is maintained. As always, there’s more info in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">the official documentation</a>.</p>
</blockquote>
<p>Scroll down a little bit more and you’ll find the “Deployments” and “Pods” sections, which contain the information that we’ve already seen via <code>kubectl</code> before.</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-home-deployments-and-pods.webp" alt="Dashboard home: deployments and pods"></p>
<p>Feel free to click around and explore the capabilities of the dashboard.</p>
<h4 id="dissecting-the-deployment-configuration-file">Dissecting the deployment configuration file</h4>
<p>Now that we have a basic understanding of deployments and pods and how to create them, let’s look more closely into the configuration file that defines it. This is what we had:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># nginx-deployment.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>apps/v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Deployment<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>nginx-deployment<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">replicas</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">3</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">matchLabels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">containers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>nginx:1.14.2<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">containerPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">80</span><span style="color:#bbb">
</span></code></pre></div><p>This example is very simple, but it touches on the key aspects of deployment configuration. We will be building more complex deployments as we work through this article, but this is a great start. Let’s start at the top:</p>
<ul>
<li><code>apiVersion</code>: Under the hood, a Kubernetes cluster exposes its functionality via a REST API. We seldom interact with this API directly because we have <code>kubectl</code> that takes care of it for us. <code>kubectl</code> takes our commands, translates them into HTTP requests that the K8s REST API can understand, sends them, and gives us back the results. So, this <code>apiVersion</code> field specifies which version of the K8s REST API we are expecting to talk to.</li>
<li><code>kind</code>: The type of object that the configuration file defines. All objects in Kubernetes can be managed via YAML configuration files and <code>kubectl apply</code>, so this field specifies which one we are managing at any given time.</li>
<li><code>metadata.name</code>: Quite simply, the name of the object. It’s how we and Kubernetes refer to it.</li>
<li><code>metadata.labels</code>: These help us further categorize cluster objects. On their own, labels have no functional effect, but other objects can select on them, which, as we’ll see below, makes them quite important.</li>
<li><code>spec</code>: This contains the actual functional specification for the behavior of the deployment. More details below.</li>
<li><code>spec.replicas</code>: The number of replica pods that the deployment should create. We already talked a bit about this before.</li>
<li><code>spec.selector.matchLabels</code>: This is one case where labels are actually important. Remember that when we create a deployment, a replica set and pods are created with it. Within the K8s cluster, each of them is its own individual object, though. This field is the mechanism that K8s uses to associate a given deployment with its replica set and pods. In practice, that means that whatever labels are in this field need to match the labels in <code>spec.template.metadata.labels</code>. More on that one below.</li>
<li><code>spec.template</code>: Specifies the configuration of the pods that will be part of the deployment.</li>
<li><code>spec.template.metadata.labels</code>: Very similar to <code>metadata.labels</code>, except that those labels are added to the deployment while these are added to the pods. Notably, these labels are key for the deployment to know which pods it should care about (as explained above under <code>spec.selector.matchLabels</code>).</li>
<li><code>spec.template.spec</code>: This section specifies the actual functional configuration of the pods.</li>
<li><code>spec.template.spec.containers</code>: This section specifies the configuration of the containers that will be running inside the pods. It’s an array so there can be many. In our example we have only one.</li>
<li><code>spec.template.spec.containers[0].name</code>: The name of the container.</li>
<li><code>spec.template.spec.containers[0].image</code>: The image that will be used to build the container.</li>
<li><code>spec.template.spec.containers[0].ports[0].containerPort</code>: A port through which the container will accept traffic from the outside. In this case, <code>80</code>.</li>
</ul>
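<p>The label matching described above is worth seeing side by side. In this excerpt from our deployment, the selector and the pod template carry the same label, which is exactly what ties the deployment to its pods:</p>

```yaml
# Excerpt: these two label sets must match for the deployment to
# recognize the pods created from its template as its own.
spec:
  selector:
    matchLabels:
      app: nginx        # "I manage pods labeled app=nginx..."
  template:
    metadata:
      labels:
        app: nginx      # "...and this is the label my pods get."
```
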
<blockquote>
<p>You can find a detailed description of all the fields supported by deployment configuration files <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#deployment-v1-apps">in the official API reference documentation</a>. And much more!</p>
</blockquote>
<h4 id="connecting-to-the-containers-in-the-pods">Connecting to the containers in the pods</h4>
<p>Kubernetes allows us to connect to the containers running inside a pod. This is pretty easy to do with <code>kubectl</code>. All we need to know is the name of the pod and the container that we want to connect to. If the pod is running only one container (like our NGINX one does) then we don’t need the container name. We can find out the names of our pods with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-85nwq   1/1     Running   0          25s
nginx-deployment-66b6c48dd5-x5b4x   1/1     Running   0          25s
nginx-deployment-66b6c48dd5-wvkhc   1/1     Running   0          25s
</code></pre></div><p>Pick one of those and we can open a bash session in it with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl exec -it nginx-deployment-66b6c48dd5-85nwq -- bash
</code></pre></div><p>Which results in a prompt like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">root@nginx-deployment-66b6c48dd5-85nwq:/#
</code></pre></div><p>We’re now connected to the container in one of our NGINX pods. There isn’t a lot to do with this right now, but feel free to explore it. It’s got its own processes and file system which are isolated from the other replica pods and your actual machine.</p>
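<p>For instance, a few quick commands inside that session make the isolation visible. This is just a sketch; the exact tools available depend on what the image ships with:</p>

```shell
# The container's hostname is the pod's auto-generated name:
hostname

# Its file system is its own; this is the NGINX image's config, not your host's:
ls /etc/nginx

# Type "exit" or press Ctrl+D to leave the session.
exit
```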
<p>We can also connect to containers via the dashboard. Go back to the dashboard in your browser, log in again if the session expired, and scroll down to the “Pods” section. Each pod in the list has an action menu with an “Exec” command. See it here:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-pod-exec.webp" alt="Dashboard pod exec"></p>
<p>Click it and you’ll be taken to a screen with a console just like the one we obtained via <code>kubectl exec</code>:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-pod-bash.webp" alt="Dashboard pod bash"></p>
<p>The dashboard is quite useful, right?</p>
<h4 id="services">Services</h4>
<p>So far, we’ve learned quite a bit about deployments. How to specify and create them, how to explore them via command line and the dashboard, how to interact with the pods, etc. We haven’t seen a very important part yet, though: actually accessing the application that has been deployed. That’s where <a href="https://kubernetes.io/docs/concepts/services-networking/service/">services</a> come in. We use services to expose an application running in a set of pods to the world outside the cluster.</p>
<p>Here’s what a configuration file for a service that exposes access to our NGINX deployment could look like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># nginx-service.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Service<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>nginx-service<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>NodePort<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>nginx<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"http"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">port</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">80</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">targetPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">80</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">nodePort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">30080</span><span style="color:#bbb">
</span></code></pre></div><p>Same as with the deployment’s configuration file, this one also has a <code>kind</code> field that specifies what it is; and a name given to it via the <code>metadata.name</code> field. The <code>spec</code> section is where things get interesting.</p>
<ul>
<li><code>spec.type</code> specifies, well… the type of the service. Kubernetes supports many <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types">types of services</a>. For now, we want a <code>NodePort</code>. This type of service exposes the application on a static port (given by <code>spec.ports[0].nodePort</code>) on every node in the cluster. In our setup, we only have one node, which is our own machine.</li>
<li><code>spec.ports</code> defines which ports of the pods’ containers the service will expose.</li>
<li><code>spec.ports[0].name</code>: The name of the port. To be used elsewhere to reference the specific port.</li>
<li><code>spec.ports[0].port</code>: The port that the service itself will listen on inside the cluster.</li>
<li><code>spec.ports[0].targetPort</code>: The port on the pods’ containers that the service will forward traffic to.</li>
<li><code>spec.ports[0].nodePort</code>: The port that the service will expose in all the nodes of the cluster.</li>
</ul>
<p>Same as with deployments, we can create such a service with the <code>kubectl apply</code> command. If you save the contents from the YAML above into a <code>nginx-service.yaml</code> file, you can run the following to create it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -f nginx-service.yaml
</code></pre></div><p>And to inspect it and validate that it was in fact created:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get services
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.152.183.1    <none>        443/TCP        68d
nginx-service   NodePort    10.152.183.22   <none>        80:30080/TCP   27s
</code></pre></div><p>The dashboard also has a section for services. It looks like this:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-services.webp" alt="Dashboard: services"></p>
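<p>For more detail than <code>kubectl get services</code> gives, <code>kubectl describe</code> shows how the ports line up and which pod IPs the <code>app: nginx</code> selector actually matched. The exact output varies by cluster, but the sketch is:</p>

```shell
# Shows Type, ClusterIP, Port, TargetPort, NodePort, and the
# Endpoints (pod IPs) that the service's selector resolved to.
kubectl describe service nginx-service
```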
<h4 id="accessing-an-application-via-a-service">Accessing an application via a service</h4>
<p>We can access our service in a few different ways. We can use its “cluster IP” which we obtain from the output of the <code>kubectl get services</code> command. As given by the example above, that would be <code>10.152.183.22</code> in my case. Browsing to that IP gives us the familiar NGINX default welcome page:</p>
<p><img src="/blog/2022/01/kubernetes-101/nginx-via-cluster-ip.webp" alt="NGINX via Cluster IP"></p>
<p>Another way is by using the “NodePort”. Remember that the “NodePort” specifies the port in which the service will be available on every node of the cluster. With our current MicroK8s setup, our own machine is a node in the cluster, so we can also access the NGINX that’s running in our Kubernetes cluster using <code>localhost:30080</code>. <code>30080</code> is given by the <code>spec.ports[0].nodePort</code> field in the service configuration file from before. Try it out:</p>
<p><img src="/blog/2022/01/kubernetes-101/nginx-via-nodeport.webp" alt="NGINX via NodePort"></p>
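<p>If you prefer the terminal, the same checks can be done with <code>curl</code>. A sketch; the cluster IP below is the one from my machine, so substitute the one that <code>kubectl get services</code> reported for you:</p>

```shell
# Via the NodePort, exposed on every node (here, our own machine):
curl -s http://localhost:30080 | head -n 4

# Via the service's cluster IP, on the service port (80):
curl -s http://10.152.183.22 | head -n 4
```

<p>Both should return the HTML of the same NGINX default welcome page, load balanced across the three replica pods.</p>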
<p>How cool is that? We have identical, replicated NGINX instances running in a Kubernetes cluster installed locally on our machine.</p>
<h3 id="deploying-our-own-custom-application">Deploying our own custom application</h3>
<p>Alright, by deploying NGINX, we’ve learned a lot about nodes, pods, deployments, services, and how they all work together to run and serve an application from a Kubernetes cluster. Now, let’s take all that knowledge and try to do the same for a completely custom application of our own.</p>
<h4 id="what-are-we-building">What are we building?</h4>
<p>The application that we are going to deploy into our cluster is a simple one with only two components: a REST API written with <a href="https://dotnet.microsoft.com/">.NET 5</a> and a <a href="https://www.postgresql.org/">Postgres</a> database. You can find the source code <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api">on GitHub</a>. It’s an API for supporting a hypothetical front end application that captures used vehicle information and calculates the vehicles’ value in dollars.</p>
<blockquote>
<p>If you’re interested in learning more about the process of actually writing that app, it’s all documented in another blog post: <a href="/blog/2021/07/dotnet-5-web-api/">Building REST APIs with .NET 5, ASP.NET Core, and PostgreSQL</a>.</p>
</blockquote>
<blockquote>
<p>If you’re following along, now would be a good time to download the source code of the web application that we’re going to be playing with. You can find it on <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api">GitHub</a>. From now on, we’ll use that as the root directory of all the files we create and modify.</p>
</blockquote>
<blockquote>
<p>Also, be sure to delete or put aside the <code>k8s</code> directory. We’ll be building that throughout the rest of this post.</p>
</blockquote>
<h3 id="deploying-the-database">Deploying the database</h3>
<p>Let’s begin with the Postgres database. Similar to what we did before, we start by setting up a deployment with one pod and one container. We can do so with a deployment configuration YAML file like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># db-deployment.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>apps/v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Deployment<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-db<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">matchLabels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-db<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">replicas</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">1</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-db<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">containers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-db<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>postgres:13<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">containerPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5432</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"postgres"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">env</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_DB<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>vehicle_quotes<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_USER<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>vehicle_quotes<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_PASSWORD<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>password<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">limits</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">memory</span>:<span style="color:#bbb"> </span>4Gi<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">cpu</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"2"</span><span style="color:#bbb">
</span></code></pre></div><p>This deployment configuration YAML file is similar to the one we used for NGINX before, but it introduces a few new elements:</p>
<ul>
<li><code>spec.template.spec.containers[0].ports[0].name</code>: We can give specific names to ports which we can reference later, elsewhere in the K8s configurations, which is what this field is for.</li>
<li><code>spec.template.spec.containers[0].env</code>: This is a list of environment variables that will be defined in the container inside the pod. In this case, we’ve specified the variables that are necessary to configure the Postgres instance that will be running. We’re using <a href="https://hub.docker.com/_/postgres">the official Postgres image from Docker Hub</a>, and it calls for these variables. Their purpose is straightforward: they specify the database name, username, and password.</li>
<li><code>spec.template.spec.containers[0].resources</code>: This field defines the hardware resources that the container needs in order to function. We can specify upper limits with <code>limits</code> and lower ones with <code>requests</code>. You can learn more about resource management in <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">the official documentation</a>. In our case, we’ve kept it simple and used <code>limits</code> to prevent the container from using more than 4Gi of memory and 2 CPU cores.</li>
</ul>
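<p>For illustration, if we had also wanted to guarantee the container a minimum of resources, a <code>requests</code> section would sit right next to <code>limits</code>. This fragment is only a hypothetical variation on the deployment above, not something the rest of the post uses:</p>

```yaml
# Hypothetical variation: reserve a minimum (requests) for scheduling
# purposes while still capping usage (limits) at runtime.
resources:
  requests:
    memory: 1Gi
    cpu: "500m"
  limits:
    memory: 4Gi
    cpu: "2"
```

<p>Kubernetes uses <code>requests</code> to decide which node has enough free capacity to schedule the pod on, while <code>limits</code> is enforced while the container runs.</p>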
<p>Now, let’s save that YAML into a new file called <code>db-deployment.yaml</code> and run the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -f db-deployment.yaml
</code></pre></div><p>Which should output:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">deployment.apps/vehicle-quotes-db created
</code></pre></div><p>After a few seconds, you should be able to see the new deployment and pod via <code>kubectl</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get deployment -A
NAMESPACE   NAME                READY   UP-TO-DATE   AVAILABLE   AGE
...
default     vehicle-quotes-db   1/1     1            1           9m20s
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get pods -A
NAMESPACE   NAME                                READY   STATUS    RESTARTS   AGE
...
default     vehicle-quotes-db-5fb576778-gx7j6   1/1     Running   0          9m22s
</code></pre></div><p>Remember you can also see them in the dashboard:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-db-deployment-and-pod.webp" alt="Dashboard DB deployment and pod"></p>
<h4 id="connecting-to-the-database">Connecting to the database</h4>
<p>Let’s try connecting to the Postgres instance that we just deployed. Take note of the pod’s name and try:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl exec -it <DB_POD_NAME> -- bash
</code></pre></div><p>You’ll get a bash session on the container that’s running the database. For me, given the pod’s auto-generated name, it looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">root@vehicle-quotes-db-5fb576778-gx7j6:/#
</code></pre></div><p>From here, you can connect to the database using the <a href="https://www.postgresql.org/docs/13/app-psql.html">psql</a> command line client. Remember that we told the Postgres instance to create a <code>vehicle_quotes</code> user. We set it up via the container environment variables on our deployment configuration. As a result, we can do <code>psql -U vehicle_quotes</code> to connect to the database. Put together, it all looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl exec -it vehicle-quotes-db-5fb576778-gx7j6 -- bash
root@vehicle-quotes-db-5fb576778-gx7j6:/# psql -U vehicle_quotes
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
vehicle_quotes=# \l
                                    List of databases
      Name      |     Owner      | Encoding |  Collate   |   Ctype    |         Access privileges
----------------+----------------+----------+------------+------------+-----------------------------------
 postgres       | vehicle_quotes | UTF8     | en_US.utf8 | en_US.utf8 |
 template0      | vehicle_quotes | UTF8     | en_US.utf8 | en_US.utf8 | =c/vehicle_quotes                +
                |                |          |            |            | vehicle_quotes=CTc/vehicle_quotes
 template1      | vehicle_quotes | UTF8     | en_US.utf8 | en_US.utf8 | =c/vehicle_quotes                +
                |                |          |            |            | vehicle_quotes=CTc/vehicle_quotes
 vehicle_quotes | vehicle_quotes | UTF8     | en_US.utf8 | en_US.utf8 |
(4 rows)
</code></pre></div><p>Pretty cool, don’t you think? We have a database running on our cluster now with minimal effort. There’s a slight problem though…</p>
<h4 id="persistent-volumes-and-claims">Persistent volumes and claims</h4>
<p>The problem with our database is that any changes are lost if the pod or container shuts down or restarts for any reason. This is because all the database files live inside the container’s file system, so when the container is gone, the data is gone with it.</p>
<p>In Kubernetes, pods are supposed to be treated as ephemeral entities. The idea is that pods can easily be brought down and replaced by new ones without users or clients even noticing. That’s all Kubernetes working as expected. To play well with this behavior, pods should be as stateless as possible.</p>
<p>However, a database is, by definition, not stateless. So what we need to do to solve this problem is have some available disk space from outside the cluster that can be used by our database to store its files. Something persistent that won’t go away if the pod or container goes away. That’s where <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/">persistent volumes and persistent volume claims</a> come in.</p>
<p>We will use a persistent volume (PV) to define a directory in our host machine that we will allow our Postgres container to use to store data files. Then, a persistent volume claim (PVC) is used to define a “request” for some of that available disk space that a specific container can make. In short, a persistent volume says to K8s “here’s some storage that the cluster can use” and a persistent volume claim says “here’s a portion of that storage that’s available for containers to use”.</p>
<h4 id="configuration-files-for-the-pv-and-pvc">Configuration files for the PV and PVC</h4>
<p>Start by tearing down our currently broken Postgres deployment:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl delete -f db-deployment.yaml
</code></pre></div><p>Now let’s add two new YAML configuration files. One is for the persistent volume:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># db-persistent-volume.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>PersistentVolume<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-postgres-data-persisent-volume<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>local<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">claimRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">namespace</span>:<span style="color:#bbb"> </span>default<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-postgres-data-persisent-volume-claim<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storageClassName</span>:<span style="color:#bbb"> </span>manual<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">capacity</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storage</span>:<span style="color:#bbb"> </span>5Gi<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">accessModes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- ReadWriteOnce<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">hostPath</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">path</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/home/kevin/projects/vehicle-quotes-postgres-data"</span><span style="color:#bbb">
</span></code></pre></div><p>In this config file, we already know about the <code>kind</code> and <code>metadata</code> fields. A few of the other elements are interesting though:</p>
<ul>
<li><code>spec.claimRef</code>: Contains identifying information about the claim that’s associated with the PV. Used to bind the PV to a specific PVC. Notice how it matches the name defined in the PVC config file below.</li>
<li><code>spec.capacity.storage</code>: Specifies the size of the persistent volume.</li>
<li><code>spec.accessModes</code>: Defines how the PV can be accessed. In this case, we’re using <code>ReadWriteOnce</code>, which means the volume can be mounted as read-write by a single node in the cluster at a time.</li>
<li><code>spec.hostPath.path</code>: Specifies the directory in the host machine’s file system where the PV will be mounted. Simply put, the containers in the cluster will have access to the specific directory defined here. I’ve used <code>/home/kevin/projects/vehicle-quotes-postgres-data</code> because that makes sense on my own machine. If you’re following along, make sure to set it to something that makes sense in your environment.</li>
</ul>
<blockquote>
<p><code>hostPath</code> is just one type of persistent volume which works well for development deployments. Managed Kubernetes implementations like the ones from Google or Amazon have their own types which are more appropriate for production.</p>
</blockquote>
<p>We also need another config file for the persistent volume claim:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># db-persistent-volume-claim.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>PersistentVolumeClaim<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-postgres-data-persisent-volume-claim<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumeName</span>:<span style="color:#bbb"> </span>vehicle-quotes-postgres-data-persisent-volume<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storageClassName</span>:<span style="color:#bbb"> </span>manual<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">accessModes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- ReadWriteOnce<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">requests</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storage</span>:<span style="color:#bbb"> </span>5Gi<span style="color:#bbb">
</span></code></pre></div><p>Like I said, PVCs are essentially usage requests for PVs. So, the config file is simple in that it’s mostly specified to match the PV.</p>
<ul>
<li><code>spec.volumeName</code>: The name of the PV that this PVC is going to access. Notice how it matches the name that we defined in the PV’s config file.</li>
<li><code>spec.resources.requests</code>: Defines how much space this PVC requests from the PV. In this case, we’re just requesting all the space that the PV has available to it, as given by its config file: <code>5Gi</code>.</li>
</ul>
<h4 id="configuring-the-deployment-to-use-the-pvc">Configuring the deployment to use the PVC</h4>
<p>After saving those files, all that’s left is to update the database deployment configuration to use the PVC. Here’s what the updated config file would look like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> apiVersion: apps/v1
kind: Deployment
metadata:
name: vehicle-quotes-db
spec:
selector:
matchLabels:
app: vehicle-quotes-db
replicas: 1
template:
metadata:
labels:
app: vehicle-quotes-db
spec:
containers:
- name: vehicle-quotes-db
image: postgres:13
ports:
- containerPort: 5432
name: "postgres"
<span style="color:#000;background-color:#dfd">+ volumeMounts:
</span><span style="color:#000;background-color:#dfd">+ - mountPath: "/var/lib/postgresql/data"
</span><span style="color:#000;background-color:#dfd">+ name: vehicle-quotes-postgres-data-storage
</span><span style="color:#000;background-color:#dfd"></span> env:
- name: POSTGRES_DB
value: vehicle_quotes
- name: POSTGRES_USER
value: vehicle_quotes
- name: POSTGRES_PASSWORD
value: password
resources:
limits:
memory: 4Gi
cpu: "2"
<span style="color:#000;background-color:#dfd">+ volumes:
</span><span style="color:#000;background-color:#dfd">+ - name: vehicle-quotes-postgres-data-storage
</span><span style="color:#000;background-color:#dfd">+ persistentVolumeClaim:
</span><span style="color:#000;background-color:#dfd">+ claimName: vehicle-quotes-postgres-data-persisent-volume-claim
</span></code></pre></div><p>First, notice the <code>volumes</code> section at the bottom of the file. Here’s where we define the volume that will be available to the container, give it a name, and specify which PVC it will use. The <code>spec.template.spec.volumes[0].persistentVolumeClaim.claimName</code> needs to match the name of the PVC that we defined in <code>db-persistent-volume-claim.yaml</code>.</p>
<p>Then, up in the <code>containers</code> section, we define a <code>volumeMounts</code> element. We use that to specify which directory within the container will map to our PV. In this case, we’ve set the container’s <code>/var/lib/postgresql/data</code> directory to use the volume that we defined at the bottom of the file. That volume is backed by our persistent volume claim, which is in turn backed by our persistent volume. The significance of the <code>/var/lib/postgresql/data</code> directory is that this is where Postgres stores database files by default.</p>
<p>In summary: We created a persistent volume that defines some disk space in our machine that’s available to the cluster; then we defined a persistent volume claim that represents a request of some of that space that a container can have access to; after that we defined a volume within our pod configuration in our deployment to point to that persistent volume claim; and finally we defined a volume mount in our container that uses that volume to store the Postgres database files.</p>
<p>By setting it up this way, we’ve made it so that regardless of how many Postgres pods come and go, the database files will always be persisted, because the files now live outside of the container. They are stored on our host machine instead.</p>
<blockquote>
<p>There’s another limitation that’s important to note. With the approach we’ve discussed, it’s not possible to deploy multiple replicas of Postgres that work in tandem and operate on the same data. Even though the data files live outside of the containers and are persisted that way, only a single Postgres instance can safely run against them at any given time.</p>
<p>In production, the high availability problem is better solved by leveraging the features provided by the database software itself. <a href="https://www.postgresql.org/docs/9.1/different-replication-solutions.html">Postgres offers various options in that area</a>. Or, if you are deploying to the cloud, the best strategy may be to use a relational database service managed by your cloud provider. Examples are <a href="https://aws.amazon.com/rds/">Amazon’s RDS</a> and <a href="https://azure.microsoft.com/en-us/products/azure-sql/database/">Microsoft’s Azure SQL Database</a>.</p>
</blockquote>
<h4 id="applying-changes">Applying changes</h4>
<p>Now let’s see it in action. Run the following three commands to create the objects:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -f db-persistent-volume.yaml
$ kubectl apply -f db-persistent-volume-claim.yaml
$ kubectl apply -f db-deployment.yaml
</code></pre></div><p>After a while, they will show up in the dashboard. You already know how to look for deployments and pods. For persistent volumes, click the “Persistent Volumes” option under the “Cluster” section in the sidebar:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-persistent-volume.webp" alt="Dashboard persistent volume"></p>
<p>Persistent volume claims can be found in the “Persistent Volume Claims” option under the “Config and Storage” section in the sidebar:</p>
<p><img src="/blog/2022/01/kubernetes-101/dashboard-persistent-volume-claim.webp" alt="Dashboard persistent volume claim"></p>
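<p>If you prefer the command line to the dashboard, the same objects can be inspected with <code>kubectl</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl get pv
$ kubectl get pvc
$ kubectl get deployments,pods
</code></pre></div><p>The <code>STATUS</code> column for the persistent volume and its claim should read <code>Bound</code> once the two have found each other.</p>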
<p>Now, try connecting to the database (using <code>kubectl exec -it &lt;VEHICLE_QUOTES_DB_POD_NAME&gt; -- bash</code> and then <code>psql -U vehicle_quotes</code>) and creating some tables. Something simple like this would work:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sql" data-lang="sql"><span style="color:#080;font-weight:bold">CREATE</span><span style="color:#bbb"> </span><span style="color:#080;font-weight:bold">TABLE</span><span style="color:#bbb"> </span>test<span style="color:#bbb"> </span>(test_field<span style="color:#bbb"> </span><span style="color:#038">varchar</span>);<span style="color:#bbb">
</span></code></pre></div><p>Now, close <code>psql</code> and the <code>bash</code> in the pod and delete the objects:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl delete -f db-deployment.yaml
$ kubectl delete -f db-persistent-volume-claim.yaml
$ kubectl delete -f db-persistent-volume.yaml
</code></pre></div><p>Create them again:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -f db-persistent-volume.yaml
$ kubectl apply -f db-persistent-volume-claim.yaml
$ kubectl apply -f db-deployment.yaml
</code></pre></div><p>Connect to the database again and you should see that the table is still there:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">vehicle_quotes=# \c vehicle_quotes
You are now connected to database "vehicle_quotes" as user "vehicle_quotes".
vehicle_quotes=# \dt
          List of relations
 Schema | Name | Type  |     Owner
--------+------+-------+----------------
 public | test | table | vehicle_quotes
(1 row)
</code></pre></div><p>That’s just what we wanted: the database is persisting independently of what happens to the pods and containers.</p>
<h4 id="exposing-the-database-as-a-service">Exposing the database as a service</h4>
<p>Lastly, we need to expose the database as a service so that the rest of the cluster can access it without having to use explicit pod names. We don’t need this for our testing, but we will need it later, when we deploy our web app, so that it can reach the database. As you’ve seen, services are easy to create. Here’s the YAML config file:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># db-service.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Service<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-db-service<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>NodePort<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-db<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"postgres"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">protocol</span>:<span style="color:#bbb"> </span>TCP<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">port</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5432</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">targetPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5432</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">nodePort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">30432</span><span style="color:#bbb">
</span></code></pre></div><p>Save that into a new <code>db-service.yaml</code> file and don’t forget to <code>kubectl apply -f db-service.yaml</code>.</p>
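<p>Since the service is of type <code>NodePort</code>, the database should now also be reachable from the host machine through port <code>30432</code>. If you happen to have the <code>psql</code> client installed locally, an optional sanity check could look like this (assuming MicroK8s is running on this same machine):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ psql -h localhost -p 30432 -U vehicle_quotes vehicle_quotes
</code></pre></div>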
<h3 id="deploying-the-web-application">Deploying the web application</h3>
<p>Now that we’ve got the database sorted out, let’s turn our attention to the app itself. As you’ve seen, Kubernetes runs apps as containers. That means that we need images to build those containers. A custom web application is no exception. We need to build a custom image that contains our application so that it can be deployed into Kubernetes.</p>
<h4 id="building-the-web-application-image">Building the web application image</h4>
<p>The first step for building a container image is writing a <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a>. Since our application is a <a href="https://dotnet.microsoft.com/apps/aspnet/apis">Web API</a> built using .NET 5, I’m going to use a slightly modified version of the Dockerfile used by <a href="https://code.visualstudio.com/">Visual Studio Code</a>’s <a href="https://github.com/microsoft/vscode-remote-try-dotnetcore/blob/main/.devcontainer/Dockerfile">development container demo for .NET</a>. These development containers are excellent for, well… development. You can see the original in the link above, but here’s mine:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-dockerfile" data-lang="dockerfile"><span style="color:#888"># [Choice] .NET version: 5.0, 3.1, 2.1</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">ARG</span> <span style="color:#369">VARIANT</span>=<span style="color:#d20;background-color:#fff0f0">"5.0"</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">FROM</span><span style="color:#d20;background-color:#fff0f0"> mcr.microsoft.com/vscode/devcontainers/dotnet:0-${VARIANT}</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#888"># [Option] Install Node.js</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">ARG</span> <span style="color:#369">INSTALL_NODE</span>=<span style="color:#d20;background-color:#fff0f0">"false"</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#888"># [Option] Install Azure CLI</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">ARG</span> <span style="color:#369">INSTALL_AZURE_CLI</span>=<span style="color:#d20;background-color:#fff0f0">"false"</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#888"># Install additional OS packages.</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> apt-get update && <span style="color:#038">export</span> <span style="color:#369">DEBIAN_FRONTEND</span>=noninteractive <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> && apt-get -y install --no-install-recommends postgresql-client-common postgresql-client<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#888"># Run the remaining commands as the "vscode" user</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">USER</span><span style="color:#d20;background-color:#fff0f0"> vscode</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#888"># Install EF and code generator development tools</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> dotnet tool install --global dotnet-ef<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> dotnet tool install --global dotnet-aspnet-codegenerator<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> <span style="color:#038">echo</span> <span style="color:#d20;background-color:#fff0f0">'export PATH="$PATH:/home/vscode/.dotnet/tools"'</span> >> /home/vscode/.bashrc<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">WORKDIR</span><span style="color:#d20;background-color:#fff0f0"> /app</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#888"># Prevent the container from closing automatically</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">ENTRYPOINT</span> [<span style="color:#d20;background-color:#fff0f0">"tail"</span>, <span style="color:#d20;background-color:#fff0f0">"-f"</span>, <span style="color:#d20;background-color:#fff0f0">"/dev/null"</span>]<span style="color:#a61717;background-color:#e3d2d2">
</span></code></pre></div><blockquote>
<p>There are <a href="https://github.com/microsoft/vscode-dev-containers/tree/main/containers">many other</a> development containers for other languages and frameworks. Take a look at the <a href="https://github.com/microsoft/vscode-dev-containers">microsoft/vscode-dev-containers GitHub repo</a> to learn more.</p>
</blockquote>
<p>An interesting thing about this Dockerfile is that we install the <code>psql</code> command line client so that we can connect to our Postgres database from within the web application container. The rest is stuff specific to .NET and the particular image we’re basing this Dockerfile on, so don’t sweat it too much.</p>
<p>If you’ve downloaded the source code, this Dockerfile should already be there as <code>Dockerfile.dev</code>.</p>
<h4 id="making-the-image-accessible-to-kubernetes">Making the image accessible to Kubernetes</h4>
<p>Now that we have a Dockerfile, we can use it to build an image that Kubernetes can run containers from. For Kubernetes to see that image, we need to tag it in a specific way and push it to a registry that’s accessible to the cluster. Remember how we ran <code>microk8s enable registry</code> to install the registry add-on when we were setting up MicroK8s? That will pay off now, as that’s the registry to which we’ll push our image.</p>
<p>First, we build the image:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker build . -f Dockerfile.dev -t localhost:32000/vehicle-quotes-dev:registry
</code></pre></div><p>That will take some time to download and set up everything. Once that’s done, we push the image to the registry:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker push localhost:32000/vehicle-quotes-dev:registry
</code></pre></div><p>That will also take a little while.</p>
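<p>If you want to confirm that the push worked, the registry add-on speaks the standard Docker Registry HTTP API, so you can list its contents with <code>curl</code>. You should see output along these lines (the list will include any other images you’ve pushed):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ curl http://localhost:32000/v2/_catalog
{"repositories":["vehicle-quotes-dev"]}
</code></pre></div>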
<h4 id="deploying-the-web-application-1">Deploying the web application</h4>
<p>The next step is to create a deployment for the web app. Like usual, we start with a deployment YAML configuration file. Let’s call it <code>web-deployment.yaml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># web-deployment.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>apps/v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Deployment<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">matchLabels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">replicas</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">1</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">containers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>localhost:32000/vehicle-quotes-dev:registry<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">containerPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5000</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"http"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">containerPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5001</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"https"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumeMounts</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">mountPath</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/app"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-storage<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">env</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_DB<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>vehicle_quotes<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_USER<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>vehicle_quotes<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_PASSWORD<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>password<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>CUSTOMCONNSTR_VehicleQuotesContext<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>Host=$(VEHICLE_QUOTES_DB_SERVICE_SERVICE_HOST);Database=$(POSTGRES_DB);Username=$(POSTGRES_USER);Password=$(POSTGRES_PASSWORD)<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">limits</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">memory</span>:<span style="color:#bbb"> </span>2Gi<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">cpu</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"1"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-storage<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">persistentVolumeClaim</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">claimName</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-persisent-volume-claim<span style="color:#bbb">
</span></code></pre></div><p>This deployment configuration should look very familiar to you by now as it is very similar to the ones we’ve already seen. There are a few notable elements though:</p>
<ul>
<li>Notice how we specified <code>localhost:32000/vehicle-quotes-dev:registry</code> as the container image. This is exactly the name under which we built and pushed the image to the registry earlier.</li>
<li>In the environment variables section, the one named <code>CUSTOMCONNSTR_VehicleQuotesContext</code> is interesting for a couple of reasons:
<ul>
<li>First, the value is a Postgres connection string being built off of other environment variables using the following format: <code>$(ENV_VAR_NAME)</code>. That’s a neat feature of Kubernetes config files that allows us to reference variables to build other ones.</li>
<li>Second, the <code>VEHICLE_QUOTES_DB_SERVICE_SERVICE_HOST</code> environment variable used within that connection string is not defined anywhere in our configuration files. That’s an automatic environment variable that Kubernetes injects on all containers when there are services available. In this case, it contains the hostname of the <code>vehicle-quotes-db-service</code> that we created a few sections ago. The automatic injection of this <code>*_SERVICE_HOST</code> variable always happens as long as the service is already created by the time that the pod gets created. We have already created the service so we should be fine using the variable here. As usual, there’s more info in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables">official documentation</a>.</li>
</ul>
</li>
</ul>
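<p>This variable-expansion feature isn’t limited to connection strings: any environment variable defined earlier in the same container can be referenced with the <code>$(VAR)</code> syntax. Here’s a minimal, hypothetical illustration (the variable names are made up for the example):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"># Hypothetical example of dependent environment variables
env:
- name: GREETING
  value: Hello
- name: MESSAGE
  value: $(GREETING), Kubernetes!  # expands to "Hello, Kubernetes!"
</code></pre></div><p>Note that order matters: a variable can only reference variables defined before it in the list.</p>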
<p>As you may have noticed, this deployment has a persistent volume. That’s to store the application’s source code. Or, more accurately, to make the source code, which lives on our machine, available to the container. This is a development setup after all, so we want to be able to edit the code from the comfort of our own file system, and have the container inside the cluster be aware of that.</p>
<p>Anyway, let’s create the associated persistent volume and persistent volume claim. Here’s the PV (save it as <code>web-persistent-volume.yaml</code>):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># web-persistent-volume.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>PersistentVolume<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-persisent-volume<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>local<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">claimRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">namespace</span>:<span style="color:#bbb"> </span>default<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-persisent-volume-claim<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storageClassName</span>:<span style="color:#bbb"> </span>manual<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">capacity</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storage</span>:<span style="color:#bbb"> </span>1Gi<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">accessModes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- ReadWriteOnce<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">hostPath</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">path</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/home/kevin/projects/vehicle-quotes"</span><span style="color:#bbb">
</span></code></pre></div><p>And here’s the PVC (save it as <code>web-persistent-volume-claim.yaml</code>):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># web-persistent-volume-claim.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>PersistentVolumeClaim<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-persisent-volume-claim<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumeName</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-persisent-volume<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storageClassName</span>:<span style="color:#bbb"> </span>manual<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">accessModes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- ReadWriteOnce<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">requests</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">storage</span>:<span style="color:#bbb"> </span>1Gi<span style="color:#bbb">
</span></code></pre></div><p>The only notable element here is the PV’s <code>hostPath</code>. I have it pointing to the path where I downloaded the app’s source code from <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api">GitHub</a>. Make sure to do the same on your end.</p>
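<p>For example, assuming you keep your projects under <code>~/projects</code> like I do, cloning the repository into place could look like this (adjust both this path and the <code>hostPath</code> in <code>web-persistent-volume.yaml</code> to match your own setup):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ git clone https://github.com/megakevin/end-point-blog-dotnet-5-web-api.git ~/projects/vehicle-quotes
</code></pre></div>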
<p>Finally, tie it all up with a service that will expose the development build of our REST API. Here’s the config file:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># web-service.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Service<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-web-service<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>NodePort<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"http"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">protocol</span>:<span style="color:#bbb"> </span>TCP<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">port</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5000</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">targetPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5000</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">nodePort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">30000</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"https"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">protocol</span>:<span style="color:#bbb"> </span>TCP<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">port</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5001</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">targetPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5001</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">nodePort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">30001</span><span style="color:#bbb">
</span><span style="color:#bbb">
</span></code></pre></div><p>This should be pretty self-explanatory at this point. In this case, we expose two ports: one for HTTP and another for HTTPS. Our .NET 5 Web API serves both, which is why we specify them here. This configuration says that the service should expose port <code>30000</code> and send traffic that comes into that port from the outside world to port <code>5000</code> on the container. Likewise, outside traffic coming to port <code>30001</code> will be sent to port <code>5001</code> in the container.</p>
<p>Save that file as <code>web-service.yaml</code> and we’re ready to apply the changes:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -f web-persistent-volume.yaml
$ kubectl apply -f web-persistent-volume-claim.yaml
$ kubectl apply -f web-deployment.yaml
$ kubectl apply -f web-service.yaml
</code></pre></div><p>Feel free to explore the dashboard’s “Deployments”, “Pods”, “Services”, “Persistent Volumes”, and “Persistent Volume Claims” sections to see the fruits of our labor.</p>
<h4 id="starting-the-application">Starting the application</h4>
<p>Let’s now do some final setup and start up our application. Start by connecting to the web application pod:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl exec -it vehicle-quotes-web-86cbc65c7f-5cpg8 -- bash
</code></pre></div><blockquote>
<p>Remember that the pod name will be different for you, so copy it from the dashboard or <code>kubectl get pods -A</code>.</p>
</blockquote>
<p>You’ll get a prompt like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">vscode ➜ /app (master ✗) $
</code></pre></div><p>Try <code>ls</code> to see all of the app’s source code files courtesy of the PV that we set up before:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">vscode ➜ /app (master ✗) $ ls
Controllers Dockerfile.prod Models README.md Startup.cs appsettings.Development.json k8s queries.sql
Data K8S_README.md Program.cs ResourceModels Validations appsettings.json k8s_wip
Dockerfile.dev Migrations Properties Services VehicleQuotes.csproj database.dbml obj
</code></pre></div><p>Now it’s just a few .NET commands to get the app up and running. First, download packages and compile:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ dotnet build
</code></pre></div><p>That will take a while. Once done, let’s build the database schema:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ dotnet ef database update
</code></pre></div><p>And finally, run the development web server:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ dotnet run
</code></pre></div><blockquote>
<p>If you get the error message “System.InvalidOperationException: Unable to configure HTTPS endpoint.” while trying <code>dotnet run</code>, follow the error message’s instructions and run <code>dotnet dev-certs https --trust</code>. This will generate a development certificate so that the dev server can serve HTTPS.</p>
</blockquote>
<p>As a result, you should see this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">vscode ➜ /app (master ✗) $ dotnet run
Building...
info: Microsoft.Hosting.Lifetime[0]
Now listening on: https://0.0.0.0:5001
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://0.0.0.0:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
</code></pre></div><p>This output indicates that the application is up and running. Now, navigate to <code>http://localhost:30000</code> in your browser of choice and you should see our REST API’s Swagger UI:</p>
<p><img src="/blog/2022/01/kubernetes-101/swagger.webp" alt="Swagger!"></p>
<p>Notice that <code>30000</code> is the port we specified in the <code>web-service.yaml</code>’s <code>nodePort</code> for the <code>http</code> port. That’s the port that the service exposes to the world outside the cluster. Notice also how our .NET web app’s development server listens to traffic coming from ports <code>5000</code> and <code>5001</code> for HTTP and HTTPS respectively. That’s why we configured <code>web-service.yaml</code> like we did.</p>
<p>Outstanding! All our hard work has paid off and we have a full-fledged web application running in our Kubernetes cluster. This is quite a momentous occasion. We’ve built a custom image that can be used to create containers to run a .NET web application, pushed that image into our local registry so that K8s could use it, and deployed a functioning application. As a cherry on top, we made it so the source code is super easy to edit, as it lives within our own machine’s file system and the container in the cluster accesses it directly from there. Quite an accomplishment.</p>
<p>Now it’s time to go the extra mile and organize things a bit. Let’s talk about Kustomize next.</p>
<h3 id="putting-it-all-together-with-kustomize">Putting it all together with Kustomize</h3>
<p><a href="https://kubectl.docs.kubernetes.io/guides/introduction/kustomize/">Kustomize</a> is a tool that helps us improve Kubernetes’ declarative object management with configuration files (which is what we’ve been doing throughout this post). Kustomize has useful features that help with better organizing configuration files, managing configuration variables, and support for deployment variants (for things like dev vs. test vs. prod environments). Let’s explore what Kustomize has to offer.</p>
<p>First, be sure to tear down all the objects that we have created so far as we will be replacing them later once we have a setup with Kustomize. This will work for that:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl delete -f db-service.yaml
$ kubectl delete -f db-deployment.yaml
$ kubectl delete -f db-persistent-volume-claim.yaml
$ kubectl delete -f db-persistent-volume.yaml
$ kubectl delete -f web-service.yaml
$ kubectl delete -f web-deployment.yaml
$ kubectl delete -f web-persistent-volume-claim.yaml
$ kubectl delete -f web-persistent-volume.yaml
</code></pre></div><p>Next, let’s reorganize our <code>db-*</code> and <code>web-*</code> YAML files like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">k8s
├── db
│ ├── db-deployment.yaml
│ ├── db-persistent-volume-claim.yaml
│ ├── db-persistent-volume.yaml
│ └── db-service.yaml
└── web
├── web-deployment.yaml
├── web-persistent-volume-claim.yaml
├── web-persistent-volume.yaml
└── web-service.yaml
</code></pre></div><p>As you can see, we’ve put them all inside a new <code>k8s</code> directory and further divided them into <code>db</code> and <code>web</code> sub-directories: the <code>web-*</code> files went into <code>web</code> and the <code>db-*</code> files into <code>db</code>. At this point, the prefixes on the file names are redundant, so we can remove them. After all, we know which component each file belongs to from the name of its sub-directory.</p>
<blockquote>
<p>There’s already a <code>k8s</code> directory in the repo. Feel free to get rid of it as we will build it back up from scratch now.</p>
</blockquote>
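<p>If you’d rather script the move and rename than do it by hand, a small shell loop can handle both steps at once. This is just a sketch; run it from the directory that holds the <code>db-*.yaml</code> and <code>web-*.yaml</code> files and adjust to taste:</p>

```shell
# Create the new directory layout and move each file into place,
# dropping the now-redundant "db-"/"web-" prefix along the way.
mkdir -p k8s/db k8s/web
for f in db-*.yaml web-*.yaml; do
  [ -e "$f" ] || continue     # skip globs that matched nothing
  dir="k8s/${f%%-*}"          # "db-service.yaml"  -> "k8s/db"
  mv "$f" "$dir/${f#*-}"      # "db-service.yaml"  -> "service.yaml"
done
```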
<p>So it should end up looking like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">k8s
├── db
│ ├── deployment.yaml
│ ├── persistent-volume-claim.yaml
│ ├── persistent-volume.yaml
│ └── service.yaml
└── web
├── deployment.yaml
├── persistent-volume-claim.yaml
├── persistent-volume.yaml
└── service.yaml
</code></pre></div><blockquote>
<p>kubectl’s <code>apply</code> and <code>delete</code> commands support directories as well, not only individual files. That means that, at this point, to build up all of our objects you could simply do <code>kubectl apply -f k8s/db</code> and <code>kubectl apply -f k8s/web</code>. This is much better than what we’ve been doing until now where we had to specify every single file. Still, with Kustomize, we can do better than that…</p>
</blockquote>
<h4 id="the-kustomization-file">The Kustomization file</h4>
<p>We can bring everything together with a <code>kustomization.yaml</code> file. For our setup, here’s what it could look like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># k8s/kustomization.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Kustomization<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/persistent-volume.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/persistent-volume-claim.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/service.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/deployment.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/persistent-volume.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/persistent-volume-claim.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/service.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/deployment.yaml<span style="color:#bbb">
</span></code></pre></div><p>This first iteration of the Kustomization file is simple. It just lists all of our other config files in the <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/resource/"><code>resources</code></a> section in their relative locations. Save that as <code>k8s/kustomization.yaml</code> and you can apply it with the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -k k8s
</code></pre></div><p>The <code>-k</code> option tells <code>kubectl apply</code> to look for a Kustomization within the given directory and use that to build the cluster objects. After running it, you should see familiar output:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">service/vehicle-quotes-db-service created
service/vehicle-quotes-web-service created
persistentvolume/vehicle-quotes-postgres-data-persisent-volume created
persistentvolume/vehicle-quotes-source-code-persisent-volume created
persistentvolumeclaim/vehicle-quotes-postgres-data-persisent-volume-claim created
persistentvolumeclaim/vehicle-quotes-source-code-persisent-volume-claim created
deployment.apps/vehicle-quotes-db created
deployment.apps/vehicle-quotes-web created
</code></pre></div><p>Feel free to explore the dashboard or <code>kubectl get</code> commands to see the objects that got created. You can connect to pods, run the app, query the database, everything. Just like we did before. The only difference is that now everything is neatly organized and there’s a single file that serves as bootstrap for the whole setup. All thanks to Kustomize and the <code>-k</code> option.</p>
<p><code>kubectl delete -k k8s</code> can be used to tear everything down.</p>
<h4 id="defining-reusable-configuration-values-with-configmaps">Defining reusable configuration values with ConfigMaps</h4>
<p>Another useful feature of Kustomize is <a href="https://kubernetes.io/docs/concepts/configuration/configmap/">ConfigMaps</a>. These allow us to specify configuration variables in the Kustomization and use them throughout the rest of the resource config files. Good candidates to demonstrate their use are the environment variables that configure our Postgres database and the connection string in our web application.</p>
<p>We’re going to make changes to the config so be sure to tear everything down with <code>kubectl delete -k k8s</code>.</p>
<p>We can start by adding the following to the <code>kustomization.yaml</code> file:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> # k8s/kustomization.yaml
kind: Kustomization
resources:
- db/persistent-volume.yaml
- db/persistent-volume-claim.yaml
- db/service.yaml
- db/deployment.yaml
- web/persistent-volume.yaml
- web/persistent-volume-claim.yaml
- web/service.yaml
- web/deployment.yaml
<span style="color:#000;background-color:#dfd">+configMapGenerator:
</span><span style="color:#000;background-color:#dfd">+ - name: postgres-config
</span><span style="color:#000;background-color:#dfd">+ literals:
</span><span style="color:#000;background-color:#dfd">+ - POSTGRES_DB=vehicle_quotes
</span><span style="color:#000;background-color:#dfd">+ - POSTGRES_USER=vehicle_quotes
</span><span style="color:#000;background-color:#dfd">+ - POSTGRES_PASSWORD=password
</span></code></pre></div><p>The <code>configMapGenerator</code> section is where the magic happens. We’ve kept it simple and defined the variables as literals. <code>configMapGenerator</code> is much more flexible than that, though, accepting external configuration files. <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/">The official documentation</a> has more details.</p>
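<p>As a quick sketch of that flexibility, the same values could live in a separate env file instead of inline literals, via <code>configMapGenerator</code>’s <code>envs</code> field. The <code>postgres.env</code> file name here is just an example:</p>

```yaml
# k8s/kustomization.yaml (alternative sketch)
configMapGenerator:
  - name: postgres-config
    envs:
      - postgres.env   # a file of KEY=value lines, e.g. POSTGRES_DB=vehicle_quotes
```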
<p>Now, let’s see what we have to do to actually use those values in our configuration.</p>
<p>First up is the database deployment configuration file, <code>k8s/db/deployment.yaml</code>. Update its <code>env</code> section like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"># k8s/db/deployment.yaml
# ...
env:
<span style="color:#000;background-color:#fdd">- - name: POSTGRES_DB
</span><span style="color:#000;background-color:#fdd">- value: vehicle_quotes
</span><span style="color:#000;background-color:#fdd">- - name: POSTGRES_USER
</span><span style="color:#000;background-color:#fdd">- value: vehicle_quotes
</span><span style="color:#000;background-color:#fdd">- - name: POSTGRES_PASSWORD
</span><span style="color:#000;background-color:#fdd">- value: password
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ - name: POSTGRES_DB
</span><span style="color:#000;background-color:#dfd">+ valueFrom:
</span><span style="color:#000;background-color:#dfd">+ configMapKeyRef:
</span><span style="color:#000;background-color:#dfd">+ name: postgres-config
</span><span style="color:#000;background-color:#dfd">+ key: POSTGRES_DB
</span><span style="color:#000;background-color:#dfd">+ - name: POSTGRES_USER
</span><span style="color:#000;background-color:#dfd">+ valueFrom:
</span><span style="color:#000;background-color:#dfd">+ configMapKeyRef:
</span><span style="color:#000;background-color:#dfd">+ name: postgres-config
</span><span style="color:#000;background-color:#dfd">+ key: POSTGRES_USER
</span><span style="color:#000;background-color:#dfd">+ - name: POSTGRES_PASSWORD
</span><span style="color:#000;background-color:#dfd">+ valueFrom:
</span><span style="color:#000;background-color:#dfd">+ configMapKeyRef:
</span><span style="color:#000;background-color:#dfd">+ name: postgres-config
</span><span style="color:#000;background-color:#dfd">+ key: POSTGRES_PASSWORD
</span><span style="color:#000;background-color:#dfd"></span># ...
</code></pre></div><p>Notice how we’ve replaced the simple key-value pairs with new, more complex objects. Their <code>name</code>s stay the same; they have to, because that’s what the Postgres database container expects. But instead of literal, hard-coded values, we now have these <code>valueFrom.configMapKeyRef</code> objects. Their <code>name</code>s match the <code>name</code> of the <code>configMapGenerator</code> we configured in the Kustomization, and their <code>key</code>s match the keys of the literal values that we specified in the <code>configMapGenerator</code>’s <code>literals</code> field. That’s how it all ties together.</p>
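<p>As an aside, when a container needs every key in a ConfigMap, Kubernetes also offers <code>envFrom</code>, which injects all of them in one go. We won’t use it here, since keeping the variables individual makes the connection string interpolation explicit, but the equivalent for the database deployment could look like this:</p>

```yaml
# k8s/db/deployment.yaml (alternative sketch)
envFrom:
  - configMapRef:
      name: postgres-config
```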
<p>Similarly, we can update the web application deployment configuration file, <code>k8s/web/deployment.yaml</code>. Its <code>env</code> section would look like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"># k8s/web/deployment.yaml
# ...
env:
<span style="color:#000;background-color:#fdd">- - name: POSTGRES_DB
</span><span style="color:#000;background-color:#fdd">- value: vehicle_quotes
</span><span style="color:#000;background-color:#fdd">- - name: POSTGRES_USER
</span><span style="color:#000;background-color:#fdd">- value: vehicle_quotes
</span><span style="color:#000;background-color:#fdd">- - name: POSTGRES_PASSWORD
</span><span style="color:#000;background-color:#fdd">- value: password
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ - name: POSTGRES_DB
</span><span style="color:#000;background-color:#dfd">+ valueFrom:
</span><span style="color:#000;background-color:#dfd">+ configMapKeyRef:
</span><span style="color:#000;background-color:#dfd">+ name: postgres-config
</span><span style="color:#000;background-color:#dfd">+ key: POSTGRES_DB
</span><span style="color:#000;background-color:#dfd">+ - name: POSTGRES_USER
</span><span style="color:#000;background-color:#dfd">+ valueFrom:
</span><span style="color:#000;background-color:#dfd">+ configMapKeyRef:
</span><span style="color:#000;background-color:#dfd">+ name: postgres-config
</span><span style="color:#000;background-color:#dfd">+ key: POSTGRES_USER
</span><span style="color:#000;background-color:#dfd">+ - name: POSTGRES_PASSWORD
</span><span style="color:#000;background-color:#dfd">+ valueFrom:
</span><span style="color:#000;background-color:#dfd">+ configMapKeyRef:
</span><span style="color:#000;background-color:#dfd">+ name: postgres-config
</span><span style="color:#000;background-color:#dfd">+ key: POSTGRES_PASSWORD
</span><span style="color:#000;background-color:#dfd"></span> - name: CUSTOMCONNSTR_VehicleQuotesContext
value: Host=$(VEHICLE_QUOTES_DB_SERVICE_SERVICE_HOST);Database=$(POSTGRES_DB);Username=$(POSTGRES_USER);Password=$(POSTGRES_PASSWORD)
# ...
</code></pre></div><p>This is the exact same change as with the database deployment: out with the hard-coded values and in with the new ConfigMap-driven ones.</p>
<p>Try <code>kubectl apply -k k8s</code> and you’ll see that things are still working well. Connect to the web application pod, then build and run the app to confirm.</p>
<blockquote>
<p>For data that’s important to secure like passwords, tokens, and keys, Kubernetes and Kustomize also offer <a href="https://kubernetes.io/docs/concepts/configuration/secret/">Secrets</a> and <a href="https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/"><code>secretGenerator</code></a>. Secrets are very similar to ConfigMaps in how they work, but are tailored specifically for handling secret data. You can learn more about them in the official documentation.</p>
</blockquote>
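<p>To give a taste, a <code>secretGenerator</code> entry looks much like our <code>configMapGenerator</code>. This sketch moves only the password into a Secret; the <code>postgres-secrets</code> name is an example, and deployments would reference it with <code>valueFrom.secretKeyRef</code> instead of <code>configMapKeyRef</code>:</p>

```yaml
# k8s/kustomization.yaml (sketch)
secretGenerator:
  - name: postgres-secrets
    literals:
      - POSTGRES_PASSWORD=password
```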
<h3 id="creating-variants-for-production-and-development-environments">Creating variants for production and development environments</h3>
<p>The crowning achievement of Kustomize is its ability to facilitate multiple deployment variants. Variants, as the name suggests, are variations of deployment configurations that are ideal for setting up various execution environments for an application. Think development, staging, production, etc., all based on a common set of reusable configurations to avoid superfluous repetition.</p>
<p>Kustomize does this by introducing the concepts of <a href="https://kubectl.docs.kubernetes.io/guides/introduction/kustomize/#2-create-variants-using-overlays">bases and overlays</a>. A base is a set of configs that can be reused but not deployed on its own, and overlays are the actual configurations that use and extend the base and can be deployed.</p>
<p>To demonstrate this, let’s build two variants: one for development and another for production. Let’s consider the one we’ve already built to be the development variant and work towards properly specifying it as so, and then building a new production variant.</p>
<blockquote>
<p>Note that the so-called “production” variant we’ll build is not actually meant to be production worthy. It’s just an example to illustrate the concepts and process of building bases and overlays. It does not meet the rigors of a proper production system.</p>
</blockquote>
<h4 id="creating-the-base-and-overlays">Creating the base and overlays</h4>
<p>The strategy I like to use is to copy everything into each variant, implement the differences, identify the common elements, and extract those into a base that both variants use.</p>
<p>Let’s begin by creating a new <code>k8s/dev</code> directory and moving all of our YAML files into it. That will be our “development overlay”. Then, make a copy of the <code>k8s/dev</code> directory and all of its contents and call it <code>k8s/prod</code>. That will be our “production overlay”. Let’s also create a <code>k8s/base</code> directory to store the common files. That will be our “base”. It should look like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">k8s
├── base
├── dev
│ ├── kustomization.yaml
│ ├── db
│ │ ├── deployment.yaml
│ │ ├── persistent-volume-claim.yaml
│ │ ├── persistent-volume.yaml
│ │ └── service.yaml
│ └── web
│ ├── deployment.yaml
│ ├── persistent-volume-claim.yaml
│ ├── persistent-volume.yaml
│ └── service.yaml
└── prod
├── kustomization.yaml
├── db
│ ├── deployment.yaml
│ ├── persistent-volume-claim.yaml
│ ├── persistent-volume.yaml
│ └── service.yaml
└── web
├── deployment.yaml
├── persistent-volume-claim.yaml
├── persistent-volume.yaml
└── service.yaml
</code></pre></div><p>Now we have two variants, but they don’t do us any good because they aren’t any different. We’ll now go through each file one by one and identify which aspects need to be the same and which need to be different between our two variants:</p>
<ol>
<li><code>db/deployment.yaml</code>: I want the same database instance configuration for both our variants. So we copy the file into <code>base/db/deployment.yaml</code> and delete <code>dev/db/deployment.yaml</code> and <code>prod/db/deployment.yaml</code>.</li>
<li><code>db/persistent-volume-claim.yaml</code>: This one is also the same for both variants. So we copy the file into <code>base/db/persistent-volume-claim.yaml</code> and delete <code>dev/db/persistent-volume-claim.yaml</code> and <code>prod/db/persistent-volume-claim.yaml</code>.</li>
<li><code>db/persistent-volume.yaml</code>: This file defines the location in the host machine that will be available for the Postgres instance that’s running in the cluster to store its data files. I do want this path to be different between variants. So let’s leave them where they are and do the following changes to them: For <code>dev/db/persistent-volume.yaml</code>, change its <code>spec.hostPath.path</code> to <code>"/path/to/vehicle-quotes-postgres-data-dev"</code>. For <code>prod/db/persistent-volume.yaml</code>, change its <code>spec.hostPath.path</code> to <code>"/path/to/vehicle-quotes-postgres-data-prod"</code>. Of course, adjust the paths to something that makes sense in your environment.</li>
<li><code>db/service.yaml</code>: There doesn’t need to be any difference in this file between the variants so we copy it into <code>base/db/service.yaml</code> and delete <code>dev/db/service.yaml</code> and <code>prod/db/service.yaml</code>.</li>
<li><code>web/deployment.yaml</code>: There are going to be quite a few differences between the dev and prod deployments of the web application. So we leave them as they are. Later we’ll see the differences in detail.</li>
<li><code>web/persistent-volume-claim.yaml</code>: This is also going to be different. Let’s leave it be now and we’ll come back to it later.</li>
<li><code>web/persistent-volume.yaml</code>: Same as <code>web/persistent-volume-claim.yaml</code>. Leave it be for now.</li>
<li><code>web/service.yaml</code>: This one is going to be the same for both dev and prod, so let’s do the usual: copy it into <code>base/web/service.yaml</code> and remove <code>dev/web/service.yaml</code> and <code>prod/web/service.yaml</code>.</li>
</ol>
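<p>To illustrate step 3, the relevant fragment of the dev overlay’s PV config would end up like this, with the prod one identical except for the <code>-prod</code> suffix (and, again, with a path that makes sense on your machine):</p>

```yaml
# k8s/dev/db/persistent-volume.yaml (fragment)
spec:
  hostPath:
    path: "/path/to/vehicle-quotes-postgres-data-dev"
```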
<blockquote>
<p>The decisions made when designing these overlays and the base may seem arbitrary. That’s because they totally are. The purpose of this article is to demonstrate Kustomize’s features, not produce a real-world, production-worthy setup.</p>
</blockquote>
<p>Once all those changes are done, you should have the following file structure:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">k8s
├── base
│ ├── db
│ │ ├── deployment.yaml
│ │ ├── persistent-volume-claim.yaml
│ │ └── service.yaml
│ └── web
│ └── service.yaml
├── dev
│ ├── kustomization.yaml
│ ├── db
│ │ └── persistent-volume.yaml
│ └── web
│ ├── deployment.yaml
│ ├── persistent-volume-claim.yaml
│ └── persistent-volume.yaml
└── prod
├── kustomization.yaml
├── db
│ └── persistent-volume.yaml
└── web
├── deployment.yaml
├── persistent-volume-claim.yaml
└── persistent-volume.yaml
</code></pre></div><p>Much better, huh? We’ve gotten rid of quite a bit of repetition. But we’re not done just yet. The base also needs a Kustomization file. Let’s create it as <code>k8s/base/kustomization.yaml</code> and add these contents:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># k8s/base/kustomization.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Kustomization<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/persistent-volume-claim.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/service.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/deployment.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/service.yaml<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">configMapGenerator</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>postgres-config<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">literals</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- POSTGRES_DB=vehicle_quotes<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- POSTGRES_USER=vehicle_quotes<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- POSTGRES_PASSWORD=password<span style="color:#bbb">
</span></code></pre></div><p>As you can see, the file is very similar to the other one we created. We just list the resources that we moved into the <code>base</code> directory and define the database environment variables via the <code>configMapGenerator</code>. We need to define the <code>configMapGenerator</code> here because all the files that use those variables now live in the base.</p>
<p>Now that we have the base defined, we need to update the <code>kustomization.yaml</code> files of the overlays to use it. We also need to update them so that they only point to the resources that they need to.</p>
<p>Here’s how the changes to the “dev” overlay’s <code>kustomization.yaml</code> file look:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> # dev/kustomization.yaml
kind: Kustomization
<span style="color:#000;background-color:#dfd">+bases:
</span><span style="color:#000;background-color:#dfd">+ - ../base
</span><span style="color:#000;background-color:#dfd"></span>
resources:
- db/persistent-volume.yaml
<span style="color:#000;background-color:#fdd">- - db/persistent-volume-claim.yaml
</span><span style="color:#000;background-color:#fdd">- - db/service.yaml
</span><span style="color:#000;background-color:#fdd">- - db/deployment.yaml
</span><span style="color:#000;background-color:#fdd"></span> - web/persistent-volume.yaml
- web/persistent-volume-claim.yaml
<span style="color:#000;background-color:#fdd">- - web/service.yaml
</span><span style="color:#000;background-color:#fdd"></span> - web/deployment.yaml
<span style="color:#000;background-color:#fdd">-configMapGenerator:
</span><span style="color:#000;background-color:#fdd">- - name: postgres-config
</span><span style="color:#000;background-color:#fdd">- literals:
</span><span style="color:#000;background-color:#fdd">- - POSTGRES_DB=vehicle_quotes
</span><span style="color:#000;background-color:#fdd">- - POSTGRES_USER=vehicle_quotes
</span><span style="color:#000;background-color:#fdd">- - POSTGRES_PASSWORD=password
</span></code></pre></div><p>As you can see, we removed the <code>configMapGenerator</code> and the individual resources that are already defined in the base. Most importantly, we’ve added a <code>bases</code> element that points at the <code>base</code> directory, marking that Kustomization as this overlay’s base.</p>
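<p>Put together (reconstructed from the diff above), the dev overlay’s <code>kustomization.yaml</code> would end up looking roughly like this:</p>

```yaml
# dev/kustomization.yaml
kind: Kustomization

bases:
  - ../base

resources:
  - db/persistent-volume.yaml
  - web/persistent-volume.yaml
  - web/persistent-volume-claim.yaml
  - web/deployment.yaml
```

<p>Only the resources that differ between variants remain in the overlay; everything shared now comes in through the base.</p>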
<p>The changes to the “prod” overlay’s <code>kustomization.yaml</code> file are identical. Go ahead and make them.</p>
<p>At this point, you can run <code>kubectl apply -k k8s/dev</code> or <code>kubectl apply -k k8s/prod</code> and things should work just like before.</p>
<blockquote>
<p>Don’t forget to also do <code>kubectl delete -k k8s/dev</code> or <code>kubectl delete -k k8s/prod</code> when you’re done testing the previous commands, as we’ll continue making changes to the configs. Keep in mind also that both variants can’t be deployed at the same time, so be sure to <code>delete</code> one before <code>apply</code>ing the other.</p>
</blockquote>
<h4 id="developing-the-production-variant">Developing the production variant</h4>
<p>I want our production variant to use a different image for the web application. That means a new Dockerfile. If you downloaded the source code from the GitHub repo, you should see the production Dockerfile in the root directory of the repo. It’s called <code>Dockerfile.prod</code>.</p>
<p>Here’s what it looks like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-Dockerfile" data-lang="Dockerfile"><span style="color:#888"># Dockerfile.prod</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">ARG</span> <span style="color:#369">VARIANT</span>=<span style="color:#d20;background-color:#fff0f0">"5.0"</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">FROM</span><span style="color:#d20;background-color:#fff0f0"> mcr.microsoft.com/dotnet/sdk:${VARIANT}</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> apt-get update && <span style="color:#038">export</span> <span style="color:#369">DEBIAN_FRONTEND</span>=noninteractive <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> && apt-get -y install --no-install-recommends postgresql-client-common postgresql-client<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> dotnet tool install --global dotnet-ef<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">ENV</span> PATH <span style="color:#369">$PATH</span>:/root/.dotnet/tools<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> dotnet dev-certs https<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">WORKDIR</span><span style="color:#d20;background-color:#fff0f0"> /source</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">COPY</span> . .<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">ENTRYPOINT</span> [<span style="color:#d20;background-color:#fff0f0">"tail"</span>, <span style="color:#d20;background-color:#fff0f0">"-f"</span>, <span style="color:#d20;background-color:#fff0f0">"/dev/null"</span>]<span style="color:#a61717;background-color:#e3d2d2">
</span></code></pre></div><p>The first takeaway from this production Dockerfile is that it is simpler than the development one. The image here is based on the official <code>dotnet/sdk</code> instead of the dev-ready one from <code>vscode/devcontainers/dotnet</code>. Also, this Dockerfile just copies all the source code into a <code>/source</code> directory within the image. This is because we want to “ship” the image with everything it needs to work without much manual intervention. And unlike the dev variant, we won’t be editing code live on the container, so we just copy the files over instead of leaving them out to provision later via volumes. We’ll see how that pans out later.</p>
<p>Now that we have our production Dockerfile, we can build an image with it and push it to the registry so that Kubernetes can use it. So, save that file as <code>Dockerfile.prod</code> (or just use the one that’s already in the repo), and run the following commands:</p>
<p>Build the image with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker build . -f Dockerfile.prod -t localhost:32000/vehicle-quotes-prod:registry
</code></pre></div><p>And push it to the registry with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker push localhost:32000/vehicle-quotes-prod:registry
</code></pre></div><p>Now, we need to modify our prod variant’s deployment configuration so that it can work well with this new prod image. Here’s how the new <code>k8s/prod/web/deployment.yaml</code> should look:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># k8s/prod/web/deployment.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>apps/v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Deployment<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">matchLabels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">replicas</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">1</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">initContainers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>await-db-ready<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>postgres:13<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"/bin/sh"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"-c"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"until pg_isready -h $(VEHICLE_QUOTES_DB_SERVICE_SERVICE_HOST) -p 5432; do echo waiting for database; sleep 2; done;"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>build<span style="color:#bbb">
</span><span style="color:#bbb">        </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>localhost:32000/vehicle-quotes-prod:registry<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">workingDir</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/source"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"/bin/sh"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"-c"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"dotnet restore -v n && dotnet ef database update && dotnet publish -c release -o /app --no-restore"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumeMounts</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">mountPath</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/app"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-storage<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">env</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_DB<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">valueFrom</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">configMapKeyRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>postgres-config<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">key</span>:<span style="color:#bbb"> </span>POSTGRES_DB<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_USER<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">valueFrom</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">configMapKeyRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>postgres-config<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">key</span>:<span style="color:#bbb"> </span>POSTGRES_USER<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_PASSWORD<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">valueFrom</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">configMapKeyRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>postgres-config<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">key</span>:<span style="color:#bbb"> </span>POSTGRES_PASSWORD<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>CUSTOMCONNSTR_VehicleQuotesContext<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>Host=$(VEHICLE_QUOTES_DB_SERVICE_SERVICE_HOST);Database=$(POSTGRES_DB);Username=$(POSTGRES_USER);Password=$(POSTGRES_PASSWORD)<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">containers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb">        </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>localhost:32000/vehicle-quotes-prod:registry<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">workingDir</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/app"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"/bin/sh"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"-c"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"dotnet VehicleQuotes.dll --urls=https://0.0.0.0:5001/"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">containerPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">5001</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"https"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumeMounts</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">mountPath</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/app"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-storage<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">env</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_DB<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">valueFrom</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">configMapKeyRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>postgres-config<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">key</span>:<span style="color:#bbb"> </span>POSTGRES_DB<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_USER<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">valueFrom</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">configMapKeyRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>postgres-config<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">key</span>:<span style="color:#bbb"> </span>POSTGRES_USER<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>POSTGRES_PASSWORD<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">valueFrom</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">configMapKeyRef</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>postgres-config<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">key</span>:<span style="color:#bbb"> </span>POSTGRES_PASSWORD<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>CUSTOMCONNSTR_VehicleQuotesContext<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">value</span>:<span style="color:#bbb"> </span>Host=$(VEHICLE_QUOTES_DB_SERVICE_SERVICE_HOST);Database=$(POSTGRES_DB);Username=$(POSTGRES_USER);Password=$(POSTGRES_PASSWORD)<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">limits</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">memory</span>:<span style="color:#bbb"> </span>2Gi<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">cpu</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"1"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-storage<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">emptyDir</span>:<span style="color:#bbb"> </span>{}<span style="color:#bbb">
</span></code></pre></div><p>This deployment config is similar to the one from the dev variant, but we’ve changed a few elements in it.</p>
<h4 id="init-containers">Init containers</h4>
<p>The most notable change is that we added an <code>initContainers</code> section. Init containers are one-and-done containers that run specific processes during pod initialization. They are good for any initialization task that needs to run once, before a pod is ready to work. After they’ve done their job, they go away, and the pod is left with the containers specified in the <code>containers</code> section, like usual. In this case, we’ve added two init containers.</p>
<p>First is the <code>await-db-ready</code> one. This is a simple container based on the <code>postgres:13</code> image that just sits there waiting for the database to become available. This is thanks to its <code>command</code> and <code>args</code>, which make up a simple shell script that leverages the <code>pg_isready</code> tool to continuously check if connections can be made to our database:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"/bin/sh"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"-c"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"until pg_isready -h $(VEHICLE_QUOTES_DB_SERVICE_SERVICE_HOST) -p 5432; do echo waiting for database; sleep 2; done;"</span>]<span style="color:#bbb">
</span></code></pre></div><p>This will cause pod initialization to stop until the database is ready.</p>
<blockquote>
<p>Thanks to <a href="https://medium.com/@xcoulon/initializing-containers-in-order-with-kubernetes-18173b9cc222">this blog post</a> for the very useful recipe.</p>
</blockquote>
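<p>The wait-until-ready pattern is worth internalizing beyond Kubernetes. As an illustrative sketch (not part of the article’s setup), here is the same retry loop in Python, using a plain TCP connection check in place of <code>pg_isready</code>:</p>

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll host:port until a TCP connection succeeds, mirroring the
    'until pg_isready ... sleep 2' loop from the init container."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # If the service accepts the connection, it's ready.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            print("waiting for database")
            time.sleep(interval)
    return False  # gave up before the service came up
```

<p>Note that a TCP connect is a weaker check than <code>pg_isready</code>, which also verifies that the PostgreSQL server is actually accepting queries; the structure of the loop is the same, though.</p>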
<p>We need to wait for the database to be ready before continuing because of what the second init container does. Among other things, the <code>build</code> init container sets up the database, so the database needs to be available for it to do that. The init container also downloads dependencies, builds the app, produces the deployable artifacts, and copies them over to the directory from which the app will run: <code>/app</code>. You can see all of that specified in the <code>command</code> and <code>args</code> elements, which define a few shell commands to do those tasks.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"/bin/sh"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"-c"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"dotnet restore -v n && dotnet ef database update && dotnet publish -c release -o /app --no-restore"</span>]<span style="color:#bbb">
</span></code></pre></div><p>Another interesting aspect of this deployment is the volume that we’ve defined. It’s at the bottom of the file, take a quick look:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-source-code-storage<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">emptyDir</span>:<span style="color:#bbb"> </span>{}<span style="color:#bbb">
</span></code></pre></div><p>This one is different from the ones we’ve seen before, which relied on persistent volumes and persistent volume claims. This one uses <code>emptyDir</code>, which provides storage that persists throughout the lifetime of the pod, rather than being tied to any specific container. In other words, even when a container goes away, the files in this volume stay. This mechanism is useful when we want one container to produce some files that another container will use. In our case, the <code>build</code> init container produces the artifacts/binaries that the main <code>vehicle-quotes-web</code> container will use to actually run the web app.</p>
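<p>Stripped of everything app-specific, the pattern looks like this (a minimal, hypothetical pod spec, just to isolate the mechanism): the init container writes into the shared <code>emptyDir</code> volume, and the main container reads from it:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-emptydir-demo
spec:
  initContainers:
    - name: build
      image: busybox
      # Produce a file in the shared volume during initialization.
      command: ["sh", "-c", "echo artifact > /app/output.txt"]
      volumeMounts:
        - mountPath: /app
          name: scratch
  containers:
    - name: main
      image: busybox
      # Consume the file produced by the init container.
      command: ["sh", "-c", "cat /app/output.txt && sleep 3600"]
      volumeMounts:
        - mountPath: /app
          name: scratch
  volumes:
    - name: scratch
      emptyDir: {}
```

<p>Both containers mount the same volume, so whatever the init container leaves behind is there when the main container starts.</p>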
<p>The only other notable difference of this deployment is how its containers use the new prod image that we built before, instead of the dev one. That is, it uses <code>localhost:32000/vehicle-quotes-prod:registry</code> instead of <code>localhost:32000/vehicle-quotes-dev:registry</code>.</p>
<p>The rest of the deployment doesn’t have anything we haven’t already seen. Feel free to explore it.</p>
<p>As you saw, this prod variant doesn’t need to access the source code via a persistent volume, so we don’t need PV and PVC definitions for it. Feel free to delete <code>k8s/prod/web/persistent-volume.yaml</code> and <code>k8s/prod/web/persistent-volume-claim.yaml</code>. Remember to also remove them from the <code>resources</code> section in <code>k8s/prod/kustomization.yaml</code>.</p>
<p>With those changes done, we can fire up our prod variants with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ kubectl apply -k k8s/prod
</code></pre></div><p>The web pod will take quite a while to start up properly because it’s downloading a lot of dependencies. Remember that you can use <code>kubectl get pods -A</code> to see the pods’ current status. Take note of their names so you can also view container-specific logs:</p>
<ul>
<li>Use <code>kubectl logs -f <vehicle-quotes-web-pod-name> await-db-ready</code> to see the logs from the <code>await-db-ready</code> init container.</li>
<li>Use <code>kubectl logs -f <vehicle-quotes-web-pod-name> build</code> to see the logs from the <code>build</code> init container.</li>
</ul>
<blockquote>
<p>If this were an actual production setup, and we were worried about pod startup time, there’s one way we could make it faster. We could perform the “download dependencies” step when building the production image instead of when deploying the pods. So, we could have our <code>Dockerfile.prod</code> call <code>dotnet restore -v n</code>, instead of the <code>build</code> init container. That way building the image would take more time, but it would have all dependencies already baked in by the time Kubernetes tries to use it to build containers. Then the web pod would start up much faster.</p>
</blockquote>
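<p>One possible sketch of that optimization: the same <code>Dockerfile.prod</code> from above, with a restore step added after the <code>COPY</code> so the dependencies are downloaded at image build time rather than at pod startup:</p>

```Dockerfile
ARG VARIANT="5.0"
FROM mcr.microsoft.com/dotnet/sdk:${VARIANT}

RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends postgresql-client-common postgresql-client

RUN dotnet tool install --global dotnet-ef
ENV PATH $PATH:/root/.dotnet/tools

RUN dotnet dev-certs https

WORKDIR /source

COPY . .

# Bake the dependencies into the image so the build init container
# no longer has to download them on every pod startup.
RUN dotnet restore -v n

ENTRYPOINT ["tail", "-f", "/dev/null"]
```

<p>With that in place, the <code>build</code> init container could skip its own <code>dotnet restore</code> and go straight to <code>dotnet ef database update && dotnet publish -c release -o /app --no-restore</code>.</p>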
<p>This deployment automatically starts the web app, so after the pods are in the “Running” status (or green in the dashboard!), we can just navigate to the app via a web browser. We’ve configured the deployment to only work over HTTPS (as given by the <code>ports</code> section in the <code>vehicle-quotes-web</code> container), so this is the only URL that’s available to us: https://localhost:30001. We can navigate to it and see the familiar screen:</p>
<p><img src="/blog/2022/01/kubernetes-101/swagger-prod.webp" alt="SwaggerUI on prod"></p>
<p>At this point, we finally have fully working, distinct variants. However, we can take our configuration a few steps further by leveraging some additional Kustomize features.</p>
<h4 id="using-patches-for-small-precise-changes">Using patches for small, precise changes</h4>
<p>Right now, the persistent volume configurations for the databases of both variants are pretty much identical. The only difference is the <code>hostPath</code>. With patches, we can target that specific property and vary only it.</p>
<p>To do it, we first copy either variant’s <code>db/persistent-volume.yaml</code> into <code>k8s/base/db/persistent-volume.yaml</code>. We also need to add it under <code>resources</code> in <code>k8s/base/kustomization.yaml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> # k8s/base/kustomization.yaml
# ...
resources:
<span style="color:#000;background-color:#dfd">+ - db/persistent-volume.yaml
</span><span style="color:#000;background-color:#dfd"></span> - db/persistent-volume-claim.yaml
# ...
</code></pre></div><p>That will serve as the common ground for overlays to “patch over”. Now we can create the patches. First, the one for the dev variant:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># k8s/dev/db/persistent-volume-host-path.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>PersistentVolume<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-postgres-data-persisent-volume<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">hostPath</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">path</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/home/kevin/projects/vehicle-quotes-postgres-data-dev"</span><span style="color:#bbb">
</span></code></pre></div><p>And then the one for the prod variant:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># k8s/prod/db/persistent-volume-host-path.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>PersistentVolume<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-postgres-data-persisent-volume<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">hostPath</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">path</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"/home/kevin/projects/vehicle-quotes-postgres-data-prod"</span><span style="color:#bbb">
</span></code></pre></div><p>As you can see, these patches are truncated persistent volume configs that include only the <code>apiVersion</code>, <code>kind</code>, <code>metadata.name</code>, and the one value that actually changes: the <code>hostPath</code>.</p>
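<p>Under the hood, Kustomize applies these files as strategic merge patches: any field present in the patch overrides the matching field in the base, and every other field in the base is left untouched. Here’s a rough Python sketch of that idea (the base’s <code>capacity</code> value is made up for illustration, and the real implementation also handles list merge keys and deletion directives):</p>

```python
# Minimal sketch of a strategic merge patch: fields in the patch win,
# everything else in the base is preserved. Kustomize's real
# implementation also handles list merge keys and delete directives.
def merge_patch(base, patch):
    if not isinstance(base, dict) or not isinstance(patch, dict):
        return patch  # scalars and lists: the patch value replaces the base's
    merged = dict(base)
    for key, value in patch.items():
        merged[key] = merge_patch(base.get(key), value)
    return merged

# The base PV (the capacity figure here is invented for illustration).
base_pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "vehicle-quotes-postgres-data-persisent-volume"},
    "spec": {
        "capacity": {"storage": "5Gi"},
        "hostPath": {"path": "/some/base/path"},
    },
}

# The dev overlay's patch: names match the base, only hostPath changes.
patch = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "vehicle-quotes-postgres-data-persisent-volume"},
    "spec": {
        "hostPath": {"path": "/home/kevin/projects/vehicle-quotes-postgres-data-dev"}
    },
}

result = merge_patch(base_pv, patch)
# Only spec.hostPath.path changed; spec.capacity from the base survives.
```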
<p>Once those are saved, we need to include them in their respective <code>kustomization.yaml</code>. It’s the same modification to both <code>k8s/dev/kustomization.yaml</code> and <code>k8s/prod/kustomization.yaml</code>. Just remove the <code>db/persistent-volume.yaml</code> item from their <code>resources</code> sections and add the following to both of them:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">patches</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/persistent-volume-host-path.yaml<span style="color:#bbb">
</span></code></pre></div><p>Right now, <code>k8s/dev/kustomization.yaml</code> should be:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># k8s/dev/kustomization.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Kustomization<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">bases</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- ../base<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/persistent-volume.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/persistent-volume-claim.yaml<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/deployment.yaml<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">patches</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/persistent-volume-host-path.yaml<span style="color:#bbb">
</span></code></pre></div><p>And <code>k8s/prod/kustomization.yaml</code> should be:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># k8s/prod/kustomization.yaml</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Kustomization<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">bases</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- ../base<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">resources</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- web/deployment.yaml<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">patches</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- db/persistent-volume-host-path.yaml<span style="color:#bbb">
</span></code></pre></div><h4 id="overriding-container-images">Overriding container images</h4>
<p>Another improvement we can make is to use the <code>images</code> element in the <code>kustomization.yaml</code> files to control which web app images the deployments in each variant use. This aids maintenance, since the image is defined in a single, predictable place. It also reduces repetition, because the full image name no longer needs to appear throughout the configs.</p>
<p>To put it in practice, add the following at the end of the <code>k8s/dev/kustomization.yaml</code> file:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">images</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">newName</span>:<span style="color:#bbb"> </span>localhost:32000/vehicle-quotes-dev<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">newTag</span>:<span style="color:#bbb"> </span>registry<span style="color:#bbb">
</span></code></pre></div><p>Do the same with <code>k8s/prod/kustomization.yaml</code>, only using the prod image this time:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">images</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>vehicle-quotes-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">newName</span>:<span style="color:#bbb"> </span>localhost:32000/vehicle-quotes-prod<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">newTag</span>:<span style="color:#bbb"> </span>registry<span style="color:#bbb">
</span></code></pre></div><p>Now, we can replace any mention of <code>localhost:32000/vehicle-quotes-dev</code> in the dev variant, and any mention of <code>localhost:32000/vehicle-quotes-prod</code> in the prod variant, with the simpler <code>vehicle-quotes-web</code>.</p>
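<p>Conceptually, the <code>images</code> transformer scans the rendered manifests for container images whose name matches <code>name</code> and rewrites them to <code>newName:newTag</code>. A rough Python sketch of that substitution (simplified: the real transformer also handles registry ports, digests, and tag-only overrides):</p>

```python
# Sketch of Kustomize's `images` transformer: any container image
# matching an override's `name` is rewritten to `newName:newTag`.
# Simplified: real image references can embed registry ports and
# digests, which this naive split does not handle.
def rewrite_image(image, overrides):
    name = image.split(":")[0]  # drop any existing tag
    for o in overrides:
        if o["name"] == name:
            return f'{o["newName"]}:{o["newTag"]}'
    return image  # no override matched; leave the image alone

# The `images` entry from k8s/dev/kustomization.yaml.
dev_images = [
    {
        "name": "vehicle-quotes-web",
        "newName": "localhost:32000/vehicle-quotes-dev",
        "newTag": "registry",
    },
]

rewritten = rewrite_image("vehicle-quotes-web", dev_images)
```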
<p>In <code>k8s/dev/web/deployment.yaml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> # ...
spec:
containers:
- name: vehicle-quotes-web
<span style="color:#000;background-color:#fdd">- image: localhost:32000/vehicle-quotes-dev:registry
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ image: vehicle-quotes-web
</span><span style="color:#000;background-color:#dfd"></span> ports:
# ...
</code></pre></div><p>And in <code>k8s/prod/web/deployment.yaml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> # ...
- name: build
<span style="color:#000;background-color:#fdd">- image: localhost:32000/vehicle-quotes-prod:registry
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ image: vehicle-quotes-web
</span><span style="color:#000;background-color:#dfd"></span> workingDir: "/source"
command: ["/bin/sh"]
# ...
containers:
- name: vehicle-quotes-web
<span style="color:#000;background-color:#fdd">- image: localhost:32000/vehicle-quotes-prod:registry
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ image: vehicle-quotes-web
</span><span style="color:#000;background-color:#dfd"></span> workingDir: "/app"
# ...
</code></pre></div><p>Once that’s all done, you should be able to <code>kubectl apply -k k8s/dev</code> or <code>kubectl apply -k k8s/prod</code> and everything should work fine. Be sure to <code>kubectl delete</code> before <code>kubectl apply</code>ing a different variant, though: the two cannot coexist in the same cluster because many of their objects share the same names.</p>
<h3 id="closing-thoughts">Closing thoughts</h3>
<p>Wow! That was a good one. In this post I’ve captured all the knowledge that I wish I had when I first encountered Kubernetes. We went from knowing nothing to being able to put together a competent environment. We figured out how to install Kubernetes locally via MicroK8s, along with a few useful add-ons. We learned about the main concepts in Kubernetes like nodes, pods, images, containers, deployments, services, and persistent volumes. Most importantly, we learned how to define and create them using a declarative configuration file approach.</p>
<p>Then, we learned about Kustomize and how to use it to implement variants of our configurations. And we did all that by actually getting our hands dirty and, step by step, deploying a real web application and its backing database system. When all was said and done, a simple <code>kubectl apply -k <kustomization></code> was all it took to get the app fully up and running. Not bad, eh?</p>
<h3 id="useful-commands">Useful commands</h3>
<ul>
<li>Start up MicroK8s: <code>microk8s start</code></li>
<li>Shut down MicroK8s: <code>microk8s stop</code></li>
<li>Check MicroK8s info: <code>microk8s status</code></li>
<li>Start up the K8s dashboard: <code>microk8s dashboard-proxy</code></li>
<li>List pods in all namespaces: <code>kubectl get pods -A</code></li>
<li>Watch and follow the logs on a specific container in a pod: <code>kubectl logs -f <POD_NAME> <CONTAINER_NAME></code></li>
<li>Open a shell into the default container in a pod: <code>kubectl exec -it <POD_NAME> -- bash</code></li>
<li>Create a K8s resource given a YAML file or directory: <code>kubectl apply -f <FILE_OR_DIRECTORY_NAME></code></li>
<li>Delete a K8s resource given a YAML file or directory: <code>kubectl delete -f <FILE_OR_DIRECTORY_NAME></code></li>
<li>Create K8s resources with Kustomize: <code>kubectl apply -k <KUSTOMIZATION_DIR></code></li>
<li>Delete K8s resources with Kustomize: <code>kubectl delete -k <KUSTOMIZATION_DIR></code></li>
<li>Build custom images for the K8s registry: <code>docker build . -f <DOCKERFILE> -t localhost:32000/<IMAGE_NAME>:registry</code></li>
<li>Push custom images to the K8s registry: <code>docker push localhost:32000/<IMAGE_NAME>:registry</code></li>
</ul>
<h3 id="table-of-contents">Table of contents</h3>
<ul>
<li><a href="#what-is-kubernetes">What is Kubernetes?</a>
<ul>
<li><a href="#nodes-pods-and-containers">Nodes, pods and containers</a></li>
</ul>
</li>
<li><a href="#installing-and-setting-up-kubernetes">Installing and setting up Kubernetes</a>
<ul>
<li><a href="#installing-microk8s">Installing MicroK8s</a></li>
<li><a href="#introducing-kubectl">Introducing kubectl</a></li>
<li><a href="#installing-add-ons">Installing add-ons</a></li>
<li><a href="#introducing-the-dashboard">Introducing the Dashboard</a></li>
</ul>
</li>
<li><a href="#deploying-applications-into-a-kubernetes-cluster">Deploying applications into a Kubernetes cluster</a>
<ul>
<li><a href="#deployments">Deployments</a></li>
<li><a href="#using-kubectl-to-explore-a-deployment">Using kubectl to explore a deployment</a></li>
<li><a href="#using-the-dashboard-to-explore-a-deployment">Using the dashboard to explore a deployment</a></li>
<li><a href="#dissecting-the-deployment-configuration-file">Dissecting the deployment configuration file</a></li>
<li><a href="#connecting-to-the-containers-in-the-pods">Connecting to the containers in the pods</a></li>
<li><a href="#services">Services</a></li>
<li><a href="#accessing-an-application-via-a-service">Accessing an application via a service</a></li>
</ul>
</li>
<li><a href="#deploying-our-own-custom-application">Deploying our own custom application</a>
<ul>
<li><a href="#what-are-we-building">What are we building</a></li>
</ul>
</li>
<li><a href="#deploying-the-database">Deploying the database</a>
<ul>
<li><a href="#connecting-to-the-database">Connecting to the database</a></li>
<li><a href="#persistent-volumes-and-claims">Persistent volumes and claims</a></li>
<li><a href="#configuration-files-for-the-pv-and-pvc">Configuration files for the PV and PVC</a></li>
<li><a href="#configuring-the-deployment-to-use-the-pvc">Configuring the deployment to use the PVC</a></li>
<li><a href="#applying-changes">Applying changes</a></li>
<li><a href="#exposing-the-database-as-a-service">Exposing the database as a service</a></li>
</ul>
</li>
<li><a href="#deploying-the-web-application">Deploying the web application</a>
<ul>
<li><a href="#building-the-web-application-image">Building the web application image</a></li>
<li><a href="#making-the-image-accessible-to-kubernetes">Making the image accessible to Kubernetes</a></li>
<li><a href="#deploying-the-web-application-1">Deploying the web application</a></li>
<li><a href="#starting-the-application">Starting the application</a></li>
</ul>
</li>
<li><a href="#putting-it-all-together-with-kustomize">Putting it all together with Kustomize</a>
<ul>
<li><a href="#the-kustomization-file">The Kustomization file</a></li>
<li><a href="#defining-reusable-configuration-values-with-configmaps">Defining reusable configuration values with ConfigMaps</a></li>
</ul>
</li>
<li><a href="#creating-variants-for-production-and-development-environments">Creating variants for production and development environments</a>
<ul>
<li><a href="#creating-the-base-and-overlays">Creating the base and overlays</a></li>
<li><a href="#developing-the-production-variant">Developing the production variant</a></li>
<li><a href="#init-containers">Init containers</a></li>
<li><a href="#using-patches-for-small-precise-changes">Using patches for small, precise changes</a></li>
<li><a href="#overriding-container-images">Overriding container images</a></li>
</ul>
</li>
<li><a href="#closing-thoughts">Closing thoughts</a></li>
<li><a href="#useful-commands">Useful commands</a></li>
<li><a href="#table-of-contents">Table of contents</a></li>
</ul>
DevOps & Kubernetes engineer job openinghttps://www.endpointdev.com/blog/2021/11/devops-kubernetes-engineer-job/2021-11-04T00:00:00+00:00Jon Jensen
<p><img src="/blog/2021/11/devops-kubernetes-engineer-job/20210811-195659.jpg" alt="dumpster with worn sticker warning against sleeping or falling" /></p>
<!-- Photo by Jon Jensen -->
<p>We are looking for a full-time, salaried DevOps / Kubernetes engineer to work on cloud hosting projects with our clients and our internal hosting team.</p>
<p>End Point Dev is an Internet technology consulting company based in New York City, with 50 employees serving many clients ranging from small family businesses to large corporations. We are going strong after 26 years in business!</p>
<p>Even before the pandemic most of us worked remotely from home offices. We collaborate using SSH, Git, project tracking tools, Zulip chat, video conferencing, and of course email and phones.</p>
<h3 id="what-you-will-be-doing">What you will be doing:</h3>
<ul>
<li>Automate, set up, support, and maintain complex containerized applications in the cloud.</li>
<li>Audit and improve security, backups, reliability, monitoring.</li>
<li>Work together with End Point Dev co-workers and our clients’ in-house staff.</li>
<li>Use your desktop operating system of choice: Linux, macOS, or Windows.</li>
<li>Work with open source software and contribute back as opportunity arises.</li>
</ul>
<h3 id="youll-need-professional-experience-with">You’ll need professional experience with:</h3>
<ul>
<li><strong>Production Kubernetes administration on Amazon EKS (2+ years)</strong></li>
<li>Linux and common distributions including Ubuntu, RHEL/CentOS</li>
<li>Public clouds such as AWS, GCP, Azure, Linode, DigitalOcean</li>
<li>Containerization with Docker and/or Podman</li>
<li>IaC tools such as Terraform, CloudFormation, Ansible, Chef, Puppet, Salt</li>
<li>CI/CD pipelines, release management</li>
<li>Git version control, GitHub or GitLab</li>
<li>Scripting and programming with bash, Python, Ruby, Go, etc.</li>
<li>OS fundamentals, networking, and firewalls</li>
<li>HTTP, REST APIs</li>
</ul>
<h3 id="you-have-these-important-work-traits">You have these important work traits:</h3>
<ul>
<li>Strong verbal and written communication skills</li>
<li>An eye for detail</li>
<li>Tenacity in solving problems and focusing on customer needs</li>
<li>A feeling of ownership of your projects</li>
<li>Work both independently and as part of a team</li>
</ul>
<h3 id="what-work-here-offers">What work here offers:</h3>
<ul>
<li>Collaboration with knowledgeable, friendly, helpful, and diligent co-workers around the world</li>
<li>Freedom from being tied to an office location</li>
<li>Flexible, sane work hours</li>
<li>Paid holidays and vacation</li>
<li>Annual bonus opportunity</li>
<li>(For U.S. employees:) Health insurance subsidy and 401(k) retirement savings plan</li>
</ul>
<h3 id="get-in-touch-with-us">Get in touch with us:</h3>
<p><del>Please email us an introduction to <a href="mailto:jobs@endpointdev.com">jobs@endpointdev.com</a> to apply.</del>
<strong>(This job has been filled.)</strong></p>
<p>Include your location, a resume/CV, your Git repository or LinkedIn URLs, and whatever else may help us get to know you.</p>
<p>We look forward to hearing from you! Direct work seekers only, please—this role is not for agencies or subcontractors.</p>
<p>We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of sex/gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.</p>
Building REST APIs with .NET 5, ASP.NET Core, and PostgreSQLhttps://www.endpointdev.com/blog/2021/07/dotnet-5-web-api/2021-07-09T00:00:00+00:00Kevin Campusano
<p><img src="/blog/2021/07/dotnet-5-web-api/market-cropped.jpg" alt="A market at night">
<a href="https://unsplash.com/photos/cpbWNtkKoiU">Photo</a> by <a href="https://unsplash.com/@sam_beasley">Sam Beasley</a></p>
<p>This is old news by now, but I’m still amazed by the fact that nowadays <a href="https://dotnet.microsoft.com/platform/open-source">.NET is open source and can run on Linux</a>. I truly believe that this new direction can help the technology realize its true potential, since it’s no longer shackled to Windows-based environments. I’ve personally been outside the .NET game for a good while, but with <a href="https://docs.microsoft.com/en-us/dotnet/core/dotnet-five">the milestone release that is .NET 5</a>, I think now is a great time to dive back in.</p>
<p>So I thought of taking some time to do just that, really dive in, see what’s new, and get a sense of the general developer experience that the current incarnation of .NET offers. So in this blog post, I’m going to chronicle my experience developing a simple but complete <a href="https://www.redhat.com/en/topics/api/what-is-a-rest-api">REST API</a> application. Along the way, I’ll touch on the most common problems that one runs into when developing such applications and how they are solved in the .NET world. So think of this piece as a sort of tutorial or overview of the most common framework features when it comes to developing REST APIs.</p>
<blockquote>
<p>There’s a <a href="#table-of-contents">table of contents</a> at the bottom.</p>
</blockquote>
<p>First, let’s get familiar with what we’re building.</p>
<h3 id="what-were-building">What we’re building</h3>
<h4 id="the-demo-application">The demo application</h4>
<blockquote>
<p>You can find the finished product on my <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api">GitHub</a>.</p>
</blockquote>
<p>The application that we’ll be building throughout this article will address a request from a hypothetical car junker business. Our client wants to automate the process of calculating how much money to offer their customers for their vehicles, given certain information about them. And they want an app to do that. We are building the back-end component that will support that app. It is a REST API that allows users to provide vehicle information (year, make, model, condition, etc.) and will produce a quote of how much money our hypothetical client would be willing to pay for it.</p>
<p>Here’s a short list of features that we need to implement in order to fulfill that requirement:</p>
<ol>
<li>Given a vehicle model and condition, calculate a price.</li>
<li>Store and manage rules that are used to calculate vehicle prices.</li>
<li>Store and manage pricing overrides on a vehicle model basis. Price overrides are used regardless of the current rules.</li>
<li>CRUD vehicle models so that overrides can be specified for them.</li>
</ol>
<h4 id="the-data-model">The data model</h4>
<p>Here’s what our data model looks like:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/data-model.png" alt="Data Model"></p>
<p>The main table in our model is the <code>quotes</code> table. It stores all the requests for quotes received from our client’s customers. It captures all the relevant vehicle information in terms of model and condition. It also captures the offered quote; that is, the money value that our system calculates for their vehicle.</p>
<p>The <code>quotes</code> table includes all the fields that identify a vehicle: year, make, model, body style and size. It also includes a <code>model_style_year_id</code> field which is an optional foreign key to another table. This FK points to the <code>model_style_years</code> table which contains specific vehicle models that our system can store explicitly.</p>
<p>The idea is that, when a customer submits a request for a quote, if we have their vehicle registered in our database, we can populate this foreign key and link the quote with the specific vehicle being quoted. If we don’t have their vehicle registered, we leave the field unpopulated. Either way, we can offer a quote; the only difference is the level of certainty behind it.</p>
<p>The records in the <code>model_style_years</code> table represent specific vehicles. That whole hierarchy works like this: A vehicle make (e.g. Honda, Toyota, etc. in the <code>makes</code> table) has many models (e.g. Civic, Corolla, etc. in the <code>models</code> table), each model has many styles (the <code>model_styles</code> table). Styles are combinations of body types (the <code>body_types</code> table) and sizes (e.g. Mid-size Sedan, Compact Coupe, etc. in the <code>sizes</code> table). And finally, each model style has many years in which they were being produced (via the <code>model_style_years</code> table).</p>
<p>This model allows for very fine-grained differentiation between vehicles. For example, we can have a “2008 Honda Civic Hatchback which is a Compact car” and also a “1990 Honda Civic Hatchback which is a Sub-compact”. That is, same model but a different year, size, or body type.</p>
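<p>To make that hierarchy concrete, here is a rough sketch of those relationships as plain Python data. The table and column names mirror the data model above, but the records and IDs are made up for illustration:</p>

```python
# Illustrative records mirroring the vehicle hierarchy:
# makes -> models -> model_styles (body type + size) -> model_style_years.
# IDs and rows are invented for this example.
makes = {1: "Honda"}
models = {1: {"make_id": 1, "name": "Civic"}}
body_types = {1: "Hatchback"}
sizes = {1: "Compact", 2: "Sub-compact"}
model_styles = {
    1: {"model_id": 1, "body_type_id": 1, "size_id": 1},
    2: {"model_id": 1, "body_type_id": 1, "size_id": 2},
}
model_style_years = {
    1: {"model_style_id": 1, "year": 2008},
    2: {"model_style_id": 2, "year": 1990},
}

def describe(msy_id):
    """Walk the foreign keys to describe a model_style_year record."""
    msy = model_style_years[msy_id]
    style = model_styles[msy["model_style_id"]]
    model = models[style["model_id"]]
    return (f'{msy["year"]} {makes[model["make_id"]]} {model["name"]} '
            f'{body_types[style["body_type_id"]]} ({sizes[style["size_id"]]})')
```

<p>With these records, <code>describe(1)</code> and <code>describe(2)</code> produce the two example vehicles from the paragraph above: same model and body type, different year and size.</p>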
<p>We also have a <code>quote_rules</code> table which stores the rules that are applied when it comes to calculating a vehicle quote. The rules are pairs of key-values with an associated monetary value. So for example, rules like “a vehicle that has alloy wheels is worth $10 more” can be expressed in the table with a record where <code>feature_type</code> is “has_alloy_wheels”, <code>feature_value</code> is “true” and <code>price_modifier</code> is “10”.</p>
<p>Finally, we have a <code>quote_overrides</code> table which specifies a flat, static price for specific vehicles (via the link to the <code>model_style_years</code> table). The idea here is that if some customer requests a quote for a vehicle for which we have an override, no price calculation rules are applied and they are offered what is specified in the override record.</p>
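<p>Taken together, the last two tables suggest pricing logic along these lines. This is only a sketch: the base price, rule values, and IDs are invented here, and the actual calculation will live in the application we’re about to build:</p>

```python
# Sketch of the quoting rules: an override for a known vehicle wins
# outright; otherwise every matching quote_rules record adds its
# price_modifier to a base price. All figures here are made up.
quote_rules = [
    {"feature_type": "has_alloy_wheels", "feature_value": "true", "price_modifier": 10},
    {"feature_type": "runs", "feature_value": "false", "price_modifier": -50},
]
quote_overrides = {42: 1000}  # model_style_year_id -> flat override price

def calculate_quote(features, model_style_year_id=None, base_price=100):
    # A registered vehicle with an override gets its flat price,
    # no rules applied.
    if model_style_year_id in quote_overrides:
        return quote_overrides[model_style_year_id]
    # Otherwise, start from the base price and apply every rule
    # whose feature key-value pair matches the submitted vehicle.
    price = base_price
    for rule in quote_rules:
        if features.get(rule["feature_type"]) == rule["feature_value"]:
            price += rule["price_modifier"]
    return price
```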
<h3 id="the-development-environment">The development environment</h3>
<h4 id="setting-up-the-postgresql-database-with-docker">Setting up the PostgreSQL database with Docker</h4>
<p>For this project, our database of choice is <a href="https://www.postgresql.org/">PostgreSQL</a>. Luckily for us, getting a PostgreSQL instance up and running is very easy thanks to <a href="https://www.docker.com/">Docker</a>.</p>
<blockquote>
<p>If you want to learn more about dockerizing a typical web application, take a look at <a href="/blog/2020/08/containerizing-magento-with-docker-compose-elasticsearch-mysql-and-magento/">this article</a> that explains the process in detail.</p>
</blockquote>
<p>Once you have <a href="https://docs.docker.com/get-docker/">Docker installed</a> in your machine, getting a PostgreSQL instance is as simple as running the following command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ docker run -d <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> --name vehicle-quote-postgres <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -p 5432:5432 <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> --network host <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -e <span style="color:#369">POSTGRES_DB</span>=vehicle_quote <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -e <span style="color:#369">POSTGRES_USER</span>=vehicle_quote <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -e <span style="color:#369">POSTGRES_PASSWORD</span>=password <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> postgres
</code></pre></div><p>Here we’re asking Docker to run a new <a href="https://docs.docker.com/get-started/#what-is-a-container">container</a> based on the latest <code>postgres</code> <a href="https://docs.docker.com/get-started/#what-is-a-container-image">image</a> from <a href="https://hub.docker.com/_/postgres">DockerHub</a>, name it <code>vehicle-quote-postgres</code>, specify the port to use the default PostgreSQL one, make it accessible to the local network (with the <code>--network host</code> option) and finally, specify a few environment variables that the <code>postgres</code> image uses when building our new instance to set up the default database name, user and password (with the three <code>-e</code> options).</p>
<p>After Docker is done working its magic, you should be able to access the database with something like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker exec -it vehicle-quote-postgres psql -U vehicle_quote
psql (13.2 (Debian 13.2-1.pgdg100+1))
Type "help" for help.
vehicle_quote=#
</code></pre></div><p>This command is connecting to our new <code>vehicle-quote-postgres</code> container and then, from within the container, using the <a href="https://www.postgresql.org/docs/current/app-psql.html">command line client psql</a> in order to connect to the database.</p>
<p>If you have <a href="https://www.compose.com/articles/postgresql-tips-installing-the-postgresql-client/">psql installed</a> on your own machine, you can use it directly to connect to the PostgreSQL instance running inside the container:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ psql -h localhost -U vehicle_quote
</code></pre></div><p>This is possible because we specified in our <code>docker run</code> command that the container would be accepting traffic over port 5432 (<code>-p 5432:5432</code>) and that it would be accessible within the same network as our actual machine (<code>--network host</code>).</p>
<h4 id="installing-the-net-5-sdk">Installing the .NET 5 SDK</h4>
<p>Ok, with that out of the way, let’s install .NET 5.</p>
<p>.NET 5 truly is multi-platform, so whatever environment you prefer to work with, they’ve got you covered. You can go to <a href="https://dotnet.microsoft.com/download/dotnet/5.0">the .NET 5 download page</a> and pick your desired flavor of the SDK.</p>
<p>On Ubuntu 20.10, which is what I’m running, installation is painless. It’s your typical process with <a href="https://en.wikipedia.org/wiki/APT_(software)">APT</a> and <a href="https://docs.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#2010-">this page from the official docs</a> has all the details.</p>
<p>First step is to add the Microsoft package repository:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ wget https://packages.microsoft.com/config/ubuntu/20.10/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
$ sudo dpkg -i packages-microsoft-prod.deb
</code></pre></div><p>Then, install .NET 5 with APT like one would any other software package:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ sudo apt-get update; <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> sudo apt-get install -y apt-transport-https && <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> sudo apt-get update && <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> sudo apt-get install -y dotnet-sdk-5.0
</code></pre></div><p>Run <code>dotnet --version</code> in your console and you should see something like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet --version
5.0.301
</code></pre></div><h3 id="setting-up-the-project">Setting up the project</h3>
<h4 id="creating-our-aspnet-core-rest-api-project">Creating our ASP.NET Core REST API project</h4>
<p>Ok now that we have our requirements, database and SDK, let’s start setting up our project. We do so with the following command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet new webapi -o VehicleQuotes
</code></pre></div><p>This instructs the <code>dotnet</code> command line tool to create a new REST API web application project for us in a new <code>VehicleQuotes</code> directory.</p>
<p>As a result, <code>dotnet</code> will give you some messages, including this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">The template "ASP.NET Core Web API" was created successfully.
</code></pre></div><p>A new directory was created with our web application files. The newly created <code>VehicleQuotes</code> project looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">.
├── appsettings.Development.json
├── appsettings.json
├── Controllers
│ └── WeatherForecastController.cs
├── obj
│ ├── project.assets.json
│ ├── project.nuget.cache
│ ├── VehicleQuotes.csproj.nuget.dgspec.json
│ ├── VehicleQuotes.csproj.nuget.g.props
│ └── VehicleQuotes.csproj.nuget.g.targets
├── Program.cs
├── Properties
│ └── launchSettings.json
├── Startup.cs
├── VehicleQuotes.csproj
└── WeatherForecast.cs
</code></pre></div><p>Important things to note here are the <code>appsettings.json</code> and <code>appsettings.Development.json</code> files which contain environment specific configuration values; the <code>Controllers</code> directory where we define our application controllers and action methods (i.e. our REST API endpoints); the <code>Program.cs</code> and <code>Startup.cs</code> files that contain our application’s entry point and bootstrapping logic; and finally <code>VehicleQuotes.csproj</code> which is the file that contains project-wide configuration that the framework cares about like references, compilation targets, and other options. Feel free to explore.</p>
<p>The <code>dotnet new</code> command has given us quite a bit. These files make up a fully working application that we can run and play around with. It even has a <a href="https://swagger.io/tools/swagger-ui/">Swagger UI</a>, as I’ll demonstrate shortly. It’s a great place to get started from.</p>
<blockquote>
<p>You can also get a pretty comprehensive <code>.gitignore</code> file by running the <code>dotnet new gitignore</code> command.</p>
</blockquote>
<p>From inside the <code>VehicleQuotes</code> directory, you can run the application with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet run
</code></pre></div><p>This will start up a development server and produce the following output:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet run
Building...
info: Microsoft.Hosting.Lifetime[0]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /home/kevin/projects/endpoint/blog/VehicleQuotes
</code></pre></div><p>Open up a browser window and go to <code>https://localhost:5001/swagger</code> to find a Swagger UI listing our API’s endpoints:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/initial-swagger.png" alt="Initial Swagger UI"></p>
<p>As you can see, we’ve got a <code>GET /WeatherForecast</code> endpoint in our app. This is included by default in the <code>webapi</code> project template that we specified in our call to <code>dotnet new</code>. You can see it defined in the <code>Controllers/WeatherForecastController.cs</code> file.</p>
<h4 id="installing-packages-well-need">Installing packages we’ll need</h4>
<p>Now let’s install all the tools and libraries we will need for our application. First, we install the <a href="https://www.nuget.org/packages/dotnet-aspnet-codegenerator/">ASP.NET Code Generator</a> tool which we’ll use later for scaffolding controllers:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet tool install --global dotnet-aspnet-codegenerator
</code></pre></div><p>We also need to install the <a href="https://www.nuget.org/packages/dotnet-ef/">Entity Framework command line tools</a> which help us with creating and applying database migrations:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet tool install --global dotnet-ef
</code></pre></div><p>Now, we need to install a few libraries that we’ll use in our project. First are all the packages that allow us to use <a href="https://docs.microsoft.com/en-us/ef/">Entity Framework Core</a>, provide scaffolding support and give us a detailed debugging page for database errors:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet add package Microsoft.VisualStudio.Web.CodeGeneration.Design
$ dotnet add package Microsoft.EntityFrameworkCore.Design
$ dotnet add package Microsoft.EntityFrameworkCore.SqlServer
$ dotnet add package Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore
</code></pre></div><p>We also need the <a href="https://www.npgsql.org/efcore/">EF Core driver for PostgreSQL</a> which will allow us to interact with our database:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
</code></pre></div><p>Finally, we need <a href="https://github.com/efcore/EFCore.NamingConventions">another package</a> that will allow us to use the <a href="https://en.wikipedia.org/wiki/Snake_case">snake case</a> naming convention for our database tables, fields, etc. We need this because EF Core uses <a href="https://wiki.c2.com/?UpperCamelCase">capitalized camel case</a> by default, which is not very common in the PostgreSQL world, so this package helps our schema follow PostgreSQL conventions. This is the package:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet add package EFCore.NamingConventions
</code></pre></div><h4 id="connecting-to-the-database-and-performing-initial-app-configuration">Connecting to the database and performing initial app configuration</h4>
<p>In order to connect to, query, and modify a database using EF Core, we need to create a <a href="https://docs.microsoft.com/en-us/ef/core/dbcontext-configuration/"><code>DbContext</code></a>. This is a class that serves as the entry point into the database. Create a new directory called <code>Data</code> in the project root and add this new <code>VehicleQuotesContext.cs</code> file to it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.EntityFrameworkCore</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">VehicleQuotesContext</span> : DbContext
{
<span style="color:#080;font-weight:bold">public</span> VehicleQuotesContext (DbContextOptions<VehicleQuotesContext> options)
: <span style="color:#080;font-weight:bold">base</span>(options)
{
}
}
}
</code></pre></div><p>As you can see this is just a simple class that inherits from EF Core’s <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.dbcontext?view=efcore-5.0"><code>DbContext</code></a> class. That’s all we need for now. We will continue building on this class as we add new tables and configurations.</p>
<p>Now, we need to add this class into <a href="https://docs.microsoft.com/en-us/aspnet/core/?view=aspnetcore-5.0">ASP.NET Core’s</a> built-in <a href="https://martinfowler.com/articles/injection.html">IoC (inversion of control) container</a> so that it’s available to controllers and other classes via <a href="https://en.wikipedia.org/wiki/Dependency_injection">Dependency Injection</a>, and tell it how to find our database. Go to <code>Startup.cs</code> and add the following using statement near the top of the file:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.EntityFrameworkCore</span>;
</code></pre></div><p>That will allow us to do the following change in the <code>ConfigureServices</code> method:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> public void ConfigureServices(IServiceCollection services)
{
// ...
<span style="color:#000;background-color:#dfd">+ services.AddDbContext<VehicleQuotesContext>(options =>
</span><span style="color:#000;background-color:#dfd">+ options
</span><span style="color:#000;background-color:#dfd">+ .UseNpgsql(Configuration.GetConnectionString("VehicleQuotesContext"))
</span><span style="color:#000;background-color:#dfd">+ );
</span><span style="color:#000;background-color:#dfd"></span> }
</code></pre></div><blockquote>
<p><code>UseNpgsql</code> is an <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/extension-methods">extension method</a> made available to us by the <code>Npgsql.EntityFrameworkCore.PostgreSQL</code> package that we installed in the previous step.</p>
</blockquote>
<p>The <code>services</code> variable contains all the objects (known as “services”) that are available in the app for Dependency Injection. So here, we’re adding our newly created <code>DbContext</code> to it, specifying that it will connect to a PostgreSQL database (via the <code>options.UseNpgsql</code> call), and that it will use a connection string named <code>VehicleQuotesContext</code> from the app’s default configuration file. Let’s add that connection string now by changing <code>appsettings.json</code> like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"> {
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
}
},
<span style="color:#000;background-color:#fdd">- "AllowedHosts": "*"
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ "AllowedHosts": "*",
</span><span style="color:#000;background-color:#dfd">+ "ConnectionStrings": {
</span><span style="color:#000;background-color:#dfd">+ "VehicleQuotesContext": "Host=localhost;Database=vehicle_quote;Username=vehicle_quote;Password=password"
</span><span style="color:#000;background-color:#dfd">+ }
</span><span style="color:#000;background-color:#dfd"></span> }
</code></pre></div><p>This is your typical PostgreSQL connection string. The only gotcha is that it needs to be specified under the <code>ConnectionStrings</code> -> <code>VehicleQuotesContext</code> section so that our call to <code>Configuration.GetConnectionString</code> can find it.</p>
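<p>As an aside, <code>GetConnectionString</code> is just a shorthand for reading from that section of the configuration. This illustrative snippet shows the equivalence:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">// These two calls resolve to the same value from appsettings.json:
var viaHelper = Configuration.GetConnectionString("VehicleQuotesContext");
var viaSection = Configuration["ConnectionStrings:VehicleQuotesContext"];
</code></pre></div>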
<p>Now let’s put the <code>EFCore.NamingConventions</code> package to good use and configure EF Core to use snake case when naming database objects. Add the following to the <code>ConfigureServices</code> method in <code>Startup.cs</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">public void ConfigureServices(IServiceCollection services)
{
// ...
services.AddDbContext<VehicleQuotesContext>(options =>
options
.UseNpgsql(Configuration.GetConnectionString("VehicleQuotesContext"))
<span style="color:#000;background-color:#dfd">+ .UseSnakeCaseNamingConvention()
</span><span style="color:#000;background-color:#dfd"></span> );
}
</code></pre></div><blockquote>
<p><code>UseSnakeCaseNamingConvention</code> is an extension method made available to us by the <code>EFCore.NamingConventions</code> package that we installed in the previous step.</p>
</blockquote>
<p>Now let’s make logging a little bit more verbose with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">public void ConfigureServices(IServiceCollection services)
{
// ...
services.AddDbContext<VehicleQuotesContext>(options =>
options
.UseNpgsql(Configuration.GetConnectionString("VehicleQuotesContext"))
.UseSnakeCaseNamingConvention()
<span style="color:#000;background-color:#dfd">+ .UseLoggerFactory(LoggerFactory.Create(builder => builder.AddConsole()))
</span><span style="color:#000;background-color:#dfd">+ .EnableSensitiveDataLogging()
</span><span style="color:#000;background-color:#dfd"></span> );
}
</code></pre></div><p>This will make sure full database queries appear in the log in the console, including parameter values. This could expose sensitive data so be careful when using <code>EnableSensitiveDataLogging</code> in production.</p>
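<p>If you’d rather not risk forgetting to turn it off, one option (a sketch, assuming an <code>IWebHostEnvironment</code> field named <code>_env</code> injected via <code>Startup</code>’s constructor) is to enable sensitive logging conditionally:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">services.AddDbContext&lt;VehicleQuotesContext&gt;(options => {
    options
        .UseNpgsql(Configuration.GetConnectionString("VehicleQuotesContext"))
        .UseSnakeCaseNamingConvention()
        .UseLoggerFactory(LoggerFactory.Create(builder => builder.AddConsole()));

    // Only opt into sensitive data logging outside of production.
    if (_env.IsDevelopment())
    {
        options.EnableSensitiveDataLogging();
    }
});
</code></pre></div>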
<p>We can also add the following service configuration to have the app display detailed error pages when something related to the database or migrations goes wrong:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">public void ConfigureServices(IServiceCollection services)
{
// ...
<span style="color:#000;background-color:#dfd">+ services.AddDatabaseDeveloperPageExceptionFilter();
</span><span style="color:#000;background-color:#dfd"></span>}
</code></pre></div><blockquote>
<p><code>AddDatabaseDeveloperPageExceptionFilter</code> is an extension method made available to us by the <code>Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore</code> package that we installed in the previous step.</p>
</blockquote>
<p>Finally, one last configuration I like to do is have the Swagger UI show up at the root URL, so that instead of using <code>https://localhost:5001/swagger</code>, we’re able to just use <code>https://localhost:5001</code>. We do so by updating the <code>Configure</code> method this time, in the same <code>Startup.cs</code> file that we’ve been working on:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseSwagger();
<span style="color:#000;background-color:#fdd">- app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "VehicleQuotes v1"));
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ app.UseSwaggerUI(c => {
</span><span style="color:#000;background-color:#dfd">+ c.SwaggerEndpoint("/swagger/v1/swagger.json", "VehicleQuotes v1");
</span><span style="color:#000;background-color:#dfd">+ c.RoutePrefix = "";
</span><span style="color:#000;background-color:#dfd">+ });
</span><span style="color:#000;background-color:#dfd"></span> }
</code></pre></div><p>The magic happens in the <code>c.RoutePrefix = "";</code> line, which removes the need for any prefix when accessing the auto-generated Swagger UI.</p>
<p>Try it out: run <code>dotnet run</code>, navigate to <code>https://localhost:5001</code>, and you should see the Swagger UI there.</p>
<h3 id="building-the-application">Building the application</h3>
<h4 id="creating-model-entities-migrations-and-updating-the-database">Creating model entities, migrations and updating the database</h4>
<p>Alright, with all that configuration out of the way, let’s implement some of our actual application logic now. Refer back to our data model. We’ll start by defining our three simplest tables: <code>makes</code>, <code>sizes</code> and <code>body_types</code>. With EF Core, we define tables via so-called <a href="https://en.wikipedia.org/wiki/Plain_old_CLR_object">POCO</a> entities, which are simple C# classes with some properties. The classes become tables and the properties become the tables’ fields. Instances of these classes represent records in the database.</p>
<p>So, create a new <code>Models</code> directory in our project’s root and add these three files:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Models/BodyType.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">BodyType</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Name { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Models/Make.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">Make</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Name { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Models/Size.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">Size</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Name { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><p>Now, we add three corresponding <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.dbset-1?view=efcore-5.0"><code>DbSet</code></a>s to our <code>DbContext</code> in <code>Data/VehicleQuoteContext.cs</code>. Here’s the diff:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">using Microsoft.EntityFrameworkCore;
<span style="color:#000;background-color:#dfd">+using VehicleQuotes.Models;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes
{
public class VehicleQuotesContext : DbContext
{
public VehicleQuotesContext (DbContextOptions<VehicleQuotesContext> options)
: base(options)
{
}
<span style="color:#000;background-color:#dfd">+ public DbSet<Make> Makes { get; set; }
</span><span style="color:#000;background-color:#dfd">+ public DbSet<Size> Sizes { get; set; }
</span><span style="color:#000;background-color:#dfd">+ public DbSet<BodyType> BodyTypes { get; set; }
</span><span style="color:#000;background-color:#dfd"></span> }
}
</code></pre></div><p>This is how we tell EF Core to build tables in our database for our entities. You’ll see later how we use those <code>DbSet</code>s to access the data in those tables. For now, let’s create a <a href="https://docs.microsoft.com/en-us/ef/core/managing-schemas/migrations/?tabs=dotnet-core-cli">migration</a> script that we can later run to apply changes to our database. Run the following to have EF Core create it for us:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef migrations add AddLookupTables
</code></pre></div><p>Now take a look at the newly created <code>Migrations</code> directory. It contains a few new files, but the one we care about right now is <code>Migrations/{TIMESTAMP}_AddLookupTables.cs</code>. In its <code>Up</code> method, it’s got some code that will modify the database structure when run. The EF Core tooling has inspected our project, identified the new entities, and automatically generated a migration script for us that creates tables for them. Notice also how the tables and fields use the snake case naming convention, just as we specified with the call to <code>UseSnakeCaseNamingConvention</code> in <code>Startup.cs</code>.</p>
<p>Now, to actually run the migration script and apply the changes to the database, we do:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef database update
</code></pre></div><p>That command inspects our project to find any migrations that haven’t been run yet, and applies them. In this case, we only have one, so that’s what it runs. Look at the output in the console to see it working its magic step by step:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef database update
Build started...
Build succeeded.
warn: Microsoft.EntityFrameworkCore.Model.Validation[10400]
Sensitive data logging is enabled. Log entries and exception messages may include sensitive application data; this mode should only be enabled during development.
...
info: Microsoft.EntityFrameworkCore.Database.Command[20101]
Executed DbCommand (4ms) [<span style="color:#369">Parameters</span>=[], <span style="color:#369">CommandType</span>=<span style="color:#d20;background-color:#fff0f0">'Text'</span>, <span style="color:#369">CommandTimeout</span>=<span style="color:#d20;background-color:#fff0f0">'30'</span>]
CREATE TABLE sizes (
id integer GENERATED BY DEFAULT AS IDENTITY,
name text NULL,
CONSTRAINT pk_sizes PRIMARY KEY (id)
);
info: Microsoft.EntityFrameworkCore.Database.Command[20101]
Executed DbCommand (1ms) [<span style="color:#369">Parameters</span>=[], <span style="color:#369">CommandType</span>=<span style="color:#d20;background-color:#fff0f0">'Text'</span>, <span style="color:#369">CommandTimeout</span>=<span style="color:#d20;background-color:#fff0f0">'30'</span>]
INSERT INTO <span style="color:#d20;background-color:#fff0f0">"__EFMigrationsHistory"</span> (migration_id, product_version)
VALUES (<span style="color:#d20;background-color:#fff0f0">'20210625212939_AddLookupTables'</span>, <span style="color:#d20;background-color:#fff0f0">'5.0.7'</span>);
Done.
</code></pre></div><p>Notice how it warns us about potential exposure of sensitive data because of the <code>EnableSensitiveDataLogging</code> option we opted into in <code>Startup.cs</code>. The EF Core logs are also verbose enough to show every database operation, thanks to the other configuration option we applied there: <code>UseLoggerFactory(LoggerFactory.Create(builder => builder.AddConsole()))</code>.</p>
<p>You can connect to the database with the <code>psql</code> command line client and see that the changes took effect:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ psql -h localhost -U vehicle_quote
...
<span style="color:#369">vehicle_quote</span>=<span style="color:#888"># \c vehicle_quote </span>
psql (12.7 (Ubuntu 12.7-0ubuntu0.20.10.1), server 13.2 (Debian 13.2-1.pgdg100+1))
You are now connected to database <span style="color:#d20;background-color:#fff0f0">"vehicle_quote"</span> as user <span style="color:#d20;background-color:#fff0f0">"vehicle_quote"</span>.
<span style="color:#369">vehicle_quote</span>=<span style="color:#888"># \dt</span>
List of relations
Schema | Name | Type | Owner
--------+-----------------------+-------+---------------
public | __EFMigrationsHistory | table | vehicle_quote
public | body_types | table | vehicle_quote
public | makes | table | vehicle_quote
public | sizes | table | vehicle_quote
(<span style="color:#00d;font-weight:bold">4</span> rows)
<span style="color:#369">vehicle_quote</span>=<span style="color:#888"># \d makes</span>
Table <span style="color:#d20;background-color:#fff0f0">"public.makes"</span>
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+----------------------------------
id | integer | | not null | generated by default as identity
name | text | | |
Indexes:
<span style="color:#d20;background-color:#fff0f0">"pk_makes"</span> PRIMARY KEY, btree (id)
</code></pre></div><p>There are our tables in all their normalized, snake-cased glory. The <code>__EFMigrationsHistory</code> table is used internally by EF Core to keep track of which migrations have been applied.</p>
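<p>The <code>DbSet</code>s we defined earlier are how application code queries and modifies these tables. As a quick, hypothetical sketch (assuming a <code>VehicleQuotesContext</code> instance named <code>context</code> obtained via Dependency Injection, inside an <code>async</code> method):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">// Fetch all makes; EF Core translates this into a SELECT against the makes table.
var allMakes = await context.Makes.ToListAsync();

// Insert a new record into the makes table.
context.Makes.Add(new Make { Name = "Toyota" });
await context.SaveChangesAsync();
</code></pre></div><p>The controllers we’re about to scaffold contain very similar code.</p>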
<h4 id="creating-controllers-for-cruding-our-tables">Creating controllers for CRUDing our tables</h4>
<p>Now that we have that, let’s add a few endpoints to support basic CRUD of those tables. We can use the <code>dotnet-aspnet-codegenerator</code> scaffolding tool that we installed earlier. For the three tables that we have, we would do:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet aspnet-codegenerator controller <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -name MakesController <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -m Make <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -dc VehicleQuotesContext <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -async <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -api <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -outDir Controllers
$ dotnet aspnet-codegenerator controller <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -name BodyTypesController <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -m BodyType <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -dc VehicleQuotesContext <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -async <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -api <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -outDir Controllers
$ dotnet aspnet-codegenerator controller <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -name SizesController <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -m Size <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -dc VehicleQuotesContext <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -async <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -api <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -outDir Controllers
</code></pre></div><p>Those commands tell the scaffolding tool to create new controllers that:</p>
<ol>
<li>Are named as given by the <code>-name</code> option.</li>
<li>Use the model class specified in the <code>-m</code> option.</li>
<li>Use our <code>VehicleQuotesContext</code> to talk to the database, as specified by the <code>-dc</code> option.</li>
<li>Define their methods using <code>async</code>/<code>await</code> syntax, per the <code>-async</code> option.</li>
<li>Are API controllers, as indicated by the <code>-api</code> option.</li>
<li>Are created in the <code>Controllers</code> directory, via the <code>-outDir</code> option.</li>
</ol>
<p>Explore the new files that got created in the <code>Controllers</code> directory: <code>MakesController.cs</code>, <code>BodyTypesController.cs</code> and <code>SizesController.cs</code>. The controllers have been generated with the necessary <a href="https://docs.microsoft.com/en-us/aspnet/mvc/overview/older-versions-1/controllers-and-routing/aspnet-mvc-controllers-overview-cs#understanding-controller-actions">Action Methods</a> to fetch, create, update and delete their corresponding entities. Try <code>dotnet run</code> and navigate to <code>https://localhost:5001</code> to see the new endpoints in the Swagger UI:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/swagger-lookup-tables.png" alt="Swagger UI with lookup tables"></p>
<p>Try it out! You can interact with each of the endpoints from the Swagger UI and it all works as you’d expect.</p>
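<p>You can also exercise the endpoints from the command line. Here’s a quick sketch with <code>curl</code> (the <code>/api/Makes</code> route comes from the scaffolded controllers’ default routing; <code>-k</code> skips certificate validation for the local development certificate):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># Create a new make.
$ curl -k -X POST https://localhost:5001/api/Makes \
    -H "Content-Type: application/json" \
    -d '{"name": "Toyota"}'

# List all makes.
$ curl -k https://localhost:5001/api/Makes
</code></pre></div>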
<h4 id="adding-unique-constraints-via-indexes">Adding unique constraints via indexes</h4>
<p>OK, our app is coming along nicely. Right now, though, there’s an issue with the tables we’ve created: it’s possible to create vehicle makes with the same name, and the same is true for body types and sizes. That doesn’t make much sense for these tables, so let’s fix it by adding a uniqueness constraint. We can do so by creating a unique database index using EF Core’s <code>Index</code> attribute. For example, we can modify our <code>Models/Make.cs</code> like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"><span style="color:#000;background-color:#dfd">+using Microsoft.EntityFrameworkCore;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.Models
{
<span style="color:#000;background-color:#dfd">+ [Index(nameof(Name), IsUnique = true)]
</span><span style="color:#000;background-color:#dfd"></span> public class Make
{
public int ID { get; set; }
public string Name { get; set; }
}
}
</code></pre></div><p>In fact, do the same for our other entities in <code>Models/BodyType.cs</code> and <code>Models/Size.cs</code>. Don’t forget the <code>using Microsoft.EntityFrameworkCore</code> statement.</p>
<p>With that, we can create a new migration:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef migrations add AddUniqueIndexesToLookupTables
</code></pre></div><p>That will result in a new migration script in <code>Migrations/{TIMESTAMP}_AddUniqueIndexesToLookupTables.cs</code>. Its <code>Up</code> method looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#080;font-weight:bold">protected</span> <span style="color:#080;font-weight:bold">override</span> <span style="color:#080;font-weight:bold">void</span> Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.CreateIndex(
name: <span style="color:#d20;background-color:#fff0f0">"ix_sizes_name"</span>,
table: <span style="color:#d20;background-color:#fff0f0">"sizes"</span>,
column: <span style="color:#d20;background-color:#fff0f0">"name"</span>,
unique: <span style="color:#080;font-weight:bold">true</span>);
migrationBuilder.CreateIndex(
name: <span style="color:#d20;background-color:#fff0f0">"ix_makes_name"</span>,
table: <span style="color:#d20;background-color:#fff0f0">"makes"</span>,
column: <span style="color:#d20;background-color:#fff0f0">"name"</span>,
unique: <span style="color:#080;font-weight:bold">true</span>);
migrationBuilder.CreateIndex(
name: <span style="color:#d20;background-color:#fff0f0">"ix_body_types_name"</span>,
table: <span style="color:#d20;background-color:#fff0f0">"body_types"</span>,
column: <span style="color:#d20;background-color:#fff0f0">"name"</span>,
unique: <span style="color:#080;font-weight:bold">true</span>);
}
</code></pre></div><p>As you can see, new unique indexes are being created on the tables and fields that we specified. Like before, apply the changes to the database structure with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef database update
</code></pre></div><p>Now if you try to create, for example, a vehicle make with a repeated name, you’ll get an error. Try doing so by <code>POST</code>ing to <code>/api/Makes</code> via the Swagger UI:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/unique-constraint-violation.png" alt="Unique constraint violation"></p>
<h4 id="responding-with-specific-http-error-codes-409-conflict">Responding with specific HTTP error codes (409 Conflict)</h4>
<p>The fact that we can now enforce unique constraints is all well and good, but the error scenario is not very user-friendly. Instead of returning a “500 Internal Server Error” status code with a wall of text, we should respond with something more sensible; a “409 Conflict” would be more appropriate for this kind of error. We can easily update our controllers to handle that scenario. What we need to do is update the methods that handle the <code>POST</code> and <code>PUT</code> endpoints so that they catch the <code>Microsoft.EntityFrameworkCore.DbUpdateException</code> exception and return the proper response. Here’s how we would do it for the <code>MakesController</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">// ...
namespace VehicleQuotes.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class MakesController : ControllerBase
{
// ...
[HttpPut("{id}")]
public async Task<IActionResult> PutMake(int id, Make make)
{
// ...
try
{
await _context.SaveChangesAsync();
}
// ...
<span style="color:#000;background-color:#dfd">+ catch (Microsoft.EntityFrameworkCore.DbUpdateException)
</span><span style="color:#000;background-color:#dfd">+ {
</span><span style="color:#000;background-color:#dfd">+ return Conflict();
</span><span style="color:#000;background-color:#dfd">+ }
</span><span style="color:#000;background-color:#dfd"></span>
return NoContent();
}
[HttpPost]
public async Task<ActionResult<Make>> PostMake(Make make)
{
_context.Makes.Add(make);
<span style="color:#000;background-color:#fdd">- await _context.SaveChangesAsync();
</span><span style="color:#000;background-color:#fdd"></span>
<span style="color:#000;background-color:#dfd">+ try
</span><span style="color:#000;background-color:#dfd">+ {
</span><span style="color:#000;background-color:#dfd">+ await _context.SaveChangesAsync();
</span><span style="color:#000;background-color:#dfd">+ }
</span><span style="color:#000;background-color:#dfd">+ catch (Microsoft.EntityFrameworkCore.DbUpdateException)
</span><span style="color:#000;background-color:#dfd">+ {
</span><span style="color:#000;background-color:#dfd">+ return Conflict();
</span><span style="color:#000;background-color:#dfd">+ }
</span><span style="color:#000;background-color:#dfd"></span>
return CreatedAtAction("GetMake", new { id = make.ID }, make);
}
// ...
}
}
</code></pre></div><p>Go ahead and do the same for the other two controllers, and try again to POST a repeated make name via the Swagger UI. You should see this now instead:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/http-409-conflict.png" alt="HTTP 409 Conflict"></p>
<p>Much better now, don’t you think?</p>
<h4 id="adding-a-more-complex-entity-to-the-model">Adding a more complex entity to the model</h4>
<p>Now let’s work on an entity that’s a little bit more complex: the one we will use to represent vehicle models.</p>
<p>For this entity, we don’t want our API to be as low-level as it is for the other three, where it’s basically a thin wrapper over database tables. We want it to be a little bit more abstract and not expose the entire database structure verbatim.</p>
<p>Refer back to the data model. We’ll add <code>models</code>, <code>model_styles</code> and <code>model_style_years</code>. Let’s start by adding the following classes:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Models/Model.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Collections.Generic</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">Model</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Name { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> MakeID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> Make Make { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> ICollection<ModelStyle> ModelStyles { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Models/ModelStyle.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Collections.Generic</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">ModelStyle</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ModelID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> BodyTypeID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> SizeID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> Model Model { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> BodyType BodyType { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> Size Size { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> ICollection<ModelStyleYear> ModelStyleYears { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Models/ModelStyleYear.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">ModelStyleYear</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Year { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ModelStyleID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> ModelStyle ModelStyle { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><p>Notice how some of these entities now include properties whose types are other entities, and some are even collections. These are called Navigation Properties, and they are how we tell EF Core that our entities are related to one another. They will result in foreign keys being created in the database.</p>
<p>Take the <code>Model</code> entity, for example. It has a property <code>Make</code> of type <code>Make</code>. It also has a <code>MakeID</code> property of type <code>int</code>. EF Core sees this and figures out that there’s a relation between the <code>makes</code> and <code>models</code> tables: specifically, that <code>models</code> have a <code>make</code>. This is a many-to-one relation where the <code>models</code> table stores a foreign key to the <code>makes</code> table.</p>
<p>Similarly, the <code>Model</code> entity has a <code>ModelStyles</code> property of type <code>ICollection<ModelStyle></code>. This tells EF Core that <code>models</code> have many <code>model_styles</code>. This is a one-to-many relation from the perspective of the <code>models</code> table: the foreign key lives in the <code>model_styles</code> table and points back to <code>models</code>.</p>
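<p>To make the effect of these navigation properties concrete, here’s a fragment of what the generated migration’s <code>Up</code> method could look like for the <code>models</code> table. This is an illustrative sketch; the exact column types, annotations, and constraint names in your generated script may differ:</p>

```cs
migrationBuilder.CreateTable(
    name: "models",
    columns: table => new
    {
        id = table.Column<int>(type: "integer", nullable: false),
        name = table.Column<string>(type: "text", nullable: true),
        make_id = table.Column<int>(type: "integer", nullable: false)
    },
    constraints: table =>
    {
        table.PrimaryKey("pk_models", x => x.id);
        // The MakeID/Make navigation property pair on the Model entity
        // becomes a foreign key pointing back to the makes table.
        table.ForeignKey(
            name: "fk_models_makes_make_id",
            column: x => x.make_id,
            principalTable: "makes",
            principalColumn: "id",
            onDelete: ReferentialAction.Cascade);
    });
```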
<blockquote>
<p>The <a href="https://docs.microsoft.com/en-us/ef/core/modeling/relationships?tabs=fluent-api%2Cfluent-api-simple-key%2Csimple-key#single-navigation-property-1">official documentation</a> is a great resource to learn more details about how relationships work in EF Core.</p>
</blockquote>
<p>After that, same as before, we have to add the corresponding <code>DbSet</code>s to our <code>DbContext</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">// Data/VehicleQuotesContext.cs
// ...
namespace VehicleQuotes
{
public class VehicleQuotesContext : DbContext
{
// ...
<span style="color:#000;background-color:#dfd">+ public DbSet<Model> Models { get; set; }
</span><span style="color:#000;background-color:#dfd">+ public DbSet<ModelStyle> ModelStyles { get; set; }
</span><span style="color:#000;background-color:#dfd">+ public DbSet<ModelStyleYear> ModelStyleYears { get; set; }
</span><span style="color:#000;background-color:#dfd"></span> }
}
</code></pre></div><p>Don’t forget the migration script. First create it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef migrations add AddVehicleModelTables
</code></pre></div><p>And then apply it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef database update
</code></pre></div><h4 id="adding-composite-unique-indexes">Adding composite unique indexes</h4>
<p>These vehicle model related tables also need some uniqueness enforcement. This time, however, the unique keys are composite, meaning that they involve multiple fields. For vehicle models, for example, it makes no sense to have multiple records with the same make and name, but it does make sense to have multiple models with the same name as long as they belong to different makes. We can solve that with a composite index. Here’s how we create one on <code>Model</code> with EF Core:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">using System.Collections.Generic;
<span style="color:#000;background-color:#dfd">+using Microsoft.EntityFrameworkCore;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.Models
{
<span style="color:#000;background-color:#dfd">+ [Index(nameof(Name), nameof(MakeID), IsUnique = true)]
</span><span style="color:#000;background-color:#dfd"></span> public class Model
{
// ...
}
}
</code></pre></div><p>This is very similar to what we did with the <code>Make</code>, <code>BodyType</code>, and <code>Size</code> entities. The only difference is that this time we included multiple fields in the parameters for the <code>Index</code> attribute.</p>
<p>We should do the same for <code>ModelStyle</code> and <code>ModelStyleYear</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">using System.Collections.Generic;
<span style="color:#000;background-color:#dfd">+using Microsoft.EntityFrameworkCore;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.Models
{
<span style="color:#000;background-color:#dfd">+ [Index(nameof(ModelID), nameof(BodyTypeID), nameof(SizeID), IsUnique = true)]
</span><span style="color:#000;background-color:#dfd"></span> public class ModelStyle
{
// ...
}
}
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"><span style="color:#000;background-color:#dfd">+using Microsoft.EntityFrameworkCore;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.Models
{
<span style="color:#000;background-color:#dfd">+ [Index(nameof(Year), nameof(ModelStyleID), IsUnique = true)]
</span><span style="color:#000;background-color:#dfd"></span> public class ModelStyleYear
{
// ...
}
}
</code></pre></div><p>Don’t forget the migrations:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef migrations add AddUniqueIndexesForVehicleModelTables
$ dotnet ef database update
</code></pre></div><h4 id="adding-controllers-with-custom-routes">Adding controllers with custom routes</h4>
<p>Our data model dictates that vehicle models belong to a make. In other words, a vehicle model has no meaning by itself; it only has meaning within the context of a make. Ideally, we want our API routes to reflect this concept. That is, instead of URLs for models looking like this: <code>/api/Models/{id}</code>, we’d rather have them look like this: <code>/api/Makes/{makeId}/Models/{modelId}</code>. Let’s go ahead and scaffold a controller for this entity:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet aspnet-codegenerator controller <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -name ModelsController <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -m Model <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -dc VehicleQuotesContext <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -async <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -api <span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> -outDir Controllers
</code></pre></div><p>Now let’s change the resulting <code>Controllers/ModelsController.cs</code> to use the URL structure that we want. To do so, we modify the <code>Route</code> attribute that’s applied to the <code>ModelsController</code> class to this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">[Route("api/Makes/{makeId}/[controller]")]
</code></pre></div><p>Do a <code>dotnet run</code> and take a peek at the Swagger UI at <code>https://localhost:5001</code> to see what the <code>Models</code> endpoint routes look like now:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/nested-routes.png" alt="Nested routes"></p>
<p>The vehicle model routes are now nested within makes, just like we wanted.</p>
<p>Of course, this is just eye candy for now. We need to actually use this new <code>makeId</code> parameter in the endpoints’ logic. For example, one would expect a <code>GET</code> to <code>/api/Makes/1/Models</code> to return only the vehicle models that belong to the make with <code>id</code> 1. But right now, all vehicle models are returned regardless. All other endpoints behave similarly: the given <code>makeId</code> is not taken into consideration at all, so there’s no limit on the operations on the vehicle models.</p>
<p>Let’s update the <code>ModelsController</code>’s <code>GetModels</code> method (which is the one that handles the <code>GET /api/Makes/{makeId}/Models</code> endpoint) to behave like one would expect. It should look like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#369">[HttpGet]</span>
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<ActionResult<IEnumerable<Model>>> GetModels([FromRoute] <span style="color:#888;font-weight:bold">int</span> makeId)
{
<span style="color:#888;font-weight:bold">var</span> make = <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Makes.FindAsync(makeId);
<span style="color:#080;font-weight:bold">if</span> (make == <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#080;font-weight:bold">return</span> NotFound();
}
<span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Models.Where(m => m.MakeID == makeId).ToListAsync();
}
</code></pre></div><p>See how we’ve added a new parameter to the method: <code>[FromRoute] int makeId</code>. This <code>[FromRoute]</code> <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/attributes/">attribute</a> is how we tell ASP.NET Core that this endpoint will use the <code>makeId</code> value coming from the URL route. We then use our <code>DbContext</code> to try to find the make that corresponds to the given identifier, via <code>_context.Makes.FindAsync(makeId)</code>. If we can’t find it, we return a <code>404 Not Found</code> HTTP status code with the <code>return NotFound();</code> line. Finally, in the last line of the method, we query the <code>models</code> table for all the records whose <code>make_id</code> matches the given parameter.</p>
<blockquote>
<p>We have access to the <code>DbContext</code> because it has been injected as a dependency into the controller via its constructor by the framework.</p>
</blockquote>
<blockquote>
<p><a href="https://docs.microsoft.com/en-us/ef/core/querying/">The official documentation</a> is a great resource to learn about all the possibilities when querying data with EF Core.</p>
</blockquote>
<p>Let’s update the <code>GetModel</code> method, which handles the <code>GET /api/Makes/{makeId}/Models/{id}</code> endpoint, similarly.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">[HttpGet("{id}")]
<span style="color:#000;background-color:#fdd">-public async Task<ActionResult<Model>> GetModel(int id)
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+public async Task<ActionResult<Model>> GetModel([FromRoute] int makeId, int id)
</span><span style="color:#000;background-color:#dfd"></span>{
<span style="color:#000;background-color:#fdd">- var model = await _context.Models.FindAsync(id);
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ var model = await _context.Models.FirstOrDefaultAsync(m =>
</span><span style="color:#000;background-color:#dfd">+ m.MakeID == makeId && m.ID == id
</span><span style="color:#000;background-color:#dfd">+ );
</span><span style="color:#000;background-color:#dfd"></span>
if (model == null)
{
return NotFound();
}
return model;
}
</code></pre></div><p>We’ve once again included the <code>makeId</code> as a parameter to the method and modified the EF Core query to use both the make ID and the vehicle model ID when looking for the record.</p>
<p>And that’s the gist of it. Other methods would need to be updated similarly. The next section will include these methods in their final form, so I won’t go through each one of them here.</p>
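<p>To give one more example of the pattern, here’s a sketch of how the scaffolded <code>DeleteModel</code> method could be adjusted along the same lines. It follows the same approach as <code>GetModel</code> above; your final version may vary slightly:</p>

```cs
[HttpDelete("{id}")]
public async Task<IActionResult> DeleteModel([FromRoute] int makeId, int id)
{
    // Only look for the vehicle model within the given make.
    var model = await _context.Models.FirstOrDefaultAsync(m =>
        m.MakeID == makeId && m.ID == id
    );

    if (model == null)
    {
        return NotFound();
    }

    _context.Models.Remove(model);
    await _context.SaveChangesAsync();

    return NoContent();
}
```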
<h4 id="using-resource-models-as-dtos-for-controllers">Using resource models as DTOs for controllers</h4>
<p>Now, I did say at the beginning that we wanted the vehicle model endpoint to be a bit more abstract. Right now it’s operating directly over the EF Core entities and our table. As a result, creating new vehicle models via the <code>POST /api/Makes/{makeId}/Models</code> endpoint is a pain. Take a look at the Swagger UI request schema for that endpoint:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/raw-model-request-schema.png" alt="Raw model request schema"></p>
<p>This is way too much. Let’s make it a little bit more user-friendly by making it more abstract.</p>
<p>To do that, we will introduce what I like to call a Resource Model (or <a href="https://martinfowler.com/eaaCatalog/dataTransferObject.html">DTO, Data Transfer Object</a>, or View Model). This is a class whose only purpose is to streamline the API contract of the endpoint by defining a set of fields that clients will use to make requests and interpret responses. Something that’s simpler than our actual database structure, but still captures all the information that’s important for our application. We will update the <code>ModelsController</code> so that it’s able to receive objects of this new class as requests, operate on them, translate them to our EF Core entities and actual database records, and return them as a response. The hope is that, by hiding the details of our database structure, we make it easier for clients to interact with our API.</p>
<p>So let’s create a new <code>ResourceModels</code> directory in our project’s root and add these two classes:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// ResourceModels/ModelSpecification.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.ResourceModels</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">ModelSpecification</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Name { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> ModelSpecificationStyle[] Styles { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// ResourceModels/ModelSpecificationStyle.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.ResourceModels</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">ModelSpecificationStyle</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> BodyType { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Size { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span>[] Years { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><p>Thanks to these two, instead of that mess from above, clients <code>POST</code>ing to <code>/api/Makes/{makeId}/Models</code> will be able to use a request body like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"name"</span>: <span style="color:#d20;background-color:#fff0f0">"string"</span>,
<span style="color:#b06;font-weight:bold">"styles"</span>: [
{
<span style="color:#b06;font-weight:bold">"bodyType"</span>: <span style="color:#d20;background-color:#fff0f0">"string"</span>,
<span style="color:#b06;font-weight:bold">"size"</span>: <span style="color:#d20;background-color:#fff0f0">"string"</span>,
<span style="color:#b06;font-weight:bold">"years"</span>: [
<span style="color:#d20;background-color:#fff0f0">"string"</span>
]
}
]
}
</code></pre></div><p>Which is much simpler. We have the vehicle model name and an array of styles. Each style has a body type and a size, which we can specify by their names because those are unique keys. We don’t need their integer IDs (i.e. primary keys) in order to find them. Then, each style has an array of strings that contain the years in which those styles are available for that model. The make is part of the URL already, so we don’t need to also specify it in the request payload.</p>
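<p>Since body types and sizes arrive in the payload as names rather than integer IDs, the <code>POST</code> handler will need to translate those names back into primary keys. Here’s a simplified sketch of how that lookup could go; the variable names (<code>incomingModel</code>, <code>model</code>, <code>style</code>) are illustrative, not taken from the scaffolded code:</p>

```cs
// For each incoming style, resolve the body type and size records by
// their unique names, and use their IDs to build the ModelStyle rows.
foreach (var style in incomingModel.Styles)
{
    var bodyType = await _context.BodyTypes
        .FirstOrDefaultAsync(bt => bt.Name == style.BodyType);
    var size = await _context.Sizes
        .FirstOrDefaultAsync(s => s.Name == style.Size);

    if (bodyType == null || size == null)
    {
        // An unknown body type or size name makes the request invalid.
        return BadRequest();
    }

    model.ModelStyles.Add(new ModelStyle
    {
        BodyTypeID = bodyType.ID,
        SizeID = size.ID,
        ModelStyleYears = style.Years
            .Select(year => new ModelStyleYear { Year = year })
            .ToList()
    });
}
```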
<p>Let’s update our <code>ModelsController</code> to use these Resource Models instead of the <code>Model</code> EF Core entity. Be sure to include the namespace where the Resource Models are defined by adding the following using statement: <code>using VehicleQuotes.ResourceModels;</code>. Now, let’s update the <code>GetModels</code> method (which handles the <code>GET /api/Makes/{makeId}/Models</code> endpoint) so that it looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#369">[HttpGet]</span>
<span style="color:#888">// Return a collection of `ModelSpecification`s and expect a `makeId` from the URL.
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<ActionResult<IEnumerable<ModelSpecification>>> GetModels([FromRoute] <span style="color:#888;font-weight:bold">int</span> makeId)
{
<span style="color:#888">// Look for the make identified by `makeId`.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> make = <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Makes.FindAsync(makeId);
<span style="color:#888">// If we can't find the make, then we return a 404.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">if</span> (make == <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#080;font-weight:bold">return</span> NotFound();
}
<span style="color:#888">// Build a query to fetch the relevant records from the `models` table and
</span><span style="color:#888"></span> <span style="color:#888">// build `ModelSpecification` with the data.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> modelsToReturn = <span style="color:#00d;font-weight:bold">_</span>context.Models
.Where(m => m.MakeID == makeId)
.Select(m => <span style="color:#080;font-weight:bold">new</span> ModelSpecification {
ID = m.ID,
Name = m.Name,
Styles = m.ModelStyles.Select(ms => <span style="color:#080;font-weight:bold">new</span> ModelSpecificationStyle {
BodyType = ms.BodyType.Name,
Size = ms.Size.Name,
Years = ms.ModelStyleYears.Select(msy => msy.Year).ToArray()
}).ToArray()
});
<span style="color:#888">// Execute the query and respond with the results.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">await</span> modelsToReturn.ToListAsync();
}
</code></pre></div><p>The first thing that we changed was the return type. Instead of <code>Task<ActionResult<IEnumerable<Model>>></code>, the method now returns <code>Task<ActionResult<IEnumerable<ModelSpecification>>></code>. We’re going to use our new Resource Models as these endpoints’ contract, so we need to make sure we are returning those. Next, we considerably changed the LINQ expression that searches the database for the vehicle model records we want. The filtering logic (given by the <code>Where</code>) is the same. That is, we’re still searching for vehicle models within the given make ID. What we changed was the projection logic in the <code>Select</code>. Our Action Method now returns a collection of <code>ModelSpecification</code> objects, so we updated the <code>Select</code> to produce such objects, based on the records from the <code>models</code> table that match our search criteria. We build <code>ModelSpecification</code>s using the data coming from <code>models</code> records and their related <code>model_styles</code> and <code>model_style_years</code>. Finally, we asynchronously execute the query to fetch the data from the database and return it.</p>
<p>Next, let’s move on to the <code>GetModel</code> method, which handles the <code>GET /api/Makes/{makeId}/Models/{id}</code> endpoint. This is what it should look like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#369">[HttpGet("{id}")]</span>
<span style="color:#888">// Return a `ModelSpecification` and expect `makeId` and `id` from the URL.
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<ActionResult<ModelSpecification>> GetModel([FromRoute] <span style="color:#888;font-weight:bold">int</span> makeId, [FromRoute] <span style="color:#888;font-weight:bold">int</span> id)
{
<span style="color:#888">// Look for the model specified by the given identifiers and also load
</span><span style="color:#888"></span> <span style="color:#888">// all related data that we care about for this method.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> model = <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Models
.Include(m => m.ModelStyles).ThenInclude(ms => ms.BodyType)
.Include(m => m.ModelStyles).ThenInclude(ms => ms.Size)
.Include(m => m.ModelStyles).ThenInclude(ms => ms.ModelStyleYears)
.FirstOrDefaultAsync(m => m.MakeID == makeId && m.ID == id);
<span style="color:#888">// If we couldn't find it, respond with a 404.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">if</span> (model == <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#080;font-weight:bold">return</span> NotFound();
}
<span style="color:#888">// Use the fetched data to construct a `ModelSpecification` to use in the response.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">new</span> ModelSpecification {
ID = model.ID,
Name = model.Name,
Styles = model.ModelStyles.Select(ms => <span style="color:#080;font-weight:bold">new</span> ModelSpecificationStyle {
BodyType = ms.BodyType.Name,
Size = ms.Size.Name,
Years = ms.ModelStyleYears.Select(msy => msy.Year).ToArray()
}).ToArray()
};
}
</code></pre></div><p>Same as before, we changed the return type of the method to be <code>ModelSpecification</code>. Then, we modified the query so that it loads all the related data for the <code>Model</code> entity via its navigation properties. That’s what the <code>Include</code> and <code>ThenInclude</code> calls do. We need this data loaded because we use it in the method’s return statement to build the <code>ModelSpecification</code> that will be included in the response. The logic to build it is very similar to that of the previous method.</p>
<blockquote>
<p>You can learn more about the various available approaches for loading data with EF Core in <a href="https://docs.microsoft.com/en-us/ef/core/querying/related-data/">the official documentation</a>.</p>
</blockquote>
<p>Next is the <code>PUT /api/Makes/{makeId}/Models/{id}</code> endpoint, handled by the <code>PutModel</code> method:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#369">[HttpPut("{id}")]</span>
<span style="color:#888">// Expect `makeId` and `id` from the URL and a `ModelSpecification` from the request payload.
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<IActionResult> PutModel([FromRoute] <span style="color:#888;font-weight:bold">int</span> makeId, <span style="color:#888;font-weight:bold">int</span> id, ModelSpecification model)
{
<span style="color:#888">// If the id in the URL and the request payload are different, return a 400.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">if</span> (id != model.ID)
{
<span style="color:#080;font-weight:bold">return</span> BadRequest();
}
<span style="color:#888">// Obtain the `models` record that we want to update. Include any related
</span><span style="color:#888"></span> <span style="color:#888">// data that we want to update as well.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> modelToUpdate = <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Models
.Include(m => m.ModelStyles)
.FirstOrDefaultAsync(m => m.MakeID == makeId && m.ID == id);
<span style="color:#888">// If we can't find the record, then return a 404.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">if</span> (modelToUpdate == <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#080;font-weight:bold">return</span> NotFound();
}
<span style="color:#888">// Update the record with what came in the request payload.
</span><span style="color:#888"></span> modelToUpdate.Name = model.Name;
<span style="color:#888">// Build EF Core entities based on the incoming Resource Model object.
</span><span style="color:#888"></span> modelToUpdate.ModelStyles = model.Styles.Select(style => <span style="color:#080;font-weight:bold">new</span> ModelStyle {
BodyType = <span style="color:#00d;font-weight:bold">_</span>context.BodyTypes.Single(bodyType => bodyType.Name == style.BodyType),
Size = <span style="color:#00d;font-weight:bold">_</span>context.Sizes.Single(size => size.Name == style.Size),
ModelStyleYears = style.Years.Select(year => <span style="color:#080;font-weight:bold">new</span> ModelStyleYear {
Year = year
}).ToList()
}).ToList();
<span style="color:#080;font-weight:bold">try</span>
{
<span style="color:#888">// Try saving the changes. This will run the UPDATE statement in the database.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.SaveChangesAsync();
}
<span style="color:#080;font-weight:bold">catch</span> (Microsoft.EntityFrameworkCore.DbUpdateException)
{
<span style="color:#888">// If there's an error updating, respond accordingly.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> Conflict();
}
<span style="color:#888">// Finally return a 204 if everything went well.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> NoContent();
}
</code></pre></div><p>The purpose of this endpoint is to update existing resources. So, it receives a representation of said resource as a parameter that comes from the request body. Before, it expected an instance of the <code>Model</code> entity, but now, we’ve changed it to receive a <code>ModelSpecification</code>. The rest of the method is your usual structure of first obtaining the record to update by the given IDs, then changing its values according to what came in as a parameter, and finally, saving the changes.</p>
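<p>For instance, a PUT request to <code>/api/Makes/{makeId}/Models/{id}</code> could carry a payload like the following (the values here are illustrative; note that the <code>id</code> in the body has to match the <code>{id}</code> in the URL, or the method responds with a 400):</p>

```json
{
  "id": 1,
  "name": "Corolla",
  "styles": [
    {
      "bodyType": "Hatchback",
      "size": "Compact",
      "years": [ "2000", "2001", "2002" ]
    }
  ]
}
```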
<p>You probably get the idea by now: since the API is using the Resource Model, we need to change input and output values for the methods and run some logic to translate between Resource Model objects and Data Model objects that EF Core can understand so that it can perform its database operations.</p>
<p>That said, here’s what the <code>PostModel</code> Action Method, handler of the <code>POST /api/Makes/{makeId}/Models</code> endpoint, should look like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#369">[HttpPost]</span>
<span style="color:#888">// Return a `ModelSpecification` and expect `makeId` from the URL and a `ModelSpecification` from the request payload.
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<ActionResult<ModelSpecification>> PostModel([FromRoute] <span style="color:#888;font-weight:bold">int</span> makeId, ModelSpecification model)
{
<span style="color:#888">// First, try to find the make specified by the incoming `makeId`.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> make = <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Makes.FindAsync(makeId);
<span style="color:#888">// Respond with 404 if not found.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">if</span> (make == <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#080;font-weight:bold">return</span> NotFound();
}
<span style="color:#888">// Build out a new `Model` entity, complete with all related data, based on
</span><span style="color:#888"></span> <span style="color:#888">// the `ModelSpecification` parameter.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> modelToCreate = <span style="color:#080;font-weight:bold">new</span> Model {
Make = make,
Name = model.Name,
ModelStyles = model.Styles.Select(style => <span style="color:#080;font-weight:bold">new</span> ModelStyle {
<span style="color:#888">// Notice how we search both body type and size by their name field.
</span><span style="color:#888"></span> <span style="color:#888">// We can do that because their names are unique.
</span><span style="color:#888"></span> BodyType = <span style="color:#00d;font-weight:bold">_</span>context.BodyTypes.Single(bodyType => bodyType.Name == style.BodyType),
Size = <span style="color:#00d;font-weight:bold">_</span>context.Sizes.Single(size => size.Name == style.Size),
ModelStyleYears = style.Years.Select(year => <span style="color:#080;font-weight:bold">new</span> ModelStyleYear {
Year = year
}).ToArray()
}).ToArray()
};
<span style="color:#888">// Add it to the DbContext.
</span><span style="color:#888"></span> <span style="color:#00d;font-weight:bold">_</span>context.Add(modelToCreate);
<span style="color:#080;font-weight:bold">try</span>
{
<span style="color:#888">// Try running the INSERTs.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.SaveChangesAsync();
}
<span style="color:#080;font-weight:bold">catch</span> (Microsoft.EntityFrameworkCore.DbUpdateException)
{
<span style="color:#888">// Return accordingly if an error happens.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> Conflict();
}
<span style="color:#888">// Get back the autogenerated ID of the record we just INSERTed.
</span><span style="color:#888"></span> model.ID = modelToCreate.ID;
<span style="color:#888">// Finally, return a 201 including a location header containing the newly
</span><span style="color:#888"></span> <span style="color:#888">// created resource's URL and the resource itself in the response payload.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> CreatedAtAction(
nameof(GetModel),
<span style="color:#080;font-weight:bold">new</span> { makeId = makeId, id = model.ID },
model
);
}
</code></pre></div><p>All that should be pretty self-explanatory by now. Moving on to the <code>DeleteModel</code> method, which handles the <code>DELETE /api/Makes/{makeId}/Models/{id}</code> endpoint:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#369">[HttpDelete("{id}")]</span>
<span style="color:#888">// Expect `makeId` and `id` from the URL.
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<IActionResult> DeleteModel([FromRoute] <span style="color:#888;font-weight:bold">int</span> makeId, <span style="color:#888;font-weight:bold">int</span> id)
{
<span style="color:#888">// Try to find the record identified by the ids from the URL.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> model = <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Models.FirstOrDefaultAsync(m => m.MakeID == makeId && m.ID == id);
<span style="color:#888">// Respond with a 404 if we can't find it.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">if</span> (model == <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#080;font-weight:bold">return</span> NotFound();
}
<span style="color:#888">// Mark the entity for removal and run the DELETE.
</span><span style="color:#888"></span> <span style="color:#00d;font-weight:bold">_</span>context.Models.Remove(model);
<span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.SaveChangesAsync();
<span style="color:#888">// Respond with a 204.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> NoContent();
}
</code></pre></div><p>And that’s all for that controller. Hopefully that demonstrated what it looks like to have endpoints that operate using objects other than the EF Core entities. Fire up the app with <code>dotnet run</code>, explore the Swagger UI, and you’ll see the changes that we’ve made reflected there. Try it out. Try CRUDing some vehicle models. And don’t forget to take a look at our POST endpoint specification, which looks much more manageable now:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/post-model-endpoint.png" alt="POST Models endpoint"></p>
<p>Which means that you can send in something like this, for example:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"name"</span>: <span style="color:#d20;background-color:#fff0f0">"Corolla"</span>,
<span style="color:#b06;font-weight:bold">"styles"</span>: [
{
<span style="color:#b06;font-weight:bold">"bodyType"</span>: <span style="color:#d20;background-color:#fff0f0">"Sedan"</span>,
<span style="color:#b06;font-weight:bold">"size"</span>: <span style="color:#d20;background-color:#fff0f0">"Compact"</span>,
<span style="color:#b06;font-weight:bold">"years"</span>: [ <span style="color:#d20;background-color:#fff0f0">"2000"</span>, <span style="color:#d20;background-color:#fff0f0">"2001"</span> ]
}
]
}
</code></pre></div><blockquote>
<p>This will work assuming you’ve created at least one make to add the vehicle model to, as well as a body type whose name is <code>Sedan</code> and a size whose name is <code>Compact</code>.</p>
</blockquote>
<blockquote>
<p>There’s also a <code>ModelExists</code> method in that controller which we don’t need anymore. You can delete it.</p>
</blockquote>
<h4 id="validation-using-built-in-data-annotations">Validation using built-in Data Annotations</h4>
<p>Depending on how “creative” you were in the previous section when trying to CRUD models, you may have run into an issue or two regarding the data that’s allowed into our database. We solve that by implementing input validation. In ASP.NET Core, the easiest way to implement validation is via Data Annotation attributes on the entities or other objects that controllers receive as request payloads. So let’s see about adding some validation to our app. Since our <code>ModelsController</code> uses the <code>ModelSpecification</code> and <code>ModelSpecificationStyle</code> Resource Models to talk to clients, let’s start there. Here’s the diff:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"><span style="color:#000;background-color:#dfd">+using System.ComponentModel.DataAnnotations;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.ResourceModels
{
public class ModelSpecification
{
public int ID { get; set; }
<span style="color:#000;background-color:#dfd">+ [Required]
</span><span style="color:#000;background-color:#dfd"></span> public string Name { get; set; }
<span style="color:#000;background-color:#dfd">+ [Required]
</span><span style="color:#000;background-color:#dfd"></span> public ModelSpecificationStyle[] Styles { get; set; }
}
}
</code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"><span style="color:#000;background-color:#dfd">+using System.ComponentModel.DataAnnotations;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.ResourceModels
{
public class ModelSpecificationStyle
{
<span style="color:#000;background-color:#dfd">+ [Required]
</span><span style="color:#000;background-color:#dfd"></span> public string BodyType { get; set; }
<span style="color:#000;background-color:#dfd">+ [Required]
</span><span style="color:#000;background-color:#dfd"></span> public string Size { get; set; }
<span style="color:#000;background-color:#dfd">+ [Required]
</span><span style="color:#000;background-color:#dfd">+ [MinLength(1)]
</span><span style="color:#000;background-color:#dfd"></span> public string[] Years { get; set; }
}
}
</code></pre></div><p>And just like that, we get a good amount of functionality. We use the <code>Required</code> and <code>MinLength</code> attributes from the <code>System.ComponentModel.DataAnnotations</code> namespace to specify that some fields are required, and that our <code>Years</code> array needs to contain at least one element. When the app receives a request to the PUT or POST endpoints — which are the ones that expect a <code>ModelSpecification</code> as the payload — validation kicks in. If it fails, the action method is never executed and a 400 status code is returned as a response. Try POSTing to <code>/api/Makes/{makeId}/Models</code> with a payload that violates some of these rules to see for yourself. For example, I tried sending this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"name"</span>: <span style="color:#080;font-weight:bold">null</span>,
<span style="color:#b06;font-weight:bold">"styles"</span>: [
{
<span style="color:#b06;font-weight:bold">"bodyType"</span>: <span style="color:#d20;background-color:#fff0f0">"Sedan"</span>,
<span style="color:#b06;font-weight:bold">"size"</span>: <span style="color:#d20;background-color:#fff0f0">"Full size"</span>,
<span style="color:#b06;font-weight:bold">"years"</span>: []
}
]
}
</code></pre></div><p>And I got back a 400 response with this payload:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"type"</span>: <span style="color:#d20;background-color:#fff0f0">"https://tools.ietf.org/html/rfc7231#section-6.5.1"</span>,
<span style="color:#b06;font-weight:bold">"title"</span>: <span style="color:#d20;background-color:#fff0f0">"One or more validation errors occurred."</span>,
<span style="color:#b06;font-weight:bold">"status"</span>: <span style="color:#00d;font-weight:bold">400</span>,
<span style="color:#b06;font-weight:bold">"traceId"</span>: <span style="color:#d20;background-color:#fff0f0">"00-0fd4f00eeb9f2f458ccefc180fcfba1c-79a618f13218394b-00"</span>,
<span style="color:#b06;font-weight:bold">"errors"</span>: {
<span style="color:#b06;font-weight:bold">"Name"</span>: [
<span style="color:#d20;background-color:#fff0f0">"The Name field is required."</span>
],
<span style="color:#b06;font-weight:bold">"Styles[0].Years"</span>: [
<span style="color:#d20;background-color:#fff0f0">"The field Years must be a string or array type with a minimum length of '1'."</span>
]
}
}
</code></pre></div><p>Pretty neat, huh? With minimal effort, we have some basic validation rules in place and a pretty usable response for when errors occur.</p>
<blockquote>
<p>To learn more about model validation, including all the various validation attributes included in the framework, check the official documentation: <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/models/validation?view=aspnetcore-5.0">Model validation</a> and <a href="https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations?view=net-5.0">System.ComponentModel.DataAnnotations Namespace</a>.</p>
</blockquote>
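<p>Incidentally, these same attributes can be exercised outside of ASP.NET Core’s request pipeline, which is handy for quick experiments or unit tests. Here’s a minimal, self-contained console sketch (the <code>Spec</code> class is a simplified stand-in for our Resource Models, not part of the app) using the framework’s <code>Validator</code> helper:</p>

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Simplified stand-in for our Resource Model, using the same attributes.
class Spec
{
    [Required]
    public string Name { get; set; }

    [Required]
    [MinLength(1)]
    public string[] Years { get; set; }
}

class Program
{
    static void Main()
    {
        // An invalid instance: missing name, empty years array.
        var spec = new Spec { Name = null, Years = new string[0] };
        var results = new List<ValidationResult>();

        // Run all the Data Annotation validations on the object.
        bool isValid = Validator.TryValidateObject(
            spec, new ValidationContext(spec), results, validateAllProperties: true);

        Console.WriteLine(isValid); // False
        foreach (var result in results)
        {
            // Prints one message per failed validation, e.g. the Required
            // error for Name and the MinLength error for Years.
            Console.WriteLine(result.ErrorMessage);
        }
    }
}
```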
<h4 id="validation-using-custom-attributes">Validation using custom attributes</h4>
<p>Of course, the framework is never going to cover all possible validation scenarios with the built-in attributes. Case in point, it’d be great to validate that the <code>Years</code> array contains values that look like actual years. That is, four-character, digit-only strings. There are no validation attributes for that. So, we need to create our own. Let’s add this file into a new <code>Validations</code> directory:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Validations/ContainsYearsAttribute.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.ComponentModel.DataAnnotations</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Linq</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Runtime.CompilerServices</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Validation</span>
{
<span style="color:#888">// In .NET, attribute classes need to have their name suffixed with the word "Attribute".
</span><span style="color:#888"></span> <span style="color:#888">// Validation attributes need to inherit from `System.ComponentModel.DataAnnotations`'s `ValidationAttribute` class
</span><span style="color:#888"></span> <span style="color:#888">// and override the `IsValid` method.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">ContainsYearsAttribute</span> : ValidationAttribute
{
<span style="color:#080;font-weight:bold">private</span> <span style="color:#888;font-weight:bold">string</span> propertyName;
<span style="color:#888">// This constructor is called by the framework when the attribute is applied to some member. In this specific
</span><span style="color:#888"></span> <span style="color:#888">// case, we define a `propertyName` parameter annotated with a `CallerMemberName` attribute. This makes it so
</span><span style="color:#888"></span> <span style="color:#888">// the framework sends in the name of the member to which our `ContainsYears` attribute is applied.
</span><span style="color:#888"></span> <span style="color:#888">// We store the value to use it later when constructing our validation error message.
</span><span style="color:#888"></span> <span style="color:#888">// Check https://docs.microsoft.com/en-us/dotnet/api/system.runtime.compilerservices.callermembernameattribute?view=net-5.0
</span><span style="color:#888"></span> <span style="color:#888">// for more info on `CallerMemberName`.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> ContainsYearsAttribute([CallerMemberName] <span style="color:#888;font-weight:bold">string</span> propertyName = <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#080;font-weight:bold">this</span>.propertyName = propertyName;
}
<span style="color:#888">// This method is called by the framework during validation. `value` is the actual value of the field that this
</span><span style="color:#888"></span> <span style="color:#888">// attribute will validate.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">protected</span> <span style="color:#080;font-weight:bold">override</span> ValidationResult IsValid(<span style="color:#888;font-weight:bold">object</span> <span style="color:#080;font-weight:bold">value</span>, ValidationContext validationContext)
{
<span style="color:#888">// By only applying the validation checks when the value is not null, we make it possible for this
</span><span style="color:#888"></span> <span style="color:#888">// attribute to work on optional fields. In other words, this attribute will skip validation if there is no
</span><span style="color:#888"></span> <span style="color:#888">// value to validate.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">if</span> (<span style="color:#080;font-weight:bold">value</span> != <span style="color:#080;font-weight:bold">null</span>)
{
<span style="color:#888">// Check if all the elements of the string array are valid years. Check the `IsValidYear` method below
</span><span style="color:#888"></span> <span style="color:#888">// to see what checks are applied for each of the array elements.
</span><span style="color:#888"></span> <span style="color:#888;font-weight:bold">var</span> isValid = (<span style="color:#080;font-weight:bold">value</span> <span style="color:#080;font-weight:bold">as</span> <span style="color:#888;font-weight:bold">string</span>[]).All(IsValidYear);
<span style="color:#080;font-weight:bold">if</span> (!isValid)
{
<span style="color:#888">// If not, return an error.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">new</span> ValidationResult(GetErrorMessage());
}
}
<span style="color:#888">// Return a successful validation result if no errors were detected.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> ValidationResult.Success;
}
<span style="color:#888">// Determines if a given value is valid by making sure it's not null, nor empty, that its length is 4 and that
</span><span style="color:#888"></span> <span style="color:#888">// all its characters are digits.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">private</span> <span style="color:#888;font-weight:bold">bool</span> IsValidYear(<span style="color:#888;font-weight:bold">string</span> <span style="color:#080;font-weight:bold">value</span>) =>
!String.IsNullOrEmpty(<span style="color:#080;font-weight:bold">value</span>) && <span style="color:#080;font-weight:bold">value</span>.Length == <span style="color:#00d;font-weight:bold">4</span> && <span style="color:#080;font-weight:bold">value</span>.All(Char.IsDigit);
<span style="color:#888">// Builds a user friendly error message which includes the name of the field that this validation attribute has
</span><span style="color:#888"></span> <span style="color:#888">// been applied to.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">private</span> <span style="color:#888;font-weight:bold">string</span> GetErrorMessage() =>
<span style="color:#d20;background-color:#fff0f0">$"The {propertyName} field must be an array of strings containing four digits."</span>;
}
}
</code></pre></div><p>Check the comments in the code for more details into how that class works. Then, we apply our custom attribute to our <code>ModelSpecificationStyle</code> class in the same way that we applied the built in ones:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">using System.ComponentModel.DataAnnotations;
<span style="color:#000;background-color:#dfd">+using VehicleQuotes.Validation;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.ResourceModels
{
public class ModelSpecificationStyle
{
// ...
[Required]
[MinLength(1)]
<span style="color:#000;background-color:#dfd">+ [ContainsYears]
</span><span style="color:#000;background-color:#dfd"></span> public string[] Years { get; set; }
}
}
</code></pre></div><p>Now do a <code>dotnet run</code> and try to POST to <code>/api/Makes/{makeId}/Models</code> this payload:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"name"</span>: <span style="color:#d20;background-color:#fff0f0">"Rav4"</span>,
<span style="color:#b06;font-weight:bold">"styles"</span>: [
{
<span style="color:#b06;font-weight:bold">"bodyType"</span>: <span style="color:#d20;background-color:#fff0f0">"SUV"</span>,
<span style="color:#b06;font-weight:bold">"size"</span>: <span style="color:#d20;background-color:#fff0f0">"Mid size"</span>,
<span style="color:#b06;font-weight:bold">"years"</span>: [ <span style="color:#d20;background-color:#fff0f0">"not_a_year"</span> ]
}
]
}
</code></pre></div><p>That should make the API respond with this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"type"</span>: <span style="color:#d20;background-color:#fff0f0">"https://tools.ietf.org/html/rfc7231#section-6.5.1"</span>,
<span style="color:#b06;font-weight:bold">"title"</span>: <span style="color:#d20;background-color:#fff0f0">"One or more validation errors occurred."</span>,
<span style="color:#b06;font-weight:bold">"status"</span>: <span style="color:#00d;font-weight:bold">400</span>,
<span style="color:#b06;font-weight:bold">"traceId"</span>: <span style="color:#d20;background-color:#fff0f0">"00-9980325f3e388f48a5975ef382d5b137-2d55da1bb9613e4f-00"</span>,
<span style="color:#b06;font-weight:bold">"errors"</span>: {
<span style="color:#b06;font-weight:bold">"Styles[0].Years"</span>: [
<span style="color:#d20;background-color:#fff0f0">"The Years field must be an array of strings containing four digits."
]
}
}
</code></pre></div><p>That’s our custom validation attribute doing its job.</p>
<p>There’s another aspect that we could validate using a custom validation attribute. What happens if we try to POST a payload with a body type or size that doesn’t exist? These queries from the <code>PostModel</code> method would throw an <code>InvalidOperationException</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">BodyType = <span style="color:#00d;font-weight:bold">_</span>context.BodyTypes.Single(bodyType => bodyType.Name == style.BodyType)
</code></pre></div><p>and</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">Size = <span style="color:#00d;font-weight:bold">_</span>context.Sizes.Single(size => size.Name == style.Size)
</code></pre></div><p>They do so because we used the <code>Single</code> method, which throws when it can’t find exactly one matching element. It looks for a body type or size whose name matches the given value, finds none, and so throws an exception.</p>
<blockquote>
<p>If, for example, we wanted not-founds to return <code>null</code>, we could have used <code>SingleOrDefault</code> instead.</p>
</blockquote>
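<p>For clarity, here’s a quick sketch of the difference between the two methods (the lookup value here is hypothetical, not code from the app):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">// Throws InvalidOperationException if no row matches (or if more than one does).
var bodyType = _context.BodyTypes.Single(bt =&gt; bt.Name == "not_a_body_type");

// Returns null when no row matches; still throws if more than one does.
var maybeBodyType = _context.BodyTypes.SingleOrDefault(bt =&gt; bt.Name == "not_a_body_type");

if (maybeBodyType == null)
{
    // Handle the not-found case explicitly instead of letting an exception bubble up.
}
</code></pre></div>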
<p>This unhandled exception results in a response that’s quite unbecoming:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/invalid-operation-exception.png" alt="InvalidOperationException during POST"></p>
<p>So, to prevent that exception and control the error messaging, we need a couple of new validation attributes that query the <code>body_types</code> and <code>sizes</code> tables to check whether the given values exist. Here’s what one would look like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Validations/VehicleBodyTypeAttribute.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.ComponentModel.DataAnnotations</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.EntityFrameworkCore</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Linq</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Validation</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">VehicleBodyTypeAttribute</span> : ValidationAttribute
{
<span style="color:#080;font-weight:bold">protected</span> <span style="color:#080;font-weight:bold">override</span> ValidationResult IsValid(<span style="color:#888;font-weight:bold">object</span> <span style="color:#080;font-weight:bold">value</span>, ValidationContext validationContext)
{
<span style="color:#080;font-weight:bold">if</span> (<span style="color:#080;font-weight:bold">value</span> == <span style="color:#080;font-weight:bold">null</span>) <span style="color:#080;font-weight:bold">return</span> ValidationResult.Success;
<span style="color:#888;font-weight:bold">var</span> dbContext = validationContext.GetService(<span style="color:#080;font-weight:bold">typeof</span>(VehicleQuotesContext)) <span style="color:#080;font-weight:bold">as</span> VehicleQuotesContext;
<span style="color:#888;font-weight:bold">var</span> bodyTypes = dbContext.BodyTypes.Select(bt => bt.Name).ToList();
<span style="color:#080;font-weight:bold">if</span> (!bodyTypes.Contains(<span style="color:#080;font-weight:bold">value</span>))
{
<span style="color:#888;font-weight:bold">var</span> allowed = String.Join(<span style="color:#d20;background-color:#fff0f0">", "</span>, bodyTypes);
<span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">new</span> ValidationResult(
<span style="color:#d20;background-color:#fff0f0">$"Invalid vehicle body type {value}. Allowed values are {allowed}."</span>
);
}
<span style="color:#080;font-weight:bold">return</span> ValidationResult.Success;
}
}
}
</code></pre></div><blockquote>
<p>You can find the other one here: <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Validations/VehicleSizeAttribute.cs">Validations/VehicleSizeAttribute.cs</a>.</p>
</blockquote>
<p>These two are very similar to one another. Like the one we built before, they are classes that inherit from <code>System.ComponentModel.DataAnnotations</code>’s <code>ValidationAttribute</code> and implement the <code>IsValid</code> method. The most interesting part is how we use the <code>IsValid</code> method’s second parameter (<code>ValidationContext</code>) to obtain an instance of <code>VehicleQuotesContext</code> that we can use to query the database. The method checks that the value under scrutiny exists in the corresponding table and, if it does not, raises a validation error that includes a list of all allowed values. They can be applied to our <code>ModelSpecificationStyle</code> class like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">// ...
namespace VehicleQuotes.ResourceModels
{
public class ModelSpecificationStyle
{
[Required]
<span style="color:#000;background-color:#dfd">+ [VehicleBodyType]
</span><span style="color:#000;background-color:#dfd"></span> public string BodyType { get; set; }
[Required]
<span style="color:#000;background-color:#dfd">+ [VehicleSize]
</span><span style="color:#000;background-color:#dfd"></span> public string Size { get; set; }
//...
}
}
</code></pre></div><p>Now, a request like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"name"</span>: <span style="color:#d20;background-color:#fff0f0">"Rav4"</span>,
<span style="color:#b06;font-weight:bold">"styles"</span>: [
{
<span style="color:#b06;font-weight:bold">"bodyType"</span>: <span style="color:#d20;background-color:#fff0f0">"not_a_body_type"</span>,
<span style="color:#b06;font-weight:bold">"size"</span>: <span style="color:#d20;background-color:#fff0f0">"Mid size"</span>,
<span style="color:#b06;font-weight:bold">"years"</span>: [ <span style="color:#d20;background-color:#fff0f0">"2000"</span> ]
}
]
}
</code></pre></div><p>Produces a response like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#b06;font-weight:bold">"type"</span>: <span style="color:#d20;background-color:#fff0f0">"https://tools.ietf.org/html/rfc7231#section-6.5.1"</span>,
<span style="color:#b06;font-weight:bold">"title"</span>: <span style="color:#d20;background-color:#fff0f0">"One or more validation errors occurred."</span>,
<span style="color:#b06;font-weight:bold">"status"</span>: <span style="color:#00d;font-weight:bold">400</span>,
<span style="color:#b06;font-weight:bold">"traceId"</span>: <span style="color:#d20;background-color:#fff0f0">"00-9ad59a7aff60944ab54c19a73be73cc7-eeabafe03df74e40-00"</span>,
<span style="color:#b06;font-weight:bold">"errors"</span>: {
<span style="color:#b06;font-weight:bold">"Styles[0].BodyType"</span>: [
<span style="color:#d20;background-color:#fff0f0">"Invalid vehicle body type not_a_body_type. Allowed values are Coupe, Sedan, Convertible, Hatchback, SUV, Truck."</span>
]
}
}
</code></pre></div><h4 id="implementing-endpoints-for-quote-rules-and-overrides">Implementing endpoints for quote rules and overrides</h4>
<p>At this point we’ve explored many of the most common features available to us for developing Web APIs. So much so that implementing the next two pieces of functionality for our app doesn’t really introduce any new concepts, so I won’t discuss them here in great detail.</p>
<p>Feel free to browse the source code <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api">on GitHub</a> if you want though. These are the relevant files:</p>
<ul>
<li><a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Controllers/QuoteOverridesController.cs">Controllers/QuoteOverridesController.cs</a></li>
<li><a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Controllers/QuoteRulesController.cs">Controllers/QuoteRulesController.cs</a></li>
<li><a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Models/QuoteOverride.cs">Models/QuoteOverride.cs</a></li>
<li><a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Models/QuoteRule.cs">Models/QuoteRule.cs</a></li>
<li><a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/ResourceModels/QuoteOverrideSpecification.cs">ResourceModels/QuoteOverrideSpecification.cs</a></li>
<li><a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Validations/FeatureTypeAttribute.cs">Validations/FeatureTypeAttribute.cs</a></li>
<li><a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/Migrations/20210627204444_AddQuoteRulesAndOverridesTables.cs">Migrations/20210627204444_AddQuoteRulesAndOverridesTables.cs</a></li>
</ul>
<p>The <code>FeatureTypeAttribute</code> class is interesting in that it provides another example of a validation attribute. This time it’s one that makes sure the value being validated is included in an array of strings that’s defined literally in the code.</p>
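<p>To give a sense of what that looks like, here’s a rough sketch of such an attribute. The actual allowed values and error message live in the repo’s <code>FeatureTypeAttribute</code>; the ones below are placeholders:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">using System;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace VehicleQuotes.Validation
{
    public class FeatureTypeAttribute : ValidationAttribute
    {
        // The set of valid values is hardcoded instead of queried from a table.
        private readonly string[] allowed = { "placeholder_one", "placeholder_two" };

        protected override ValidationResult IsValid(object value, ValidationContext validationContext)
        {
            if (value == null) return ValidationResult.Success;

            if (!allowed.Contains(value))
            {
                return new ValidationResult(
                    $"Invalid feature type {value}. Allowed values are {String.Join(", ", allowed)}."
                );
            }

            return ValidationResult.Success;
        }
    }
}
</code></pre></div>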
<p>Other than that, it’s all stuff we’ve already covered: models, migrations, scaffolding controllers, custom routes, resource models, etc.</p>
<p>If you are following along, be sure to add those files and run a <code>dotnet ef database update</code> to apply the migration.</p>
<h4 id="implementing-the-quote-model">Implementing the quote model</h4>
<p>Let’s now start implementing the main capability of our app: calculating quotes for vehicles. First up is the <code>Quote</code> entity. This is what the new <code>Models/Quote.cs</code> file containing the entity class will look like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">// Models/Quote.cs
</span><span style="color:#888"></span><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.ComponentModel.DataAnnotations.Schema</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">Quote</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> ID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#888">// Directly tie this quote record to a specific vehicle that we have
</span><span style="color:#888"></span> <span style="color:#888">// registered in our db, if we have it.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int?</span> ModelStyleYearID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#888">// If we don't have the specific vehicle in our db, then store the
</span><span style="color:#888"></span> <span style="color:#888">// vehicle model details independently.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Year { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Make { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Model { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> BodyTypeID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> SizeID { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> ItMoves { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasAllWheels { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasAlloyWheels { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasAllTires { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasKey { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasTitle { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> RequiresPickup { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasEngine { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasTransmission { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">bool</span> HasCompleteInterior { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">int</span> OfferedQuote { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> <span style="color:#888;font-weight:bold">string</span> Message { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> DateTime CreatedAt { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> ModelStyleYear ModelStyleYear { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> BodyType BodyType { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
<span style="color:#080;font-weight:bold">public</span> Size Size { <span style="color:#080;font-weight:bold">get</span>; <span style="color:#080;font-weight:bold">set</span>; }
}
}
</code></pre></div><p>This should be pretty familiar by now. It’s a plain old class that defines a number of properties, one for each of the fields in the resulting table, plus a few navigation properties that serve to access related data.</p>
<p>The only aspect worth noting is that we’ve defined the <code>ModelStyleYearID</code> property as a nullable integer (with <code>int?</code>). This is because, like we discussed at the beginning, the foreign key from <code>quotes</code> to <code>vehicle_style_years</code> is actually optional: we may receive a quote request for a vehicle that we don’t have registered in our database. We need to be able to support quoting those vehicles too, so if we don’t have the requested vehicle registered, that foreign key will stay unpopulated and we’ll rely on the other fields (i.e. <code>Year</code>, <code>Make</code>, <code>Model</code>, <code>BodyTypeID</code> and <code>SizeID</code>) to identify the vehicle and calculate the quote for it.</p>
<h4 id="using-dependency-injection">Using Dependency Injection</h4>
<p>So far we’ve been putting a lot of logic in our controllers. That’s generally not ideal, but fine as long as the logic is simple. The problem is that such a design can quickly become a hindrance to maintainability and testing as our application grows more complex. For the logic that calculates a quote, we’d be better served by implementing it in its own class, outside of the controller, which should only care about defining endpoints and handling HTTP concerns. The controller can then be given access to that class and delegate all the quote calculation logic to it. Thankfully, ASP.NET Core includes an IoC container by default, which allows us to use Dependency Injection to solve these kinds of problems. Let’s see what that looks like.</p>
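<p>One thing to keep in mind: for the framework to be able to inject a class like the service we’re about to write, that class has to be registered with the container at application startup. As a sketch of what that registration could look like in <code>Startup.ConfigureServices</code> (the scoped lifetime here is an assumption; it’s a common choice for services that depend on a DbContext, which is itself scoped to the request):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // ... existing registrations (AddControllers, AddDbContext, etc.) ...

    // Register QuoteService with a scoped lifetime: one instance per HTTP
    // request, matching the lifetime of the VehicleQuotesContext it depends on.
    services.AddScoped&lt;QuoteService&gt;();
}
</code></pre></div>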
<p>For working with quotes, we want to offer two endpoints:</p>
<ol>
<li>A <code>POST api/Quotes</code> that captures the vehicle information, calculates the quote, keeps record of the request, and responds with the calculated value.</li>
<li>A <code>GET api/Quotes</code> that returns all the currently registered quotes in the system.</li>
</ol>
<p>Using the Dependency Injection capabilities, a controller that implements those two could look like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Collections.Generic</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Threading.Tasks</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.AspNetCore.Mvc</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.ResourceModels</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Services</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Controllers</span>
{
<span style="color:#369"> [Route("api/[controller]</span><span style="color:#d20;background-color:#fff0f0">")]
</span><span style="color:#d20;background-color:#fff0f0"></span><span style="color:#369"> [ApiController]</span>
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">QuotesController</span> : ControllerBase
{
<span style="color:#080;font-weight:bold">private</span> <span style="color:#080;font-weight:bold">readonly</span> QuoteService <span style="color:#00d;font-weight:bold">_</span>service;
<span style="color:#888">// When initiating the request processing logic, the framework recognizes
</span><span style="color:#888"></span> <span style="color:#888">// that this controller has a dependency on QuoteService and expects an
</span><span style="color:#888"></span> <span style="color:#888">// instance of it to be injected via the constructor. The framework then
</span><span style="color:#888"></span> <span style="color:#888">// does what it needs to do in order to provide that dependency.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> QuotesController(QuoteService service)
{
<span style="color:#00d;font-weight:bold">_</span>service = service;
}
<span style="color:#888">// GET: api/Quotes
</span><span style="color:#888"></span><span style="color:#369"> [HttpGet]</span>
<span style="color:#888">// This method returns a collection of a new resource model instead of just the `Quote` entity directly.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<ActionResult<IEnumerable<SubmittedQuoteRequest>>> GetAll()
{
<span style="color:#888">// Instead of directly implementing the logic in this method, we call on
</span><span style="color:#888"></span> <span style="color:#888">// the service class and let it take care of the rest.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>service.GetAllQuotes();
}
<span style="color:#888">// POST: api/Quotes
</span><span style="color:#888"></span><span style="color:#369"> [HttpPost]</span>
<span style="color:#888">// This method receives a `QuoteRequest` as a parameter instead of just the `Quote` entity directly.
</span><span style="color:#888"></span> <span style="color:#888">// That way callers of this endpoint don't need to be exposed to the details of our data model implementation.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<ActionResult<SubmittedQuoteRequest>> Post(QuoteRequest request)
{
<span style="color:#888">// Instead of directly implementing the logic in this method, we call on
</span><span style="color:#888"></span> <span style="color:#888">// the service class and let it take care of the rest.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>service.CalculateQuote(request);
}
}
}
</code></pre></div><p>As you can see, we’ve once again opted to abstract away clients from the implementation details of our data model and used Resource Models for the API contract instead of the <code>Quote</code> entity directly. We have one for input data that’s called <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/ResourceModels/QuoteRequest.cs"><code>QuoteRequest</code></a> and another one for output: <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/blob/master/ResourceModels/SubmittedQuoteRequest.cs"><code>SubmittedQuoteRequest</code></a>. Not very remarkable by themselves, but feel free to explore the source code in <a href="https://github.com/megakevin/end-point-blog-dotnet-5-web-api/tree/master/ResourceModels">the GitHub repo</a>.</p>
<p>This controller has a dependency on <code>QuoteService</code>, which it uses to perform all of the necessary logic. This class is not defined yet so let’s do that next:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Collections.Generic</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Linq</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Threading.Tasks</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">Microsoft.EntityFrameworkCore</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Models</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.ResourceModels</span>;
<span style="color:#080;font-weight:bold">namespace</span> <span style="color:#b06;font-weight:bold">VehicleQuotes.Services</span>
{
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">class</span> <span style="color:#b06;font-weight:bold">QuoteService</span>
{
<span style="color:#080;font-weight:bold">private</span> <span style="color:#080;font-weight:bold">readonly</span> VehicleQuotesContext <span style="color:#00d;font-weight:bold">_</span>context;
<span style="color:#888">// This constructor defines a dependency on VehicleQuotesContext, similar to most of our controllers.
</span><span style="color:#888"></span> <span style="color:#888">// Via the built in dependency injection features, the framework makes sure to provide this parameter when
</span><span style="color:#888"></span> <span style="color:#888">// creating new instances of this class.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> QuoteService(VehicleQuotesContext context)
{
<span style="color:#00d;font-weight:bold">_</span>context = context;
}
<span style="color:#888">// This method takes all the records from the `quotes` table and constructs `SubmittedQuoteRequest`s with them.
</span><span style="color:#888"></span> <span style="color:#888">// Then returns that as a list.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<List<SubmittedQuoteRequest>> GetAllQuotes()
{
<span style="color:#888;font-weight:bold">var</span> quotesToReturn = <span style="color:#00d;font-weight:bold">_</span>context.Quotes.Select(q => <span style="color:#080;font-weight:bold">new</span> SubmittedQuoteRequest
{
ID = q.ID,
CreatedAt = q.CreatedAt,
OfferedQuote = q.OfferedQuote,
Message = q.Message,
Year = q.Year,
Make = q.Make,
Model = q.Model,
BodyType = q.BodyType.Name,
Size = q.Size.Name,
ItMoves = q.ItMoves,
HasAllWheels = q.HasAllWheels,
HasAlloyWheels = q.HasAlloyWheels,
HasAllTires = q.HasAllTires,
HasKey = q.HasKey,
HasTitle = q.HasTitle,
RequiresPickup = q.RequiresPickup,
HasEngine = q.HasEngine,
HasTransmission = q.HasTransmission,
HasCompleteInterior = q.HasCompleteInterior,
});
<span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">await</span> quotesToReturn.ToListAsync();
}
<span style="color:#888">// This method takes an incoming `QuoteRequest` and calculates a quote based on the vehicle described by it.
</span><span style="color:#888"></span> <span style="color:#888">// To calculate this quote, it looks for any overrides before trying to use the currently existing rules defined
</span><span style="color:#888"></span> <span style="color:#888">// in the `quote_rules` table. It also stores a record on the `quotes` table with all the incoming data and the
</span><span style="color:#888"></span> <span style="color:#888">// quote calculation result. It returns back the quote value as well as a message explaining the conditions of
</span><span style="color:#888"></span> <span style="color:#888">// the quote.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<SubmittedQuoteRequest> CalculateQuote(QuoteRequest request)
{
<span style="color:#888;font-weight:bold">var</span> response = <span style="color:#080;font-weight:bold">this</span>.CreateResponse(request);
<span style="color:#888;font-weight:bold">var</span> quoteToStore = <span style="color:#080;font-weight:bold">await</span> <span style="color:#080;font-weight:bold">this</span>.CreateQuote(request);
<span style="color:#888;font-weight:bold">var</span> requestedModelStyleYear = <span style="color:#080;font-weight:bold">await</span> <span style="color:#080;font-weight:bold">this</span>.FindModelStyleYear(request);
QuoteOverride quoteOverride = <span style="color:#080;font-weight:bold">null</span>;
<span style="color:#080;font-weight:bold">if</span> (requestedModelStyleYear != <span style="color:#080;font-weight:bold">null</span>)
{
quoteToStore.ModelStyleYear = requestedModelStyleYear;
quoteOverride = <span style="color:#080;font-weight:bold">await</span> <span style="color:#080;font-weight:bold">this</span>.FindQuoteOverride(requestedModelStyleYear);
<span style="color:#080;font-weight:bold">if</span> (quoteOverride != <span style="color:#080;font-weight:bold">null</span>)
{
response.OfferedQuote = quoteOverride.Price;
}
}
<span style="color:#080;font-weight:bold">if</span> (quoteOverride == <span style="color:#080;font-weight:bold">null</span>)
{
response.OfferedQuote = <span style="color:#080;font-weight:bold">await</span> <span style="color:#080;font-weight:bold">this</span>.CalculateOfferedQuote(request);
}
<span style="color:#080;font-weight:bold">if</span> (requestedModelStyleYear == <span style="color:#080;font-weight:bold">null</span>)
{
response.Message = <span style="color:#d20;background-color:#fff0f0">"Offer subject to change upon vehicle inspection."</span>;
}
quoteToStore.OfferedQuote = response.OfferedQuote;
quoteToStore.Message = response.Message;
<span style="color:#00d;font-weight:bold">_</span>context.Quotes.Add(quoteToStore);
<span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.SaveChangesAsync();
response.ID = quoteToStore.ID;
response.CreatedAt = quoteToStore.CreatedAt;
<span style="color:#080;font-weight:bold">return</span> response;
}
<span style="color:#888">// Creates a `SubmittedQuoteRequest`, initialized with default values, using the data from the incoming
</span><span style="color:#888"></span> <span style="color:#888">// `QuoteRequest`. `SubmittedQuoteRequest` is what gets returned in the response payload of the quote endpoints.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">private</span> SubmittedQuoteRequest CreateResponse(QuoteRequest request)
{
<span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">new</span> SubmittedQuoteRequest
{
OfferedQuote = <span style="color:#00d;font-weight:bold">0</span>,
Message = <span style="color:#d20;background-color:#fff0f0">"This is our final offer."</span>,
Year = request.Year,
Make = request.Make,
Model = request.Model,
BodyType = request.BodyType,
Size = request.Size,
ItMoves = request.ItMoves,
HasAllWheels = request.HasAllWheels,
HasAlloyWheels = request.HasAlloyWheels,
HasAllTires = request.HasAllTires,
HasKey = request.HasKey,
HasTitle = request.HasTitle,
RequiresPickup = request.RequiresPickup,
HasEngine = request.HasEngine,
HasTransmission = request.HasTransmission,
HasCompleteInterior = request.HasCompleteInterior,
};
}
<span style="color:#888">// Creates a `Quote` based on the data from the incoming `QuoteRequest`. This is the object that gets eventually
</span><span style="color:#888"></span> <span style="color:#888">// stored in the database.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">private</span> <span style="color:#080;font-weight:bold">async</span> Task<Quote> CreateQuote(QuoteRequest request)
{
<span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">new</span> Quote
{
Year = request.Year,
Make = request.Make,
Model = request.Model,
BodyTypeID = (<span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.BodyTypes.SingleAsync(bt => bt.Name == request.BodyType)).ID,
SizeID = (<span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.Sizes.SingleAsync(s => s.Name == request.Size)).ID,
ItMoves = request.ItMoves,
HasAllWheels = request.HasAllWheels,
HasAlloyWheels = request.HasAlloyWheels,
HasAllTires = request.HasAllTires,
HasKey = request.HasKey,
HasTitle = request.HasTitle,
RequiresPickup = request.RequiresPickup,
HasEngine = request.HasEngine,
HasTransmission = request.HasTransmission,
HasCompleteInterior = request.HasCompleteInterior,
CreatedAt = DateTime.Now
};
}
<span style="color:#888">// Tries to find a registered vehicle that matches the one for which the quote is currently being requested.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">private</span> <span style="color:#080;font-weight:bold">async</span> Task<ModelStyleYear> FindModelStyleYear(QuoteRequest request)
{
<span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.ModelStyleYears.FirstOrDefaultAsync(msy =>
msy.Year == request.Year &&
msy.ModelStyle.Model.Make.Name == request.Make &&
msy.ModelStyle.Model.Name == request.Model &&
msy.ModelStyle.BodyType.Name == request.BodyType &&
msy.ModelStyle.Size.Name == request.Size
);
}
<span style="color:#888">// Tries to find an override for the vehicle for which the quote is currently being requested.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">private</span> <span style="color:#080;font-weight:bold">async</span> Task<QuoteOverride> FindQuoteOverride(ModelStyleYear modelStyleYear)
{
<span style="color:#080;font-weight:bold">return</span> <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.QuoteOverides
.FirstOrDefaultAsync(qo => qo.ModelStyleYear == modelStyleYear);
}
<span style="color:#888">// Uses the rules stored in the `quote_rules` table to calculate how much money to offer for the vehicle
</span><span style="color:#888"></span> <span style="color:#888">// described in the incoming `QuoteRequest`.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">private</span> <span style="color:#080;font-weight:bold">async</span> Task<<span style="color:#888;font-weight:bold">int</span>> CalculateOfferedQuote(QuoteRequest request)
{
<span style="color:#888;font-weight:bold">var</span> rules = <span style="color:#080;font-weight:bold">await</span> <span style="color:#00d;font-weight:bold">_</span>context.QuoteRules.ToListAsync();
<span style="color:#888">// Given a vehicle feature type, find a rule that applies to that feature type and has the value that
</span><span style="color:#888"></span> <span style="color:#888">// matches the condition of the incoming vehicle being quoted.
</span><span style="color:#888"></span> Func<<span style="color:#888;font-weight:bold">string</span>, QuoteRule> theMatchingRule = featureType =>
rules.FirstOrDefault(r =>
r.FeatureType == featureType &&
r.FeatureValue == request[featureType]
);
<span style="color:#888">// For each vehicle feature that we care about, sum up the monetary values of all the rules that match
</span><span style="color:#888"></span> <span style="color:#888">// the given vehicle condition.
</span><span style="color:#888"></span> <span style="color:#080;font-weight:bold">return</span> QuoteRule.FeatureTypes.All
.Select(theMatchingRule)
.Where(r => r != <span style="color:#080;font-weight:bold">null</span>)
.Sum(r => r.PriceModifier);
}
}
}
</code></pre></div><p>Finally, we need to tell the framework that this class is available for Dependency Injection. Similarly to how we did with our <code>VehicleQuotesContext</code>, we do so in the <code>Startup.cs</code> file’s <code>ConfigureServices</code> method. Just add this line at the top:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">services.AddScoped<Services.QuoteService>();
</code></pre></div><blockquote>
<p>The core tenet of Inversion of Control is to depend on abstractions, not on implementations. So ideally, we would not have our controller directly call for a <code>QuoteService</code> instance. Instead, we would have it reference an abstraction, e.g. an interface like <code>IQuoteService</code>. The statement on <code>Startup.cs</code> would then look like this instead: <code>services.AddScoped<Services.IQuoteService, Services.QuoteService>();</code>.</p>
<p>This is important because it would allow us to unit test the component that depends on our service class (i.e. the controller in this case) by passing it a <a href="https://en.wikipedia.org/wiki/Mock_object">mock object</a> — one that also implements <code>IQuoteService</code> but does not really implement all the functionality of the actual <code>QuoteService</code> class. Since the controller only knows about the interface (that is, it “depends on an abstraction”), the actual object that we give it as a dependency doesn’t matter to it, as long as it implements that interface. This ability to inject mocks as dependencies is invaluable during testing. Testing is beyond the scope of this article though, so I’ll stick with the simpler approach with a static dependency on a concrete class. Know that this is not a good practice when it comes to actual production systems.</p>
</blockquote>
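<p>To make that idea concrete, here’s a minimal sketch of what the abstraction could look like. Note that the <code>IQuoteService</code> interface and the <code>FakeQuoteService</code> class shown here are hypothetical, not part of the project we’ve been building:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs">// The abstraction that the controller would depend on.
public interface IQuoteService
{
    Task<SubmittedQuoteRequest> CalculateQuote(QuoteRequest request);
}

// The real implementation, registered in Startup.cs and used at runtime.
public class QuoteService : IQuoteService { /* ... */ }

// A fake implementation that a unit test could construct the controller with,
// so the test doesn't need a database or any real quote calculation logic.
public class FakeQuoteService : IQuoteService
{
    public Task<SubmittedQuoteRequest> CalculateQuote(QuoteRequest request) =>
        Task.FromResult(new SubmittedQuoteRequest { OfferedQuote = 100 });
}
</code></pre></div>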
<p>And that’s all it takes. Once you add a few rules via <code>POST /api/QuoteRules</code>, you should be able to get some vehicles quoted with <code>POST /api/Quotes</code>. And also see what the system has stored via <code>GET /api/Quotes</code>.</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/a-quote.png" alt="A Quote"></p>
<p>And that’s all the functionality that we set out to build into our REST API! There are a few other neat things that I thought I’d include though.</p>
<h4 id="adding-seed-data-for-lookup-tables">Adding seed data for lookup tables</h4>
<p>Our vehicle size and body type data isn’t meant to really change much. In fact, we could even preload that data when our application starts. EF Core provides a data seeding feature that we can access via configurations on the <code>DbContext</code> itself. For our case, we could add this method to our <code>VehicleQuotesContext</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#080;font-weight:bold">protected</span> <span style="color:#080;font-weight:bold">override</span> <span style="color:#080;font-weight:bold">void</span> OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Size>().HasData(
<span style="color:#080;font-weight:bold">new</span> Size { ID = <span style="color:#00d;font-weight:bold">1</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Subcompact"</span> },
<span style="color:#080;font-weight:bold">new</span> Size { ID = <span style="color:#00d;font-weight:bold">2</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Compact"</span> },
<span style="color:#080;font-weight:bold">new</span> Size { ID = <span style="color:#00d;font-weight:bold">3</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Mid Size"</span> },
<span style="color:#080;font-weight:bold">new</span> Size { ID = <span style="color:#00d;font-weight:bold">4</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Full Size"</span> }
);
modelBuilder.Entity<BodyType>().HasData(
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">1</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Coupe"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">2</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Sedan"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">3</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Hatchback"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">4</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Wagon"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">5</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Convertible"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">6</span>, Name = <span style="color:#d20;background-color:#fff0f0">"SUV"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">7</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Truck"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">8</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Mini Van"</span> },
<span style="color:#080;font-weight:bold">new</span> BodyType { ID = <span style="color:#00d;font-weight:bold">9</span>, Name = <span style="color:#d20;background-color:#fff0f0">"Roadster"</span> }
);
}
</code></pre></div><p><a href="https://docs.microsoft.com/en-us/ef/core/modeling/#use-fluent-api-to-configure-a-model"><code>OnModelCreating</code></a> is a hook that we can override to run custom configuration code while EF Core is building the data model. Here, we’re using it to seed some data. In order to apply the seed data, a migration needs to be created and executed. If you’ve already added sizes or body types to the database, be sure to wipe them first so that the migration doesn’t run into unique constraint violations. Here are the commands:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ dotnet ef migrations add AddSeedDataForSizesAndBodyTypes
$ dotnet ef database update
</code></pre></div><p>After that’s done, it no longer makes sense to allow creating, updating, deleting and fetching individual sizes and body types, so I would delete those endpoints from the respective controllers.</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/body-types-get-all-only.png" alt="Body Types, GET all only"></p>
<p><img src="/blog/2021/07/dotnet-5-web-api/sizes-get-all-only.png" alt="Sizes, GET all only"></p>
<blockquote>
<p>There are other options for data seeding in EF Core. Take a look: <a href="https://docs.microsoft.com/en-us/ef/core/modeling/data-seeding">Data Seeding</a>.</p>
</blockquote>
<h4 id="improving-the-swagger-ui-via-xml-comments">Improving the Swagger UI via XML comments</h4>
<p>Our current auto-generated Swagger UI is pretty awesome, especially considering that we got it for free. It is a little lacking, though, when it comes to documentation of specific endpoints: summaries, parameter descriptions, and expected responses. The good news is that there’s a way to leverage <a href="https://docs.microsoft.com/en-us/dotnet/csharp/codedoc">C# XML Comments</a> in order to improve the Swagger UI.</p>
<p>We can add support for that by configuring our project to produce, at build time, an XML file with the docs that we write. In order to do so, we need to update the <code>VehicleQuotes.csproj</code> like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff"><Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<!-- ... -->
<span style="color:#000;background-color:#dfd">+ <GenerateDocumentationFile>true</GenerateDocumentationFile>
</span><span style="color:#000;background-color:#dfd">+ <NoWarn>$(NoWarn);1591</NoWarn>
</span><span style="color:#000;background-color:#dfd"></span> </PropertyGroup>
<!-- ... -->
</Project>
</code></pre></div><p><code>GenerateDocumentationFile</code> is the flag that tells the .NET 5 build tools to generate the documentation file. The <code>NoWarn</code> element prevents our build output from getting cluttered with warnings about classes and methods that are not documented. We don’t want those warnings because we only intend to write enough documentation to improve the Swagger UI, and that covers only the controllers.</p>
<p>You can run <code>dotnet build</code> and look for the new file in <code>bin/Debug/net5.0/VehicleQuotes.xml</code>.</p>
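<p>The generated file uses the standard .NET XML documentation format. Its exact contents depend on what you document, but it looks roughly like this (the member name and signature below are illustrative):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-xml" data-lang="xml"><?xml version="1.0"?>
<doc>
    <assembly>
        <name>VehicleQuotes</name>
    </assembly>
    <members>
        <member name="M:VehicleQuotes.Controllers.ModelsController.Post(...)">
            <summary>Creates a new vehicle model for the given make.</summary>
        </member>
    </members>
</doc>
</code></pre></div>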
<p>Then, we need to update <code>Startup.cs</code>. First we need to add the following <code>using</code> statements:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.IO</span>;
<span style="color:#080;font-weight:bold">using</span> <span style="color:#b06;font-weight:bold">System.Reflection</span>;
</code></pre></div><p>And add the following code to the <code>ConfigureServices</code> method on <code>Startup.cs</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">public void ConfigureServices(IServiceCollection services)
{
// ...
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "VehicleQuotes", Version = "v1" });
<span style="color:#000;background-color:#dfd">+ c.IncludeXmlComments(
</span><span style="color:#000;background-color:#dfd">+ Path.Combine(
</span><span style="color:#000;background-color:#dfd">+ AppContext.BaseDirectory,
</span><span style="color:#000;background-color:#dfd">+ $"{Assembly.GetExecutingAssembly().GetName().Name}.xml"
</span><span style="color:#000;background-color:#dfd">+ )
</span><span style="color:#000;background-color:#dfd">+ );</span>
});
// ...
}
</code></pre></div><p>This makes it so the <code>SwaggerGen</code> service knows to look for the XML documentation file when building up the Open API specification file used for generating the Swagger UI.</p>
<p>Now that all of that is set up, we can actually write some XML comments and attributes that will enhance our Swagger UI. As an example, put this on top of <code>ModelsController</code>’s <code>Post</code> method:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-cs" data-lang="cs"><span style="color:#888">/// <summary>
</span><span style="color:#888">/// Creates a new vehicle model for the given make.
</span><span style="color:#888">/// </summary>
</span><span style="color:#888">/// <param name="makeId">The ID of the vehicle make to add the model to.</param>
</span><span style="color:#888">/// <param name="model">The data to create the new model with.</param>
</span><span style="color:#888">/// <response code="201">When the request is valid.</response>
</span><span style="color:#888">/// <response code="404">When the specified vehicle make does not exist.</response>
</span><span style="color:#888">/// <response code="409">When there's already another model in the same make with the same name.</response>
</span><span style="color:#888"></span><span style="color:#369">[HttpPost]</span>
<span style="color:#369">[ProducesResponseType(StatusCodes.Status201Created)]</span>
<span style="color:#369">[ProducesResponseType(StatusCodes.Status404NotFound)]</span>
<span style="color:#369">[ProducesResponseType(StatusCodes.Status409Conflict)]</span>
<span style="color:#080;font-weight:bold">public</span> <span style="color:#080;font-weight:bold">async</span> Task<ActionResult<ModelSpecification>> Post([FromRoute] <span style="color:#888;font-weight:bold">int</span> makeId, ModelSpecification model)
{
<span style="color:#888">// ...
</span><span style="color:#888"></span>}
</code></pre></div><p>The Swagger UI now looks like this for that endpoint:</p>
<p><img src="/blog/2021/07/dotnet-5-web-api/fully-documented-post-models.png" alt="Fully documented POST Models endpoint"></p>
<h4 id="configuring-the-app-via-settings-files-and-environment-variables">Configuring the app via settings files and environment variables</h4>
<p>Another aspect that’s important to web applications is having them be configurable via things like configuration files or environment variables. The framework already has provisions for this; we just need to use them. I’m talking about the <code>appsettings</code> files.</p>
<p>We have two of them created for us by default: <code>appsettings.json</code> which is applied in all environments, and <code>appsettings.Development.json</code> that is applied only under development environments. The environment is given by the <code>ASPNETCORE_ENVIRONMENT</code> environment variable, and it can be set to either <code>Development</code>, <code>Staging</code>, or <code>Production</code> by default. That means that if we had, for example, an <code>appsettings.Staging.json</code> file, the settings defined within would be loaded if the <code>ASPNETCORE_ENVIRONMENT</code> environment variable were set to <code>Staging</code>. You get the idea.</p>
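<p>For example, a hypothetical <code>appsettings.Staging.json</code> could override a setting just for the staging environment. The key and value here are made up purely for illustration:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
  "SomeApiBaseUrl": "https://staging.example.com/api"
}
</code></pre></div>
<p>With <code>ASPNETCORE_ENVIRONMENT</code> set to <code>Staging</code>, any keys in this file take precedence over the same keys in <code>appsettings.json</code>.</p>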
<blockquote>
<p>You can learn more about <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0">configuration</a> and <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/environments?view=aspnetcore-5.0">environments</a> in the official documentation.</p>
</blockquote>
<p>Anyway, let’s add a new setting on <code>appsettings.json</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">{
// ...
<span style="color:#000;background-color:#dfd">+ "DefaultOffer": 77
</span><span style="color:#000;background-color:#dfd"></span>}
</code></pre></div><p>We’ll use this setting to give default offers when we’re not able to calculate appropriate quotes for vehicles. This can happen if we have no rules, if none of the rules we have match the incoming vehicle’s features, or if for some other reason the final sum ends up being zero or negative. We can use this setting in our <code>QuoteService</code> like so:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-diff" data-lang="diff">// ...
<span style="color:#000;background-color:#dfd">+using Microsoft.Extensions.Configuration;
</span><span style="color:#000;background-color:#dfd"></span>
namespace VehicleQuotes.Services
{
public class QuoteService
{
// ...
<span style="color:#000;background-color:#dfd">+ private readonly IConfiguration _configuration;
</span><span style="color:#000;background-color:#dfd"></span>
<span style="color:#000;background-color:#fdd">- public QuoteService(VehicleQuotesContext context)
</span><span style="color:#000;background-color:#fdd"></span><span style="color:#000;background-color:#dfd">+ public QuoteService(VehicleQuotesContext context, IConfiguration configuration)
</span><span style="color:#000;background-color:#dfd"></span> {
_context = context;
<span style="color:#000;background-color:#dfd">+ _configuration = configuration;
</span><span style="color:#000;background-color:#dfd"></span> }
// ...
public async Task<SubmittedQuoteRequest> CalculateQuote(QuoteRequest request)
{
// ...
<span style="color:#000;background-color:#dfd">+ if (response.OfferedQuote <= 0)
</span><span style="color:#000;background-color:#dfd">+ {
</span><span style="color:#000;background-color:#dfd">+ response.OfferedQuote = _configuration.GetValue<int>("DefaultOffer", 0);
</span><span style="color:#000;background-color:#dfd">+ }
</span><span style="color:#000;background-color:#dfd"></span>
quoteToStore.OfferedQuote = response.OfferedQuote;
// ...
}
// ...
}
}
</code></pre></div><p>Here, we’ve added a new parameter to the constructor to specify that <code>QuoteService</code> has a dependency on <code>IConfiguration</code>. This prompts the framework to provide an instance of that when instantiating the class. We can use that instance to access the settings that we defined in the <code>appsettings.json</code> file via its <code>GetValue</code> method, like I demonstrated above.</p>
<p>The values of the settings in <code>appsettings.json</code> can be overridden by environment variables as well. On Linux, for example, we can run the app and set an environment variable with a line like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ <span style="color:#369">DefaultOffer</span>=<span style="color:#00d;font-weight:bold">123</span> dotnet run
</code></pre></div><p>This will make the application use <code>123</code> instead of <code>77</code> when it comes to the <code>DefaultOffer</code> setting. This flexibility is great from a DevOps perspective. And we had to do minimal work in order to get that going.</p>
<h3 id="thats-all-for-now">That’s all for now</h3>
<p>And that’s it! In this article we’ve gone through many of the features offered in <a href="https://docs.microsoft.com/en-us/dotnet/core/dotnet-five">.NET 5</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core?view=aspnetcore-5.0">ASP.NET Core</a>, and <a href="https://docs.microsoft.com/en-us/ef/core/">Entity Framework Core</a> to support some of the most common use cases when it comes to developing Web API applications.</p>
<p>We’ve installed .NET 5 and created an ASP.NET Core Web API project with EF Core and a few bells and whistles, created controllers to support many different endpoints, played a little bit with routes and response codes, and created and built upon a data model and updated a database via entities and migrations. We’ve also implemented database constraints using unique indexes, input validation using both built-in and custom validation attributes, and resource models as DTOs for defining the contract of some of our API endpoints. Finally, we tapped into the built-in dependency injection capabilities, explored and improved the auto-generated Swagger UI, added seed data for our database, and learned about configuration via settings files and environment variables.</p>
<p>.NET 5 is looking great.</p>
<h3 id="table-of-contents">Table of contents</h3>
<ul>
<li><a href="#what-were-building">What we’re building</a>
<ul>
<li><a href="#the-demo-application">The demo application</a></li>
<li><a href="#the-data-model">The data model</a></li>
</ul>
</li>
<li><a href="#the-development-environment">The development environment</a>
<ul>
<li><a href="#setting-up-the-postgresql-database-with-docker">Setting up the PostgreSQL database with Docker</a></li>
<li><a href="#installing-the-net-5-sdk">Installing the .NET 5 SDK</a></li>
</ul>
</li>
<li><a href="#setting-up-the-project">Setting up the project</a>
<ul>
<li><a href="#creating-our-aspnet-core-rest-api-project">Creating our ASP.NET Core REST API project</a></li>
<li><a href="#installing-packages-well-need">Installing packages we’ll need</a></li>
<li><a href="#connecting-to-the-database-and-performing-initial-app-configuration">Connecting to the database and performing initial app configuration</a></li>
</ul>
</li>
<li><a href="#building-the-application">Building the application</a>
<ul>
<li><a href="#creating-model-entities-migrations-and-updating-the-database">Creating model entities, migrations and updating the database</a></li>
<li><a href="#creating-controllers-for-cruding-our-tables">Creating controllers for CRUDing our tables</a></li>
<li><a href="#adding-unique-constraints-via-indexes">Adding unique constraints via indexes</a></li>
<li><a href="#responding-with-specific-http-error-codes-409-conflict">Responding with specific HTTP error codes (409 Conflict)</a></li>
<li><a href="#adding-a-more-complex-entity-to-the-model">Adding a more complex entity to the model</a></li>
<li><a href="#adding-composite-unique-indexes">Adding composite unique indexes</a></li>
<li><a href="#adding-controllers-with-custom-routes">Adding controllers with custom routes</a></li>
<li><a href="#using-resource-models-as-dtos-for-controllers">Using resource models as DTOs for controllers</a></li>
<li><a href="#validation-using-built-in-data-annotations">Validation using built-in Data Annotations</a></li>
<li><a href="#validation-using-custom-attributes">Validation using custom attributes</a></li>
<li><a href="#implementing-endpoints-for-quote-rules-and-overrides">Implementing endpoints for quote rules and overrides</a></li>
<li><a href="#implementing-the-quote-model">Implementing the quote model</a></li>
<li><a href="#using-dependency-injection">Using Dependency Injection</a></li>
<li><a href="#adding-seed-data-for-lookup-tables">Adding seed data for lookup tables</a></li>
<li><a href="#improving-the-swagger-ui-via-xml-comments">Improving the Swagger UI via XML comments</a></li>
<li><a href="#configuring-the-app-via-settings-files-and-environment-variables">Configuring the app via settings files and environment variables</a></li>
</ul>
</li>
<li><a href="#thats-all-for-now">That’s all for now</a></li>
</ul>
Craft: A CMS for developers - https://www.endpointdev.com/blog/2020/10/craft-a-cms-for-developers/ - 2020-10-31 - Kevin Campusano
<p><img src="/blog/2020/10/craft-a-cms-for-developers/banner.png" alt="Craft CMS banner"></p>
<p>As a software engineer, I thrive and thoroughly enjoy working on fully custom software products, applications conceived to model and help in the execution of some business process and that are built from the ground up by a team of developers.</p>
<p>Such projects are often complex and expensive, though, and for some clients, they can be overkill. Some clients come up with requirements that are better served by off-the-shelf software solutions. One group of such solutions is <a href="https://en.wikipedia.org/wiki/Content_management_system">content management systems (CMS)</a>. As a rule of thumb, if a client wants a website whose main purpose is to showcase some content, their brand or image, and their custom business logic requirements are limited, then chances are that a CMS will fit the bill nicely.</p>
<p>Lately we’ve been using the <a href="https://craftcms.com/">Craft CMS</a> for a client that meets the aforementioned criteria, and I gotta say, I’ve been pleasantly surprised by the developer experience it offers.</p>
<p>Unlike most of the technology and products we discuss in our blog, Craft CMS is not <a href="https://opensource.org/osd">Open Source</a> or <a href="https://www.gnu.org/philosophy/free-sw.html">Free Software</a>. The source code is readily available on <a href="https://github.com/craftcms/cms">GitHub</a> for anybody to use, study, and modify, but commercial use of it is restricted and certain features are exclusive to a so-called “Pro” edition. Learn more by reading their <a href="https://github.com/craftcms/cms/blob/develop/LICENSE.md">license</a> and their <a href="https://craftcms.com/pricing">pricing structure</a>.</p>
<p>The features that we will discuss in this article are all part of the no-charge “Solo” edition of Craft CMS 3 that can be used for noncommercial websites.</p>
<p>In this article I’m going to talk through a few of the key aspects of Craft that make me think that it’s really a CMS made for developers. Let’s get started:</p>
<h3 id="craft-is-easy-to-get-up-and-running">Craft is easy to get up and running</h3>
<p>Craft is just a PHP application, and a fairly typical one as modern PHP applications go: it can be initially set up with Composer and runs on top of a MySQL database (it also supports Postgres!) and the Apache web server. <a href="https://craftcms.com/docs/3.x/console-commands.html">It can all be done via console</a> too, if that’s how you roll.</p>
<p>If you already have a box with Apache, PHP, MySQL and Composer, it all amounts to little more than creating a MySQL database for Craft, <code>composer install</code>ing the Craft package, sorting out some permissions, running <code>php craft setup</code> and following the prompts, and finally, configuring a virtual host in Apache to serve the <code>web</code> directory inside the location where Craft was installed.</p>
<p>All of this is explained in Craft’s <a href="https://craftcms.com/docs/3.x/installation.html">official documentation</a>.</p>
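<p>Sketched as shell commands, the process looks something like this. The database name, project path, and credentials are illustrative, so adjust them to your setup:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># Create a database for Craft to use
$ mysql -u root -p -e "CREATE DATABASE craft;"
# Pull down Craft and its dependencies with Composer
$ composer create-project craftcms/craft my-craft-site
$ cd my-craft-site
# Run the interactive setup wizard and follow the prompts
$ php craft setup
# Then configure an Apache virtual host pointing at my-craft-site/web
</code></pre></div>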
<h3 id="craft-is-easy-to-put-in-containers">Craft is easy to put in containers</h3>
<p>For ease of development and project bootstrapping, I’ve created a containerized setup with Docker and <a href="https://docs.docker.com/compose/">Docker Compose</a> that encapsulates some infrastructure tailored to my development needs. You can get the relevant files <a href="https://github.com/megakevin/craft-cms-docker-bootstrap">here</a>.</p>
<p>If you want to follow along, clone that repo, and you’ll end up with this file structure (as shown by the <code>tree</code> command):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">.
├── apache_config
│   └── 000-default.conf
├── docker-compose.yml
├── Dockerfile
└── README.md
1 directory, 4 files
</code></pre></div><p>This setup includes two containers: one for running Apache and Craft, and another for running MySQL. The <code>apache_config/000-default.conf</code> contains some Apache VirtualHost configuration for serving the site. <code>docker-compose.yml</code> defines the whole infrastructure: both containers, a network that they use to talk to each other, and a volume to persist MySQL database files. The <code>Dockerfile</code> is the definition of the image for the container that runs Apache and Craft.</p>
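For orientation, the overall shape of such a <code>docker-compose.yml</code> is sketched below. This is not the repo’s exact file; the service names, ports, credentials, and build-arg values are illustrative placeholders:

```yaml
# Sketch of the structure described above -- NOT the repo's exact file.
# Credentials, ports, and build args are illustrative placeholders.
version: "3"

services:
  web:                     # Apache + Craft, built from the Dockerfile
    build:
      context: .
      args:                # match these to your own account (see the note below)
        USER: kevin
        UID: 1000
        GID: 1000
    ports:
      - "80:80"
    networks:
      - craft-network

  mysql:                   # the database container
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: craft_demo
      MYSQL_USER: craft
      MYSQL_PASSWORD: password
    volumes:
      - mysql-data:/var/lib/mysql   # persists database files across restarts
    networks:
      - craft-network

networks:
  craft-network:           # lets the two containers reach each other by name

volumes:
  mysql-data:
```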
<p>Feel free to explore the files; I’ve made sure to comment them so that they are easy to understand and modify as you see fit.</p>
<p>Note: If you want to run this setup, be sure to change the <code>ServerAdmin</code> value in <code>apache_config/000-default.conf</code>, and the <code>USER</code>, <code>UID</code>, and <code>GID</code> values in <code>docker-compose.yml</code> under <code>services > web > build > args</code> according to your environment and user account information.</p>
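A quick way to look up those values for your own account (the variable names here simply mirror the build args; nothing reads them automatically):

```shell
#!/bin/sh
# Print the values to plug into docker-compose.yml under
# services > web > build > args.
USER_ARG=$(id -un)  # your username
UID_ARG=$(id -u)    # your numeric user ID
GID_ARG=$(id -g)    # your numeric group ID
printf 'USER=%s\nUID=%s\nGID=%s\n' "$USER_ARG" "$UID_ARG" "$GID_ARG"
```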
<p>If you have Docker and Docker Compose installed on your machine, you can go to the directory just created by the <code>git clone</code> and:</p>
<ol>
<li>
<p>Run <code>docker-compose up</code> to set up the infrastructure. You will see Docker and Docker Compose creating the image defined in <code>Dockerfile</code> and the containers defined in <code>docker-compose.yml</code>. Then the logs of the various containers will start showing. If you want to run this in the background, use <code>docker-compose up -d</code> instead and it will give you control of the terminal immediately after it’s done.</p>
</li>
<li>
<p>Run <code>docker-compose exec web bash</code> to connect to the <code>web</code> container. This is the container that has Craft’s code and is running Apache. You’ll be “logged into” the container and be placed in <code>/var/www</code>. This is the directory where we will install Craft.</p>
</li>
<li>
<p>Once in there, run <code>composer create-project craftcms/craft ./install</code> to install Craft with Composer. In other words, it will download all of the files that Craft needs to run. You should see something like this at the end:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">> @php craft setup/welcome
   ______ .______          ___       _______ .___________.
  /      ||   _  \        /   \     |   ____||           |
 |  ,----'|  |_) |       /  ^  \    |  |__   `---|  |----`
 |  |     |      /      /  /_\  \   |   __|      |  |
 |  `----.|  |\  \----./  _____  \  |  |         |  |
  \______|| _| `._____/__/     \__\ |__|         |__|

                A   N   E   W   I   N   S   T   A   L   L

      ______ .___  ___.      _______.
     /      ||   \/   |     /       |
    |  ,----'|  \  /  |    |   (----`
    |  |     |  |\/|  |     \   \
    |  `----.|  |  |  | .----)   |
     \______||__|  |__| |_______/
Generating an application ID ... done (CraftCMS--xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
Generating a security key ... done (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
Welcome to Craft CMS! Run the following command if you want to setup Craft from your terminal:
/var/www/install/craft setup
</code></pre></div></li>
<li>
<p>After that’s done, use this bit of bash black magic:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">(<span style="color:#038">shopt</span> -s dotglob; mv -v ./install/* .)
</code></pre></div><p>This command moves all the files that Composer just downloaded into <code>/var/www/install</code> out of there and into <code>/var/www</code>. Then run <code>rmdir install</code> to remove the now-empty <code>install</code> directory, since we no longer need it.</p>
</li>
<li>
<p>Now do <code>php craft setup</code> to use Craft’s CLI to set up the site, its configuration, and its database structure. Just follow the prompts. When it asks you for database configuration, choose <code>mysql</code> as the database driver, set <code>mysql</code> as the database server name (because that’s the name we’ve given it in our <code>docker-compose.yml</code>), and use the environment variables defined in <code>docker-compose.yml</code> around lines 14 to 17 for the rest of the values. You can change these to whatever you want, as long as they match how you defined your MySQL database container in the <code>docker-compose.yml</code> file. The prompts should look something like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">Which database driver are you using? [mysql,pgsql,?]: mysql
Database server name or IP address: [127.0.0.1] mysql
Database port: [3306]
Database username: [root] craft
Database password:
Database name: craft_demo
Database table prefix:
Testing database credentials ... success!
Saving database credentials to your .env file ... done
</code></pre></div></li>
<li>
<p>Next, sort out some Craft file permission requirements with this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">chmod -R o+w config storage web/cpresources
</code></pre></div><p>These are directories that Craft needs write access to.</p>
</li>
<li>
<p>Now start up Apache with <code>sudo service apache2 start</code>.</p>
</li>
</ol>
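For reference, here is the whole sequence condensed into one dry-run script. The <code>run</code> helper only echoes each command, since steps 3 through 7 must actually be executed inside the <code>web</code> container (and step 5 is interactive):

```shell
#!/bin/sh
# Dry-run recap of steps 1-7 above; 'run' echoes rather than executes.
run() { echo "+ $*"; }

steps=$(
  run docker-compose up -d                              # 1. start the containers
  run docker-compose exec web bash                      # 2. shell into the web container
  run composer create-project craftcms/craft ./install  # 3. download Craft
  run 'bash -c "(shopt -s dotglob; mv -v ./install/* .)" && rmdir install'  # 4. move files up
  run php craft setup                                   # 5. interactive site setup
  run chmod -R o+w config storage web/cpresources       # 6. permissions
  run sudo service apache2 start                        # 7. start Apache
)
echo "$steps"
```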
<p>And that’s it! Open a browser to <code>localhost</code> or <code>127.0.0.1</code> and you should see your Craft 3 homepage:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/welcome_to_craft.jpg" alt="Welcome to Craft browser screenshot"></p>
<p>You can start playing with the control panel or the <code>templates/index.twig</code> right away.</p>
<h3 id="crafts-design-makes-sense">Craft’s design makes sense</h3>
<p>When it comes to content modeling, Craft offers a set of abstractions that make sense. The main concepts to understand are <a href="https://craftcms.com/docs/3.x/entries.html">sections and entries</a>. Entries are the main pieces of content. An “article” in a news site or a “post” in a blog. Sections are the way Craft groups entries together. They are useful when your site has multiple streams of content. You can, for example, have a site where you publish news, opinion pieces, and random thoughts. With Craft, that would translate neatly into three separate sections, each one with its own type of entries.</p>
<p>Craft also allows you to set up <a href="https://craftcms.com/docs/3.x/fields.html">custom fields</a> for every type of entry. For example, the entries on your news section may need to include a link to the original source of the news, while your opinion pieces need a short description instead. You can configure your entries using custom fields so that they include the data that makes sense for your use case.</p>
<p>Let’s see what that looks like in concrete terms. Now that we have a Craft instance running in <code>localhost</code>, go to <code>localhost/admin</code> in your browser. You should see Craft’s control panel. Click on the “Settings” option in the navigation bar to the left of the screen, then select the “Sections” item under “Content”:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_settings_sections.png" alt="Screenshot of Control Panel > Settings > Sections"></p>
<p>Next, click on the “+ New Section” button by the top of the screen and you’ll be shown the section creation form. We will create a “News” section so let’s fill in the form like this:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_new_section.png" alt="Screenshot of Create a new section"></p>
<p>The “Name” and “Handle” fields are pretty self-explanatory. The “<a href="https://craftcms.com/docs/3.x/entries.html#section-types">Section Type</a>” is a concept we haven’t discussed yet. “<a href="https://craftcms.com/docs/3.x/entries.html#channels">Channel</a>” is the most appropriate for a news section, which is a stream of multiple entries with the same structure.</p>
<p>There are other types: “<a href="https://craftcms.com/docs/3.x/entries.html#singles">Single</a>” is a type which you would use for entries that are unique, like a home or contact page. For sections of type “Single”, there’s generally one single entry that fits in them. This is unlike “Channels” which fit multiple entries. The other section type is “<a href="https://craftcms.com/docs/3.x/entries.html#structures">Structure</a>”, which also accommodates multiple entries, but rather than a stream of ever-growing content, it’s more appropriate for similar entries that share a certain theme. A “Structure” section type is appropriate for things like services offered or projects in a portfolio.</p>
<p>Learn all about entries, sections, section types, and more in <a href="https://craftcms.com/docs/3.x/entries.html">Craft’s official docs</a>.</p>
<p>Now that we’ve filled the form, click the “Save and edit entry types” button. This has created our new “News” section and defined the “<a href="https://craftcms.com/docs/3.x/entries.html#entry-types">entry type</a>” that this section will be able to contain. The control panel now shows this:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_entry_type.png" alt="Screenshot of The section’s default entry type"></p>
<p>In Craft, a section can contain multiple types of entries. For our purposes with the news section, though, just the default one is enough. Click on it, and you’ll see an editor where you can select fields that make up that entry type:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_entry_type_fields.png" alt="Screenshot of The entry type’s fields"></p>
<p>The editor I mentioned before is below the “Field Layout” title. Here’s where we can pick and choose which fields make up the entries for the “News” section. We have a fresh installation of Craft though, so we don’t have any fields. Let’s create a few by going to Settings > Fields.</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_new_fields.png" alt="Screenshot of Defining new fields"></p>
<p>This is where we can define new fields to be used for our entries throughout the site. Click the “+ New Field” button near the top of the screen and you’ll be presented with the field creation form where you can specify all manner of details. For now, we just care about “Name”, “Handle” and “Field Type”. Let’s create three fields:</p>
<ol>
<li>One named “Heading” with a type of “Plain Text”.</li>
<li>One named “Body” with a type of “Plain Text”.</li>
<li>One named “Source” with a type of “URL”.</li>
</ol>
<p>You should end up with something like this in the control panel’s “Fields” page (<code>localhost/admin/settings/fields</code>):</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_three_fields.png" alt="Screenshot of New fields ready"></p>
<p>Now, if we go back to our “News” section’s default entry type at <code>http://localhost/admin/settings/sections/1/entrytypes/1</code> or Settings > Sections > News > Entry Types > News…</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_field_layout.png" alt="Screenshot of New fields for News entries"></p>
<p>You can see how the new fields that we just created are present in the “Field Layout” panel. In order to make these fields available for our “News” entries, we just need to drag them into the box named “Content” inside the greyish area.</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_field_layout_applied.png" alt="Screenshot of New fields for News entries assigned"></p>
<p>Click the “Save” button at the top, and that’s all it takes to set up a “Channel” section, an entry type for it, and a few fields.</p>
<p>Now that we’ve set up the blueprints for them, let’s actually create a few entries in the “News” section. To do so, click on the “Entries” link in the navigation bar to the left which should’ve revealed itself by now, and you’ll see this screen:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_entries.png" alt="Screenshot of The entries screen"></p>
<p>If you’re used to CMS back ends, this is pretty familiar. In this screen you can create new entries and browse existing ones.</p>
<p>Click the big red “+ New Entry” button and select “News” in the resulting pop-up menu. You should see a form with the fields that we defined in the “Field Layout” panel during previous steps. Feel free to create a few news entries. I’ve created these two:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/control_panel_entries_created.png" alt="Screenshot of Entries created"></p>
<h3 id="craft-gives-you-complete-freedom-over-your-front-end">Craft gives you complete freedom over your front end</h3>
<p>Most CMSs can be thought of as having two components: a front end and a back end. The back end is where content is authored, and the front end is the style and structure in which the content is presented. In Craft, most of the effort has gone into creating a solid, highly customizable back end.</p>
<p>As we’ve just seen, Craft comes out of the box with a back end control panel where site administrators and content creators can author new content. As far as the front end goes, though, Craft has nothing. For a developer well versed in front end web technologies, this is freeing and transformative.</p>
<p>Craft makes no assumption and makes no decision for you when it comes to developing your site’s look and feel. It gets out of your way and lets you do your job. There’s no concept of “theme”. There’s no obscure framework to learn and integrate into. There’s no proprietary templating language to struggle with. In Craft, you are completely free to write HTML, CSS and JS as you see fit to obtain your desired effect for your site.</p>
<p>You can develop templates using the tried and true <a href="https://twig.symfony.com/">Twig</a> templating engine, which, if you have some experience with PHP, you’ve most likely already encountered and worked with. All the content created in the back end is exposed to the Twig templates via objects. Let’s see how.</p>
<p>First we need to specify a template for our sections. Continuing with our example, let’s assign a template to our “News” section. Go to Settings > Sections > News and scroll down to find the “Site Settings” area. In the table there, type <code>news</code> into the “Template” column. Now go to the <code>templates</code> directory where Craft was installed and create a new <code>news.twig</code> file. The contents can be simple, like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-html" data-lang="html"><<span style="color:#b06;font-weight:bold">html</span> <span style="color:#369">lang</span>=<span style="color:#d20;background-color:#fff0f0">"en"</span>>
<<span style="color:#b06;font-weight:bold">head</span>>
<<span style="color:#b06;font-weight:bold">meta</span> <span style="color:#369">charset</span>=<span style="color:#d20;background-color:#fff0f0">"UTF-8"</span>>
<<span style="color:#b06;font-weight:bold">meta</span> <span style="color:#369">name</span>=<span style="color:#d20;background-color:#fff0f0">"viewport"</span> <span style="color:#369">content</span>=<span style="color:#d20;background-color:#fff0f0">"width=device-width, initial-scale=1.0"</span>>
<<span style="color:#b06;font-weight:bold">title</span>>{{ entry.title }} - My Craft Demo</<span style="color:#b06;font-weight:bold">title</span>>
</<span style="color:#b06;font-weight:bold">head</span>>
<<span style="color:#b06;font-weight:bold">body</span>>
<<span style="color:#b06;font-weight:bold">h1</span>>{{ entry.heading }}</<span style="color:#b06;font-weight:bold">h1</span>>
<<span style="color:#b06;font-weight:bold">p</span>>{{ entry.body }}</<span style="color:#b06;font-weight:bold">p</span>>
<<span style="color:#b06;font-weight:bold">p</span>><<span style="color:#b06;font-weight:bold">a</span> <span style="color:#369">href</span>=<span style="color:#d20;background-color:#fff0f0">"{{ entry.source }}"</span>>Source</<span style="color:#b06;font-weight:bold">a</span>></<span style="color:#b06;font-weight:bold">p</span>>
</<span style="color:#b06;font-weight:bold">body</span>>
</<span style="color:#b06;font-weight:bold">html</span>>
</code></pre></div><p>The only noteworthy aspect of this template is how we are injecting the data that we defined in the back end into this template. We use double curly brackets to reference the <code>entry</code> variable. This is provided to Twig by Craft and contains all the fields that we defined for our entries in the “News” section.</p>
<p>With that done, save and visit any of the entries you created and you’ll see something like this:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/website_entry.png" alt="Screenshot of Our first entry"></p>
<p>As you can see, this entry’s URL is <code>localhost/news/i-just-learned-that-craft-uses-twig</code>. Yours will obviously differ depending on the title (and slug) that you gave them.</p>
<p>What this example lacks in complexity, it more than makes up for in potential. This is a plain old HTML document that we’ve created, with a Twig template, of course. This is the complete freedom that I like about Craft. From this point on, you can do whatever you want in terms of front end development: use whatever CSS or JavaScript framework or library you want, organize your template files in a way that makes sense to you, your team, and your website, etc. The sky is the limit.</p>
<p>That’s good for individual news pages. But now let’s try to link to them from the homepage. To do so, we need to edit the <code>templates/index.twig</code> file. Around line 174, remove the <code><ul></code> that’s there along with all its <code><li></code>s and put this instead:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-html" data-lang="html">{% set entries = craft.entries().section('news').all() %}
<<span style="color:#b06;font-weight:bold">ul</span>>
{% for entry in entries %}
<<span style="color:#b06;font-weight:bold">li</span>><<span style="color:#b06;font-weight:bold">a</span> <span style="color:#369">href</span>=<span style="color:#d20;background-color:#fff0f0">"{{ entry.url }}"</span>>{{ entry.title }}</<span style="color:#b06;font-weight:bold">a</span>></<span style="color:#b06;font-weight:bold">li</span>>
{% endfor %}
</<span style="color:#b06;font-weight:bold">ul</span>>
</code></pre></div><p>Here, we leverage Twig’s templating engine capabilities, sprinkled with some of Craft’s features to obtain a list of all the entries in our “News” section. Then, we iterate over them to render links.</p>
<p>Effectively, Craft enhances what you can do with Twig by exposing an API for accessing the data that exists in the CMS back end.</p>
<p>If you’re familiar with any sort of templating language like those included in most web application frameworks like <a href="https://rubyonrails.org/">Ruby on Rails</a>, <a href="https://symfony.com/">Symfony</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/overview?view=aspnetcore-3.1">ASP.NET Core MVC</a>, etc., you’ll probably feel right at home with this.</p>
<p>Here’s what the homepage looks like now:</p>
<p><img src="/blog/2020/10/craft-a-cms-for-developers/homepage_with_links.jpg" alt="Screenshot of Homepage is ready"></p>
<p>You can click on any of the links and they will take you to the specific entry page.</p>
<p>You can learn more about querying entries in Craft’s <a href="https://craftcms.com/docs/3.x/entries.html#editing-entries">official documentation</a>.</p>
<h3 id="craft-is-cool-">Craft is cool 🕶️</h3>
<p>So, in conclusion, I’ve found that Craft is a cool tool to have in the toolbox. It is a full-fledged CMS with tons of customization opportunities for how to model and organize the content and data of your site. When it comes to developing the front end, though, it gets out of your way and lets you do your job. That, to me, is a big win.</p>
Containerizing Magento with Docker Compose: Elasticsearch, MySQL and Magentohttps://www.endpointdev.com/blog/2020/08/containerizing-magento-with-docker-compose-elasticsearch-mysql-and-magento/2020-08-27T00:00:00+00:00Kevin Campusano
<p><img src="/blog/2020/08/containerizing-magento-with-docker-compose-elasticsearch-mysql-and-magento/banner.jpg" alt="Banner"></p>
<p><a href="https://business.adobe.com/products/magento/open-source.html">Magento</a> is a complex piece of software, and as such, we need all the help we can get when it comes to developing customizations for it. A fully featured local development environment can do just that, but these can oftentimes be very complex as well. It’d be nice to have some way to completely capture all the setup for such an environment and be able to get it all up and running quickly, repeatably… even with a single command. Well, <a href="https://www.docker.com/">Docker</a> containers can help with that. And they can be easily provisioned with the <a href="https://docs.docker.com/compose/">Docker Compose</a> tool.</p>
<p>In this post, we’re going to go in depth into how to fully containerize a Magento 2.4 installation for development, complete with its other dependencies <a href="https://www.elastic.co/">Elasticsearch</a> and <a href="https://www.mysql.com/">MySQL</a>. By the end of it, we’ll have a single command that sets up all the infrastructure needed to install and run Magento, and develop for it. Let’s get started.</p>
<h3 id="magento-24-application-components">Magento 2.4 application components</h3>
<p>The first thing that we need to know is what the actual components of a Magento application are. Starting with 2.4, <a href="https://devdocs.magento.com/guides/v2.4/install-gde/prereq/elasticsearch.html">Magento requires access to an Elasticsearch</a> service to power catalog searches. Other than that, we have the usual suspects for typical PHP applications. Here’s what we need:</p>
<ol>
<li>MySQL</li>
<li>Elasticsearch</li>
<li>A web server running the Magento application</li>
</ol>
<p>In terms of infrastructure, this is pretty straightforward. It would cleanly translate into three separate machines talking to each other via the network, but in the Docker world, each of these machines becomes a container. Since we need multiple containers for our infrastructure, things like Docker Compose can come in handy to orchestrate the creation of all that. So let’s get to it.</p>
<h3 id="creating-a-shared-network">Creating a shared network</h3>
<p>Since we want to create three separate containers that can talk to each other, we need to ask the Docker engine to create a network for them. This can be done with this self-explanatory command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker network create magento-demo-network
</code></pre></div><p><code>magento-demo-network</code> is the name I’ve chosen for my network but you can choose whatever is most appropriate.</p>
<p>You can run the following command to check your newly created network:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker network ls
</code></pre></div><p>Output usually looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker network ls
NETWORK ID     NAME                   DRIVER    SCOPE
bd562b9cf5a4   bridge                 bridge    local
adb9ec2365c5   host                   host      local
2dba8d97410e   magento-demo-network   bridge    local
c3473c60ed52   none                   null      local
</code></pre></div><p>There’s our <code>magento-demo-network</code> network among other networks that Docker creates by default.</p>
<h3 id="containerizing-mysql">Containerizing MySQL</h3>
<p>Getting a MySQL instance up and running is super easy these days thanks to Docker. There’s already <a href="https://hub.docker.com/_/mysql">an official image for MySQL</a> in <a href="https://hub.docker.com/">Docker Hub</a> so we will use that. We can set it up with this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker run -d \
--name magento-demo-mysql \
--network magento-demo-network \
--network-alias mysql \
-p 3306:3306 \
-v magento-demo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=password \
-e MYSQL_USER=kevin \
-e MYSQL_PASSWORD=password \
-e MYSQL_DATABASE=magento_demo \
mysql:5.7
</code></pre></div><p>And just like that, we have a running MySQL instance. Running <code>docker ps</code> can get you a list of currently running containers. The one we just created should show up there.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                               NAMES
b73739ad5d66   mysql:5.7   "docker-entrypoint.s…"   22 seconds ago   Up 21 seconds   0.0.0.0:3306->3306/tcp, 33060/tcp   magento-demo-mysql
</code></pre></div><p>Let’s go through each one of the options from that command now to understand it better.</p>
<ul>
<li><code>docker run -d</code>: Runs the container in detached mode. This means that it’s run in the background as a daemon. Control is returned to the console immediately.</li>
<li><code>--name magento-demo-mysql</code>: This is the name of our container. Normally, Docker will generate random names for containers. In this case, we want to give it a name to refer to it with other Docker commands.</li>
<li><code>--network magento-demo-network</code>: Tells Docker to run the container as part of the <code>magento-demo-network</code> network that we created earlier. This is the network that we will use for all of our containers.</li>
<li><code>--network-alias mysql</code>: This is the name of this container within the network. This is how other containers in the network will be able to reference it. We’ll see that come to life a bit later.</li>
<li><code>-p 3306:3306</code>: Sets up our new MySQL container to allow connections over port <code>3306</code>. This is MySQL’s default port, which Magento will use to connect to it. This basically says “requests coming over the network to port <code>3306</code> of this container are going to be handled by the service installed in this container that listens to port <code>3306</code>”. That service happens to be MySQL.</li>
<li><code>-v magento-demo-mysql-data:/var/lib/mysql</code>: Creates a Docker volume. Specifically, we’re setting this one up to store the data files from MySQL. We need to do this so that the data stored in our MySQL container is persisted across shutdowns. <code>magento-demo-mysql-data</code> is the name of the volume and <code>/var/lib/mysql</code> is the directory within the MySQL container where that volume is mounted. In other words, any files stored in that directory are going to be stored within the volume instead. The volume is stored by Docker in the host machine, outside the container. <code>/var/lib/mysql</code> is the default directory where MySQL stores databases.</li>
<li><code>-e MYSQL_ROOT_PASSWORD=password</code>: Sets the password for MySQL’s root user. This is passed into the containerized MySQL via an environment variable, hence the <code>-e</code> option.</li>
<li><code>-e MYSQL_USER=kevin</code>: Creates a new login in MySQL with <code>kevin</code> as its username.</li>
<li><code>-e MYSQL_PASSWORD=password</code>: Sets the word <code>password</code> as the password for that <code>kevin</code> user.</li>
<li><code>-e MYSQL_DATABASE=magento_demo</code>: Creates a database named <code>magento_demo</code>.</li>
<li><code>mysql:5.7</code>: This is the image that we’re using for our container. <code>5.7</code> specifies the version that we want to run. <a href="https://hub.docker.com/_/mysql">The <code>mysql</code> image in Docker Hub</a> contains a few more versions. Or “tags”, in Docker words.</li>
</ul>
<h3 id="connecting-to-this-container">Connecting to this container</h3>
<p><code>docker ps</code> showed us that our container was running. We can also interact with it. Here are a couple of ways of doing it:</p>
<h4 id="connecting-from-within-the-container">Connecting from within the container</h4>
<p>The easiest way of connecting to the MySQL instance is by running the <code>mysql</code> CLI client from within the container itself. You can do that with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker exec -it magento-demo-mysql mysql -u kevin -p
</code></pre></div><p>Here’s how that command works:</p>
<ul>
<li><code>docker exec -it</code> is used to run commands inside a container in interactive mode. Just what we need here in this case because we’re running <code>mysql</code>, which is an interactive CLI.</li>
<li><code>magento-demo-mysql</code> is the name we gave our container in the <code>docker run</code> command from before via the <code>--name magento-demo-mysql</code> option. This is why it’s useful to give names to containers: so we can use them in commands like this.</li>
<li><code>mysql -u kevin -p</code> is the command that’s run within the container. This is just the usual way of connecting to a MySQL server instance using the <code>mysql</code> CLI client. We use <code>kevin</code> because that’s what we set <code>MYSQL_USER</code> to when we created our container before.</li>
</ul>
<p>After running the previous command, the console will ask you for your password. We set that to <code>password</code> via <code>MYSQL_PASSWORD</code> so that’s what we need to type in. This will eventually result in the <code>mysql</code> prompt showing up. Run <code>show databases</code> to confirm that the <code>magento_demo</code> database that we specified via <code>MYSQL_DATABASE</code> got created.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| magento_demo       |
+--------------------+
2 rows in set (0.00 sec)
</code></pre></div><p>You can <code>Ctrl + D</code> your way out of that when you’re done exploring the containerized MySQL instance.</p>
<h4 id="connecting-directly-from-the-host-machine">Connecting directly from the host machine</h4>
<p>We can also connect to the MySQL instance running in the container, directly from our host machine. We can use:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">mysql -h localhost -P 3306 --protocol=tcp -u kevin -p
</code></pre></div><blockquote>
<p>Note that it is required that the <code>mysql</code> CLI client is installed in the host machine for this to work.</p>
</blockquote>
<p>Same as before, <code>mysql</code> will ask you for the password and, once typed in, it will give you its prompt.</p>
<h3 id="containerizing-elasticsearch">Containerizing Elasticsearch</h3>
<p>Like MySQL, there’s an official <a href="https://hub.docker.com/_/elasticsearch">Elasticsearch Docker image up in Docker Hub</a>. As a result, getting a working Elasticsearch installation is a piece of cake. It’s done with a command like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker run -d \
--name magento-demo-elasticsearch \
--network magento-demo-network \
--network-alias elasticsearch \
-p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
elasticsearch:7.8.1
</code></pre></div><p>You can validate that Elasticsearch is running with <code>curl localhost:9200/_cat/health</code>. That should return something like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ curl localhost:9200/_cat/health
1597622135 23:55:35 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%
</code></pre></div><p>Alright! That was easy enough. Again, thanks to Docker, we have an application that’s somewhat complex to install up and running in a matter of seconds.</p>
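<p>If you ever want to script that readiness check, note that the fourth whitespace-separated field of the one-line <code>_cat/health</code> output is the cluster status. Here’s a quick sketch that pulls it out with <code>awk</code>, run against the sample line captured above rather than a live cluster:</p>

```shell
# Sample _cat/health line, captured from the container above.
health_line='1597622135 23:55:35 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%'

# Field 4 is the cluster status: green, yellow, or red.
status=$(echo "$health_line" | awk '{print $4}')
echo "$status"   # prints "green"
```

<p>Against a running container you’d feed it the real output instead, e.g. <code>curl -s localhost:9200/_cat/health | awk '{print $4}'</code>.</p>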
<p>Like before, let’s dissect the <code>docker run</code> command that we used. It’s very similar to the MySQL one, only with some Elasticsearch-specific settings:</p>
<ul>
<li><code>docker run -d</code>: Same as with the MySQL container, runs it in detached mode.</li>
<li><code>--name magento-demo-elasticsearch</code>: Gives the container a friendly name.</li>
<li><code>--network magento-demo-network</code>: Puts the container in the same network as the rest of our infrastructure.</li>
<li><code>--network-alias elasticsearch</code>: Sets the name by which other containers in the network can refer to this container.</li>
<li><code>-p 9200:9200</code>: Publishes port <code>9200</code> to the host so that we can talk to Elasticsearch from the host machine. Containers on the same network can reach each other without published ports.</li>
<li><code>-p 9300:9300</code>: Same thing, but for the inter-node transport port.</li>
<li><code>-e "discovery.type=single-node"</code>: Sets the <code>discovery.type</code> environment variable, which the image uses to configure Elasticsearch as a single-node cluster.</li>
<li><code>elasticsearch:7.8.1</code>: Specifies that our container will be running version <code>7.8.1</code> of Elasticsearch.</li>
</ul>
<h3 id="containerizing-magento">Containerizing Magento</h3>
<p>Now, this is the step where things get a little bit more involved. Nothing crazy, however, so let’s get into it.</p>
<h3 id="the-dockerfile">The Dockerfile</h3>
<p>There’s no image of Magento 2 that would get us up and running as quickly as with MySQL or Elasticsearch, at least not that I could find, so we’re going to have to create our own. We can do that with the help of <a href="https://docs.docker.com/engine/reference/builder/">Dockerfiles</a>. A Dockerfile contains the instructions the Docker engine uses to build an image, which can then serve as the basis for running containers.</p>
<p>Here’s a Dockerfile for Magento 2.4 that I came up with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain"># /path/to/project/Dockerfile
# Our image is based on Ubuntu.
FROM ubuntu
# Here we define a few arguments to the Dockerfile. Specifically, the
# username, user ID, and group ID for a new account that we will work
# as within our container.
ARG USER=docker
ARG UID=1000
ARG GID=1000
# Install PHP, composer and all extensions needed for Magento.
RUN apt-get update && apt-get install -y software-properties-common curl
RUN add-apt-repository ppa:ondrej/php
RUN apt-get update && apt-get install -y php
RUN apt-get update && apt-get install -y \
php-mysql php-xml php-intl php-curl \
php-bcmath php-gd php-mbstring php-soap php-zip \
composer
# Install Xdebug for a better developer experience.
RUN apt-get update && apt-get install -y php-xdebug
RUN echo "xdebug.remote_enable=on" >> /etc/php/7.4/mods-available/xdebug.ini
RUN echo "xdebug.remote_autostart=on" >> /etc/php/7.4/mods-available/xdebug.ini
# Install the mysql CLI client.
RUN apt-get update && apt-get install -y mysql-client
# Set up a non-root user with sudo access.
RUN groupadd --gid $GID $USER \
&& useradd -s /bin/bash --uid $UID --gid $GID -m $USER \
&& apt-get install -y sudo \
&& echo "$USER ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/$USER \
&& chmod 0440 /etc/sudoers.d/$USER
# Log into the container as the non-root user.
USER ${UID}:${GID}
# Set this as the default directory when we connect to the container.
WORKDIR /workspaces/magento-demo
# This is a quick hack to make sure the container has something to run
# when it starts, preventing it from closing itself automatically when
# created. You could also remove this and run the container with `docker
# run -t -d` to get the same effect. More on `docker run` further below.
CMD ["sleep", "infinity"]
</code></pre></div><p>Feel free to go through the comments in the file above for more details, but essentially, this Dockerfile describes what a machine ready to run Magento should look like. It’s got PHP and all the necessary extensions, <a href="https://xdebug.org/">Xdebug</a>, and <a href="https://getcomposer.org/">Composer</a>. It also includes the <code>mysql</code> CLI client.</p>
<p>Importantly, it allows for creating a user account with sudo access. Later, we’ll use this capability to create a user account inside the container that mimics the one we’re using on our host machine, effectively using the same user both inside and outside the container. The purpose of this is to make it possible to work on the Magento source code files from inside the container without running into Linux permission issues when we try to do the same from outside the container (that is, directly from the host machine).</p>
<h3 id="the-image">The image</h3>
<p>Alright, now that we have our image defined in the form of our Dockerfile, let’s create it. To do that, we go into our project directory and create a new file named <code>Dockerfile</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">cd /path/to/project
touch Dockerfile
</code></pre></div><p>Then use a text editor to save the contents from above into it, and finally run this command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker build \
--build-arg USER=kevin \
--build-arg UID=$(id -u) \
--build-arg GID=$(id -g) \
-t magento-demo-web .
</code></pre></div><p>Here’s what this all means:</p>
<ul>
<li><code>docker build</code>: Is the command to build images from Dockerfiles.</li>
<li><code>--build-arg USER=kevin</code>: Specifies the username for the account with sudo access that we will log into our container as. I’ve chosen <code>kevin</code> here but you should use the one you’re logged in as on your machine.</li>
<li><code>--build-arg UID=$(id -u)</code>: Uses <code>id -u</code> to pass in the ID of the currently logged-in user.</li>
<li><code>--build-arg GID=$(id -g)</code>: Uses <code>id -g</code> to pass in the group ID of the currently logged-in user.</li>
<li><code>-t magento-demo-web .</code>: Specifies the name of the resulting image to be <code>magento-demo-web</code>. The <code>.</code> is a reference to the current working directory from where we’re running the command, which is where our Dockerfile is located.</li>
</ul>
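<p>To see why passing your own IDs matters: files created through a bind mount keep the numeric UID/GID they were written with, so using the same IDs inside and outside the container keeps ownership consistent. A quick sketch, runnable on any Linux host:</p>

```shell
# These are the values that $(id -u) and $(id -g) hand to docker build.
echo "UID=$(id -u) GID=$(id -g)"

# Any file we create is owned by that same numeric UID; inside the
# container, a matching account means no chown/sudo dance later.
tmpfile=$(mktemp)
stat -c '%u' "$tmpfile"   # prints the same number as id -u
rm -f "$tmpfile"
```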
<p>Run <code>docker image ls</code> and you should see our new home-grown <code>magento-demo-web</code> image along with the other ones that we’ve downloaded from Docker Hub:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">REPOSITORY         TAG       IMAGE ID       CREATED          SIZE
magento-demo-web   latest    90d311df434f   22 minutes ago   452MB
mysql              5.7       718a6da099d8   12 days ago      448MB
ubuntu             latest    1e4467b07108   3 weeks ago      73.9MB
elasticsearch      7.8.1     a529963ec236   3 weeks ago      811MB
</code></pre></div><h3 id="the-container">The container</h3>
<p>Ok, now that we have an image that’s capable of running Magento, let’s put it to work by creating a container based on it. We do that with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker run -d \
--name magento-demo-web \
--network magento-demo-network \
--network-alias web \
-p 5000:5000 \
-v ${PWD}:/workspaces/magento-demo \
magento-demo-web
</code></pre></div><p>Line by line, this is telling the Docker engine to:</p>
<ul>
<li><code>docker run -d</code>: Run the container in detached mode. You could also add the <code>-t</code> argument which makes sure the container stays up and running even if there’s no program or service running within it. We don’t need that in this case though, because we defined our Dockerfile with that nifty <code>sleep infinity</code> command.</li>
<li><code>--name magento-demo-web</code>: Set the name of our container to <code>magento-demo-web</code>.</li>
<li><code>--network magento-demo-network</code>: Make our container part of the same network as the MySQL and Elasticsearch ones.</li>
<li><code>--network-alias web</code>: Set our container’s name within the network.</li>
<li><code>-p 5000:5000</code>: Open port <code>5000</code> to access our soon-to-be-running Magento app.</li>
<li><code>-v ${PWD}:/workspaces/magento-demo</code>: Bind-mount our current working directory onto the <code>/workspaces/magento-demo</code> directory within the container. This is where we’ll store all the Magento files. Binding these directories makes it possible to access and modify the Magento files both from the container and from the host machine, which makes things easier and more convenient for development.</li>
<li><code>magento-demo-web</code>: Use this image.</li>
</ul>
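<p>A small detail worth calling out: the host side of a <code>-v</code> bind mount must be an absolute path (a first field that doesn’t start with <code>/</code> is treated as a named volume instead), which is why the command uses <code>${PWD}</code> rather than <code>.</code>. You can preview what the flag expands to:</p>

```shell
# The host side of the -v mapping must be absolute; ${PWD} guarantees
# that no matter which directory the command is run from.
printf '%s\n' "${PWD}:/workspaces/magento-demo"
```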
<p>Running <code>docker container ls</code> will show a list of all running containers, including the one we just created:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker container ls
CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS                                            NAMES
4af35c42e0bb   magento-demo-web      "/bin/bash"              5 minutes ago   Up 5 minutes   0.0.0.0:5000->5000/tcp                           magento-demo-web
6c5ea65a7bd6   elasticsearch:7.8.1   "/tini -- /usr/local…"   2 hours ago     Up 2 hours     0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   magento-demo-elasticsearch
b73739ad5d66   mysql:5.7             "docker-entrypoint.s…"   3 hours ago     Up 3 hours     0.0.0.0:3306->3306/tcp, 33060/tcp                magento-demo-mysql
</code></pre></div><h3 id="connecting-to-the-container">Connecting to the container</h3>
<p>With the container up and running, we can connect to it with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker exec -it magento-demo-web bash
</code></pre></div><p>You may remember this as the same command we used before to connect to the MySQL container. This time, however, we’re using it to connect to our <code>magento-demo-web</code> container, referenced by the name we gave it, and running <code>bash</code> on it in order to open a shell.</p>
<p>After that, a prompt like this should show up:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kevin@4af35c42e0bb:/workspaces/magento-demo$
</code></pre></div><p>We’re now inside our container. Notice how we’re automatically taken to <code>/workspaces/magento-demo</code>. This is just like we specified in our Dockerfile with the <code>WORKDIR</code> command. Feel free to run <code>php -v</code> or <code>composer -V</code> to validate that the setup from our Dockerfile got all the way into our container:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">kevin@4af35c42e0bb:/workspaces/magento-demo$ php -v
PHP 7.4.9 (cli) (built: Aug 7 2020 14:30:01) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
with Zend OPcache v7.4.9, Copyright (c), by Zend Technologies
with Xdebug v2.9.6, Copyright (c) 2002-2020, by Derick Rethans
kevin@4af35c42e0bb:/workspaces/magento-demo$ composer -V
Composer 1.10.1 2020-03-13 20:34:27
</code></pre></div><h3 id="talking-to-other-containers-in-the-network">Talking to other containers in the network</h3>
<p>We also need to validate that our containers are actually able to talk to each other via the network that we set up. If all went according to plan, still from within our <code>magento-demo-web</code> container, this command should open a <code>mysql</code> session:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">mysql -h mysql -u kevin -p
</code></pre></div><p>Notice how this time we don’t use <code>localhost</code> or <code>127.0.0.1</code> to connect to our MySQL instance. This time, we use <code>mysql</code>. This is the network alias we gave our MySQL container, so this is how our <code>magento-demo-web</code> container sees it. To <code>magento-demo-web</code>, the MySQL container is just another machine in the same network.</p>
<p>Same deal for the Elasticsearch container. We can do something like this to talk to it:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">curl elasticsearch:9200/_cat/health
</code></pre></div><p>Again, from the perspective of <code>magento-demo-web</code>, this is just another machine in the network which it can reach by using the <code>elasticsearch</code> network alias that we gave it when creating it.</p>
<h3 id="installing-magento-in-our-container">Installing Magento in our container</h3>
<p>Now that we have our environment ready for Magento, let’s install it. First order of business is to create the Composer project:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">composer create-project --repository-url=https://repo.magento.com/ magento/project-community-edition ./install
</code></pre></div><p>If you’ve used Composer before, this command should look very familiar. It will download all the Magento files as specified by the <code>magento/project-community-edition</code> project from the <code>https://repo.magento.com/</code> repository. There are a few gotchas though:</p>
<ol>
<li>First, Magento is not openly available to download just like that. As such, Composer will ask for authentication in order to do so. Follow <a href="https://devdocs.magento.com/guides/v2.4/install-gde/prereq/connect-auth.html">this guide</a> to obtain the authentication keys from the Magento Marketplace. When Composer asks for a username, type in the public key; when it asks for a password, type in the private key.</li>
<li>Second, you’ll notice that I specified <code>./install</code> at the end of that command. This is where all the files will be downloaded. I’ve chosen this (an <code>install</code> directory inside our current one) because <code>composer create-project</code> will refuse to download the files into a directory that’s not empty. Ours isn’t, because we’ve got our Dockerfile in it. But that’s nothing to worry about: once Composer finishes downloading everything, we’ll just copy the files over to their rightful location at <code>/workspaces/magento-demo</code>. You can do so with some Linux sorcery like this:</li>
</ol>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">(shopt -s dotglob; mv -v ./install/* .)
</code></pre></div><p>This Composer operation will take a good while, but when it’s done, make sure to move all the contents of <code>./install</code> into <code>/workspaces/magento-demo</code>. We now need to actually install Magento:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">bin/magento setup:install \
--base-url=http://localhost:5000 \
--db-host=mysql \
--db-name=magento_demo \
--db-user=kevin \
--db-password=password \
--admin-firstname=admin \
--admin-lastname=admin \
--admin-email=admin@admin.com \
--admin-user=admin \
--admin-password=admin123 \
--language=en_US \
--currency=USD \
--timezone=America/New_York \
--use-rewrites=1 \
--elasticsearch-host=elasticsearch \
--elasticsearch-port=9200
</code></pre></div><p>Even if you have never installed Magento before, the command above should be pretty straightforward. An interesting thing to note is how we’ve set up our database and Elasticsearch settings here:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain"> --db-host=mysql \
--db-name=magento_demo \
--db-user=kevin \
--db-password=password \
</code></pre></div><p>and</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain"> --elasticsearch-host=elasticsearch \
--elasticsearch-port=9200
</code></pre></div><p><code>--db-host</code> is the hostname of the machine where the MySQL server is running. We use our container’s network alias here. <code>--db-name</code> is the name of the database we created when initializing our container via the <code>MYSQL_DATABASE</code> environment variable. <code>--db-user</code> and <code>--db-password</code> are the credentials for the login that we created in the same manner. <code>--elasticsearch-host</code> is the network alias of our Elasticsearch container, and finally <code>--elasticsearch-port</code> is the port that we configured it to listen on.</p>
<p>As you can see, these are the same settings that we used to configure our MySQL and Elasticsearch containers. If you’ve been following along and decided to go with different values, make sure to use those here instead.</p>
<p>Once that command is done, we’re ready. We have a working Magento. Try it out by running this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">php -S 0.0.0.0:5000 -t ./pub/ ./phpserver/router.php
</code></pre></div><p>And navigating to <code>localhost:5000</code> in your browser. You should see your empty Magento homepage.</p>
<h3 id="optional-installing-the-sample-data">Optional: Installing the sample data</h3>
<p>If you’re planning to build a custom extension, or just want to play with Magento to get to know it better, you may want to add some sample data. Luckily, the Magento devs have graciously provided such a thing in the form of a Composer package. If you want, you can install it with this recipe:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">bin/magento sampledata:deploy
bin/magento setup:upgrade
bin/magento indexer:reindex
bin/magento cache:flush
</code></pre></div><p><code>bin/magento sampledata:deploy</code> will also ask you for your Magento Marketplace keys, so have them ready.</p>
<p>So turn off the built-in PHP server, run these commands, wait a good while, and fire up the built-in server once more. Your Magento app should now have a catalog and all sorts of other data loaded in.</p>
<h3 id="composing-it-all-together">Composing it all together</h3>
<p>Now that was a lot. It was much easier than having to set everything up from scratch without Docker, but still, I promised a minimal setup overhead. A single command. With Docker Compose we can do just that.</p>
<p>For containers, the usual workflow is a three step process:</p>
<ol>
<li>Create the Dockerfile (sometimes omitted if we have a readily available image, as was the case with MySQL and Elasticsearch).</li>
<li>Create or download an image.</li>
<li>Run the container.</li>
</ol>
<p>Docker Compose can help us by capturing all the settings needed to create our containers in a single YAML file, which can then be consumed by a CLI tool (i.e. <code>docker-compose</code>) to set up the complete infrastructure. This file is named <code>docker-compose.yml</code>, and this is what it may look like for our current setup:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">version</span>:<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"3.8"</span><span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#888"># Listing our three containers. Or "services", as known by Docker Compose.</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">services</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># Defining our MySQL container.</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># "mysql" will be the network alias for this container.</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">mysql</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>mysql:5.7<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">container_name</span>:<span style="color:#bbb"> </span>magento-demo-mysql<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">networks</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- magento-demo-network<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#d20;background-color:#fff0f0">"3306:3306"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- magento-demo-mysql-data:/var/lib/mysql<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">environment</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">MYSQL_ROOT_PASSWORD</span>:<span style="color:#bbb"> </span>password<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">MYSQL_USER</span>:<span style="color:#bbb"> </span>kevin<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">MYSQL_PASSWORD</span>:<span style="color:#bbb"> </span>password<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">MYSQL_DATABASE</span>:<span style="color:#bbb"> </span>magento_demo<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># Defining our Elasticsearch container</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># "elasticsearch" will be the network alias for this container.</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">elasticsearch</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>elasticsearch:7.8.1<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">container_name</span>:<span style="color:#bbb"> </span>magento-demo-elasticsearch<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">networks</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- magento-demo-network<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#d20;background-color:#fff0f0">"9200:9200"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#d20;background-color:#fff0f0">"9300:9300"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">environment</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">discovery.type</span>:<span style="color:#bbb"> </span>single-node<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># Defining our custom Magento 2 container.</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># "web" will be the network alias for this container.</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">web</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># The build section tells Docker Compose how to build the image.</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#888"># This essentially runs a "docker build" command.</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">build</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">context</span>:<span style="color:#bbb"> </span>.<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">dockerfile</span>:<span style="color:#bbb"> </span>Dockerfile<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">USER</span>:<span style="color:#bbb"> </span>kevin<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">UID</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">1000</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">GID</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">1000</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">container_name</span>:<span style="color:#bbb"> </span>magento-demo-web<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">networks</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- magento-demo-network<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#d20;background-color:#fff0f0">"5000:5000"</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- .:/workspaces/magento-demo<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#888"># The volume that is used by the MySQL container</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">magento-demo-mysql-data</span>:<span style="color:#bbb">
</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#888"># The network where all the containers will live</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">networks</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">magento-demo-network</span>:<span style="color:#bbb">
</span></code></pre></div><p>As you can see, most of <code>docker-compose.yml</code> is more or less a rewriting of our <code>docker run</code> commands in YAML format. The exception is the <code>web</code> container/service, which includes a <code>build</code> section that reflects the <code>docker build</code> command we used to turn the Dockerfile into an image.</p>
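<p>If it helps to see that correspondence spelled out, here’s how the flags from the <code>docker run</code> command we used for MySQL earlier in the article map onto the Compose keys above:</p>

```yaml
# docker run flag                    ->  docker-compose.yml key
# --name magento-demo-mysql          ->  container_name: magento-demo-mysql
# --network magento-demo-network     ->  networks: [magento-demo-network]
# --network-alias mysql              ->  the service name, "mysql"
# -p 3306:3306                       ->  ports: ["3306:3306"]
# -v magento-demo-mysql-data:/var/lib/mysql
#                                    ->  volumes: [magento-demo-mysql-data:/var/lib/mysql]
# -e MYSQL_USER=kevin (and friends)  ->  environment: { MYSQL_USER: kevin, ... }
# mysql:5.7 (the image argument)     ->  image: mysql:5.7
```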
<p>If you want to try it out, make sure to remove all the infrastructure we’ve created, to avoid any conflicts. You can do so from your host machine with these commands:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker container rm -f magento-demo-web magento-demo-elasticsearch magento-demo-mysql
docker image rm magento-demo-web
docker network rm magento-demo-network
docker volume rm magento-demo-mysql-data
</code></pre></div><p>Make sure you’re in the directory on the host machine where the Dockerfile lives. Then create a new <code>docker-compose.yml</code> file and put all the content above into it. Finally, run:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker-compose up -d
</code></pre></div><p>This will take a little while, but by the end of it, you’ll have a complete infrastructure with the three containers that we’ve created step by step throughout this article. With the <code>docker-compose.yml</code> file, <code>docker-compose up</code> essentially takes care of running all of our <code>docker build</code> and <code>docker run</code> commands.</p>
<p>The <code>-d</code> option means that the command will run in the background and give you back control of your console. You can also run it without that option if you want the console to show the logs from the containers.</p>
<p>You can still see the logs even in detached mode with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker-compose logs
</code></pre></div><p>You can also inspect the running containers. For that, you can use:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker-compose ps
</code></pre></div><p>Output will look something like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ docker-compose ps
           Name                         Command                State                       Ports
--------------------------------------------------------------------------------------------------------------------
magento-demo-elasticsearch   /tini -- /usr/local/bin/do ...   Up      0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
magento-demo-mysql           docker-entrypoint.sh mysqld      Up      0.0.0.0:3306->3306/tcp, 33060/tcp
magento-demo-web             sleep infinity                   Up      0.0.0.0:5000->5000/tcp
</code></pre></div><p>Notice how <code>docker-compose ps</code> gives us our container names just as we specified them in the <code>docker-compose.yml</code> file.</p>
<p><code>docker-compose</code> has many other utilities; check them out with <code>docker-compose --help</code>.</p>
<p>Now, same as before, we still need to open a terminal into our Magento container to run some installation commands on it. To do so, we can run the following command:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker-compose exec web bash
</code></pre></div><p>Notice how with <code>docker-compose</code> we refer to the container via its service name. That is, the name we gave the container under the <code>services</code> section of <code>docker-compose.yml</code>.</p>
<p>Of course, we can still use the same command that we used before, when we created our container directly with <code>docker</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">docker exec -it magento-demo-web bash
</code></pre></div><p>Now, once inside our container, we need to install Magento again. Remember that we wiped out all the infrastructure we created manually, so these are fresh new containers, akin to new machines.</p>
<p>If you were running this from scratch you would just go ahead and do…</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">composer create-project --repository-url=https://repo.magento.com/ magento/project-community-edition ./install
</code></pre></div><p>and</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">(shopt -s dotglob; mv -v ./install/* .)
</code></pre></div><p>In this case, however, we already have all the Magento files in our directory, so we can save time and skip this step. We can reuse these files and just run <code>bin/magento setup:install</code>.</p>
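<p>As an aside, the subshell with <code>shopt -s dotglob</code> in the earlier <code>mv</code> command matters: by default the shell’s <code>*</code> glob skips dotfiles, so a plain <code>mv ./install/* .</code> would leave hidden files such as <code>.htaccess</code> behind. A quick standalone demonstration, using throwaway paths under <code>/tmp</code> rather than a real Magento tree:</p>

```shell
# Start clean, then create a throwaway directory with one hidden
# and one regular file.
rm -rf /tmp/dotglob-demo
mkdir -p /tmp/dotglob-demo/install /tmp/dotglob-demo/dest
touch /tmp/dotglob-demo/install/.hidden /tmp/dotglob-demo/install/index.php

cd /tmp/dotglob-demo/dest
# dotglob makes * match dotfiles too; the subshell keeps the option
# from leaking into the current shell session.
(shopt -s dotglob; mv -v ../install/* .)

ls -A  # both .hidden and index.php have been moved
```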
<p>But since this is a new Magento installation, we do need to remove the config file before <code>setup:install</code>’ing. So go ahead and…</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">rm app/etc/env.php
</code></pre></div><p>…then:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">bin/magento setup:install \
--base-url=http://localhost:5000 \
--db-host=mysql \
--db-name=magento_demo \
--db-user=kevin \
--db-password=password \
--admin-firstname=admin \
--admin-lastname=admin \
--admin-email=admin@admin.com \
--admin-user=admin \
--admin-password=admin123 \
--language=en_US \
--currency=USD \
--timezone=America/New_York \
--use-rewrites=1 \
--elasticsearch-host=elasticsearch \
--elasticsearch-port=9200
</code></pre></div><p>After a while, Magento will be fully installed in our new infrastructure created by Docker Compose and ready to be fired up via the PHP built-in server:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">php -S 0.0.0.0:5000 -t ./pub/ ./phpserver/router.php
</code></pre></div><h3 id="bonus-interactive-debugging-with-visual-studio-code">Bonus: Interactive debugging with Visual Studio Code</h3>
<p>So this is a fully functioning Magento installation with files that we can edit to our heart’s content. In terms of a “fully featured” development environment, however, we need to spruce it up a bit.</p>
<p>So install VS Code from <a href="https://code.visualstudio.com/">https://code.visualstudio.com/</a> and install the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack">Remote Development plugin</a>.</p>
<p>Open a new VS Code window and open the command palette with <code>Ctrl + Shift + P</code>. In there, type in <code>Remote-Containers: Attach to Running Container...</code> and press <code>Enter</code>. In the menu that shows up, select our <code>magento-demo-web</code> container.</p>
<p>That will result in a new VS Code instance that is connected to the container. Open an integrated terminal in VS Code and you’ll see:</p>
<p><img src="/blog/2020/08/containerizing-magento-with-docker-compose-elasticsearch-mysql-and-magento/vscode.png" alt="VS Code with Remote Development"></p>
<p>Now, install the <a href="https://marketplace.visualstudio.com/items?itemName=felixfbecker.php-debug">PHP Debug extension</a> so that we can take advantage of the Xdebug that we installed in our container via our Dockerfile.</p>
<p>Create a new launch configuration for interactive debugging with PHP by clicking on the “Run” button in the action bar to the left (<code>Ctrl + Shift + D</code> also works). Click the “create a launch.json file” link in the pane that appears. Then, in the resulting menu at the top of the window, select the “PHP” option. Here’s a screen capture for guidance:</p>
<p><img src="/blog/2020/08/containerizing-magento-with-docker-compose-elasticsearch-mysql-and-magento/opening_debug.jpg" alt="Setting up debugging in VS Code"></p>
<p>That will result in a new <code>.vscode/launch.json</code> file created that contains the launch configuration for the PHP debugger.</p>
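<p>For reference, the generated file typically contains something like the following; the exact contents depend on the version of the PHP Debug extension, so treat this as an illustration rather than a canonical listing:</p>

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000
        }
    ]
}
```

<p>The important part is the <code>port</code>: it must match the port Xdebug is configured to connect back on.</p>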
<p>Now let’s put a breakpoint anywhere, like in line 13 of the <code>pub/index.php</code> file; press the “Start debugging” button in the “Run” pane, near the top left of the screen (making sure that the “Listen to XDebug” option is selected), and start up the PHP built in server from VS Code’s integrated terminal with <code>php -S 0.0.0.0:5000 -t ./pub/ ./phpserver/router.php</code>. Now navigate to <code>localhost:5000</code> in your browser and enjoy VS Code’s interactive debugging experience:</p>
<p><img src="/blog/2020/08/containerizing-magento-with-docker-compose-elasticsearch-mysql-and-magento/debugging.png" alt="Debugging Magento in VS Code"></p>
<h3 id="summary">Summary</h3>
<p>Whew! That was quite a bit. In this blog post, we’ve done a deep dive into how to set up all the pieces of a Magento application using Docker containers: MySQL, Elasticsearch, and Magento itself. Then, we captured all that knowledge into a single <code>docker-compose.yml</code> file which can be run with a single <code>docker-compose up</code> command to provision all the infrastructure on our local machine. As a cherry on top, we set up interactive debugging of our brand new Magento application with VS Code. Thanks to the safety net provided by these tools, I feel like I’m ready to really dig into Magento and start developing customizations, or debugging existing websites. If you’ve been following along this far, dear reader, I hope you do too.</p>
Linux Development in Windows 10 with Docker and WSL 2https://www.endpointdev.com/blog/2020/06/linux-development-in-windows-10-docker-wsl-2/2020-06-18T00:00:00+00:00Kevin Campusano
<p><img src="/blog/2020/06/linux-development-in-windows-10-docker-wsl-2/banner.png" alt="Banner"></p>
<p>I’m first and foremost a Windows guy. But for a few years now, moving away from working mostly with .NET and into a plethora of open source technologies has given me the opportunity to change platforms and run a Linux-based system as my daily driver. Ubuntu, which I honestly love for work, has been serving me well by supporting my development workflow with languages like <a href="https://www.php.net/">PHP</a>, <a href="https://www.javascript.com/">JavaScript</a> and <a href="https://www.ruby-lang.org/en/">Ruby</a>. And with the help of the excellent <a href="https://code.visualstudio.com/">Visual Studio Code</a> editor, I’ve never looked back. There’s always been an inclination in the back of my mind though, to take some time and give Windows another shot.</p>
<p>With the latest improvements coming to the Windows Subsystem for Linux with <a href="https://docs.microsoft.com/en-us/windows/wsl/compare-versions#whats-new-in-wsl-2">its second version</a>, the new and exciting <a href="https://github.com/microsoft/terminal">Windows Terminal</a>, and <a href="https://docs.docker.com/docker-for-windows/wsl/">Docker support for running containers inside WSL2</a>, I think the time is now.</p>
<p>In this post, we’ll walk through the steps I took to set up a PHP development environment in Windows, using VS Code and an Ubuntu Docker container running on WSL 2. Let’s go.</p>
<blockquote>
<p>Note: You have to be on the latest version of Windows 10 Pro (Version 2004) in order to install WSL 2 by the usual methods. If not, you’d need to be part of the Windows Insider Program to have access to the software.</p>
</blockquote>
<h3 id="whats-new-with-wsl-2">What’s new with WSL 2</h3>
<p>This is best explained by the <a href="https://docs.microsoft.com/en-us/windows/wsl/compare-versions#whats-new-in-wsl-2">official documentation</a>. However, being a WSL 1 veteran, I’ll mention a few improvements made which have sparked my interest in trying it again.</p>
<h4 id="1-its-faster-and-more-compatible">1. It’s faster and more compatible</h4>
<p>WSL 2 introduces a complete architectural overhaul. Now, Windows ships with a full Linux Kernel which WSL 2 distributions run on. This results in greatly improved file system performance and much better compatibility with Linux programs. It’s no longer running a Linux look-alike, but actual Linux.</p>
<h4 id="2-its-better-integrated-with-windows">2. It’s better integrated with Windows</h4>
<p>This is a small one: we can now use the Windows explorer to browse files within a WSL distribution. This is not a WSL 2 exclusive feature; it has been there for a while now. I think it’s worth mentioning though, because it truly is a great convenience and a far cry from WSL’s first release, where Microsoft specifically advised against manipulating WSL distribution file systems from Windows. If nothing else, this makes WSL feel like a first class citizen in the Windows ecosystem and shows that Microsoft actually cares about making it a good experience.</p>
<h4 id="3-it-can-run-docker">3. It can run Docker</h4>
<p>I’ve recently been learning more and more about Docker and it’s quickly becoming my preferred way of setting up development environments. Due to its light weight, ease of use, repeatability, and VM-like compartmentalization, I find it really convenient to develop against a purpose-built Docker container, rather than directly on my local machine. And with VS Code’s Remote Development extension, the whole thing is very easy to set up. Docker for Windows now supports running containers within WSL, so I’m eager to try that out and see how it all works.</p>
<h4 id="4-a-newer-version-means-several-bugfixes">4. A newer version means several bugfixes</h4>
<p>Performance notwithstanding, WSL’s first release was pretty stable. I did, however, encounter some weird bugs and gotchas when working with the likes of SSH and Ruby during certain tasks. It was nothing major, as workarounds were readily available. I’ve already discussed some of them <a href="/blog/2019/04/rails-development-in-windows-10-pro-with-visual-studio-code-and-wsl/">here</a>, so I won’t repeat them. But given that the technology has matured since I last saw it, and considering the architectural direction it is going in, I’m excited not to have to deal with those quirks.</p>
<h3 id="the-development-environment">The development environment</h3>
<p>Ok, now with some of the motivation out of the way, let’s try and build a quick PHP Hello World app running in a Docker container inside WSL 2, make sure we can edit and debug it with VS Code, and access it in a browser from Windows.</p>
<h4 id="step-1-install-wsl-2-and-ubuntu">Step 1: Install WSL 2 and Ubuntu</h4>
<p>Step 1 is obviously to install WSL and a Linux distribution that we like. <a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">Microsoft’s own documentation</a> offers an excellent guide on how to do just that. But in summary, we need to:</p>
<ol>
<li>Enable the “Windows Subsystem for Linux” and “Virtual Machine Platform” features by running these on an elevated PowerShell:</li>
</ol>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
</code></pre></div><ol start="2">
<li>Restart your machine.</li>
<li>Set WSL 2 as the default version with: <code>wsl --set-default-version 2</code>, also from PowerShell.</li>
<li>Install your desired distribution from the Microsoft Store. I chose <a href="https://www.microsoft.com/en-us/p/ubuntu-2004-lts/9n6svws3rx71?rtc=2&activetab=pivot:overviewtab">Ubuntu 20.04 LTS</a>.</li>
<li>After installing, open the “Ubuntu 20.04 LTS” app from the Start menu and it should come up with a command line console. Wait for it to finish installing. It should prompt for a username and password along the way. Choose something you won’t forget.</li>
</ol>
<p>Optionally, you can install the <a href="https://github.com/microsoft/terminal">Windows Terminal</a> app to get a better command line experience. Windows Terminal can be used to interact with PowerShell and the classic CMD, as well as with our WSL distributions.</p>
<h4 id="step-2-install-docker">Step 2: Install Docker</h4>
<p>Installing Docker is very straightforward. Just download the installer for <a href="https://hub.docker.com/editions/community/docker-ce-desktop-windows/">Docker Desktop for Windows</a>, execute it, and follow the wizard’s steps. Make sure that during setup the “Use the WSL 2 based engine” option is selected. In most cases, the installer will detect WSL 2 and automatically have the option selected.</p>
<p>Follow the <a href="https://docs.docker.com/docker-for-windows/wsl/">official instructions</a> for more details on the process, but it really is that simple.</p>
<h4 id="step-3-install-some-useful-vs-code-extensions">Step 3: Install some useful VS Code extensions</h4>
<p>Our objective is to create a new development environment inside a Docker container and connect to it directly with VS Code. To do that, we use a few useful extensions:</p>
<ol>
<li><a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker">The Docker extension</a> which allows us to browse and manage images and containers and other types of Docker assets.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl">The Remote - WSL extension</a> which allows VS Code to connect to a WSL distribution.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers">The Remote - Containers extension</a> which allows VS Code to connect to a container.</li>
</ol>
<h4 id="step-4-create-the-development-container">Step 4: Create the development container</h4>
<p>The extensions that we installed will allow us to use VS Code to work on code from within our WSL Ubuntu as well as from the container. Specifically, we want to connect VS Code to a container. There are a few ways to do this, but I will describe the one I think is the easiest, most convenient and “automagic” by fully leveraging the tools.</p>
<p>Let’s begin by opening a WSL Ubuntu terminal session, which will show something like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">Welcome to Ubuntu 20.04 LTS (GNU/Linux 4.19.104-microsoft-standard x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
...
kevin@kevin-thinkpad:/mnt/c/Users/kevin$
</code></pre></div><h4 id="the-project-directory">The project directory</h4>
<p>Let’s change to our home, create a new directory for our new project, and change into it.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ <span style="color:#038">cd</span>
$ mkdir php-in-docker-demo
$ <span style="color:#038">cd</span> php-in-docker-demo
</code></pre></div><p>Because we installed the Remote - WSL extension, we can open up this directory in VS Code with <code>code .</code>. Opening a terminal (Ctrl + `) in this VS Code instance opens a WSL console, not a Windows one.</p>
<h4 id="the-dockerfile">The Dockerfile</h4>
<p>Now let’s create a new file called <code>Dockerfile</code> which will define what our development environment image will look like. For a no-frills PHP environment, mine looks like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain"># The FROM statement says that our image will be based on the official Ubuntu Docker image from Docker Hub: https://hub.docker.com/_/ubuntu
FROM ubuntu
# Install packages, not allowing apt to ask any questions since we can't answer.
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y software-properties-common php php-xdebug composer
# Configure Xdebug so that the VS Code debugger can use it.
RUN echo "xdebug.remote_enable=on" >> /etc/php/7.4/mods-available/xdebug.ini
RUN echo "xdebug.remote_autostart=on" >> /etc/php/7.4/mods-available/xdebug.ini
# The CMD statement tells Docker which command to run when it starts up the container.
# Here, we just call bash
CMD ["bash"]
</code></pre></div><p>This script will later be used to create our development container. It will have PHP, <a href="https://xdebug.org/">Xdebug</a> and <a href="https://getcomposer.org/">Composer</a>. This is all we need for our simple Hello World app. For more complex scenarios, other software like database clients or PHP extensions can be easily installed with additional <code>RUN</code> statements that call upon the <code>apt</code> package manager.</p>
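<p>For example, adding a database client and a couple of extra PHP extensions is just another <code>RUN</code> line. The package names below are illustrative and depend on the Ubuntu release the image is based on:</p>

```dockerfile
# Hypothetical additions for a richer environment; package names
# vary with the Ubuntu release used as the base image.
RUN apt-get update && apt-get install -y \
    mysql-client \
    php-mysql \
    php-curl
```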
<p>Consider reading through <a href="https://docs.docker.com/engine/reference/builder/">Docker’s official documentation</a> on Dockerfiles to learn more.</p>
<h4 id="the-configuration-file">The configuration file</h4>
<p>Now, to leverage VS Code’s capabilities, let’s add a development container configuration file. In our current location, we need to create a new directory called <code>.devcontainer</code> and, inside that, a new file called <code>devcontainer.json</code>. I put these contents in mine:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
  // The name used by VS Code to identify this development environment
  "name": "PHP in Docker Demo",
  // Sets the run context to one level up instead of the .devcontainer folder.
  "context": "..",
  // Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
  "dockerFile": "../Dockerfile",
  // Add the IDs of extensions you want installed when the container is created.
  // This is the VS Code PHP Debug extension.
  // It needs to be installed in the container for us to have access to it.
  "extensions": [
    "felixfbecker.php-debug"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // When we run our PHP app, we will use this port.
  "forwardPorts": [5000]
}
</code></pre></div><p>A default version of this file can be automatically generated by running the “Remote-Containers: Add Development Container Configuration Files…” command in VS Code’s Command Palette (Ctrl + Shift + P).</p>
<h4 id="the-development-container">The development container</h4>
<p>Now that we have all that in place, we can create our image, run our container, and start coding our app. Bring up the VS Code Command Palette with Ctrl + Shift + P and run the “Remote-Containers: Reopen in Container” command. The command will:</p>
<ol>
<li>Read the Dockerfile and create an image based on that. This is like running <code>docker build -t AUTOGENERATED_IMAGE_ID .</code></li>
<li>Run a container based on that image with the settings specified in <code>.devcontainer/devcontainer.json</code>. In our case, all it will do is enable the container’s port 5000 to be accessible by the host. This is more or less like running: <code>docker run -d -p 5000:5000 -v ${PWD}:/workspaces/php-in-docker-demo AUTOGENERATED_IMAGE_ID</code></li>
<li>Open a new VS Code instance connected to the container with the <code>/workspaces/php-in-docker-demo</code> directory open.</li>
</ol>
<p>It will take a while, but after it’s done, we will have a VS Code instance running directly in the container. Open the VS Code terminal with Ctrl + ` and see for yourself. It will show a prompt looking like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">root@ec5be7dd0b9b:/workspaces/php-in-docker-demo#
</code></pre></div><p>You can for example, run <code>php -v</code> in this terminal, and expect something along these lines:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">PHP 7.4.3 (cli) (built: May 26 2020 12:24:22) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
with Zend OPcache v7.4.3, Copyright (c), by Zend Technologies
</code></pre></div><p>This is PHP running, not in Windows, not in our WSL Ubuntu, but in the Docker container.</p>
<h4 id="hello-windows--wsl-2--ubuntu--docker--php--vs-code">Hello Windows + WSL 2 + Ubuntu + Docker + PHP + VS Code</h4>
<p>Let’s now create our app. Add a new <code>index.php</code> file containing something silly like:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-php" data-lang="php"><?php
<span style="color:#080;font-weight:bold">echo</span> <span style="color:#d20;background-color:#fff0f0">"Hello Windows + WSL 2 + Ubuntu + Docker + PHP + Visual Studio Code!"</span>;
</code></pre></div><p>Then, in the VS Code console (remember, Ctrl + `), start up an instance of the built-in PHP development server with <code>php -S 0.0.0.0:5000</code>. It’s important that we use port 5000 because that’s the one that we configured our container to use.</p>
<p>Navigate to <code>http://localhost:5000/</code> in your browser and feel good about a job well done.</p>
<p><img src="/blog/2020/06/linux-development-in-windows-10-docker-wsl-2/running.png" alt="Running app"></p>
<h4 id="interactive-debugging">Interactive debugging</h4>
<p>When configuring our development container, we added Xdebug and the PHP Debug VS Code extension. This means that VS Code can leverage Xdebug to provide an interactive debugging experience for PHP code.</p>
<p>Almost everything is set up at this point; we just need to do the usual VS Code configuration and add a <code>launch.json</code> file. To do so, in VS Code, press Ctrl + Shift + D to bring up the “Run” panel, click on the “create a launch.json file” link, and in the resulting “Select Environment” menu, select “PHP”.</p>
<p><img src="/blog/2020/06/linux-development-in-windows-10-docker-wsl-2/vscode-run.png" alt="VS Code Run panel"></p>
<p>After that, the “Run” panel will show a green triangular “Start Debugging” button next to a “Listen to XDebug” text. If you haven’t already, start up a dev web server with <code>php -S 0.0.0.0:5000</code>, click on the “Start Debugging” button, put a breakpoint somewhere in your <code>index.php</code> file, and finally open up <code>http://localhost:5000/</code> in a browser.</p>
<p><img src="/blog/2020/06/linux-development-in-windows-10-docker-wsl-2/debug.png" alt="Interactive debugging in VS Code"></p>
<p>We’re interactively debugging PHP code running on a Docker container in WSL from our Windows IDE/editor. Pretty cool, huh?</p>
<p>And that’s all for now. In this article we’ve learned how to set up a Linux development environment using Docker containers and WSL 2 on Windows 10 Pro. This is a nice approach for anybody who’s comfortable on Windows but needs access to a Linux environment for development, and wants that environment to be easy to reproduce.</p>
<h3 id="resources">Resources:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">Windows Subsystem for Linux Installation Guide for Windows 10</a></li>
<li><a href="https://code.visualstudio.com/blogs/2020/03/02/docker-in-wsl2">Using Docker in WSL 2</a></li>
<li><a href="https://docs.docker.com/docker-for-windows/wsl/">Docker Desktop WSL 2 backend</a></li>
<li><a href="https://code.visualstudio.com/docs/remote/remote-overview">VS Code Remote Development</a></li>
</ul>
Designing flexible CI pipelines with Jenkins and Dockerhttps://www.endpointdev.com/blog/2020/05/flexible-ci-pipelines-jenkins-docker/2020-05-25T00:00:00+00:00Will Plaut
<p><img src="/blog/2020/05/flexible-ci-pipelines-jenkins-docker/pipes.jpg" alt="Pipes"></p>
<p><a href="https://unsplash.com/photos/9AxFJaNySB8">Photo</a> by <a href="https://unsplash.com/@realaxer">Tian Kuan</a> on <a href="https://unsplash.com/">Unsplash</a></p>
<p>When deciding on how to implement continuous integration (CI) for a new project, you are presented with lots of choices. Whatever you end up choosing, your CI needs to work for you and your team. Keeping the CI process and its mechanisms clear and concise helps everyone working on the project. The setup we are currently employing, and what I am going to showcase here, has proven to be flexible and powerful. Specifically, I’m going to highlight some of the things Jenkins and Docker do that are really helpful.</p>
<h3 id="jenkins">Jenkins</h3>
<p><a href="https://www.jenkins.io/">Jenkins</a> provides us with all the CI functionality we need and it can be easily configured to connect to projects on GitHub and our internal GitLab. Jenkins has support for something it calls a multibranch pipeline. A Jenkins project follows a repo and builds any branch that has a <code>Jenkinsfile</code>. A <code>Jenkinsfile</code> configures an individual pipeline that Jenkins runs against a repo on a branch, tag or merge request (MR).</p>
<p>To keep it even simpler, we condense the steps that a <code>Jenkinsfile</code> runs into shell scripts that live in <code>/scripts/</code> at the root of the source repo to do things like test, build, or deploy; for example, <code>/scripts/test.sh</code>. If a team member wants to know how the tests are run, it is right there in that file to reference.</p>
<p>The <code>Jenkinsfile</code> can be written in a declarative syntax or in plain Groovy. We have landed on the scripted Groovy syntax for its more fine-grained control of Docker containers. Jenkins also provides several ways to inspect and debug the pipelines with things like “Replay” in its GUI and using <code>input('wait here')</code> in a pipeline to debug a troublesome step. The <code>input()</code> function is especially useful when paired with Docker. The function allows us to pause the job and go to the Jenkins server where we use <code>docker ps</code> to find the running container’s name. Then we use <code>docker exec -it {container name} bash</code> to debug inside of the container with all of the Jenkins environment variables loaded. This has proven to be a great way to figure out why something isn’t working in our test stages.</p>
<h3 id="docker">Docker</h3>
<p>We love using <a href="https://www.docker.com/">Docker</a> for our development and deployment for a variety of reasons. First, creating a Dockerfile for a project is essentially an exercise in figuring out how a project is built with a minimum of dependencies. Once a Docker container is built, the running container provides a great place to run tests as it is a clean checkout with little to no extra cruft.</p>
<p>Using our Jenkins pipeline, we can take builds triggered by tags and push an associated tagged Docker image up to our registry. With Docker’s layering, pushes are often the shortest stage of the Jenkins job. Deploying that tag is as simple as doing a <code>docker pull</code> on the target system. For the application deployment, we create a basic <code>docker-compose.yml</code> to start and serve the project from within the container, forwarding whatever ports we need on the local system.</p>
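<p>For illustration, such a deployment <code>docker-compose.yml</code> might look like the following sketch (the service name, image tag, and ports here are made up):</p>

```yaml
version: '3'
services:
  app:
    image: endpoint/vue-test:v1.0.0   # tag produced by the Jenkins Tag/Push stage
    restart: always
    ports:
      - '8080:80'   # forward whatever ports the app needs
```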
<h3 id="example-jenkinsfile">Example Jenkinsfile</h3>
<p>Let’s take a look at a basic scripted <code>Jenkinsfile</code> (scripted in Groovy) that utilizes a <code>Dockerfile</code> in the source repo to build, test, and deploy a project:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">node() {
    properties([gitLabConnection('gitlab-connect')])
    def vueImage
    def dockerTagName
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        vueImage = docker.build("endpoint/vue-test")
    }
    vueImage.inside('-u 0') {
        stage('Test') {
            sh './scripts/test.sh'
        }
    }
    stage('Tag/Push') {
        docker.withRegistry('https://registry.hub.docker.com', 'ep_dockerhub_creds') {
            if (env.TAG_NAME != null) {
                vueImage.push("${env.TAG_NAME}")
            } else {
                vueImage.push("${env.BRANCH_NAME}")
            }
        }
    }
}
</code></pre></div><p>The script’s first stage, <code>Checkout</code>, checks out the repo using our <code>gitlab-connect</code> credentials that are stored on the Jenkins server. It then moves to the <code>Build</code> stage where it builds the image using the <code>Dockerfile</code> in our repo and names it after the org/repo it will use on DockerHub. Then, inside of the running container we enter the <code>Test</code> stage where we run the repo script <code>./scripts/test.sh</code>. After the <code>.inside</code> code block is closed the running container is stopped and removed. Finally, we get to the <code>Tag/Push</code> stage where we push our Docker image up to DockerHub using another set of stored credentials. We tag it with either the <code>TAG_NAME</code> or the <code>BRANCH_NAME</code>.</p>
<p>This <code>Jenkinsfile</code> provides us with a solid base to expand on. During development as requirements change, it’s easy to modify and update the <code>Jenkinsfile</code>. We have the ability to run steps inside and outside of the Docker. Combined with bash scripts that live in the repo, we can do almost anything. Most of the job mechanics can be tuned, down to the specific status updates GitLab receives during a run.</p>
<p>Say we want to handle a push a bit differently if the branch is named <code>Master</code>, or we want to break the <code>Test</code> stage out into separate <code>Unit Tests</code> and <code>E2E Tests</code> stages. These things are easily changed in the <code>Jenkinsfile</code> and then run on Jenkins when pushed. There’s no need to merge to see the pipeline change: every branch/tag/MR has its own pipeline. Deploying the Docker image you just built is easy; just use your <code>TAG_NAME</code> or <code>BRANCH_NAME</code> with <code>docker pull endpoint/vue-test:{tag}</code>.</p>
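<p>As a sketch of that kind of change (the stage names and script paths here are hypothetical), the single <code>Test</code> stage could be split inside the same <code>.inside</code> block like so:</p>

```groovy
// Hypothetical split of the Test stage into two stages;
// the script names are assumptions, not part of the example repo.
vueImage.inside('-u 0') {
    stage('Unit Tests') {
        sh './scripts/unit-tests.sh'
    }
    stage('E2E Tests') {
        sh './scripts/e2e-tests.sh'
    }
}
```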
<h3 id="conclusion">Conclusion</h3>
<p>Although the above is just an example, the <code>Jenkinsfile</code>s we use in production are not far off from it in functionality, and the ideas remain the same.</p>
<p>Jenkins is not the easiest to configure as some of the required functionality comes from plugins, and getting the correct combination of plugins can be a challenge. That being said, the functionality it provides paired with Docker is amazing and definitely worth considering when setting up CI for a new project.</p>
GraphQL Server Librarieshttps://www.endpointdev.com/blog/2019/07/graphql-server-libraries/2019-07-12T00:00:00+00:00Zed Jensen
<p><img src="/blog/2019/07/graphql-server-libraries/image-0.jpg" alt="Eroded Icelandic mountain" /><br>Photo by <a href="https://unsplash.com/photos/t07FAEn9wAA">Jon Flobrant</a> on Unsplash</p>
<p>This post is a followup to my previous post, <a href="/blog/2019/05/graphql-an-alternative-to-rest/">GraphQL — An Alternative to REST</a>. Please check that out for an introduction to GraphQL and what makes it different from other API solutions. I’ve collected a list of some of the currently-maintained GraphQL libraries for a few different languages, along with some examples (most of which aren’t fully functional on their own; they’d need more configuration) so you can see what it might be like to use GraphQL in your project. I’ll be focusing on the ways each of these libraries implements GraphQL and what you’d need to do to start a project with each of them, so if you have questions about GraphQL itself, please check out my other blog post.</p>
<h3 id="apollo-server-javascripttypescript">Apollo Server (JavaScript/TypeScript)</h3>
<p><a href="https://www.apollographql.com/">Apollo GraphQL</a> has libraries for both a GraphQL server and client (which I’ll discuss later). <a href="https://www.apollographql.com/docs/apollo-server/">Apollo Server</a> can be used both as a standalone server as well as with libraries like <a href="https://expressjs.com/">Express</a>. Apollo Server is the server library I have the most experience with—I wrote a server last year using Express and Apollo Server, along with a client that used Apollo Client. I’m a fan of the flexibility of Apollo, but it takes more work to set up than some of the alternatives.</p>
<p>Setting up Apollo Server as a standalone can be done fairly simply following the directions on <a href="https://www.apollographql.com/docs/apollo-server/getting-started/">their website</a>. However, I’m going to go over the basics of integrating with Express. There are two main parts to writing a server with Apollo: your GraphQL schema and your resolvers. These stay more or less the same whether you’re using Apollo as a standalone server or combining it with Express. You do have to have a database set up separately; I’ll show examples with MongoDB, but you could easily swap it out with PostgreSQL or another database. I’ll show an example resolver along with its GraphQL schema for a blog post. The schema follows the GraphQL schema rules and might look like the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-js" data-lang="js"><span style="color:#080;font-weight:bold">const</span> typeDefs = [gql<span style="color:#d20;background-color:#fff0f0">`
</span><span style="color:#d20;background-color:#fff0f0"> type Post {
</span><span style="color:#d20;background-color:#fff0f0"> id: String!
</span><span style="color:#d20;background-color:#fff0f0"> body: String!
</span><span style="color:#d20;background-color:#fff0f0"> }
</span><span style="color:#d20;background-color:#fff0f0">
</span><span style="color:#d20;background-color:#fff0f0"> type Query {
</span><span style="color:#d20;background-color:#fff0f0"> post(id: String!): Post
</span><span style="color:#d20;background-color:#fff0f0"> }
</span><span style="color:#d20;background-color:#fff0f0">`</span>];
</code></pre></div><p>Now for the resolver. Resolvers are functions that take information from the query (like arguments) and return the relevant data, usually from a database. For our blog post, a resolver might look like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-js" data-lang="js"><span style="color:#080;font-weight:bold">const</span> resolvers = {
  Query: {
    post: (root, args, context, info) => {
      <span style="color:#080;font-weight:bold">return</span> Post.findById(args.id);
    }
  }
};
</code></pre></div><p>Simple! We just get the data from the database and return it—as long as the property names match those of our schema, Apollo will automatically format it according to the frontend’s request and return it to them.</p>
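<p>To see the whole thing end to end: with that schema and resolver, a client query selecting only the <code>body</code> field might look like the following (the id value here is made up). The server replies with a JSON object shaped the same way, e.g. <code>{"data": {"post": {"body": "..."}}}</code>.</p>

```graphql
query {
  post(id: "abc123") {
    body
  }
}
```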
<p>OK, next we create the server:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-js" data-lang="js"><span style="color:#080;font-weight:bold">import</span> express from <span style="color:#d20;background-color:#fff0f0">'express'</span>;
<span style="color:#080;font-weight:bold">import</span> { ApolloServer } from <span style="color:#d20;background-color:#fff0f0">'apollo-server-express'</span>;
<span style="color:#080;font-weight:bold">const</span> PORT = <span style="color:#00d;font-weight:bold">3000</span>;
<span style="color:#080;font-weight:bold">const</span> app = express();
<span style="color:#080;font-weight:bold">const</span> server = <span style="color:#080;font-weight:bold">new</span> ApolloServer({
  typeDefs,
  resolvers
});
server.applyMiddleware({ app });
app.listen(PORT, () => {
  console.log(
    <span style="color:#d20;background-color:#fff0f0">`Server running at http://localhost:</span><span style="color:#33b;background-color:#fff0f0">${</span>PORT<span style="color:#33b;background-color:#fff0f0">}</span><span style="color:#d20;background-color:#fff0f0">/graphql`</span>
  );
});
</code></pre></div><p>And that’s it! Note that these examples are missing a few things like imports, and we didn’t add authentication of any kind, but this is the general format for creating a server with Apollo.</p>
<h3 id="prisma">Prisma</h3>
<p>Prisma is a cool library, developed by the same people as Graph.cool, that does much of the work of enabling GraphQL access to your database for you.</p>
<p>Prisma offers configuration for existing databases, but unfortunately I had trouble getting it to work on my Ubuntu system—I ran into issues getting the Docker container to connect to my local Postgres and MongoDB databases. However, following the quick guide found <a href="https://www.prisma.io/docs/get-started/01-setting-up-prisma-new-database-JAVASCRIPT-a002/">here</a> on the Prisma website, I was able to get a GraphQL server up and running inside a Docker container with a new database. The process was simple:</p>
<p>First, you have to install the Prisma command line utility:</p>
<pre tabindex="0"><code>npm install -g prisma
</code></pre><p>You also need to have Docker installed. Documentation for Docker can be found <a href="https://docs.docker.com/get-started/">here</a>.</p>
<p>Next, you need to configure Prisma. Create a directory for your Prisma server, and create a new file named <code>docker-compose.yml</code>:</p>
<pre tabindex="0"><code>mkdir hello-world
cd hello-world
touch docker-compose.yml
</code></pre><p>Then, paste the following into it:</p>
<pre tabindex="0"><code>version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: mongodb://prisma:prisma@mongo
  mongo:
    image: mongo:3.6
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: prisma
      MONGO_INITDB_ROOT_PASSWORD: prisma
    ports:
      - '27017:27017'
    volumes:
      - mongo:/var/lib/mongo
volumes:
  mongo: ~
</code></pre><p>I used Mongo here, but Prisma’s site has guides for PostgreSQL and MySQL as well. It’s important to make sure now that you don’t have any conflicts with currently running databases—on my machine I already had a MongoDB server running on port 27017. I fixed this by just stopping my local MongoDB server, but I’m sure you could configure the Docker containers to work with different ports as well. Running Ubuntu, I just ran <code>sudo service mongodb stop</code> and then the Prisma Docker containers worked just fine. When I was done, I ran <code>sudo service mongodb start</code> to start it up again.</p>
<p>Next, you’ll start the Prisma containers and initialize the Prisma server configuration:</p>
<pre tabindex="0"><code>docker-compose up -d
prisma init --endpoint http://localhost:4466
</code></pre><p>The final step is to deploy the service:</p>
<pre tabindex="0"><code>prisma deploy
</code></pre><p>If all goes well, you’ll see a message that includes a URL to the Prisma Admin, which is a browser tool to interact with your GraphQL endpoints. I used it for a little while when I was testing Prisma out, and it seems to work well and is easy to use.</p>
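<p>The deploy step reads the <code>datamodel.prisma</code> file that <code>prisma init</code> generated. A minimal one might look like this sketch (Prisma 1 datamodel syntax; the <code>Post</code> type and its fields are made up for illustration):</p>

```graphql
type Post {
  id: ID! @id
  title: String!
  body: String
}
```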
<p>All in all, Prisma seems like a great way to start if you don’t want to handle the messy details of setup. However, I did have issues getting it to play nice with my already-existing databases (including both PostgreSQL and MongoDB). That said, it is still relatively new, so I would expect support to get better over time.</p>
<h3 id="graphene-python">Graphene (Python)</h3>
<p>Graphene is a GraphQL framework for Python. It has integrations for a few different server frameworks (a list can be found <a href="https://github.com/graphql-python/graphene">here</a>), but I’ll show examples from <code>graphene-django</code>, since Django is fairly common and something that we use fairly often at End Point.</p>
<p>Because you’re also setting up a Django project, the tutorial for graphene-django is a little more involved, so I’ll just share the relevant GraphQL sections so you can compare to the other libraries in this post. The most important part, the schema, is defined in Python with a similar format to Django models:</p>
<pre tabindex="0"><code>import graphene
from graphene_django.types import DjangoObjectType
from app.models import Category, Ingredient

class CategoryType(DjangoObjectType):
    class Meta:
        model = Category

class IngredientType(DjangoObjectType):
    class Meta:
        model = Ingredient

class Query(object):
    all_categories = graphene.List(CategoryType)
    all_ingredients = graphene.List(IngredientType)

    def resolve_all_categories(self, info, **kwargs):
        return Category.objects.all()

    def resolve_all_ingredients(self, info, **kwargs):
        # We can easily optimize query count in the resolve method
        return Ingredient.objects.select_related('category').all()
</code></pre><p>As you can see, the format for defining your GraphQL schema is quite different from some other libraries, but you have the advantage of it looking similar to Django’s model definitions. You’ll also need a higher-level Query definition:</p>
<pre tabindex="0"><code>import graphene
import cookbook.ingredients.schema

class Query(cookbook.ingredients.schema.Query, graphene.ObjectType):
    # This class will inherit from multiple Queries
    # as we begin to add more apps to our project
    pass

schema = graphene.Schema(query=Query)
</code></pre><p>Now that we have a schema defined, we need to add a few things to <code>settings.py</code>:</p>
<pre tabindex="0"><code>INSTALLED_APPS = [
    ...
    # This will also make the `graphql_schema` management command available
    'graphene_django',
]

GRAPHENE = {
    'SCHEMA': 'cookbook.schema.schema'
}
</code></pre><p>The last piece needed to use your GraphQL schema is in <code>urls.py</code>:</p>
<pre tabindex="0"><code>from graphene_django.views import GraphQLView

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^graphql$', GraphQLView.as_view(graphiql=True)),
]
</code></pre><p>And finally we run the server:</p>
<pre tabindex="0"><code>$ python manage.py runserver
</code></pre><p>Now you should be able to use your GraphQL schema at http://localhost:8000/graphql just like with any other GraphQL server.</p>
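<p>For example, assuming the <code>Ingredient</code> model has <code>name</code> and <code>category</code> fields, a query against this schema through the GraphiQL interface might look like the following (note that graphene camel-cases the <code>all_ingredients</code> field name):</p>

```graphql
query {
  allIngredients {
    name
    category {
      name
    }
  }
}
```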
<p>Graphene for Django seems like a good solution in that it uses a similar format to other aspects of Django, like the model definitions. However, its format (especially for schema definition) looks rather different from the GraphQL standard used by most other libraries, which could make it more work to keep your frontend and backend in sync.</p>
<h3 id="graphcool">Graph.cool</h3>
<p>I won’t discuss Graph.cool in detail here, because I went over it in my previous blog post. However, it still merits mention here as an option for your GraphQL server. Essentially, Graph.cool lets you define a GraphQL schema and then handles the work of setting up a database and even hosting for you. If you just want to get a basic GraphQL server set up for testing, or if you don’t need too many features beyond data storage and retrieval, Graph.cool is a great choice.</p>
<h3 id="additional-links">Additional links</h3>
<p>For server libraries in other languages, these seem like good options:</p>
<ul>
<li><a href="https://graphql-ruby.org/">GraphQL Ruby</a></li>
<li><a href="https://www.graphql-java.com/">GraphQL Java</a></li>
<li><a href="https://github.com/graph-gophers/graphql-go">graphql-go</a></li>
</ul>
<p>Thanks for reading! Keep an eye out next week for a second post which will cover GraphQL client libraries.</p>
Deploying production Machine Learning pipelines to Kubernetes with Argohttps://www.endpointdev.com/blog/2019/06/deploying-production-pipelines-to-kubernetes/2019-06-28T00:00:00+00:00Kamil Ciemniewski
<p><img src="/blog/2019/06/deploying-production-pipelines-to-kubernetes/image-0.jpg" alt="Rube Goldberg machine" /><br><a href="https://commons.wikimedia.org/wiki/File:Rube_Goldberg_Machine_(278696130).jpg">Image by Wikimedia Commons</a></p>
<p>In some sense, most machine learning projects look exactly the same. There are 4 stages to be concerned with no matter what the project is:</p>
<ol>
<li>Sourcing the data</li>
<li>Transforming it</li>
<li>Building the model</li>
<li>Deploying it</li>
</ol>
<p>It’s been said that #1 and #2 take up most of an ML engineer’s time. That’s a way of emphasizing how little time the most fun part, #3, sometimes gets.</p>
<p>In the real world, though, #4 can over time take almost as much time as the previous three combined.</p>
<p>Deployed models sometimes need to be rebuilt. They consume data that constantly needs to go through steps #1 and #2. It certainly isn’t always like the classroom, where datasets fit perfectly in memory and model training takes at most a couple of hours on an old laptop.</p>
<p>Working with gigantic datasets isn’t the only problem. Data pipelines can take long hours to complete. What if some part of your infrastructure has an unexpected downtime? Do you just start it all over again from the very beginning?</p>
<p>Many solutions of course exist. With this article, I’d like to go over this problem space and present an approach that feels really nice and clean.</p>
<h3 id="project-description">Project description</h3>
<p>End Point Corporation was founded in 1995. That’s 24 years! About 9 years later, <a href="/blog/2004/10/red-hat-enterprise-linux-3-update-3/">the oldest article</a> on the company’s blog was published. Since that time, a staggering 1,435 unique articles have been published. That’s a lot of words! This is something we can definitely use in a smart way.</p>
<p>For the purpose of having fun with building a production-grade data pipeline, let’s imagine the following project:</p>
<ul>
<li>A <a href="https://cs.stanford.edu/~quocle/paragraph_vector.pdf">doc2vec</a> model trained on the corpus of End Point’s blog articles</li>
<li>Use of the paragraph vectors for each article to find the 10 other, most similar articles</li>
</ul>
<p>I blogged before about using <a href="/blog/2018/07/recommender-mxnet/">matrix factorization</a> as a simple <a href="https://en.wikipedia.org/wiki/Recommender_system#Collaborative_filtering">collaborative filtering</a> style of recommender system. We can think of today’s doc2vec-based model as an example of <a href="https://en.wikipedia.org/wiki/Recommender_system#Content-based_filtering">content-based filtering</a>. The business value would be potentially increased blog traffic from users staying longer on the website.</p>
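<p>The similarity lookup itself is easy to sketch: given paragraph vectors keyed by article, the most similar articles fall out of a cosine-similarity ranking. Here’s a minimal, self-contained sketch; the toy three-dimensional vectors stand in for what a trained doc2vec model would actually produce:</p>

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(doc_id, vectors, n=10):
    # Rank every other document by cosine similarity to the given one.
    target = vectors[doc_id]
    scores = [(other, cosine(target, vec))
              for other, vec in vectors.items() if other != doc_id]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:n]

# Toy paragraph vectors; a real model would emit hundreds of dimensions.
vectors = {
    "post-a": [1.0, 0.0, 0.5],
    "post-b": [0.9, 0.1, 0.4],
    "post-c": [-1.0, 0.2, 0.0],
}
print(most_similar("post-a", vectors, n=2))
```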
<h3 id="scalable-pipelines">Scalable pipelines</h3>
<p>The data pipeline problem has certainly found some really great solutions. The <a href="http://hadoop.apache.org">Hadoop</a> project brought us HDFS, a distributed file system for huge data artifacts. Its MapReduce component plays a vital role in distributed data processing.</p>
<p>Then the fantastic <a href="https://spark.apache.org">Spark</a> project came along. Its architecture keeps data in memory by default, with explicit caching of data to disk. The project claims to run workloads 100 times faster than Hadoop.</p>
<p>Both projects though require the developer to use a very specific set of libraries. It’s not easy, for example, to distribute <a href="https://spacy.io">spaCy</a> training and inference on Spark.</p>
<h3 id="containers">Containers</h3>
<p>On the other side of the spectrum, there’s <a href="https://dask.org">Dask</a>. It’s a Python package that wraps <a href="https://www.numpy.org">Numpy</a>, <a href="https://pandas.pydata.org">Pandas</a> and <a href="https://scikit-learn.org/stable/">Scikit-Learn</a>. It enables developers to load huge piles of data, just as they would with the smaller datasets. The data is partitioned and distributed among the cluster nodes. It can work with groups of processes as well as clusters of containers. The APIs of the above-mentioned projects are (mostly) preserved while all the processing is suddenly distributed.</p>
<p>Some teams like to use Dask along with <a href="https://luigi.readthedocs.io/en/stable/">Luigi</a> and build production pipelines around <a href="https://www.docker.com">Docker</a> or <a href="https://kubernetes.io">Kubernetes</a>.</p>
<p>In this article, I’d like to present another Dask-friendly solution: Kubernetes-native workflows using <a href="https://argoproj.github.io">Argo</a>. What’s great about it compared to Luigi is that you don’t even need to care about having a certain version of Python and Luigi installed to orchestrate the pipeline. All you need is a Kubernetes cluster with Argo installed on it.</p>
<h3 id="hands-down-work-on-the-project">Hands down work on the project</h3>
<p>The first thing to do when developing this project is to get access to the Kubernetes cluster. For the development, you can set up a one-node cluster using either one of:</p>
<ul>
<li><a href="https://microk8s.io">Microk8s</a></li>
<li><a href="https://github.com/kubernetes/minikube">Minikube</a></li>
</ul>
<p>I love them both. The first is developed by Canonical, the second by the Kubernetes team itself.</p>
<p>This isn’t going to be a step-by-step tutorial on using Kubernetes. I encourage you to read the documentation or seek out a good online course if you’re new to it. Read on even in that case, though; none of this is overly complex.</p>
<p>Next, you’ll need the Argo Workflows. The installation is really easy. The full yet simple documentation can be found <a href="https://argoproj.github.io/docs/argo/demo.html">here</a>.</p>
<h4 id="the-project-structure">The project structure</h4>
<p>Here’s what the project looks like in the end:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">.
├── Makefile
├── notebooks
│   └── scratch.ipynb
├── notebooks.yml
├── pipeline.yaml
└── tasks
    ├── base
    │   ├── Dockerfile
    │   └── requirements.txt
    ├── build_model
    │   ├── Dockerfile
    │   └── run.py
    ├── clone_repo
    │   ├── Dockerfile
    │   └── run.sh
    ├── infer
    │   ├── Dockerfile
    │   └── run.py
    ├── notebooks
    │   └── Dockerfile
    └── preprocess
        ├── Dockerfile
        └── run.py
</code></pre></div><p>The main parts are as follows:</p>
<ul>
<li><code>Makefile</code> provides easy-to-use helpers for building images, pushing them to the Docker registry, and running the Argo workflow</li>
<li><code>notebooks.yml</code> defines a Kubernetes service and deployment for exploratory <a href="https://github.com/jupyterlab/jupyterlab">Jupyter Lab</a> instance</li>
<li><code>notebooks</code> contains individual Jupyter notebooks</li>
<li><code>pipeline.yaml</code> defines our Machine Learning pipeline in the form of the Argo workflow</li>
<li><code>tasks</code> contains workflow steps as containers along with their Dockerfiles</li>
<li><code>tasks/base</code> defines the base Docker image for other tasks</li>
<li><code>tasks/**/run.(py|sh)</code> is a single entry point for a given pipeline step</li>
</ul>
<p>The idea is to minimize the boilerplate while retaining the features offered e.g. by Luigi.</p>
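<p>To make the workflow side concrete, a pared-down <code>pipeline.yaml</code> running two of the steps in sequence might look like this sketch (the step ordering and the lack of parameters are simplifications; the image names follow the project’s <code>blog_pipeline_*</code> naming and the <code>base:5000</code> registry used by the Makefile below):</p>

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: blog-pipeline-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        # Each inner list is a parallel group; these two run sequentially.
        - - name: clone-repo
            template: clone-repo
        - - name: preprocess
            template: preprocess
    - name: clone-repo
      container:
        image: base:5000/blog_pipeline_clone_repo:latest
    - name: preprocess
      container:
        image: base:5000/blog_pipeline_preprocess:latest
```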
<h4 id="makefile">Makefile</h4>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-makefile" data-lang="makefile"><span style="color:#369">SHELL</span> := /bin/bash
<span style="color:#369">VERSION</span>?=latest
<span style="color:#369">TASK_IMAGES</span>:=<span style="color:#080;font-weight:bold">$(</span>shell find tasks -name Dockerfile -printf <span style="color:#d20;background-color:#fff0f0">'%h '</span><span style="color:#080;font-weight:bold">)</span>
<span style="color:#369">REGISTRY</span>=base:5000
<span style="color:#06b;font-weight:bold">tasks/%</span>: FORCE
<span style="color:#038">set</span> -e ;<span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> docker build -t blog_pipeline_<span style="color:#080;font-weight:bold">$(</span>@F<span style="color:#080;font-weight:bold">)</span>:<span style="color:#080;font-weight:bold">$(</span>VERSION<span style="color:#080;font-weight:bold">)</span> <span style="color:#369">$@</span> ;<span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> docker tag blog_pipeline_<span style="color:#080;font-weight:bold">$(</span>@F<span style="color:#080;font-weight:bold">)</span>:<span style="color:#080;font-weight:bold">$(</span>VERSION<span style="color:#080;font-weight:bold">)</span> <span style="color:#080;font-weight:bold">$(</span>REGISTRY<span style="color:#080;font-weight:bold">)</span>/blog_pipeline_<span style="color:#080;font-weight:bold">$(</span>@F<span style="color:#080;font-weight:bold">)</span>:<span style="color:#080;font-weight:bold">$(</span>VERSION<span style="color:#080;font-weight:bold">)</span> ;<span style="color:#04d;background-color:#fff0f0">\
</span><span style="color:#04d;background-color:#fff0f0"></span> docker push <span style="color:#080;font-weight:bold">$(</span>REGISTRY<span style="color:#080;font-weight:bold">)</span>/blog_pipeline_<span style="color:#080;font-weight:bold">$(</span>@F<span style="color:#080;font-weight:bold">)</span>:<span style="color:#080;font-weight:bold">$(</span>VERSION<span style="color:#080;font-weight:bold">)</span>
<span style="color:#06b;font-weight:bold">images</span>: <span style="color:#080;font-weight:bold">$(</span><span style="color:#369">TASK_IMAGES</span><span style="color:#080;font-weight:bold">)</span>
<span style="color:#06b;font-weight:bold">run</span>: images
argo submit pipeline.yaml --watch
<span style="color:#06b;font-weight:bold">start_notebooks</span>:
kubectl apply -f notebooks.yml
<span style="color:#06b;font-weight:bold">stop_notebooks</span>:
kubectl delete deployment jupyter-notebook
<span style="color:#06b;font-weight:bold">FORCE</span>: ;
</code></pre></div><p>When you run this Makefile with <code>make run</code>, it will first resolve the <code>images</code> dependency, which in turn resolves all of the <code>tasks/**/Dockerfile</code> targets. Notice how the <code>TASK_IMAGES</code> variable is constructed: it uses make’s <code>shell</code> function to run the Unix <code>find</code> command, locating the subdirectories of <code>tasks</code> that contain a Dockerfile. Here’s what the output would be if you were to run it directly:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ find tasks -name Dockerfile -printf <span style="color:#d20;background-color:#fff0f0">'%h '</span>
tasks/notebooks tasks/base tasks/preprocess tasks/infer tasks/build_model tasks/clone_repo
</code></pre></div><h4 id="setting-up-jupyter-notebooks-as-a-scratch-pad-and-for-eda">Setting up Jupyter Notebooks as a scratch pad and for EDA</h4>
<p>Let’s start off by defining our base Docker image:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-dockerfile" data-lang="dockerfile"><span style="color:#080;font-weight:bold">FROM</span><span style="color:#d20;background-color:#fff0f0"> python:3.7</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">COPY</span> requirements.txt /requirements.txt<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> pip install -r /requirements.txt<span style="color:#a61717;background-color:#e3d2d2">
</span></code></pre></div><p>Next is the Dockerfile that extends it and adds Jupyter Lab:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-dockerfile" data-lang="dockerfile"><span style="color:#080;font-weight:bold">FROM</span><span style="color:#d20;background-color:#fff0f0"> endpoint-blog-pipeline/base:latest</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> pip install jupyterlab<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> mkdir ~/.jupyter<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> <span style="color:#038">echo</span> <span style="color:#d20;background-color:#fff0f0">"c.NotebookApp.token = ''"</span> >> ~/.jupyter/jupyter_notebook_config.py<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> <span style="color:#038">echo</span> <span style="color:#d20;background-color:#fff0f0">"c.NotebookApp.password = ''"</span> >> ~/.jupyter/jupyter_notebook_config.py<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">RUN</span> mkdir /notebooks<span style="color:#a61717;background-color:#e3d2d2">
</span><span style="color:#a61717;background-color:#e3d2d2"></span><span style="color:#080;font-weight:bold">WORKDIR</span><span style="color:#d20;background-color:#fff0f0"> /notebooks</span><span style="color:#a61717;background-color:#e3d2d2">
</span></code></pre></div><p>The last step is to add the Kubernetes service and deployment definition in <code>notebooks.yml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>apps/v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Deployment<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>jupyter-notebook<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>jupyter-notebook<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">replicas</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">1</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">matchLabels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>jupyter-notebook<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">labels</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>jupyter-notebook<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">containers</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>minimal-notebook<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>base:5000/blog_pipeline_notebooks<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">containerPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">8888</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"/usr/local/bin/jupyter"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"lab"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"--allow-root"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"--port"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"8888"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"--ip"</span>,<span style="color:#bbb"> </span><span style="color:#d20;background-color:#fff0f0">"0.0.0.0"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">---</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>Service<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>v1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">metadata</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>jupyter-notebook<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">spec</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>NodePort<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">selector</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">app</span>:<span style="color:#bbb"> </span>jupyter-notebook<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">ports</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">protocol</span>:<span style="color:#bbb"> </span>TCP<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">nodePort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">30040</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">port</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">8888</span><span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">targetPort</span>:<span style="color:#bbb"> </span><span style="color:#00d;font-weight:bold">8888</span><span style="color:#bbb">
</span></code></pre></div><p>This can be run using our Makefile with <code>make start_notebooks</code> or directly with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ kubectl apply -f notebooks.yml
</code></pre></div><h4 id="exploration">Exploration</h4>
<p>The <a href="https://github.com/kamilc/endpoint-blog-nlp/blob/master/notebooks/scratch.ipynb">notebook itself</a> feels more like a scratch pad than a formal exploratory data analysis. It’s very informal and doesn’t include much exploration or visualization; in more real-world code you would likely not omit those.</p>
<p>I used it to ensure the model would work at all, and was then able to grab portions of the code and paste them directly into the step definitions.</p>
<h4 id="implementation">Implementation</h4>
<h5 id="step-1-source-blog-articles">Step 1: Source blog articles</h5>
<p>The blog’s articles are stored on <a href="https://github.com/EndPointCorp/end-point-blog">GitHub</a> in Markdown files.</p>
<p>Our first pipeline task will need to either clone the repo or pull from it if it’s present in the pipeline’s shared volume.</p>
<p>We’ll use the Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath">hostPath</a> as the cross-step volume. What’s nice about it is that it’s easy to peek into the volume during development to see if the data artifacts are being generated correctly.</p>
<p>In our example here, I’m hardcoding the path on my local system:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># ...</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">volumes</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>endpoint-blog-src<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">hostPath</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">path</span>:<span style="color:#bbb"> </span>/home/kamil/data/endpoint-blog-src<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">type</span>:<span style="color:#bbb"> </span>Directory<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#888"># ...</span><span style="color:#bbb">
</span></code></pre></div><p>This is one of the downsides of the <code>hostPath</code>—it only accepts absolute paths. This will do just fine for now though.</p>
<p>In the <code>pipeline.yml</code> we define the task container with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># ...</span><span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#b06;font-weight:bold">templates</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>clone-repo<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">container</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">image</span>:<span style="color:#bbb"> </span>base:5000/blog_pipeline_clone_repo<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">command</span>:<span style="color:#bbb"> </span>[bash]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">args</span>:<span style="color:#bbb"> </span>[<span style="color:#d20;background-color:#fff0f0">"/run.sh"</span>]<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">volumeMounts</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">mountPath</span>:<span style="color:#bbb"> </span>/data<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>endpoint-blog-src<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#888"># ...</span><span style="color:#bbb">
</span></code></pre></div><p>The full pipeline forms a tree, which Argo conveniently expresses as a directed acyclic graph. Here’s the definition of the whole pipeline (some of these steps haven’t been shown yet):</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml"><span style="color:#888"># ...</span><span style="color:#bbb">
</span><span style="color:#bbb"></span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>article-vectors<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">dag</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">tasks</span>:<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>src<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb"> </span>clone-repo<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>dataframe<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb"> </span>preprocess<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">dependencies</span>:<span style="color:#bbb"> </span>[src]<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>model<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb"> </span>build-model<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">dependencies</span>:<span style="color:#bbb"> </span>[dataframe]<span style="color:#bbb">
</span><span style="color:#bbb"> </span>- <span style="color:#b06;font-weight:bold">name</span>:<span style="color:#bbb"> </span>infer<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">template</span>:<span style="color:#bbb"> </span>infer<span style="color:#bbb">
</span><span style="color:#bbb"> </span><span style="color:#b06;font-weight:bold">dependencies</span>:<span style="color:#bbb"> </span>[model]<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#888"># ...</span><span style="color:#bbb">
</span></code></pre></div><p>Notice how the <code>dependencies</code> field makes it easy to tell Argo in what order to execute the tasks. Argo steps can also define inputs and outputs—just like Luigi. For this simple example, I decided to omit them and instead rely on the convention that each step expects its data artifacts at a certain location in the mounted volume. If you’re curious about other Argo features, <a href="https://argoproj.github.io/docs/argo/examples/readme.html#parameters">here</a> is its documentation.</p>
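<p>To make the ordering concrete, here’s a small Python sketch (illustrative only, not part of the pipeline) that derives a valid execution order from the <code>dependencies</code> lists above, the same constraint Argo enforces when scheduling the DAG:</p>

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# The dependencies from the article-vectors DAG above,
# mapping each task to the tasks it depends on.
tasks = {
    "src": [],
    "dataframe": ["src"],
    "model": ["dataframe"],
    "infer": ["model"],
}

def execution_order(tasks):
    """Return one valid execution order respecting the dependencies."""
    return list(TopologicalSorter(tasks).static_order())
```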
<p>The entry point script for the task is pretty simple:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#c00;font-weight:bold">#!/bin/bash
</span><span style="color:#c00;font-weight:bold"></span>
<span style="color:#038">cd</span> /data
<span style="color:#080;font-weight:bold">if</span> [ -d ./blog ]
<span style="color:#080;font-weight:bold">then</span>
  <span style="color:#038">cd</span> blog
  git pull origin master
<span style="color:#080;font-weight:bold">else</span>
  git clone https://github.com/EndPointCorp/end-point-blog.git blog
<span style="color:#080;font-weight:bold">fi</span>
</code></pre></div><h5 id="step-2-data-wrangling">Step 2: Data wrangling</h5>
<p>At this point, we have the blog articles as Markdown source files. To run them through any kind of machine learning model, we need to load them into a data frame. We’ll also need to clean the text a bit. Here is the reasoning behind the cleanup routine:</p>
<ul>
<li>I want the relations between articles to ignore code snippets: articles should <strong>not</strong> be grouped by programming language or library just because of the keywords they contain</li>
<li>I also want the tag and author metadata omitted, as I don’t want to see e.g. only my own articles listed as similar to my other ones</li>
</ul>
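<p>A minimal sketch of such a cleanup routine, assuming fenced code blocks and simple <code>key: value</code> metadata lines at the top of each file (the actual <code>run.py</code> linked below is more thorough):</p>

```python
import re

def clean_article(markdown_text):
    """Strip fenced code blocks and leading metadata lines so that
    neither code keywords nor tags/authors influence similarity."""
    # Drop fenced code blocks (``` ... ```).
    text = re.sub(r"```.*?```", " ", markdown_text, flags=re.DOTALL)
    # Drop metadata-style "key: value" lines at the top of the file.
    lines = text.splitlines()
    while lines and re.match(r"^\w+:\s", lines[0]):
        lines.pop(0)
    return "\n".join(lines)
```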
<p>The full source for the <code>run.py</code> of the “preprocess” task can be viewed <a href="https://github.com/kamilc/endpoint-blog-nlp/blob/master/tasks/preprocess/run.py">here</a>.</p>
<p>Notice that unlike make or Luigi, Argo workflows run the same task fully again even when the step’s artifact has already been created. I <strong>like</strong> this flexibility—after all, it’s extremely easy to skip the processing in the Python or shell script if the artifact already exists.</p>
<p>At the end of this step, the data frame is written as an <a href="https://parquet.apache.org">Apache Parquet</a> file.</p>
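<p>The sourcing part of the step could be sketched like this (a hypothetical simplification, assuming the <code>.html.md</code> naming seen in the results below; the real <code>run.py</code> also applies the cleanup discussed above):</p>

```python
import pandas as pd
from pathlib import Path

def articles_to_dataframe(blog_root, clean=lambda text: text):
    """Collect every Markdown article under blog_root into a data frame
    holding each file's relative path and its (cleaned) text."""
    rows = [
        {"file": str(path.relative_to(blog_root)), "text": clean(path.read_text())}
        for path in sorted(Path(blog_root).rglob("*.html.md"))
    ]
    return pd.DataFrame(rows)

# The pipeline step would then persist it for the next task, e.g.:
# articles_to_dataframe("/data/blog").to_parquet("/data/articles.parquet")
```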
<h5 id="step-3-building-the-model">Step 3: Building the model</h5>
<p>The model from the paper mentioned earlier has already been implemented in a variety of other projects. There are implementations for each major deep learning framework on GitHub. There’s also a pretty good one included in <a href="https://radimrehurek.com/gensim/index.html">Gensim</a>. Its documentation can be found <a href="https://radimrehurek.com/gensim/models/doc2vec.html">here</a>.</p>
<p>The <a href="https://github.com/kamilc/endpoint-blog-nlp/blob/master/tasks/build_model/run.py">run.py</a> is pretty short and straightforward as well, which is one of the goals for the pipeline. In the end, it writes the trained model into the shared volume too.</p>
<p>Notice that re-running the pipeline with the model already stored will not trigger training again. This is what we want: imagine a new article being pushed to the repository. It’s very unlikely that retraining with it would change the model in any significant way, but we still need to predict similar documents for it. The model-building step short-circuits with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python"><span style="color:#080;font-weight:bold">if</span> __name__ == <span style="color:#d20;background-color:#fff0f0">'__main__'</span>:
<span style="color:#080;font-weight:bold">if</span> os.path.isfile(<span style="color:#d20;background-color:#fff0f0">'/data/articles.model'</span>):
<span style="color:#038">print</span>(<span style="color:#d20;background-color:#fff0f0">"Skipping as the model file already exists"</span>)
<span style="color:#080;font-weight:bold">else</span>:
build_model()
</code></pre></div><h5 id="step-4-predict-similar-articles">Step 4: Predict similar articles</h5>
<p>The listing of the <a href="https://github.com/kamilc/endpoint-blog-nlp/blob/master/tasks/infer/run.py">run.py</a> isn’t overly long:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python"><span style="color:#080;font-weight:bold">import</span> <span style="color:#b06;font-weight:bold">pandas</span> <span style="color:#080;font-weight:bold">as</span> <span style="color:#b06;font-weight:bold">pd</span>
<span style="color:#080;font-weight:bold">from</span> <span style="color:#b06;font-weight:bold">gensim.models.doc2vec</span> <span style="color:#080;font-weight:bold">import</span> Doc2Vec
<span style="color:#080;font-weight:bold">import</span> <span style="color:#b06;font-weight:bold">yaml</span>
<span style="color:#080;font-weight:bold">from</span> <span style="color:#b06;font-weight:bold">pathlib</span> <span style="color:#080;font-weight:bold">import</span> Path
<span style="color:#080;font-weight:bold">import</span> <span style="color:#b06;font-weight:bold">os</span>
<span style="color:#080;font-weight:bold">def</span> <span style="color:#06b;font-weight:bold">write_similar_for</span>(path, model):
similar_paths = model.docvecs.most_similar(path)
yaml_path = (Path(<span style="color:#d20;background-color:#fff0f0">'/data/blog/'</span>) / path).parent / <span style="color:#d20;background-color:#fff0f0">'similar.yaml'</span>
<span style="color:#080;font-weight:bold">with</span> <span style="color:#038">open</span>(yaml_path, <span style="color:#d20;background-color:#fff0f0">"w"</span>) <span style="color:#080;font-weight:bold">as</span> file:
file.write(yaml.dump([p <span style="color:#080;font-weight:bold">for</span> p, _ <span style="color:#080">in</span> similar_paths]))
<span style="color:#038">print</span>(<span style="color:#d20;background-color:#fff0f0">f</span><span style="color:#d20;background-color:#fff0f0">"Wrote similar paths to </span><span style="color:#33b;background-color:#fff0f0">{</span>yaml_path<span style="color:#33b;background-color:#fff0f0">}</span><span style="color:#d20;background-color:#fff0f0">"</span>)
<span style="color:#080;font-weight:bold">def</span> <span style="color:#06b;font-weight:bold">infer_similar</span>():
articles = pd.read_parquet(<span style="color:#d20;background-color:#fff0f0">'/data/articles.parquet'</span>)
model = Doc2Vec.load(<span style="color:#d20;background-color:#fff0f0">'/data/articles.model'</span>)
<span style="color:#080;font-weight:bold">for</span> tag <span style="color:#080">in</span> articles[<span style="color:#d20;background-color:#fff0f0">'file'</span>].tolist():
write_similar_for(tag, model)
<span style="color:#080;font-weight:bold">if</span> __name__ == <span style="color:#d20;background-color:#fff0f0">'__main__'</span>:
infer_similar()
</code></pre></div><p>The idea is to first load the saved Gensim model and the data frame with the articles, then for each article use the model to get the 10 most similar ones.</p>
<p>As the step’s output, the listing of similar articles is placed in a <code>similar.yaml</code> file in each article’s subdirectory.</p>
<p>The blog’s Markdown → HTML compiler could then use this file to e.g. inject a “You might find these articles interesting too” section.</p>
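<p>Since <code>similar.yaml</code> is just a flat list of paths, one per line, such an injection step could be sketched like this (hypothetical; the blog’s actual compiler is not shown here):</p>

```python
def render_similar_section(similar_yaml_text):
    """Turn a similar.yaml listing into an HTML fragment that a
    Markdown -> HTML compiler could append to an article."""
    # Each line looks like "- 2018/10/10/some-article.html.md".
    paths = [line[2:].strip() for line in similar_yaml_text.splitlines()
             if line.startswith("- ")]
    items = "\n".join(
        f'<li><a href="/blog/{p.removesuffix(".md")}">{p}</a></li>'
        for p in paths
    )
    return ("<h3>You might find these articles interesting too</h3>\n"
            f"<ul>\n{items}\n</ul>")
```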
<h4 id="results">Results</h4>
<p>The scratch notebook already includes example results from this doc2vec model. For example:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">model.docvecs.most_similar(<span style="color:#d20;background-color:#fff0f0">'2019/01/09/liquid-galaxy-at-instituto-moreira-salles.html.md'</span>)
</code></pre></div><p>Giving the output of:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">[(<span style="color:#d20;background-color:#fff0f0">'2016/04/22/liquid-galaxy-for-real-estate.html.md'</span>, <span style="color:#00d;font-weight:bold">0.8872901201248169</span>),
(<span style="color:#d20;background-color:#fff0f0">'2017/07/03/liquid-galaxy-at-2017-boma.html.md'</span>, <span style="color:#00d;font-weight:bold">0.8766101598739624</span>),
(<span style="color:#d20;background-color:#fff0f0">'2017/01/25/smartracs-liquid-galaxy-at-national.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8722846508026123</span>),
(<span style="color:#d20;background-color:#fff0f0">'2016/01/04/liquid-galaxy-at-new-york-tech-meetup_4.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8693454265594482</span>),
(<span style="color:#d20;background-color:#fff0f0">'2017/06/16/successful-first-geoint-symposium-for.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8679709434509277</span>),
(<span style="color:#d20;background-color:#fff0f0">'2014/08/22/liquid-galaxy-for-daniel-island-school.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8659971356391907</span>),
(<span style="color:#d20;background-color:#fff0f0">'2016/07/21/liquid-galaxy-featured-on-reef-builders.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8644022941589355</span>),
(<span style="color:#d20;background-color:#fff0f0">'2017/11/17/president-of-the-un-general-assembly.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8620222806930542</span>),
(<span style="color:#d20;background-color:#fff0f0">'2016/04/27/we-are-bigger-than-vr-gear-liquid-galaxy.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8613147139549255</span>),
(<span style="color:#d20;background-color:#fff0f0">'2015/11/04/end-pointers-favorite-liquid-galaxy.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8601428270339966</span>)]
</code></pre></div><p>Or the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">model.docvecs.most_similar(<span style="color:#d20;background-color:#fff0f0">'2019/01/08/speech-recognition-with-tensorflow.html.md'</span>)
</code></pre></div><p>Giving:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">[(<span style="color:#d20;background-color:#fff0f0">'2019/05/01/facial-recognition-amazon-deeplens.html.md'</span>, <span style="color:#00d;font-weight:bold">0.8850516080856323</span>),
(<span style="color:#d20;background-color:#fff0f0">'2017/05/30/recognizing-handwritten-digits-quick.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8535605072975159</span>),
(<span style="color:#d20;background-color:#fff0f0">'2018/10/10/image-recognition-tools.html.md'</span>, <span style="color:#00d;font-weight:bold">0.8495659232139587</span>),
(<span style="color:#d20;background-color:#fff0f0">'2018/07/09/training-tesseract-models-from-scratch.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8377258777618408</span>),
(<span style="color:#d20;background-color:#fff0f0">'2015/12/18/ros-has-become-pivotal-piece-of.html.md'</span>, <span style="color:#00d;font-weight:bold">0.8344655632972717</span>),
(<span style="color:#d20;background-color:#fff0f0">'2013/03/07/streaming-live-with-red5-media.html.md'</span>, <span style="color:#00d;font-weight:bold">0.8181146383285522</span>),
(<span style="color:#d20;background-color:#fff0f0">'2012/04/27/streaming-live-with-red5-media-server.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8142604827880859</span>),
(<span style="color:#d20;background-color:#fff0f0">'2013/03/15/generating-pdf-documents-in-browser.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.7829260230064392</span>),
(<span style="color:#d20;background-color:#fff0f0">'2016/05/12/sketchfab-on-liquid-galaxy.html.md'</span>, <span style="color:#00d;font-weight:bold">0.7779937386512756</span>),
(<span style="color:#d20;background-color:#fff0f0">'2018/08/29/self-driving-toy-car-using-the-a3c-algorithm.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.7659779787063599</span>)]
</code></pre></div><p>Or</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">model.docvecs.most_similar(<span style="color:#d20;background-color:#fff0f0">'2016/06/03/adding-bash-completion-to-python-script.html.md'</span>)
</code></pre></div><p>With:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">[(<span style="color:#d20;background-color:#fff0f0">'2014/03/12/provisioning-development-environment.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.8298013806343079</span>),
(<span style="color:#d20;background-color:#fff0f0">'2015/04/03/manage-python-script-options.html.md'</span>, <span style="color:#00d;font-weight:bold">0.7975824475288391</span>),
(<span style="color:#d20;background-color:#fff0f0">'2012/01/03/automating-removal-of-ssh-key-patterns.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.7794561386108398</span>),
(<span style="color:#d20;background-color:#fff0f0">'2014/03/14/provisioning-development-environment_14.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.7763932943344116</span>),
(<span style="color:#d20;background-color:#fff0f0">'2012/04/16/easy-creating-ramdisk-on-ubuntu.html.md'</span>, <span style="color:#00d;font-weight:bold">0.7579266428947449</span>),
(<span style="color:#d20;background-color:#fff0f0">'2016/03/03/loading-json-files-into-postgresql-95.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.7410352230072021</span>),
(<span style="color:#d20;background-color:#fff0f0">'2015/02/06/vim-plugin-spotlight-ctrlp.html.md'</span>, <span style="color:#00d;font-weight:bold">0.7385793924331665</span>),
(<span style="color:#d20;background-color:#fff0f0">'2017/10/27/hot-deploy-java-classes-and-assets-in.html.md'</span>,
<span style="color:#00d;font-weight:bold">0.7358890771865845</span>),
(<span style="color:#d20;background-color:#fff0f0">'2012/03/21/puppet-custom-fact-ruby-plugin.html.md'</span>, <span style="color:#00d;font-weight:bold">0.718029260635376</span>),
(<span style="color:#d20;background-color:#fff0f0">'2012/01/14/using-disqus-and-rails.html.md'</span>, <span style="color:#00d;font-weight:bold">0.716759443283081</span>)]
</code></pre></div><p>To run the whole pipeline, all you need is:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ make run
</code></pre></div><p>Or directly with:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ argo submit pipeline.yml --watch
</code></pre></div><p>Argo gives a nice-looking overview of all the steps:</p>
<pre tabindex="0"><code>Name:                endpoint-blog-pipeline-49ls5
Namespace:           default
ServiceAccount:      default
Status:              Succeeded
Created:             Wed Jun 26 13:27:51 +0200 (17 seconds ago)
Started:             Wed Jun 26 13:27:51 +0200 (17 seconds ago)
Finished:            Wed Jun 26 13:28:08 +0200 (now)
Duration:            17 seconds

STEP                              PODNAME                                  DURATION  MESSAGE
 ✔ endpoint-blog-pipeline-49ls5
 ├-✔ src                          endpoint-blog-pipeline-49ls5-3331170004  3s
 ├-✔ dataframe                    endpoint-blog-pipeline-49ls5-2286787535  3s
 ├-✔ model                        endpoint-blog-pipeline-49ls5-529475051   3s
 └-✔ infer                        endpoint-blog-pipeline-49ls5-1778224726  6s
</code></pre><p>The resulting <code>similar.yaml</code> files look as follows:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">$ ls ~/data/endpoint-blog-src/blog/2013/03/15/
generating-pdf-documents-in-browser.html.md similar.yaml
$ cat ~/data/endpoint-blog-src/blog/2013/03/15/similar.yaml
- 2016/03/17/creating-video-player-with-time-markers.html.md
- 2014/07/17/creating-symbol-web-font.html.md
- 2018/10/10/image-recognition-tools.html.md
- 2015/08/04/how-to-big-beautiful-background-video.html.md
- 2014/11/06/simplifying-mobile-development-with.html.md
- 2016/03/23/learning-from-data-basics-naive-bayes.html.md
- 2019/01/08/speech-recognition-with-tensorflow.html.md
- 2013/11/19/asynchronous-page-switches-with-django.html.md
- 2016/03/11/strict-typing-fun-example-free-monads.html.md
- 2018/07/09/training-tesseract-models-from-scratch.html.md
</code></pre></div><p>Although it’s difficult to quantify, those sets of “similar” documents do seem to be linked in many ways to their “anchor” articles. You’re invited to read them and see for yourself!</p>
<h3 id="closing-words">Closing words</h3>
<p>The code presented here is hosted <a href="https://github.com/kamilc/endpoint-blog-nlp">on GitHub</a>. There’s lots of room for improvement, of course. It shows a nice approach that could be used for small model deployments (like the one above) as well as very big ones.</p>
<p>The Argo workflows could be used in tandem with Kubernetes deployments. You could, e.g., run a distributed <a href="https://www.tensorflow.org">TensorFlow</a> model training and then deploy the model on Kubernetes via <a href="https://www.tensorflow.org/tfx/guide/serving">TensorFlow Serving</a>. If you’re more into <a href="https://pytorch.org">PyTorch</a>, distributing the training would be possible via <a href="https://eng.uber.com/horovod/">Horovod</a>. Have data scientists who use R? Deploy <a href="https://www.rstudio.com">RStudio Server</a> instead of JupyterLab using <a href="https://hub.docker.com/r/rocker/rstudio">the image from DockerHub</a>, and run some or all tasks with the <a href="https://hub.docker.com/r/rocker/r-ver">simpler image</a> that contains only R-base.</p>
<p>If you have any questions or projects you’d like us to help you with, reach out right away through our <a href="/contact/">contact form</a>!</p>
Building Containers with Habitathttps://www.endpointdev.com/blog/2016/10/building-containers-with-habitat/2016-10-17T00:00:00+00:00Kirk Harr
<h3 id="many-containers-many-build-systems">Many Containers, Many Build Systems</h3>
<p>When working with modern container systems like Docker, Kubernetes, and Mesosphere, each provides its own method for building your applications into its containers. However, each build process is specific to that container system, and using the same applications across multiple container environments would require maintaining a separate build environment for each. When approaching this problem, Chef Software created a tool to unify these build systems and create container-agnostic builds which can be exported into any of the containers. This tool is called <a href="https://www.habitat.sh/">Habitat</a>, which also provides some pre-built images to get applications started quickly.</p>
<p>I recently attended a Habitat Hack event locally in Portland (Oregon), which helped me get more familiar with the system and its capabilities. We worked together in teams to take a deeper dive into various aspects of how Habitat works; you can read about our adventures over on the <a href="https://blog.chef.io/2016/09/09/habitat-hack-pdx-wrap/">Chef blog</a>.</p>
<p>To examine how the various parts of the build environment work, I picked an example Node.js application from the Habitat Documentation to build and customize.</p>
<h3 id="nodejs-application-into-a-docker-container">Node.js Application into a Docker Container</h3>
<p>For the most basic Habitat build, you must define a plan.sh file which contains all the build process logic as well as the configuration values that define the application. For my Node.js example, this file contains the following:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">pkg_origin=daehlie
pkg_name=mytutorialapp
pkg_version=0.3.0
pkg_maintainer="Kirk Harr <kharr@endpoint.com>"
pkg_license=()
pkg_source=https://s3-us-west-2.amazonaws.com/${pkg_name}/${pkg_name}-${pkg_version}.tar.gz
pkg_shasum=e4e988d9216775a4efa4f4304595d7ff31bdc0276d5b7198ad6166e13630aaa9
pkg_filename=${pkg_name}-${pkg_version}.tar.gz
pkg_deps=(core/node)
pkg_expose=(8080)
do_build() {
  npm install
}

do_install() {
  cp package.json ${pkg_prefix}
  cp server.js ${pkg_prefix}
  mkdir -p ${pkg_prefix}/node_modules/
  cp -vr node_modules/* ${pkg_prefix}/node_modules/
}
</code></pre></div><p>This defines all the application details, like the maintainer’s name, the version of the application being packaged, and the package name. Each package can also declare a license for the code in use, any code dependencies, like the Node.js application server (core/node), and the source URL for locating the release files. Finally, there are two functions: <code>do_build()</code>, which installs the package dependencies, and <code>do_install()</code>, which performs the final setup during the eventual package installation.</p>
<p>To finish defining this application, we must also provide the logic for starting the Node application server, along with configuration for the port to listen on and the message to display once it has started. First we create a stub Node.js <code>config.json</code> which provides the port and message values:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">{
  "message": "Hello, World",
  "port": "8080"
}
</code></pre></div><p>We also need two hooks, which are executed at package install time and at application runtime, respectively. In our case these are named <code>init</code> and <code>run</code>: <code>init</code> sets up the symbolic links to the various Node.js components from the core/node package included in the build, and <code>run</code> provides the entry point for the application’s flow, effectively starting the npm application server. Just like with a Dockerfile, any additional logic needed during the process would go in these two hooks, depending on whether it is specific to install time or run time.</p>
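<p>As a rough sketch, the two hooks might look something like this (hypothetical content based on the description above; the template variables and exact commands in the real tutorial may differ):</p>

```shell
#!/bin/sh
# hooks/init -- executed when the service initializes:
# link the packaged files into the service's directory
ln -sf {{pkg.path}}/package.json {{pkg.svc_path}}
ln -sf {{pkg.path}}/server.js {{pkg.svc_path}}
ln -sf {{pkg.path}}/node_modules {{pkg.svc_path}}/node_modules

#!/bin/sh
# hooks/run -- the application's entry point:
# start the npm application server in the foreground
cd {{pkg.svc_path}}
exec npm start
```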
<h3 id="injected-configuration-values">Injected Configuration Values</h3>
<p>In this example, both the message displayed to the user and the port that the Node.js application server listens on are hard-coded into our build, so all the images resulting from it would be identical. To allow some customization of the resulting image, you can replace the hard-coded values in the Node.js <code>config.json</code> with variables which are substituted during the build process:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">{
  "message": "{{cfg.message}}",
  "port": "{{cfg.port}}"
}
</code></pre></div><p>To complete the replacement, we provide a “Tom’s Obvious, Minimal Language” (.toml) file which has a key-value pair for each of the configuration variables we want to set. This .toml file is interpreted during each build to populate those values, creating an opportunity to customize our builds by injecting specific values into the variables defined in the application configuration. Here is an example of the syntax:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain"># Message of the Day
message = "Hello, World of Habitat"
# The port number that is listening for requests.
port = 8080
</code></pre></div><h3 id="conclusions">Conclusions</h3>
<p>Habitat seeks to fill in the gaps between the various container formats from Docker, Kubernetes, and others by allowing common build infrastructure and dependency libraries to be unified in distribution. By using the same build infrastructure, it becomes more feasible to run a hybrid environment with various container formats in use, without duplicating build pipelines that perform essentially the same task and differ only at the very end, when the application is packaged into the proper container format. Habitat decouples the actual build process, and all that plumbing, from the process of exporting the built image into the proper format for whatever container is in use. That way, as new container formats are developed, all that is required to accommodate them is to extend the export function for the new format, without any changes to the overall build process or customization of your code.</p>
Creating Composite Docker Containers with Docker Composehttps://www.endpointdev.com/blog/2016/02/creating-composite-docker-containers/2016-02-16T00:00:00+00:00Kirk Harr
<h3 id="composite-docker-containers">Composite Docker Containers</h3>
<p><a href="https://docker.com">Docker</a> is an application container system which allows logical isolation and automation of software components in isolated instances, similar in some ways to a virtual server. This model is quite powerful for rapidly creating new instances of a given application, and for building automated system stacks from high-availability to high-performance clusters. Even though there is no technical limitation, the idea behind this model is that containers should be in a 1:1 relationship with each application component. If you deploy a Docker image for Apache Tomcat, the container will contain Tomcat, and only Tomcat and its core dependencies. If you needed a Tomcat application server and a PostgreSQL database to go with it, in general you would create two separate containers, one with the Tomcat image and the other with the PostgreSQL image. This can lead to undesirable complexity, since you must manage both containers separately even though they are both part of the same stack. To solve this problem, the Docker team developed Docker Compose, which allows these complex applications to live inside one composite container configuration.</p>
<h3 id="creating-a-composite-stack-with-separated-containers">Creating a composite stack with separated containers</h3>
<p>Using the standard Dockerfile configurations and continuing the example above, you could create a Tomcat application server with a corresponding PostgreSQL database server using two separate containers. Here is an example of the Tomcat Dockerfile:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">FROM java:8-jre
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME
# see https://www.apache.org/dist/tomcat/tomcat-8/KEYS
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys \
    05AB33110949707C93A279E3D3EFE6B686867BA6 \
    07E48665A34DCAFAE522E5E6266191C37C037D42 \
    47309207D818FFD8DCD3F83F1931D684307A10A5 \
    541FBE7D8F78B25E055DDEE13C370389288584E7 \
    61B832AC2F1C5A90F0F9B00A1C506407564C17A3 \
    79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED \
    9BA44C2621385CB966EBA586F72C284D731FABEE \
    A27677289986DB50844682F8ACB77FC2E86E29AC \
    A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 \
    DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 \
    F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE \
    F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23
ENV TOMCAT_MAJOR 8
ENV TOMCAT_VERSION 8.0.30
ENV TOMCAT_TGZ_URL https://www.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz
RUN set -x \
    && curl -fSL "$TOMCAT_TGZ_URL" -o tomcat.tar.gz \
    && curl -fSL "$TOMCAT_TGZ_URL.asc" -o tomcat.tar.gz.asc \
    && gpg --verify tomcat.tar.gz.asc \
    && tar -xvf tomcat.tar.gz --strip-components=1 \
    && rm bin/*.bat \
    && rm tomcat.tar.gz*
EXPOSE 8080
CMD ["catalina.sh", "run"]
</code></pre></div><p>Here is an example of the PostgreSQL Dockerfile:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain"># vim:set ft=dockerfile:
FROM debian:jessie
# explicitly set user/group IDs
RUN groupadd -r postgres --gid=999 && useradd -r -g postgres --uid=999 postgres
# grab gosu for easy step-down from root
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \
    && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
    && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
    && gpg --verify /usr/local/bin/gosu.asc \
    && rm /usr/local/bin/gosu.asc \
    && chmod +x /usr/local/bin/gosu \
    && apt-get purge -y --auto-remove ca-certificates wget
# make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default
RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
    && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
ENV PG_MAJOR 9.5
ENV PG_VERSION 9.5.0-1.pgdg80+2
RUN echo 'deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update \
    && apt-get install -y postgresql-common \
    && sed -ri 's/#(create_main_cluster) .*$/\1 = false/' /etc/postgresql-common/createcluster.conf \
    && apt-get install -y \
        postgresql-$PG_MAJOR=$PG_VERSION \
        postgresql-contrib-$PG_MAJOR=$PG_VERSION \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
ENV PATH /usr/lib/postgresql/$PG_MAJOR/bin:$PATH
ENV PGDATA /var/lib/postgresql/data
VOLUME /var/lib/postgresql/data
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
</code></pre></div><h3 id="creating-a-composite-container-with-docker-compose">Creating a composite container with Docker Compose</h3>
<p>Using <a href="https://docs.docker.com/compose/">Docker Compose</a> you can define multiple container images within a single configuration file, keeping them together as a single logical unit. To do so, you give each container a name, and then provide within each container’s definition the same kinds of Docker parameters you would otherwise use in a regular Dockerfile. Here is an example of the situation discussed earlier, where you have a Tomcat image and a PostgreSQL image container which go together:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">db:
  image: postgres
web:
  build: .
  command: /usr/local/tomcat/bin/catalina.sh run
  volumes:
    - .:/code
  ports:
    - "8080:8080"
  links:
    - db
  log_driver: "syslog"
  log_opt:
    syslog-facility: "daemon"
</code></pre></div><p>This configuration defines a number of things. First, the database container is created from the default postgres image. Then the web application container specifies that its Docker image will be built using the application code and Dockerfile present in the current working directory (CWD). After both containers are built, Docker performs some actions to get things ready, like starting Catalina and copying the code from the CWD into /code on the new container volume. In addition, there are values to configure application logging and to link the database container to the web container. After the file is created, all that is required to start the containers is to run ‘docker-compose up’ from the CWD where the docker-compose.yml file and the application code are located.</p>
<p>After starting the containers you should see output in ‘docker ps’ showing the new containers. Here is the output from my test with Tomcat/PostgreSQL from when I was doing some testing on the Struts 2 web framework for Java:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">CONTAINER ID   IMAGE              COMMAND                  CREATED       STATUS       PORTS                    NAMES
3e3656879a7b   strutsdocker_web   "/usr/local/tomcat/bi"   3 weeks ago   Up 3 weeks   0.0.0.0:8080->8080/tcp   strutsdocker_web_1
cb756e473ed8   postgres           "/docker-entrypoint.s"   8 weeks ago   Up 3 weeks   5432/tcp                 strutsdocker_db_1
</code></pre></div><p>Both containers follow a naming convention combining the directory name of the CWD where the application source code and Docker configuration files are located, the service name from the composite container (db and web in this case), and an incrementing number for each instance you create. This can be really helpful if you need to update any of the application source code and rebuild, as the db container will be retained and all of its volume data will still be intact. It’s worth noting that in the opposite situation, where the database needs to be rebuilt, you could do this without affecting the web container’s data, but both would be shut down and started up together, as Docker sees them as two containers with a dependency that make up one logical application.</p>
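<p>For example, after changing the application source, a typical session (hypothetical, assuming the docker-compose.yml above) might look like:</p>

```shell
# Rebuild only the web service's image after a source change
docker-compose build web

# Recreate the containers; the db container's volume data is preserved
docker-compose up -d

# Verify that both containers are running
docker-compose ps
```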
<h3 id="conclusions">Conclusions</h3>
<p>The concept of using individual images for applications, and distributing those images through the public software ecosystem, makes the initial deployment phase very easy, as much of the initial work is already done. However, the 1:1 relationship of one application per container does not really reflect the current state of web development. For complex applications that need a data layer in a database, a presentation layer in an application server, and components like search indexing, managing individual, unrelated containers for each would be unwieldy. Using a composite container lets you keep the benefits of the Docker image ecosystem while adding the ease of managing all the pieces of the application holistically, as one container.</p>
DevOpsDays India — 2015https://www.endpointdev.com/blog/2015/09/devopsdays-india-2015/2015-09-30T00:00:00+00:00Selvakumar Arumugam
<p>DevOpsIndia 2015 was held at The Royal Orchid in Bengaluru on Sep 12-13, 2015. After saying hello to a few familiar faces whom I often see at conferences, I collected some goodies and entered the hall. Everything was set up for the talks. Niranjan Paranjape, one of the organizers, gave the introduction and overview of the conference.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/09/devopsdays-india-2015/image-0-big.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="/blog/2015/09/devopsdays-india-2015/image-0.jpeg"/></a></div>
<p>Justin Arbuckle from Chef gave a wonderful keynote talk about the “Hedgehog Concept” and spoke more about the importance of consistency, scale and velocity in software development.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/09/devopsdays-india-2015/image-1-big.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="/blog/2015/09/devopsdays-india-2015/image-1.jpeg"/></a></div>
<p>In addition, he quoted “A small team with generalists who have a specialization, deliver far more than a large team of single skilled people.”</p>
<p>A talk on “DevOps of Big Data infrastructure at scale” was given by Rajat Venkatesh from Qubole. He explained the architecture of Qubole Data Service (QDS), which helps autoscale Hadoop clusters. In short, scale-up happens based on data from the Hadoop JobTracker about the number of jobs running and the time needed to complete them. Scale-down is done by decommissioning a node, choosing the server closest to completing its current billed hour. This is because most cloud service providers charge for a full hour regardless of whether the usage is 1 minute or 59 minutes.</p>
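<p>The scale-down selection he described can be illustrated with a tiny sketch (the node names and uptimes here are made up for illustration; this is not Qubole’s actual code):</p>

```shell
# Among candidate nodes, decommission the one with the least time
# left in its current billed hour (uptimes given in minutes).
best_node=""
best_left=61
for entry in "node-a 37" "node-b 58" "node-c 12"; do
  set -- $entry                # $1 = node name, $2 = uptime in minutes
  left=$(( 60 - $2 % 60 ))     # minutes remaining in the billed hour
  echo "$1: $left minutes left in the current billed hour"
  if [ "$left" -lt "$best_left" ]; then
    best_left=$left
    best_node=$1
  fi
done
echo "decommission: $best_node"  # node-b, with only 2 minutes left
```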
<p>Vishal Uderani, a DevOps engineer from WebEngage, presented “Automating AWS infrastructure and code deployment using Ansible.” He shared issues they faced, such as task failures due to SSH timeouts when executing a giant task with Ansible, which they solved by triggering the task, exiting the session immediately, and monitoring the task afterwards. Integrating Rundeck with Ansible is an alternative to the enterprise Ansible Tower. He also stated the following reasons for using Ansible:</p>
<ul>
<li>Good learning curve</li>
<li>No agents run on the client side, which avoids having to monitor agents on client nodes</li>
<li>Great deployment tool</li>
</ul>
<p>Vipul Sharma from CodeIgnition stated the importance of resilience testing. The application should be tested periodically to make sure it is tough and strong enough to handle any kind of load. He said the Simian Army can be used to create problems in environments, and then resolving them makes the application more robust. The Simian Army can be used to improve an application using Security Monkey, Chaos Monkey, Janitor Monkey, etc. Also, a “Friday Failure” is a good method to identify problems and improve the application.</p>
<p>Avi Cavale from Shippable gave an awesome talk on “Modern DevOps with Docker”. His talk started with: “What is the first question that arises during an outage? … What changed?” After fixing the issues, the next question will be, “Who made the change?” Both questions are bad for the business. Change is the root cause of every outage, but business requires change. In his own words, DevOps is a culture of embracing change. He also explained zero-downtime ways to deploy changes using containers.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/09/devopsdays-india-2015/image-2-big.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="/blog/2015/09/devopsdays-india-2015/image-2.jpeg"/></a></div>
<p>He said DevOps is a culture; make it FACT (Frictionless, Agile, Continuous, and Transparent).</p>
<p>Rahul Mahale from SecureDB gave a demo of <a href="https://terraform.io/">Terraform</a>, a tool for building and orchestrating cloud infrastructure. It features “Infrastructure as Code” and also provides an option to generate diagrams and graphs of the current infrastructure.</p>
<p>Shobhit and Sankalp from CodeIgnition shared their experience solving network-based issues. Instead of manually whitelisting each user’s location to grant access to systems, they created a VPN so that access is tied to users rather than locations. They resolved further issues by adding a router to bind two networks using FIPs (floating IPs). Another issue was that containers needed to be whitelisted to access third-party services, but it was hard to whitelist every container. Therefore, they created and whitelisted VMs, and the containers accessed the third-party services through those VMs.</p>
<p>Ankur Trivedi from Xebia Labs spoke about the “<a href="https://www.opencontainers.org/">Open Containers Initiative</a>” project. He explained the evolution of containers (Docker—2013 & rkt—2014). The various container distributions were compared based on their packaging, identity, distribution, and runtime capabilities. Open Containers is supported by the community and by various companies doing extensive work on containers in order to standardize them.</p>
<p>Vamsee Kanala, a DevOps consultant, presented a talk on “Docker Networking—A Primer”. He spoke about bridge networking, host networking, mapped container networking, and none (self-managed) with Docker. Communication between containers can happen through:</p>
<ul>
<li>Port mapping</li>
<li>Link</li>
<li>Docker Compose (programmatically)</li>
</ul>
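<p>At the time, the first two of those mechanisms looked roughly like this on the command line (a hypothetical sketch; the image and container names are made up):</p>

```shell
# Port mapping: publish the container's port 5432 on the host
docker run -d --name db -p 5432:5432 postgres

# Linking: inject db's address into the app container's environment,
# exposed there as environment variables and an /etc/hosts entry
docker run -d --name app --link db:db myapp
```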
<p>In addition, he explained tools which provide clustering of containers, each with its own approach to clustering and its own advantages:</p>
<ul>
<li>Kubernetes</li>
<li>Mesos</li>
<li>Docker Swarm</li>
</ul>
<p>Aditya Patawari from BrowserStack gave a demo on “Using Kubernetes to Build Fault Tolerant Container Clusters”. Kubernetes has a feature called “Replication Controllers,” which helps maintain a specified number of pods running at any time. “Kubernetes Services” define a policy to enable access among the pods, exposing the pods as microservices.</p>
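<p>A minimal ReplicationController manifest from that era looked roughly like this (a hypothetical sketch with made-up names, keeping three nginx pods running at all times):</p>

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3            # maintain exactly 3 running pods
  selector:
    app: web             # pods this controller manages
  template:              # pod template used to create replacement pods
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
```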
<p>Arjun Shenoy from LinkedIn introduced a tool called “<a href="https://github.com/linkedin/simoorg">SIMOORG</a>.” The tool was developed at LinkedIn and induces failures in a cluster to test the stability of the code. It is a component-based open source framework, and a few of its components can be replaced with external ones.</p>
<p>Dharmesh Kakadia, a researcher from Microsoft, gave a wonderful talk on “Mesos is the Linux”. He started with a clear explanation of microservices (relating them to Linux commands: each command is a microservice), which are simple, independently updatable, runnable, and deployable. Mesos is a “data center kernel” which takes care of scalability, fault tolerance, load balancing, etc. in the data center.</p>
<p>At the end, I got a chance to do some hands-on work with Docker and played with some of its features. It was a wonderful conference for learning more about configuration management and the container world.</p>
SCaLE 13xhttps://www.endpointdev.com/blog/2015/03/scale-13x/2015-03-04T00:00:00+00:00Jacob Minshall
<img alt="SCaLE Penguin" src="/blog/2015/03/scale-13x/image-0.png" title=""/>
<p>I recently went to the <a href="http://www.socallinuxexpo.org">Southern California Linux Expo</a> (SCaLE). It takes place in Los Angeles at the Hilton, and is four days of talks, classes, and more, all focused on Linux. SCaLE is the largest volunteer-run open source conference. The volunteers put a lot of work into the conference, from the nearly flawless wireless network to the AV team making it as easy as plugging in a computer to start a presentation.</p>
<p>One large focus of the conference was the growing <a href="https://en.wikipedia.org/wiki/DevOps">DevOps</a> community in the Linux world. The DevOps-related talks drew the biggest crowds, and there was even a DevOps-focused room on Friday. There were a wide range of DevOps-related topics, but the two that seemed to draw the largest crowds were configuration management and containerization. I decided to attend a full-day talk on <a href="https://www.chef.io/">Chef</a> (a configuration management solution) and <a href="https://www.docker.com/">Docker</a> (the new rage in containerization).</p>
<p>The Thursday <a href="http://www.socallinuxexpo.org/scale/13x/presentations/introduction-chef-testing-your-automation-code">Chef talk</a> was so full that they decided to do an extra session on Sunday. The talk was more of an interactive tutorial than a lecture, so everyone was provided with an <a href="https://aws.amazon.com/">AWS</a> instance to use as their Chef playground. The talk started with the basics of creating a file, installing a package, and running a service. It was all very interactive; there would be a couple of slides explaining a feature, and then time was provided to try it out. During the talk there was a comment from someone about a possible bug in Chef, concerning the suid bit being reset after a change of owner or group on a file. The presenter, who works for the company that creates Chef, wasn’t sure what would happen and said, “Try it out.” I did try it out, and there was a bug in Chef. The presenter suggested I file an issue on GitHub, so I <a href="https://github.com/chef/chef/issues/2951">did</a>, and I even wrote a patch and made a <a href="https://github.com/chef/chef/pull/2967">pull request</a> later that weekend.</p>
<p>Containers were the other hot topic of the weekend, with a half-day class on Friday and a few other talks throughout the weekend. The <a href="https://www.socallinuxexpo.org/scale/13x/presentations/introduction-docker-and-containers">Docker talk</a> was also set up in a learn-by-doing style. We learned the basics of downloading and running Docker images from the <a href="https://hub.docker.com/">Docker Hub</a> through the command line. We added our own tweaks on top of those images and created new images of our own. The speaker, Jerome Petazzoni, usually gives a two- or three-day class on the subject, so he picked the parts he thought most interesting to share with us. I really enjoyed writing a Dockerfile, which describes the creation of a new machine from a base image. I also found one of the use cases described for Docker very interesting: creating a development environment for employees at a company. There is usually some time wasted moving things from machine to machine, whether upgrading a personal machine or transferring a project from one employee to another, especially when they are using different operating systems. Docker can help create a unified state for all development machines in a company, to the point where setting a new employee up with a workspace can be accomplished in a matter of minutes. This also helps bring the development environment closer to the production environment.</p>
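<p>The Dockerfiles we wrote followed the usual pattern of starting from a base image and layering tweaks on top. A minimal sketch (hypothetical content, not the actual class material) looks like:</p>

```dockerfile
# Start from a base image
FROM ubuntu:14.04

# Layer our own tweaks on top of it
RUN apt-get update && apt-get install -y nginx

# Add our content and declare how to run the result
COPY index.html /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```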
<p>One sentiment I heard reiterated in multiple DevOps talks was the treatment of servers as pets vs. cattle. Previously, servers were treated as pets. We gave servers names, we knew what they liked and didn’t like, and when they got sick we’d nurse them back to health. This kind of treatment is time-consuming and not manageable at the scale that many companies face. The new trend is to treat servers like cattle. Each server is given a number, they do their job, and if they get sick they are “put down”. Tools like Docker and Chef make this possible: servers can be set up so quickly that there’s no reason to nurse them back to health anymore. This is great for large companies that need to manage thousands of servers, but it can save time for smaller companies as well.</p>
Web Development, Big Data and DevOps—OSI Days 2014, Indiahttps://www.endpointdev.com/blog/2015/01/web-development-big-data-and-devops-osi/2015-01-12T00:00:00+00:00Selvakumar Arumugam
<p>This is the second part of an article about the conference <a href="https://opensourceindia.in/osidays/">Open Source India</a> 2014, held in Bengaluru, India. The first part is available <a href="/blog/2014/11/mongodb-and-openstack-osi-days-2014/">here</a>. The second day of the conference started with the same level of excitement. I planned to attend talks covering the web, big data, log monitoring, and Docker.</p>
<h3 id="web-personalisation">Web Personalisation</h3>
<p>Jacob Singh started the first talk session with a wonderful presentation, along with real-world cases, which explained the importance of personalisation on the web. It extended to content personalisation for users and A/B testing (comparing two versions of a webpage to see which one performs better). The demo used the <a href="https://www.drupal.org/project/acquia_lift">Acquia Lift</a> personalisation module for the <a href="https://www.drupal.org/">Drupal</a> CMS, which is developed by his team.</p>
<h3 id="mean-stack">MEAN Stack</h3>
<p>Sateesh Kavuri of Yodlee spoke about the <a href="http://mean.io/">MEAN</a> stack, a web development stack equivalent to the popular LAMP stack. MEAN provides flexible support for both web and mobile applications. He explained the architecture of the MEAN stack.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/01/web-development-big-data-and-devops-osi/image-0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="480" src="/blog/2015/01/web-development-big-data-and-devops-osi/image-0.png" width="640"/></a></div>
<p>He also provided an overview of each component involved in MEAN Stack.</p>
<p><a href="https://www.mongodb.org/">MongoDB</a> — A NoSQL database with dynamic schemas, built-in aggregation, map-reduce, JSON-style documents, auto-sharding, an extensive query mechanism, and high availability.</p>
<p><a href="http://expressjs.com/">ExpressJS</a> — A Node.js framework that provides features for web and mobile applications.</p>
<p><a href="https://angularjs.org/">AngularJS</a> — A front-end framework with seamless bi-directional data binding and extensive features like services and directives.</p>
<p><a href="https://nodejs.org/">Node.js</a> — A server-side JavaScript runtime with event-based programming and a single-threaded model (non-blocking I/O with the help of a request queue).</p>
<p><a href="https://sailsjs.org/">Sails.js</a> — A MEAN stack provisioner for developing applications quickly.</p>
<p>Finally, he demonstrated a MEAN stack demo application provisioned with the help of Sails.js.</p>
<h3 id="moving-fast-with-high-performance-hack-and-php">Moving fast with high performance Hack and PHP</h3>
<p>Dushyant Min spoke about the way Facebook optimised its PHP code base to deliver better performance when it had to handle massive user growth. Earlier, there were compilers like HipHop for PHP (HPHPc) and HPHPi (developer mode) that converted PHP code to a C++ binary, which was executed to produce the response. Later, Facebook developed a new compilation engine called the <a href="https://hhvm.com/">HipHop Virtual Machine</a> (HHVM), which uses a just-in-time (JIT) compilation approach and converts the code to HipHop bytecode (HHBC). Both Facebook’s production and development code now run on HHVM.</p>
<p>Facebook also created a new language called <a href="http://hacklang.org/">Hack</a>, which is very similar to PHP but adds static typing and many other new features. The main reason for Hack is to get the fastest development cycle for adding new features and releasing frequent versions. Hack also runs on the HHVM engine.</p>
<p>The HHVM engine supports both PHP and Hack, and it provides better performance compared to the Zend engine. So the Zend engine can be replaced with HHVM in existing PHP applications without any issues, yielding much better performance. It is as simple as this:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/01/web-development-big-data-and-devops-osi/image-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="480" src="/blog/2015/01/web-development-big-data-and-devops-osi/image-1.png" width="640"/></a></div>
<p>PHP code can also be migrated to Hack by changing the <code><?php</code> tag to <code><?hh</code>, and there are converters (such as the hackificator) available for code migration. Both PHP and Hack provide almost the same performance on the HHVM engine, but Hack has some additional developer-focussed features.</p>
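<p>As a minimal sketch of that migration (this tiny function is a made-up example, not code from the talk, and it needs HHVM to run), the change can be as small as swapping the opening tag and adding type annotations:</p>

```hack
<?hh
// Hack keeps PHP-like syntax but adds static type annotations,
// which the HHVM typechecker can verify before the code runs
function add(int $a, int $b): int {
  return $a + $b;
}
```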
<h3 id="application-monitoring-and-log-management">Application Monitoring and Log Management</h3>
<p>Abhishek Dwivedi spoke about a stack for processing logs that arrive in various formats, with myriad timestamps and no context. He explained a stack of tools to process and store the logs and visualize them in an elegant way.</p>
<p>ELK Stack = Elasticsearch, Logstash, Kibana. The architecture and data flow of the ELK stack are shown below:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/01/web-development-big-data-and-devops-osi/image-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="438" src="/blog/2015/01/web-development-big-data-and-devops-osi/image-2.png" width="640"/></a></div>
<p><a href="https://www.elastic.co/">Elasticsearch</a> — An open source full-text search and analytics engine</p>
<p><a href="https://www.elastic.co/products/logstash">Logstash</a> — An open source tool for managing events and logs; it processes logs through a pipeline of input, filter, and output stages</p>
<p><a href="https://www.elastic.co/products/kibana">Kibana</a> — Works seamlessly with Elasticsearch and provides an elegant user interface with various types of graphs</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/01/web-development-big-data-and-devops-osi/image-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="138" src="/blog/2015/01/web-development-big-data-and-devops-osi/image-3.png" width="640"/></a></div>
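<p>To make the “various formats, myriad timestamps” problem concrete, here is a small Python sketch (my own illustration, not from the talk) of the kind of normalization Logstash’s filter stage performs: turning a raw log line into a structured, consistently timestamped document that Elasticsearch can index:</p>

```python
import json
import re
from datetime import datetime

# A grok-like pattern for a simple Apache-style access log line
LINE = re.compile(
    r'(?P<ip>\S+) - - \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3})'
)

def parse(line: str) -> dict:
    """Turn a raw log line into a structured document, as Logstash filters do."""
    m = LINE.match(line)
    if not m:
        # Logstash tags unparseable events rather than dropping them
        return {"message": line, "tags": ["_parsefailure"]}
    doc = m.groupdict()
    # Normalize the timestamp format into ISO 8601
    ts = datetime.strptime(doc.pop("ts"), "%d/%b/%Y:%H:%M:%S %z")
    doc["@timestamp"] = ts.isoformat()
    doc["status"] = int(doc["status"])
    return doc

raw = '10.0.0.1 - - [12/Jan/2015:10:15:32 +0530] "GET /osidays HTTP/1.1" 200'
print(json.dumps(parse(raw), indent=2))
```

<p>Every document ends up with the same field names and a uniform <code>@timestamp</code>, which is what lets Kibana graph events from many different log sources together.</p>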
<h3 id="apache-spark">Apache Spark</h3>
<p>Prajod and Namitha presented an overview of <a href="https://spark.apache.org/">Apache Spark</a>, a real-time data processing system that can work on top of the Hadoop Distributed File System (HDFS). Apache Spark performs up to 100x faster in memory and 10x faster on disk compared to Hadoop. It fits the streaming and interactive scales of Big Data processing.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/01/web-development-big-data-and-devops-osi/image-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="480" src="/blog/2015/01/web-development-big-data-and-devops-osi/image-4.png" width="640"/></a></div>
<p>Apache Spark relies on several features in processing the data to deliver this performance:</p>
<ul>
<li>Multistep directed acyclic graphs (DAGs) of execution</li>
<li>Cached intermediate data</li>
<li>Resilient Distributed Datasets (RDDs)</li>
<li>Spark Streaming — adjustable batch time for near-real-time data processing</li>
<li>Implementation of the Lambda architecture</li>
<li>The GraphX and MLlib libraries play an important role</li>
</ul>
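<p>The model behind these features can be sketched in plain Python. This is only an illustration of the map/reduce-over-partitions idea with the classic word count; real Spark code would use the pyspark API and actual cluster partitions:</p>

```python
from collections import Counter
from functools import reduce

# Pretend each element of this list is one partition of a resilient
# distributed dataset (RDD) spread across the cluster
partitions = [
    ["spark is fast", "spark caches data"],
    ["hadoop is batch", "spark is interactive"],
]

# Transformation 1: flatMap — split each line into words, per partition
words = [[w for line in part for w in line.split()] for part in partitions]

# Transformation 2: count words within each partition (the "map side");
# Spark's reduceByKey does this locally before shuffling data
partial = [Counter(part) for part in words]

# Action: merge the per-partition results (the "reduce side"); in Spark,
# this is the point where the whole DAG actually executes
totals = reduce(lambda a, b: a + b, partial)

print(totals["spark"])
```

<p>Because the transformations are just a recorded lineage until the final action runs, Spark can cache intermediate results and recompute any lost partition from its lineage, which is what makes RDDs “resilient”.</p>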
<h3 id="online-data-processing-in-twitter">Online Data Processing in Twitter</h3>
<p>Lohit Vijayarenu from Twitter spoke about the technologies used at Twitter and their contributions to Open Source. He also explained the high-level architecture and technologies used in the Twitter microblogging social media platform.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="/blog/2015/01/web-development-big-data-and-devops-osi/image-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="338" src="/blog/2015/01/web-development-big-data-and-devops-osi/image-5.png" width="640"/></a></div>
<p>The Twitter front end is the main data input for the system. The Facebook-developed Scribe log servers gather data from the Twitter front-end application and transfer it to both batch and real-time Big Data processing systems. Storm is a real-time data processing system that takes care of events as they happen on the site. Hadoop is a batch processing system that runs over historical data and generates result data for analysis. Several high-level abstraction tools like Pig are used to write the MapReduce jobs. Along with these frameworks and tools in the high-level architecture, there are plenty of Open Source tools used at Twitter. Lohit also emphasized that in addition to using Open Source tools, Twitter contributes back to Open Source.</p>
<h3 id="docker">Docker</h3>
<p>Neependra Khare from Red Hat gave a talk and demo on <a href="https://www.docker.com/">Docker</a> in a very interactive session. The gist of Docker is to build, ship, and run any application anywhere. It provides good performance and resource utilization compared to the traditional VM model. It uses the Linux kernel feature called containerization. Container storage is ephemeral, so important data should be stored in persistent external storage volumes. Slides can be found <a href="https://github.com/nkhare/presetations/blob/master/osidays/osi_docker.md">here</a>.</p>