Forum Posts

fonsi1mosora
Apr 18, 2019
In Cloud
If you're thinking about investing your time to pick up new skills, you can't go wrong with learning how to build, run, and manage containers. That's probably not a big surprise. Based on all the excitement, headlines, and success stories coming out of tech news every day, there's no doubt we'll see even more container adoption going forward. But it's still early in the game, and that means there's a massive opportunity for those who embrace this approach to building modern applications.

My goal with this tutorial is to spark your interest in this topic or give you some new ideas to try out on your own. I'll show you a quick and easy way to store and run your container images on Microsoft's Azure platform. We'll look at how to build a container image locally, push that image to the Azure Container Registry, and then run and continuously deploy updates to that container using Azure Web App for Containers. It's a great way to run a single container for a web application. We'll keep things simple but practical. If you want to follow along, all you'll need is Git and Docker Desktop installed locally, along with an Azure account.

Building the Azure Container Registry (ACR) Resource

Container images provide the foundation for applications running inside Docker containers. You can think of an image as an executable package that includes everything you need to run an application: the application code, the runtime environment, libraries, configuration files, and environment variables. The containers you spin up to power an application are simply running instances of a container image.

Container images live in public or private container registries, and servers running the Docker engine download those images to start and run containers. The default registry for Docker containers is hub.docker.com, which offers over 100,000 container images from ISVs, open-source projects, and community contributors. You can use this service to store your images in public or private repositories.

Microsoft also offers the Azure Container Registry (ACR), a managed Docker registry service based on the open-source Docker Registry 2.0 platform. Using ACR for container deployments allows you to store your custom container images privately in the Azure cloud. ACR natively integrates with multiple Azure services, and teams can use ACR Tasks to automate deployments when building new images or when code is committed to version control. ACR also provides a low-latency endpoint to pull images from when your Docker hosts are running in the Azure cloud.

For this tutorial, we're going to provision a new ACR instance, build a custom Docker image, and push it to ACR so we can use it later with Azure Web App for Containers.

In the Azure portal, select Create a resource, select Containers in the Azure Marketplace, and choose Container Registry.

Figure 1. Creating an ACR Resource.

Enter a globally unique, DNS-compliant name for your registry under the azurecr.io namespace. Set the destination resource group, make sure the admin user option is enabled, and click Create. Once provisioning is complete, navigate to your ACR resource in the portal and select Access keys under the Settings section. Here you will see your login server, admin username, and passwords.

Figure 2. Retrieving the ACR Credentials.

Finally, fire up a terminal and use the docker login command to authenticate to ACR and validate the credentials. You should see a message that says "login succeeded," as shown in Figure 3.

Figure 3. Logging into the ACR.
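If you prefer to do that step entirely from the terminal, here is a minimal sketch. The registry name mydemoregistry is a placeholder; substitute your own registry name and use the admin credentials from the Access keys blade:

# Hypothetical registry name "mydemoregistry"; Docker will prompt for the admin password
docker login mydemoregistry.azurecr.io --username mydemoregistry

If you have the Azure CLI installed, az acr login --name mydemoregistry achieves the same result using your Azure identity instead of the admin password.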
Cloning the Sample Node & Express Application

Next, we're ready to package up an application into a container image. For this example, we'll use a bare-bones Node.js and Express application from my GitHub account. The app is really basic and simply returns a "hello world" message when you visit the web application.

Open a terminal or command prompt and switch to a directory where you want to store the source code locally. Use the following command to clone the repo:

git clone https://github.com/mikepfeiffer/node-docker-demo.git

After you've cloned the repo, switch into the application directory:

cd node-docker-demo/

Inside the local repo you'll see the following set of files and folders:

├── Dockerfile
├── LICENSE
├── README.md
├── app.js
├── config.js
├── package-lock.json
├── package.json
└── test
    └── index_test.js

The main thing to focus on here is the Dockerfile included with the application. This gives Docker all the instructions it needs to build our container image. Open the Dockerfile in a text editor to review the commands. You'll see something like the output in Figure 4.

Figure 4. Reviewing the Dockerfile.

The commands in the Dockerfile will be used to build the image locally before we push it to ACR. Let's break it down line by line.

Line 1: The FROM command defines the base container image. In this case, we'll use the official Node.js image available on Docker Hub. Notice we're not explicitly referencing a container registry for this image, which means we'll pull the Node.js image from the default registry (Docker Hub). Also note that the image uses an alpine tag. This means we'll use the Node.js container image based on Alpine Linux, a Linux distro designed for security and resource efficiency. It allows us to keep the container image very small, at less than 90 MB in size.

Line 3: The WORKDIR command sets the working directory, which is where we'll store the application code. If the path doesn't exist, it will be created during the build process.

Line 5: The COPY command adds the package.json and package-lock.json files to the working directory during the build. Note that, instead of copying the entire application at this stage, we are only copying the package*.json files. This allows us to take advantage of cached Docker layers to improve our development workflow: the npm install layer only has to be rebuilt when those package files change.

Line 6: Next, the RUN command executes npm install, which ensures our application dependencies are included within the container image.

Line 7: The COPY command copies the application source files into the working directory.

Line 9: Finally, CMD defines the default command to execute when the container starts. In this case, npm start runs the start script defined in the package.json file to get our Node server up and running.

Now that we understand what will be included in our container image, we're ready to run a build.

Building the Container Image

Head back over to the command line. Make sure your context is set to the root of the application; you should be in the same directory as the Dockerfile. Use the docker build command to create your container image:

docker build -t <YOUR REGISTRY NAME>.azurecr.io/node-docker-demo:latest .

This command tells Docker to build your image using the Dockerfile in the current directory. It also tags your image with your ACR repository name, an image name, and the version set to latest to indicate that this image is the current version of the application. Since the registry name is included in the image tag, the Docker client knows where to send the image when we run a push in the next step.
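Before pushing, you can optionally sanity-check the image on your own machine. Here is a minimal sketch, assuming the Express app listens on port 3000 (check config.js or app.js in the cloned repo for the actual port):

# List the image we just built
docker images <YOUR REGISTRY NAME>.azurecr.io/node-docker-demo

# Run it locally, mapping the container port to localhost:3000
docker run --rm -p 3000:3000 <YOUR REGISTRY NAME>.azurecr.io/node-docker-demo:latest

Then browse to http://localhost:3000 and confirm you see the "hello world" message before moving on.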
Pushing the Image to ACR

Since we already logged in using docker login, all that's left to do is push the image up to ACR using the following syntax:

docker push <YOUR REGISTRY NAME>.azurecr.io/node-docker-demo:latest

After the command completes successfully, you can head back to the Azure portal to verify that your image is now available in ACR.

Figure 5. Reviewing the image in ACR.

Building the Web App Resource

With the image available in ACR, we're ready to build a Web App resource that can host the application. In the Azure portal, select Create a resource, select Web in the Azure Marketplace, and choose Web App.

Figure 6. Creating the Web App Resource.

Enter a globally unique, DNS-compliant hostname for your web app under the azurewebsites.net namespace. Set the destination resource group and choose Linux as the OS. Make sure the publish settings are configured to use a container, as shown in Figure 6. You'll be able to browse and find your custom container image in ACR. Click Apply and then Create.

Open a web browser and navigate to the URL for your new Azure Web App. You should see "Hello World!" on the home page.

Figure 7. Testing the Web App.

Now that we have the app up and running, we can set up continuous deployment.

Enabling and Validating Continuous Deployment

Navigate to your Web App resource in the Azure portal and select Container settings, as shown in Figure 8.

Figure 8. Updating the Web App.

Switch Continuous Deployment to On and click Save. This creates an ACR webhook behind the scenes. Any time we push an updated image to ACR, it triggers a deployment in Azure App Service. This means we can update and rebuild our image locally, push it back to ACR, and the website will be updated without us having to do anything else. You'll see the webhook in the same resource group as your web app in the Azure portal.

Figure 9. Reviewing the Webhook.

Now that continuous deployment is enabled, we can test it out. Fire up your code editor and open the app.js file in the root of the application that we cloned earlier. Modify line 7 to return a new message. For this example, I'll update the code to return "Hello World v2!" when a user visits the page.

Figure 10. Updating the app.js file.

Once the code has been updated, rebuild the container image:

docker build -t <YOUR REGISTRY NAME>.azurecr.io/node-docker-demo:latest .

Then push the new container image to ACR:

docker push <YOUR REGISTRY NAME>.azurecr.io/node-docker-demo:latest

The push will trigger the webhook.

Figure 11. Reviewing the Webhook History.

Then we can visit our web app and see that it's serving the new version of the application.

Figure 12. Validating the Deployment.

Isn't that awesome? This is a really slick solution for those who want to streamline the deployment process for single-container web apps.

Where to go from here

There's so much more when it comes to running Docker containers on Azure. Check out some of these resources to help you get started:

Azure for Containers
Play with Docker (if you're new to Docker)
Build and store container images with Azure Container Registry
Use a custom Docker image for Web App for Containers

Originally published at mikepfeiffer.io.
fonsi1mosora
Apr 18, 2019
In DevOps Tools
The Docker instructions CMD and ENTRYPOINT are used in Dockerfiles and Docker Compose files to configure the commands used to run a container. This tutorial explains the differences between them and how best to use them in your Dockerfiles. We'll cover ENTRYPOINT, CMD, and best practices, and finish with a summary.

Entrypoint

Entrypoint sets the command and parameters that will be executed first when a container is run. Any command-line arguments passed to docker run <image> will be appended to the entrypoint command and will override all elements specified using CMD. For example, docker run <image> bash will add the argument bash to the end of the entrypoint.

Dockerfile ENTRYPOINT

Dockerfiles use all uppercase letters for the entrypoint instruction. There are several ways you can define it.

The exec syntax

The exec form is where you specify commands and arguments as a JSON array. This means you need to use double quotes rather than single quotes.

ENTRYPOINT ["executable", "param1", "param2"]

Using this syntax, Docker will not use a command shell, which means that normal shell processing does not happen. If you need shell processing features, you can start the JSON array with the shell command.

ENTRYPOINT [ "sh", "-c", "echo $HOME" ]

Using an entrypoint script

Another option is to use a script to run the entrypoint commands for the container. By convention, it often includes entrypoint in its name. In this script, you can set up the app as well as load any configuration and environment variables. Here is an example of how you can run it in a Dockerfile with the ENTRYPOINT exec syntax:

COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]

For example, the Postgres Official Image uses the following script as its ENTRYPOINT:

#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"

Docker Compose entrypoint

The instruction that you use in your Docker Compose files is the same, except you use lowercase letters.

entrypoint: /code/entrypoint.sh

You can also define the entrypoint as a list in your docker-compose.yml:

entrypoint:
    - php
    - -d
    - zend_extension=/usr/local/lib/php/xdebug.so
    - -d
    - memory_limit=-1
    - vendor/bin/phpunit

Overriding Entrypoint

You can override entrypoint instructions using the docker run --entrypoint or docker-compose run --entrypoint flags.

CMD / command

The main purpose of CMD (Dockerfiles) / command (Docker Compose files) is to provide defaults when executing a container. These defaults are appended to the entrypoint as arguments, or run on their own if no entrypoint is defined. For example, if you run docker run <image> with no extra arguments, the commands and parameters specified by CMD / command are executed.

Dockerfiles

In Dockerfiles, you can define CMD defaults that include an executable. For example:

CMD ["executable","param1","param2"]

If you omit the executable, you must specify an ENTRYPOINT instruction as well.

CMD ["param1","param2"] (as default parameters to ENTRYPOINT)

NOTE: There can only be one CMD instruction in a Dockerfile. If you list more than one CMD, only the last CMD will take effect.

Docker Compose command

When using Docker Compose, you can define the same instruction in your docker-compose.yml, but it is written in lowercase as the full word command.

command: ["bundle", "exec", "thin", "-p", "3000"]

Overriding CMD

You can override the commands specified by CMD when you run a container:

docker run rails_app rails console

If the user specifies arguments to docker run, they override the defaults specified in CMD.
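To make the interaction between ENTRYPOINT, CMD, and docker run arguments concrete, here is a minimal sketch. It assumes a hypothetical image named pinger built with ENTRYPOINT ["ping"] and CMD ["localhost"]:

# Hypothetical image "pinger": ENTRYPOINT ["ping"], CMD ["localhost"]
docker run pinger                               # runs: ping localhost
docker run pinger 8.8.8.8                       # arguments replace CMD: ping 8.8.8.8
docker run --entrypoint /bin/echo pinger hi     # flag replaces ENTRYPOINT: /bin/echo hi

Arguments to docker run only replace CMD; the entrypoint itself changes only when you pass the --entrypoint flag.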
Best practices

Although there are different ways to use these instructions, Docker gives some guidance on best practices for their use and syntax.

Usage best practices

Docker recommends using ENTRYPOINT to set the image's main command and then using CMD for the default flags. Here is an example Dockerfile that uses both instructions:

FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]

Syntax best practices

As well as the exec syntax, Docker allows shell syntax as another valid option for both ENTRYPOINT and CMD. This executes the command as a string and performs variable substitution.

ENTRYPOINT command param1 param2
CMD command param1 param2

However, this tutorial has not emphasised it, because the exec syntax is considered best practice. CMD should almost always be used in the form CMD ["executable", "param1", "param2", ...]. Thus, if the image is for a service, such as Apache or Rails, you would use something like CMD ["apache2","-DFOREGROUND"]. Indeed, this form of the instruction is recommended for any service-based image.

The Dockerfile reference explains more about the issues. The ENTRYPOINT shell form prevents any CMD or docker run command-line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container's PID 1 and will not receive Unix signals, so your executable will not receive a SIGTERM from docker stop <container>.

If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified in the JSON array format.

Summary

Both the CMD and ENTRYPOINT instructions define what command gets executed when running a container. There are a few rules that describe how they interact:

Dockerfiles should specify at least one of CMD or ENTRYPOINT.
ENTRYPOINT should be defined when using the container as an executable.
CMD should be used to define default arguments for an ENTRYPOINT command or to execute an ad-hoc command in a container.
CMD will be overridden when the container is run with alternative arguments.
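One last point from the syntax best practices above is worth illustrating: the PID 1 and signal-handling difference between the two forms. The two ENTRYPOINT lines below are alternatives, not a single Dockerfile, and node server.js is just a hypothetical stand-in for your own service command:

# Shell form: Docker wraps the command in /bin/sh -c, so your process is not PID 1
# and will not receive the SIGTERM sent by docker stop.
ENTRYPOINT node server.js

# Exec form: the process runs as PID 1, receives SIGTERM, and can shut down cleanly.
ENTRYPOINT ["node", "server.js"]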