Forum Posts

andrey_mark
Mar 25, 2019
In CI CD
Credit: Mike Pfeiffer

Continuous integration and continuous delivery (CI/CD) are considered by most to be the backbone of DevOps. Things start to get really interesting when you combine these practices with programmable infrastructure and a suite of services that allow you to automate the entire lifecycle of an application.

The goal of this guide is to give you a practical example of what that all looks like when you're building, testing, and deploying applications with Azure DevOps Services. I'll walk you through the end-to-end process of building a fully automated build and release pipeline for a Node and Express application. We'll use Azure DevOps Services to create the CI/CD pipeline and Azure App Service for deploying to development/staging and production.

To follow along, you'll need a GitHub account and an Azure subscription. The demo application is open source, so the Azure DevOps pipeline infrastructure we build will be covered under the free tier.

Create Your Azure DevOps Organization

The first step is to navigate to dev.azure.com and sign in to Azure DevOps. If you've never done this before, you'll need to create a new organization. You need at least one organization, which is used to store your projects, structure your repositories, set up your teams, and manage access to data. The guidance from Microsoft is to keep things simple and start with a single organization. For more advanced scenarios, take a look at "Plan your organization structure" in Microsoft's documentation.

After clicking Continue, you may end up with an organization name that was generated at random. You can change this as shown in Figure 2. Simply navigate to Organization Settings > Overview and update the name.

Fork the Node & Express Demo App Repository

I wanted to demonstrate an application that is somewhat realistic but not overly complex for this walkthrough. The Node and Express app is a simple website for a fictitious company.
This app uses Express and Handlebars to serve up a few common pages you'd see on any company website. Also included are some unit tests that ensure those routes are working and serving up the right content. You can head over to my GitHub account to fork this repository.

Next, we can move on to deploying the infrastructure to support both development and production deployment slots using Azure App Service.

Deploy the App Service Infrastructure

We're going to use an Azure Web App for Linux resource to power our Node and Express application. We'll set things up so our CI/CD pipeline can build and deploy the app into a development/staging slot. Then we'll set up a manual approval into the production slot.

We'll use an Azure Resource Manager (ARM) template to build the App Service infrastructure. Navigate to the node-express-azure repository you forked in the previous step. You'll see a "Deploy to Azure" button about halfway down the screen. Clicking the "Deploy to Azure" button will redirect you to the Azure portal as shown in Figure 5. Notice that you'll need to set a globally unique hostname for your web application, along with a name for the new App Service plan.

I'd recommend deploying these resources into a new resource group. That way, when you're done with this walkthrough, you can clean up the Azure resources easily by deleting the resource group. Click "Purchase" to launch the template; by doing so, you agree to pay for the App Service resources that the template deploys on your behalf.

After you launch the template you should see a successful deployment message, and you should have a new resource group similar to the one shown in Figure 6. Notice that there is an App Service plan, a web app that represents the production deployment slot, and a slot for development called "dev".

Quick side note about the ARM template: the Deploy to Azure button references the azuredeploy.json ARM template in my GitHub repository.
If you want to update the template, update the version in your own repo, and don't forget to change the target of the button in the source of your README.md file.

Create a Build Pipeline

We're ready to move on and set up a build pipeline in Azure DevOps. Head back to dev.azure.com and create a new project inside your organization. Use the settings shown in Figure 7. After clicking the "Create project" button, you'll see a summary page for the project. Navigate to Pipelines and click on Builds as shown in Figure 8.

Next, click the button to create a new build pipeline. You'll be prompted to choose a repository. Select GitHub. You'll see a screen like the one in Figure 9 where you'll need to authorize the Azure DevOps service to connect to your GitHub account on your behalf. Click Authorize. After your connection to GitHub has been authorized, select the node-express-azure repo that you forked in the first step. You should end up seeing a "New pipeline" screen like the one shown in Figure 10.

The new pipeline wizard should recognize that we already have an azure-pipelines.yml file in the repository. This file contains all of the settings that the build service should use to build and test our application, as well as generate the output artifacts that will be used to deploy the app later in our release pipeline.

After you click "Run" to kick off your first build, you should see a screen like the one shown in Figure 11. Notice that a lot went on with the build. The service used an Ubuntu 16.04 build agent to grab the code from GitHub, installed our development dependencies, and then ran our unit tests to validate the application. Finally, the code was bundled into an output artifact and published so we can use it as an input artifact for our upcoming release pipeline.

Click the release button at the top of this screen to create a new release pipeline.

Create a Release Pipeline

When you get to the release pipeline screen, you'll need to select a template.
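For reference, a minimal azure-pipelines.yml for a Node app along these lines might look something like the sketch below. This is illustrative only; the exact steps in the repo's actual file may differ, and the pool image, Node version, and artifact name here are assumptions:

```yaml
# Illustrative sketch of a Node build pipeline; the repo's real file may differ.
trigger:
  - master

pool:
  vmImage: 'ubuntu-16.04'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '8.x'
    displayName: 'Install Node.js'

  - script: npm install
    displayName: 'Install dependencies'

  - script: npm test
    displayName: 'Run unit tests'

  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
      includeRootFolder: false
      archiveFile: '$(Build.ArtifactStagingDirectory)/drop.zip'
    displayName: 'Package the app'

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
    displayName: 'Publish build artifact'
```

The key idea is the same regardless of the exact steps: install dependencies, run the tests, then archive and publish the build output so the release pipeline can consume it as an artifact.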
For this scenario, we are going to choose "App Service deployment with slot". Click the Apply button to create the new deployment stage within the release pipeline. On the next screen, you'll be able to configure this stage. Change the name to "Development" as shown in Figure 13.

While on this screen, click the link that says "2 tasks" inside your Development stage. This will take you to a screen where you can configure the deployment task. Make sure you fill out all the fields as shown in Figure 14. Next, highlight and remove the second deployment task for swapping the slots. Finally, click Save.

Head back over to the "Pipeline" tab at the top left of the screen. Inspect the deployment triggers for the artifacts as shown in Figure 17. Notice that continuous deployment is enabled by default. Going forward, each new build will trigger a deployment to our development slot in Azure App Service. First, though, let's trigger a manual release.

Create Your First Release to Development

Click the "Release" button at the top right of the release pipeline screen and create a new release. Use the settings shown in Figure 18. Click the "Create" button to deploy the application to the development deployment slot. You should see a successful status in the properties of the release.

Navigate to the public URL of the "dev" deployment slot in your web browser. The hostname will have "-dev" appended to it. For example, my web app is named "node-express-demo" and the "dev" deployment slot URL is https://node-express-demo-dev.azurewebsites.net. You should see the sample web application when you visit the "dev" slot URL. The production slot will show the default Azure App Service splash page since it is virtually untouched at this point. Let's change that in the next step.

Create a Release Stage for Production

Head back over to the Azure DevOps portal and go to Pipelines > Releases. Click the "Edit" button to modify the pipeline.
Highlight the Development stage and click the dropdown to clone the stage. Rename the stage to "Production". Next, click the pre-deployment conditions button for the Production stage. Enable pre-deployment approvals and add yourself as an approver.

We're doing this because we don't want automated deployments going straight into production. We're not building a continuous deployment pipeline for production; we're building a continuous delivery pipeline. Continuous delivery is a process that ensures our application is production ready. When we do a scheduled deployment, we can do so with confidence because we know the application has been through a pipeline of tests beforehand.

Next, click the "task" link on the Production stage. We need to modify this task so that it does not deploy our code into the development slot. Simply uncheck "Slot", and this will infer that the production slot of the web app should be used during the deployment. Click Save when complete.

Validating the Pipeline

Navigate to your GitHub account and into the views folder of the demo application. Edit the index.handlebars file to update the app to version 2.0.0. Committing the change in this repo should automatically trigger a build, run our tests, and publish a deployment package. We can confirm this by reviewing the build status.

After the build, you should see a new release. The Development stage should be green, indicating that the deployment succeeded. The Production stage should be blue, showing that it's pending approval. Click Approve to kick off the production deployment. Go back to your pipeline view and you should see that the deployment to production succeeded. Finally, head over to the web app URL for the production slot to confirm the correct version is running. You should see version 2.0.0 on the homepage.

Set up a Build Badge for Your Project

Have you ever seen those build pass/fail badges when browsing projects on GitHub?
They're really cool because you can tell at a glance whether the code is still working or if it's old and busted. Let's set up a badge for this project. Go back to your Builds section and click the status badge button. Copy the markdown code for the status badge.

Now, go back to GitHub and modify the README.md file in your node-express-azure repo. Paste the markdown you copied from the status badge page. Commit the change and view the README. You should see a "build passing" status icon.

If you're still reading after all this time, respect! You now know how to build a CI/CD pipeline on Azure. You can simply delete all the resources to clean things up. Delete the resource group you created for this project, delete the Node demo project in the Azure DevOps portal, and delete the GitHub repo that you forked from my account (unless you want to keep a copy).

Where to go from here

Isn't this awesome stuff? There's so much more. For now, check out these resources to dive deeper:

- Get started with Azure DevOps
- DevOps Resource Center
- Microsoft Professional Program for DevOps
andrey_mark
Mar 25, 2019
In CI CD
The rate of DevOps adoption shows it's becoming a preferred method for the continuous development, deployment, and improvement of software applications. One critical success factor is the use of automation in multiple areas, including testing, deployment, and configuration management. Still, legacy systems and processes (as well as attitudes!) can impede the effective deployment of wide-scale DevOps automation. That's why DevOps is more than just a "method." Instead, it requires a complete rethinking of company philosophy. Even then, employing automation as a fundamental principle is not as simple as "automate everything."

What do you need to know when implementing DevOps automation? What are common pitfalls to avoid? The three strategies below will help streamline your automation initiatives and make them more productive in both the short and long term.

Benefits of DevOps Automation

DevOps fundamentals include cross-functional teams, continuous improvement, and increased collaboration. As opposed to a linear progression of software development from coding to release, DevOps uses a recursive workflow, so continuous improvement is possible.

A key element in this improved workflow is the use of automation across a wide range of activities and processes. There are three basic benefits:

- Development and deployment time is reduced.
- Knowledge workers are freed up to concentrate on higher-order tasks.
- Large volumes of testing and other reporting data can be produced more quickly.

When DevOps elements are effectively integrated, there will be a greater return on investment (ROI): product time to market will decrease while quality increases.

3 DevOps Automation Strategies

Still, a blind rush to "automate everything" should not be your goal. Instead, to ensure the effective application of automation from initial implementation to ongoing use, the following operating principles will productively guide your efforts.

1.
Systems Audit

First, you need to perform a systems audit to identify repetitive activities within the continuous integration and continuous deployment (CI/CD) pipeline. These include builds, testing, configuration, deployment, operation, and monitoring. More narrowly, tasks in the following categories are prime candidates for automation:

- occur with medium to high frequency
- require three or more people to complete manually
- use time-sensitive steps
- impact multiple systems
- must have audit documentation for compliance
- result in bottlenecks on an ongoing basis

A systems audit is key to determining where to implement automation, but it's also important to identify which processes are less critical to automate. Sure, it may be attractive to develop an automated tool for every single task, but when doing so diverts resources from more essential areas, the end result won't generate a positive ROI.

2. Cost-Benefit Analysis

Once you've identified the processes most likely to benefit from automation, the next step, as per DevOps.com, is to perform a cost-benefit analysis for each one. This will calculate the net benefit for the organization in hours saved per year:

- Automation time overhead = initial development hours + ongoing maintenance hours per year
- Automation time savings = (hours to do task manually - hours to do task with automation) x number of times task is done per year
- Net hours benefit per year = automation time savings - automation time overhead

Still, there are intangibles to keep in mind when generating these analyses. As Scrum.org CEO Dave West says: "When deciding on what to automate … it is important to balance flexibility with removing waste. Wasteful tasks should be automated, but only if the cost of that automation is not reduced agility. Sometimes flexibility is more important than efficiency."

3. Continuous Improvement KPIs

DevOps automation is about both streamlining processes overall and addressing bottlenecks in particular.
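The three formulas above are simple enough to capture in code. Here is a small, hypothetical sketch; the function name and the example numbers are invented for illustration:

```typescript
// Hypothetical helper: estimates the yearly net benefit (in hours) of automating a task.
function netHoursBenefitPerYear(opts: {
  initialDevHours: number;         // one-time cost to build the automation
  maintenanceHoursPerYear: number; // ongoing upkeep
  manualHoursPerRun: number;       // time to do the task by hand
  automatedHoursPerRun: number;    // time to do the task with automation
  runsPerYear: number;
}): number {
  const overhead = opts.initialDevHours + opts.maintenanceHoursPerYear;
  const savings =
    (opts.manualHoursPerRun - opts.automatedHoursPerRun) * opts.runsPerYear;
  return savings - overhead;
}

// Example: a 2-hour weekly task automated down to 6 minutes,
// at a cost of 40 hours to build plus 10 hours/year to maintain.
const net = netHoursBenefitPerYear({
  initialDevHours: 40,
  maintenanceHoursPerYear: 10,
  manualHoursPerRun: 2,
  automatedHoursPerRun: 0.1,
  runsPerYear: 52,
});
console.log(net.toFixed(1)); // prints "48.8" (savings of 98.8 h minus 50 h overhead)
```

A positive result means the automation pays for itself within the year; a negative result is a signal to deprioritize that task, at least on pure efficiency grounds.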
Still, no solution exists in a vacuum. Just as you want CI/CD for your software products, you must apply the same concept to your automated DevOps processes. After all, one seemingly minor change can have far-reaching consequences across multiple systems. Just as you use application performance management (APM) for your software products, it's equally important to continually monitor the key performance indicators (KPIs) of your automated DevOps processes.

As noted above, calculating the cost-benefit ratio in hours saved per year will help prioritize which processes to automate. In addition, careful monitoring of applications will allow you to maximize resource allocation, focus on business performance goals, and control cloud sprawl. Here are six operational KPIs to inform the continuous improvement of your DevOps automation:

- Business disruption hours: This basic metric tracks how much downtime is experienced by customers and users in particular, as well as the business overall. This includes both planned and unplanned disruptions, covering both performance slowdowns and complete outages, in one or more affected applications.
- Technical debt closed: This refers to reducing, or even eliminating, the negative feedback cycle in which poorly written code leads to higher costs, reduced business responsiveness, lower revenue, and ultimately less money for future software development, which circles back around to create even more poorly written code.
- Mean time to discovery (MTTD): The amount of time it takes after an issue occurs for a DevOps team to discover the source of the problem. This metric is critical for tracking the efficiency of both your overall incident management processes and the particular tools being used.
- Mean time to recovery (MTTR): The amount of time required to repair an inoperative system.
Not only will this measure the time necessary to get a system back online; granular incident reports will also help you proactively anticipate future problems.
- Mean time to failure (MTTF): The amount of time it takes system-critical components to fail. Similar to MTTR, knowing the MTTF of mission-critical systems will also help reduce downtime by predicting future failures.
- Mean time between failures (MTBF): A reliability metric, this helps measure how long a critical component or system will be available. Once you have established baseline values, you can work to increase MTBF over time.

Taking the Next Step in DevOps Automation

The discussion of DevOps automation up to this point has focused on big-picture strategies for successful implementation. After an initial systems audit, your next step is to examine the best available automation solutions for your specific needs, including infrastructure as code (IaC), CI/CD, and monitoring.

DevOps allows you to work smart as well as hard. At the same time, thoughtful deployment and use of automation will allow you to reap all the benefits of DevOps without creating additional, unnecessary work for your teams.
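As a closing illustration of the time-based KPIs above, here is a small, hypothetical sketch of how MTTR and MTBF might be computed from incident data. The incident shape and the numbers are invented for this example:

```typescript
// Hypothetical incident record: how long the system was down for each incident.
interface Incident {
  downMinutes: number;
}

// MTTR: average time to restore service across all incidents.
function mttrMinutes(incidents: Incident[]): number {
  const totalDown = incidents.reduce((sum, i) => sum + i.downMinutes, 0);
  return totalDown / incidents.length;
}

// MTBF: average uptime between failures over an observation window.
function mtbfMinutes(observedMinutes: number, incidents: Incident[]): number {
  const totalDown = incidents.reduce((sum, i) => sum + i.downMinutes, 0);
  return (observedMinutes - totalDown) / incidents.length;
}

// Two incidents over a 30-day (43,200-minute) window.
const incidents: Incident[] = [{ downMinutes: 30 }, { downMinutes: 90 }];
console.log(mttrMinutes(incidents));        // prints 60
console.log(mtbfMinutes(43200, incidents)); // prints 21540
```

Tracking these values over time, rather than as one-off snapshots, is what turns them into continuous improvement KPIs.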
andrey_mark
Mar 25, 2019
In Cloud
By Michael Andrews and Chris Ramón

At Monsoon, many of our Cloud Native services are deployed using AWS Lambda. While developing and deploying a serverless Node.js application in AWS Lambda is fairly straightforward, you might run into trouble building and testing your application with AWS Lambda locally.

Recently, Amazon released its "AWS Serverless Application Model", aka AWS SAM, which lets developers run their AWS Lambda functions locally before deployment. Unfortunately, AWS SAM is packaged outside of the standard AWS Command Line Interface, and must be installed using a Python package manager (pip) outside of our regular Node.js toolchain. This isn't a huge issue for local development, but when it comes to also building and testing your application in a Continuous Integration/Continuous Deployment (CI/CD) service, like AWS CodeBuild, this can become a pain point.

Thankfully, there is a simple and well-known technique to encapsulate your build and test environment so that it functions the same locally as it does in CI/CD: bundling your build and test toolchain in a Docker container! The only problem with this approach is that, under the hood, SAM itself also runs as a Docker container, and getting SAM to run in a parent Docker container turns out to be non-trivial. In this article, we will outline how to build and run your AWS Lambda using Docker and SAM.

A Basic TypeScript Serverless Application

Let's start by defining our Node.js function. Our example function converts the JSON input it receives into an XML document using xml2json. This is a fairly synthetic example, but one that was motivated by a real-world use case where we wanted to parse XML with a native library. In this example we will use TypeScript, a superset of JavaScript with static type checking.
```typescript
import * as xml2json from 'xml2json';
import { Callback, Context, Handler } from 'aws-lambda';

const handler: Handler = (event: any, context: Context, callback: Callback): void => {
  console.log("received event: %j", event);
  const xml = xml2json.toXml(event);
  callback(null, xml);
};

export { handler };
```

Typically, in order to compile TypeScript into JavaScript, we first need to add TypeScript as a dependency:

```json
{
  "private": true,
  "description": "Dockerized SAM",
  "main": "index.js",
  "scripts": {
    "tsc": "tsc"
  },
  "dependencies": {
    "xml2json": "^0.11.2"
  },
  "devDependencies": {
    "@types/aws-lambda": "^8.10.13",
    "@types/node": "^10.11.4",
    "typescript": "^3.1.1"
  }
}
```

… and then use npm to install and run the TypeScript compiler:

```shell
npm install
npm run tsc
```

However, since our goal is to run the final AWS Lambda function in a Dockerized version of SAM, we will also need to install the dependencies in a similar container so that any native code can be executed properly.

Building your TypeScript Application in a Container

[Image: Compiling a TypeScript application in an AWS Lambda-compatible container]

First, we will need to build our application in an environment that closely resembles AWS Lambda. To accomplish this, we can leverage "Docker Lambda", a Docker image that includes the AWS Lambda build tools and dependencies. To use this image, we will need to mount our source code so that the container has access to it, and then run the TypeScript toolchain within the container:

```shell
docker run --rm -v "$PWD":/var/task lambci/lambda:build-nodejs8.10 sh -c 'npm install && npm run tsc'
```

Now all of our dependencies and native code can be accessed from our AWS Lambda runtime container!

Running your TypeScript Application in a Container

[Image: Running a TypeScript application in Dockerized SAM]

Now that we have a package that can be executed by SAM, we can attempt running SAM in a Docker container.
At the moment, there aren't any public Docker images that include the aws-sam-cli, though it's pretty easy to define an image using the Alpine Linux Python image:

```dockerfile
FROM python:alpine

RUN apk update && \
    apk upgrade && \
    apk add bash && \
    apk add --no-cache --virtual build-deps build-base gcc && \
    pip install aws-sam-cli && \
    apk del build-deps

RUN mkdir /app
WORKDIR /app

EXPOSE 3001

ENTRYPOINT ["./bin/sam_entrypoint.sh"]
```

Since the ENTRYPOINT is a bit complex, we've encapsulated it in its own shell script:

```shell
#!/bin/bash
set -o errexit

BASEDIR="$1"

/usr/local/bin/sam local start-lambda \
  --template dist/template.yaml \
  --host 0.0.0.0 \
  --docker-volume-basedir "${BASEDIR}" \
  --docker-network monsoon-samples_default \
  --skip-pull-image
```

As you can see, the main arguments to our SAM entrypoint script are the AWS SAM template file and the Docker volume with our source code. Additionally, we can specify a network so that our AWS Lambda function can connect to other external resources running in our Docker environment, for example a database.

NB: The argument to skip the image pull (--skip-pull-image) should make the entrypoint execute faster if the underlying Docker daemon already has a cached version of the AWS Lambda runtime image.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: A sample AWS Lambda Function
Resources:
  SampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: dist/index.handler
      Runtime: nodejs8.10
      CodeUri: ./
```

To bring this all together, we can use Docker Compose to specify our local source code as a Docker volume mount point:

```yaml
version: '3.6'
services:
  sam_app:
    build: .
    command: ["$PWD"]
    ports:
      - "3001:3001"
    volumes:
      - .:/app
      - /var/run/docker.sock:/var/run/docker.sock
```

NB: Notice that we are also passing the Docker socket along as a mounted file. This is the key to running SAM in a Dockerized environment, as SAM itself will spawn a Docker container.
The mounted and bound Docker socket effectively allows the SAM container to spawn alongside the parent container as a sibling instead of a child. This technique also allows us to specify our source code in the local working directory ($PWD) and have this directory forwarded along as the remote directory mounted by the SAM container (docker-volume-basedir).

Finally, we can start our AWS Lambda function in a container, with all native dependencies compiled for AWS Lambda:

```shell
docker-compose up sam_app
```

Now all we need to do is test that our application is functioning properly, and we are good to deploy:

```shell
echo '{"itemRecord":{"value":[{"longValue":"12345"},{"stringValue":{"number":"false","$t":"this is a string value"}},{"moneyValue":{"number":"true","currencyId":"USD","text":"123.45","$t":"104.95"}},{"moneyValue":{"currencyId":"USD","$t":"104.95"}},{"longValue":"0","bool":{"id":"0","$t":"true"}},{"longValue":"0"},{"dateValue":"2012-02-16T17:03:33.000-07:00"},{"stringValue":"SmDZ8RlMIjDvlEW3KUibzj2Q"},{"text":"42.42"}]}}' | curl \
  --request POST \
  --header "Content-Type: application/json" \
  --data @- \
  http://localhost:3001/2015-03-31/functions/SampleFunction/invocations
```

Conclusion

As you can see, although deploying a serverless Node.js application is pretty straightforward, it turns out that building and testing the application across platforms can be tricky. We hope you found this article illuminating! All of the sample code referenced in the article can be found in this GitHub repository.

Next up, we plan to detail configuring AWS CodeBuild to automatically build, test, and deploy our serverless Node.js application!
