Forum Posts

imadnasri_
Mar 24, 2019
In CI/CD
Learn how to build a CI/CD pipeline to automate the deployment process of your serverless applications.

This post will walk you through how to build a CI/CD pipeline to automate the deployment process of your serverless applications, and how to use features like code promotion, rollbacks, versions, aliases, and blue/green deployment. At the end of this post, you will be able to build a pipeline similar to the following figure:

For the sake of simplicity, I wrote a simple Go-based Lambda function that calculates the Fibonacci number:

```go
package main

import (
	"errors"

	"github.com/aws/aws-lambda-go/lambda"
)

func fibonacci(n int) int {
	if n <= 1 {
		return n
	}
	return fibonacci(n-1) + fibonacci(n-2)
}

func handler(n int) (int, error) {
	if n < 0 {
		return -1, errors.New("Input must be a positive number")
	}
	return fibonacci(n), nil
}

func main() {
	lambda.Start(handler)
}
```

I also implemented a couple of unit tests, for both the recursive Fibonacci function and the Lambda handler:

```go
package main

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestFibonacciInputLessOrEqualToOne(t *testing.T) {
	assert.Equal(t, 1, fibonacci(1))
}

func TestFibonacciInputGreaterThanOne(t *testing.T) {
	assert.Equal(t, 13, fibonacci(7))
}

func TestHandlerNegativeNumber(t *testing.T) {
	responseNumber, responseError := handler(-1)
	assert.Equal(t, -1, responseNumber)
	assert.Equal(t, errors.New("Input must be a positive number"), responseError)
}

func TestHandlerPositiveNumber(t *testing.T) {
	responseNumber, responseError := handler(5)
	assert.Equal(t, 5, responseNumber)
	assert.Nil(t, responseError)
}
```

To create the function in AWS Lambda along with all the necessary AWS services, I used Terraform. An S3 bucket is needed to store all the deployment packages generated through the development lifecycle of the Lambda function:

```hcl
// S3 bucket
resource "aws_s3_bucket" "bucket" {
  bucket = "${var.bucket}"
  acl    = "private"
}
```

The build server needs to interact with the S3 bucket and Lambda functions.
Therefore, an IAM instance role must be created with S3 and Lambda permissions:

```hcl
// Jenkins slave instance profile
resource "aws_iam_instance_profile" "worker_profile" {
  name = "JenkinsWorkerProfile"
  role = "${aws_iam_role.worker_role.name}"
}

resource "aws_iam_role" "worker_role" {
  name = "JenkinsBuildRole"
  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_policy" "s3_policy" {
  name = "PushToS3Policy"
  path = "/"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "${aws_s3_bucket.bucket.arn}/*"
    }
  ]
}
EOF
}

resource "aws_iam_policy" "lambda_policy" {
  name = "DeployLambdaPolicy"
  path = "/"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "lambda:UpdateFunctionCode",
        "lambda:PublishVersion",
        "lambda:UpdateAlias"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "worker_s3_attachment" {
  role       = "${aws_iam_role.worker_role.name}"
  policy_arn = "${aws_iam_policy.s3_policy.arn}"
}

resource "aws_iam_role_policy_attachment" "worker_lambda_attachment" {
  role       = "${aws_iam_role.worker_role.name}"
  policy_arn = "${aws_iam_policy.lambda_policy.arn}"
}
```

An IAM role is needed for the Lambda function as well:

```hcl
// Lambda IAM role
resource "aws_iam_role" "lambda_role" {
  name = "FibonacciFunctionRole"
  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
```

Finally, a Go-based Lambda function will be created with the following properties:

```hcl
// Lambda function
resource "aws_lambda_function" "function" {
  filename      = "deployment.zip"
  function_name = "Fibonacci"
  role          = "${aws_iam_role.lambda_role.arn}"
  handler       = "main"
  runtime       = "go1.x"
}
```

Next, build the deployment package with the following commands:

```bash
# Build linux binary
GOOS=linux go build -o main main.go

# Create a zip file
zip deployment.zip main
```

Then, issue the terraform apply command to create the resources:

Sign in to the AWS Management Console and navigate to the Lambda console; a new function called "Fibonacci" should have been created:

You can test it out by mocking the input from the "Select a test event" dropdown list. If you click the "Test" button, the Fibonacci number of 7 is returned (a CLI-based check is also sketched just below):

So far, our function is working as expected. However, how can we ensure that changes to our codebase don't break things? That's where CI/CD comes into play: the idea is to make all code changes and features go through a pipeline before integrating them into the master branch and deploying to production.
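Outside the console, you can run the same check from the command line. This is a minimal sketch, assuming the function name and region used above and AWS CLI v1 (CLI v2 additionally needs --cli-binary-format raw-in-base64-out to pass a raw JSON payload):

```bash
# Invoke the function with the integer 7 as the event payload
aws lambda invoke --function-name Fibonacci \
    --payload '7' \
    --region eu-west-3 response.json

# The response file should contain 13
cat response.json
```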
You need a Jenkins cluster with at least a single worker (with Go preinstalled). You can follow my previous post for a step-by-step guide on how to build a Jenkins cluster on AWS from scratch: Deploy a Jenkins Cluster on AWS — Mohamed Labouardy — Medium

Prior to the build, the IAM instance role (created with Terraform) granting write access to S3 and update operations on Lambda must be attached to the Jenkins workers:

Jump back to the Jenkins dashboard, create a new multi-branch project, and configure the GitHub repository where the source code is versioned, as follows:

Create a new file called Jenkinsfile; it defines the set of steps that will be executed on Jenkins (this definition file must be committed to the Lambda function's code repository):

```groovy
def bucket = 'deployment-packages-mlabouardy'
def functionName = 'Fibonacci'
def region = 'eu-west-3'

node('slaves'){
    stage('Checkout'){
        checkout scm
    }

    stage('Test'){
        sh 'go get -u github.com/golang/lint/golint'
        sh 'go get -t ./...'
        sh 'golint -set_exit_status'
        sh 'go vet .'
        sh 'go test .'
    }

    stage('Build'){
        sh 'GOOS=linux go build -o main main.go'
        sh "zip ${commitID()}.zip main"
    }

    stage('Push'){
        sh "aws s3 cp ${commitID()}.zip s3://${bucket}"
    }

    stage('Deploy'){
        sh "aws lambda update-function-code --function-name ${functionName} \
                --s3-bucket ${bucket} \
                --s3-key ${commitID()}.zip \
                --region ${region}"
    }
}

def commitID() {
    sh 'git rev-parse HEAD > .git/commitID'
    def commitID = readFile('.git/commitID').trim()
    sh 'rm .git/commitID'
    commitID
}
```

The pipeline is divided into 5 stages:

- Checkout: clone the GitHub repository.
- Test: check whether the code is well formatted and follows Go best practices, and run the unit tests.
- Build: build a binary and create the deployment package.
- Push: store the deployment package (.zip file) in the S3 bucket.
- Deploy: update the Lambda function's code with the new artifact.

Note the use of the Git commit ID as the name of the deployment package; it gives each release a meaningful, significant name and makes it possible to roll back to a specific commit if things go wrong.

Once the project is saved, a new pipeline should be created as follows:

Once the pipeline completes, all stages should pass, as shown in the next screenshot:

At the end, Jenkins updates the Lambda function's code with the update-function-code command:

If you open the S3 console and click on the bucket used by the pipeline, a new deployment package should be stored with a key name identical to the commit ID:

Finally, to make Jenkins trigger the build when you push to the code repository, click on "Settings" in your GitHub repository, create a new webhook under "Webhooks", and fill it in with a URL similar to the following:

In case you're using Git branching workflows (you should), Jenkins will automatically discover the new branches:

Hence, you should separate your deployment environments to test new changes without impacting production. Therefore, having multiple versions of your Lambda function makes sense. Update the Jenkinsfile to add a new stage that publishes a new Lambda function version every time you push (or merge) to the master branch:

```groovy
def bucket = 'deployment-packages-mlabouardy'
def functionName = 'Fibonacci'
def region = 'eu-west-3'

node('slaves'){
    stage('Checkout'){
        checkout scm
    }

    stage('Test'){
        sh 'go get -u github.com/golang/lint/golint'
        sh 'go get -t ./...'
        sh 'golint -set_exit_status'
        sh 'go vet .'
        sh 'go test .'
    }

    stage('Build'){
        sh 'GOOS=linux go build -o main main.go'
        sh "zip ${commitID()}.zip main"
    }

    stage('Push'){
        sh "aws s3 cp ${commitID()}.zip s3://${bucket}"
    }

    stage('Deploy'){
        sh "aws lambda update-function-code --function-name ${functionName} \
                --s3-bucket ${bucket} \
                --s3-key ${commitID()}.zip \
                --region ${region}"
    }

    if (env.BRANCH_NAME == 'master') {
        stage('Publish') {
            sh "aws lambda publish-version --function-name ${functionName} \
                    --region ${region}"
        }
    }
}

def commitID() {
    sh 'git rev-parse HEAD > .git/commitID'
    def commitID = readFile('.git/commitID').trim()
    sh 'rm .git/commitID'
    commitID
}
```

On the master branch, a new stage called "Publish" will be added:

As a result, a new version will be published based on the master branch's source code:

However, in an agile environment (Extreme Programming, for example), the development team needs to release iterative versions of the system often, to help the customer gain confidence in the progress of the project, receive feedback, and detect bugs at an earlier stage of development. As a result, small releases can be frequent. AWS services that use the Lambda function as a downstream resource (API Gateway, for example) would need to be updated every time a new version is published, which means operational overhead and downtime. Use aliases instead!

An alias is a pointer to a specific version; it allows you to promote a function from one environment to another (such as staging to production). Aliases are mutable, unlike versions, which are immutable.

That being said, create an alias for the production environment that points to the latest published version, using the AWS command line:

```bash
aws lambda create-alias --function-name Fibonacci \
    --name production --function-version 2 \
    --region eu-west-3
```

Rolling back works the same way: simply repoint the alias to an older version (a sketch follows at the end of this post). You can now easily promote the latest published version into production by updating the production alias pointer's value:

```groovy
def bucket = 'deployment-packages-mlabouardy'
def functionName = 'Fibonacci'
def region = 'eu-west-3'

node('slaves'){
    stage('Checkout'){
        checkout scm
    }

    stage('Test'){
        sh 'go get -u github.com/golang/lint/golint'
        sh 'go get -t ./...'
        sh 'golint -set_exit_status'
        sh 'go vet .'
        sh 'go test .'
    }

    stage('Build'){
        sh 'GOOS=linux go build -o main main.go'
        sh "zip ${commitID()}.zip main"
    }

    stage('Push'){
        sh "aws s3 cp ${commitID()}.zip s3://${bucket}"
    }

    stage('Deploy'){
        sh "aws lambda update-function-code --function-name ${functionName} \
                --s3-bucket ${bucket} \
                --s3-key ${commitID()}.zip \
                --region ${region}"
    }

    if (env.BRANCH_NAME == 'master') {
        stage('Publish') {
            def lambdaVersion = sh(
                script: "aws lambda publish-version --function-name ${functionName} --region ${region} | jq -r '.Version'",
                returnStdout: true
            ).trim()
            sh "aws lambda update-alias --function-name ${functionName} --name production --region ${region} --function-version ${lambdaVersion}"
        }
    }
}

def commitID() {
    sh 'git rev-parse HEAD > .git/commitID'
    def commitID = readFile('.git/commitID').trim()
    sh 'rm .git/commitID'
    commitID
}
```

Credit to Mohamed Labouardy via CloudGuru
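A short addendum on the rollbacks and blue/green deployments mentioned in the intro: both are handled through the same production alias. This is a hedged sketch, with illustrative version numbers that are not taken from the walkthrough above:

```bash
# Roll back: repoint the production alias to a previously published version
aws lambda update-alias --function-name Fibonacci \
    --name production --function-version 1 \
    --region eu-west-3

# Blue/green style canary: keep the alias on version 2 but shift 10% of the
# traffic to version 3 before promoting it fully
aws lambda update-alias --function-name Fibonacci \
    --name production --function-version 2 \
    --routing-config '{"AdditionalVersionWeights":{"3":0.1}}' \
    --region eu-west-3
```

Because downstream services only ever reference the alias, neither operation requires touching API Gateway or redeploying anything.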
imadnasri_
Mar 24, 2019
In Network & Security
DNS, or the Domain Name System, translates human-readable domain names (for example, www.amazon.com) into machine-readable IP addresses (for example, 192.0.2.44).

DNS Basics

All computers on the Internet, from your smartphone or laptop to the servers that serve content for massive retail websites, find and communicate with one another by using numbers. These numbers are known as IP addresses. When you open a web browser and go to a website, you don't have to remember and enter a long number. Instead, you can enter a domain name like example.com and still end up in the right place.

A DNS service such as Amazon Route 53 is a globally distributed service that translates human-readable names like www.example.com into the numeric IP addresses, like 192.0.2.1, that computers use to connect to each other. The Internet's DNS system works much like a phone book by managing the mapping between names and numbers. DNS servers translate requests for names into IP addresses, controlling which server an end user will reach when they type a domain name into their web browser. These requests are called queries.

Types of DNS Service

Authoritative DNS: An authoritative DNS service provides an update mechanism that developers use to manage their public DNS names. It then answers DNS queries, translating domain names into IP addresses so computers can communicate with each other. Authoritative DNS has the final authority over a domain and is responsible for providing answers to recursive DNS servers with the IP address information. Amazon Route 53 is an authoritative DNS system.

Recursive DNS: Clients typically do not make queries directly to authoritative DNS services. Instead, they generally connect to another type of DNS service known as a resolver, or a recursive DNS service. A recursive DNS service acts like a hotel concierge: while it doesn't own any DNS records, it acts as an intermediary that can get the DNS information on your behalf. If a recursive DNS service has the DNS reference cached, or stored for a period of time, it answers the DNS query by providing the source or IP information. If not, it passes the query to one or more authoritative DNS servers to find the information.

How Does DNS Route Traffic to Your Web Application?

The following diagram gives an overview of how recursive and authoritative DNS services work together to route an end user to your website or application:

1. A user opens a web browser, enters www.example.com in the address bar, and presses Enter.
2. The request for www.example.com is routed to a DNS resolver, which is typically managed by the user's Internet service provider (ISP), such as a cable Internet provider, a DSL broadband provider, or a corporate network.
3. The DNS resolver for the ISP forwards the request for www.example.com to a DNS root name server.
4. The DNS resolver for the ISP forwards the request for www.example.com again, this time to one of the TLD name servers for .com domains. The name server for .com domains responds to the request with the names of the four Amazon Route 53 name servers that are associated with the example.com domain.
5. The DNS resolver for the ISP chooses an Amazon Route 53 name server and forwards the request for www.example.com to that name server.
6. The Amazon Route 53 name server looks in the example.com hosted zone for the www.example.com record, gets the associated value, such as the IP address for a web server, 192.0.2.44, and returns the IP address to the DNS resolver.
7. The DNS resolver for the ISP finally has the IP address that the user needs. The resolver returns that value to the web browser. The DNS resolver also caches (stores) the IP address for example.com for an amount of time that you specify, so that it can respond more quickly the next time someone browses to example.com. For more information, see time to live (TTL).
8. The web browser sends a request for www.example.com to the IP address that it got from the DNS resolver. This is where your content is, for example, a web server running on an Amazon EC2 instance or an Amazon S3 bucket that's configured as a website endpoint.
9. The web server or other resource at 192.0.2.44 returns the web page for www.example.com to the web browser, and the web browser displays the page.

Original article - https://aws.amazon.com/route53/what-is-dns/
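As a hands-on addendum (not part of the original article), you can watch the recursive and authoritative steps described above with the dig command-line tool; www.example.com is just a placeholder domain here:

```bash
# Follow the full delegation chain yourself: root servers -> .com TLD servers
# -> the domain's authoritative name servers, bypassing the resolver's cache
dig +trace www.example.com

# Ask which name servers are authoritative for the domain
dig NS example.com +short

# A plain query goes through the recursive resolver configured on your machine
# (typically your ISP's), which answers from cache while the record's TTL lasts
dig www.example.com A +short
```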
imadnasri_
Mar 24, 2019
In DevOps Tools
In this article, I will discuss how to scale Jenkins with Kubernetes, which components are required, and how to fit those components together into a complete, scalable solution.

Note: I will use AWS as the example and its terminology, but the concepts can easily be applied to other cloud vendors. A basic understanding of Kubernetes is required: what a pod, deployment, service, and ingress are, and the basic commands. This article will give you a fair idea but won't go very deep into each step; I recommend reading the official documentation for a deeper understanding.

Jenkins has been a popular choice for CI/CD and has become a great tool for automating deployments across different environments. With a modern microservices-based architecture, different teams with frequent commit cycles need to test their code in different environments before raising a pull request, so we need Jenkins to work as fast as possible. Below are a few important components we need to consider before we start designing the solution on top of Kubernetes:

- Setting up Jenkins in the Kubernetes cluster
- Jenkins access from outside the cluster
- Configuring the Kubernetes plugin in Jenkins
- Pod scheduling in the Kubernetes cluster
- Capacity and cost management

Step 1: Setting up Jenkins in the Kubernetes cluster

Before starting, we should have a Kubernetes cluster running in a separate VPC. A separate VPC is not mandatory, but we can keep all the DevOps tools that are common to the different environments in their own VPC and then use VPC peering connections to allow access between them. Below is the reference diagram:

To set up Jenkins inside Kubernetes, create a jenkins-deploy.yaml file with the following content:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - name: jenkins-leader
        image: jenkins
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-sock-volume
          mountPath: /var/run/docker.sock
        resources:
          requests:
            memory: "1024Mi"
            cpu: "0.5"
          limits:
            memory: "1024Mi"
            cpu: "0.5"
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
      volumes:
      - name: jenkins-home
        emptyDir: {}
      - name: docker-sock-volume
        hostPath:
          path: /var/run/docker.sock
```

Now expose Jenkins as a service by creating another file, jenkins-svc.yaml, with the following content:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-master-svc
  labels:
    app: jenkins-master
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 50000
    targetPort: 50000
    protocol: TCP
    name: slave
  selector:
    app: jenkins-master
```

Now apply both to the Kubernetes cluster with the following commands:

```bash
kubectl create -f jenkins-deploy.yaml
kubectl create -f jenkins-svc.yaml
```

We now have Jenkins running inside the cluster. You can access it using the kubectl proxy command, but since we also need to access Jenkins from outside the cluster, let's set that up.

Step 2: Jenkins access from outside

If we define the service type as LoadBalancer in the Jenkins service file, it will spin up an ELB instance in the cloud and you can access Jenkins through the ELB's address. The problem with this approach is that if you want to expose some other service from the cluster and follow the same approach, you end up with another ELB instance, which increases cost.
To avoid this, Kubernetes supports a feature named ingress.

Ingress: a collection of rules by which outside traffic can reach the services deployed in Kubernetes. To support ingress we also need an ingress controller; we will use the nginx-ingress controller, which is supported by NGINX. Below is a sample file which can be deployed in Kubernetes as a Deployment:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
```

Now expose it as a service. We will use the LoadBalancer type, which means it will spin up an ELB in AWS; the ELB endpoint is what we can use for outside access:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```

Now we need to define some rules so the ingress controller can decide which service to call. Before defining rules, we need to create a subdomain mapped to the ELB endpoint; let's say we mapped jenkins.yourcompany.com to it. Now let's write up the ingress and use this domain as the host name:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  rules:
  - host: jenkins.yourcompany.com
    http:
      paths:
      - backend:
          serviceName: jenkins-master-svc
          servicePort: 80
```

With this ingress in place, whenever a request comes to jenkins.yourcompany.com it goes to the ELB first; the ELB sends it to the NGINX controller, which reads the ingress and routes the traffic to the jenkins-master-svc service. You can define more ingresses and map them to your services, so a single ELB can manage the traffic to all services hosted in the Kubernetes cluster.

Step 3: Configuring the Kubernetes plugin

You should now be able to access Jenkins through your subdomain. Initially, set it up as you normally would and configure the Kubernetes plugin: jenkinsci/kubernetes-plugin. That link has all the information on how to set up the plugin in Jenkins. Since Jenkins is installed inside the Kubernetes cluster, we can reach the Kubernetes API at https://kubernetes.default.svc.cluster.local; if you install Jenkins outside the cluster, the proper endpoint has to be defined. In the configuration, only three things need to be filled in: the Kubernetes URL, the Jenkins URL, and the credentials for Kubernetes. We don't need to set up a pod template, as we will create them dynamically in the next step.

Step 4: Pod scheduling in the cluster

To handle a number of jobs that keeps increasing over time, we can plan for:

a. Vertical scaling: adding more cores and memory to the Jenkins master.
b. Horizontal scaling: adding more slave nodes that coordinate with the master and run the jobs.

While both approaches solve the scaling issue, cost also increases with them. This is where we can use Kubernetes to do on-demand horizontal scaling of Jenkins. In Kubernetes, we set up Jenkins in master/slave mode, where each job can be assigned to run on a specific agent. An agent in our case is a pod running on a slave node. When a job needs to run, it creates its pod, executes the job in it, and once done the pod gets terminated. This solves the problem of on-demand scaling; below is an example of how to set this up.

For defining a pipeline, Jenkins supports two types of syntax: a. scripted and b. declarative. The declarative syntax is the improved version and should be preferred when defining a pipeline. In the plugin setup we only added the Kubernetes and Jenkins endpoints; the rest we configure in the pipeline itself, specifying what type of pod the job will execute in. In most cases you will want your own slave image rather than a public one, so assuming you have hosted that image in a registry, below is what can be used.

You can create a shared library with all the common functions used in your pipelines. For example, the function below returns the content of the YAML file used to run the pod on the Kubernetes cluster:

```groovy
def call(){
    agent = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: jenkins-slave
spec:
  containers:
  - name: jenkins-slave
    image: xxx.com/jenkins:slave
    workingDir: /home/jenkins
    volumeMounts:
    - name: docker-sock-volume
      mountPath: /var/run/docker.sock
    command:
    - cat
    tty: true
  volumes:
  - name: docker-sock-volume
    hostPath:
      path: /var/run/docker.sock
"""
    return agent
}
```

Then use these functions in the pipeline (if the call() function above lives in a shared-library file such as vars/getAgent.groovy, it can be invoked as getAgent(), as below). The following sample pipeline runs a pod in the Kubernetes cluster based on the custom Jenkins slave image, and all the defined steps get executed in that container:

```groovy
pipeline {
    agent {
        kubernetes {
            label 'jenkins-slave'
            defaultContainer 'jenkins-slave'
            yaml getAgent()
        }
    }
    stages {
        stage ('stage1'){
            steps {
                // Define custom steps as per requirement
            }
        }
    }
}
```

Step 5: Capacity and cost management

So far, we have Jenkins installed on Kubernetes, and each Jenkins job creates a container, runs its code in it, and terminates. Another important aspect we need to plan for: containers need nodes to run on, and we need a system where nodes are created on demand and removed when not in use. This is where the cluster autoscaler (kubernetes/autoscaler) is helpful.

The purpose of the cluster autoscaler is to watch for events where a pod has failed to start due to insufficient resources and add a node to the cluster so the pods can run. It also keeps monitoring for nodes that don't have any pods running on them, so those nodes can be removed from the cluster. This solves the problem of on-demand scale-out and scale-in very well; all we need to do is configure it in our cluster. In the configuration we define the minimum and maximum node counts, so the scale-out operation stays within limits and we always have a minimum number of nodes ready to execute jobs faster (a sketch of the typical flags is shown at the end of this post). We can also use spot instances instead of on-demand nodes to create these nodes, which saves cost further.

So, this is it: we have a scalable Jenkins cluster in place where, with each trigger of a Jenkins job, a pod gets created in the Kubernetes cluster and destroyed when it is done. Scaling of the cluster is handled by the autoscaler, and ingress is used to expose Jenkins outside the cluster.
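For reference, here is a hedged sketch of how the cluster autoscaler is typically started on AWS. The Auto Scaling group name and node range are placeholders, and in practice these flags appear as container args in the autoscaler's Deployment manifest:

```bash
# Scale the worker Auto Scaling group between 2 and 10 nodes on demand
# (jenkins-workers-asg is a placeholder ASG name)
cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=2:10:jenkins-workers-asg \
  --scale-down-unneeded-time=10m \
  --skip-nodes-with-local-storage=false
```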
One part we haven't covered is Helm, the package manager for Kubernetes. Once we are comfortable with these concepts, we should be deploying to Kubernetes with Helm charts only; more on this later. Thanks to Gaurav Vashishth for this great guide.
