Forum Posts

Shaked Yosef
Jun 24, 2020
In DevOps Tools
Using Ansible-Editor (https://ansible-editor.com), you’ll be able to build playbooks easily and quickly. READ MORE ⬇️ https://medium.com/@shakedbraimokyosef/ansible-is-easier-than-ever-6182fa49b004
The Ansible-Editor is here, and it's totally free!
Shaked Yosef
May 05, 2020
In DevOps Tools
Pulumi has released version 2.0 of its open source platform for automating the provisioning of IT infrastructure in the cloud! https://www.pulumi.com/superpowers/ What do you think about that?
Pulumi released the 2.0 release of the Open-Source platform
Shaked Yosef
Sep 08, 2019
In DevOps Tools
A quick overview

Kacidi is a tool that integrates with your infrastructure-as-code and protects your infrastructure from human mistakes and security breaches. Kacidi detects issues in your infrastructure-as-code automatically, before deployment.

What is Infrastructure as Code?

Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning the DevOps team uses for source code. Like the principle that the same source code generates the same binary, an IaC model generates the same environment every time it is applied. IaC is a key DevOps practice and is used in conjunction with continuous delivery.

Infrastructure as Code evolved to solve the problem of environment drift in the release pipeline. Without IaC, teams must maintain the settings of individual deployment environments. Over time, each environment becomes a snowflake: a unique configuration that cannot be reproduced automatically. Inconsistency among environments leads to issues during deployments. With snowflakes, administration and maintenance of infrastructure involve manual processes that are hard to track and contribute to errors.

IaC is great, but it can bring some issues to your organization...

As the pace of change and the demand for organizational agility keep increasing, and as DevOps practices like IaC and immutable infrastructure become the new norm, it is crucial that we think critically about our existing processes and how to better align security and DevOps. As more people get involved in maintaining and creating infrastructure, we need to keep malicious actors and simple human errors out of the cloud infrastructure automatically, and enforce best practices right from the beginning - before a breach or a disruption to service happens.
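The "same model generates the same environment every time" property is idempotency, and it is worth seeing concretely. Here is a minimal, hypothetical sketch of what an IaC engine guarantees, in plain Python (the resource names and the `apply` helper are made up for illustration, not from any real tool):

```python
# Illustrative sketch of IaC idempotency: applying the same declarative
# model to an environment always converges to the same result.
# All names here are hypothetical, not from any real IaC tool.

def apply(model, environment):
    """Make the environment match the declarative model."""
    for resource, config in model.items():
        environment[resource] = config          # create or update in place
    for resource in list(environment):
        if resource not in model:
            del environment[resource]           # remove anything not declared
    return environment

model = {"vpc": {"cidr": "10.0.0.0/16"}, "lb": {"type": "application"}}

env = apply(model, {})            # fresh environment
drifted = dict(env, stray={})     # simulate a manual, out-of-band change
healed = apply(model, drifted)    # re-applying removes the drift

assert env == healed == model     # same model -> same environment, every time
```

Re-applying the model is always safe: the drifted "snowflake" change is reconciled away instead of accumulating.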
Kacidi solves these issues and makes IaC safer and faster.

Which IaC platforms are supported by Kacidi?
- Terraform
- CloudFormation
- K8S
- GCP Deployment Manager
- Azure Resource Manager

Kacidi Key Features:
- Best-practices checks - perform best-practices and security checks.
- Drift-detection check - prevent conflicts in infrastructure by performing a drift detection before the merge.
- Change-set check - summarize the infrastructure change that will be applied as a result of merging the branch.
- Personalized policy - set a policy per user or group in your GitHub organization. Examples: “Restrict developer to create instance t2.small only”; “Only Admin can edit VPC details”.
- Automated workflow - create, update, and destroy environments automatically according to GitOps (deploy on merge).

🔹 Getting Started (5 min setup) 🔹
1. Sign up: https://kacidi.com/sign-up
2. Integrate Kacidi with one of your DevOps tools (GitHub, GitLab, Jenkins, CircleCI, etc.).
3. After the integration setup, Kacidi will notify you inside your pull requests about best-practices issues and conflicts in your infrastructure.
4. If you want to set a personalized policy, open the Kacidi editor and set policies per user / group. Using personalized policies you can avoid human errors that add up to unnecessary costs, security breaches, and even unintended architecture changes.

Start protecting your cloud infrastructure today ⬇ https://kacidi.com/sign-up
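Kacidi's personalized-policy engine is proprietary, but the idea of a per-user policy check like "restrict developer to t2.small only" can be sketched in a few lines of Python. Everything here - the policy format, role names, and the `check_change` helper - is hypothetical, not Kacidi's actual API:

```python
# Hypothetical sketch of a per-role IaC policy check, in the spirit of the
# examples above. This is NOT Kacidi's real policy format or API.

POLICIES = {
    "developer": {"allowed_instance_types": {"t2.small"}},
    "admin":     {"allowed_instance_types": None},  # None = unrestricted
}

def check_change(user_role, resource):
    """Return a list of policy violations for one proposed resource change."""
    violations = []
    allowed = POLICIES.get(user_role, {}).get("allowed_instance_types")
    if allowed is not None and resource.get("instance_type") not in allowed:
        violations.append(
            f"{user_role} may not create instance_type "
            f"{resource.get('instance_type')!r}"
        )
    return violations

# A developer trying to launch a large instance gets blocked pre-merge:
print(check_change("developer", {"instance_type": "m5.4xlarge"}))
print(check_change("admin", {"instance_type": "m5.4xlarge"}))  # []
```

The point is that the check runs on the pull request, before anything is applied, so the costly or insecure change never reaches the cloud.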
Kacidi | Bullet-Proof Cloud Infrastructure
Shaked Yosef
Jun 24, 2019
In DevOps Tools
Author: Yevgeniy Brikman

Update: we took this blog post series, expanded it, and turned it into a book called Terraform: Up & Running!

This is Part 1 of the Comprehensive Guide to Terraform series. In the intro to the series, we discussed why every company should be using infrastructure-as-code (IAC). In this post, we’re going to discuss why we picked Terraform as our IAC tool of choice.

If you search the Internet for “infrastructure-as-code”, it’s pretty easy to come up with a list of the most popular tools:

- Chef
- Puppet
- Ansible
- SaltStack
- CloudFormation
- Terraform

What’s not easy is figuring out which one of these you should use. All of these tools can be used to manage infrastructure as code. All of them are open source, backed by large communities of contributors, and work with many different cloud providers (with the notable exception of CloudFormation, which is closed source and AWS-only). All of them offer enterprise support. All of them are well documented, both in terms of official documentation and community resources such as blog posts and StackOverflow questions. So how do you decide?

What makes this even harder is that most of the comparisons you find online between these tools do little more than list the general properties of each tool and make it sound like you could be equally successful with any of them. And while that’s technically true, it’s not helpful. It’s a bit like telling a programming newbie that you could be equally successful building a website with PHP, C, or Assembly — a statement that’s technically true, but one that omits a huge amount of information that would be incredibly useful in making a good decision. In this post, we’re going to dive into some very specific reasons for why we picked Terraform over the other IAC tools.
As with all technology decisions, it’s a question of trade-offs and priorities, and while your particular priorities may be different than ours, we hope that sharing our thought process will help you make your own decision. Here are the main trade-offs we considered:

- Configuration Management vs Orchestration
- Mutable Infrastructure vs Immutable Infrastructure
- Procedural vs Declarative
- Client/Server Architecture vs Client-Only Architecture

Configuration Management vs Orchestration

Chef, Puppet, Ansible, and SaltStack are all “configuration management” tools, which means they are designed to install and manage software on existing servers. CloudFormation and Terraform are “orchestration tools”, which means they are designed to provision the servers themselves, leaving the job of configuring those servers to other tools. These two categories are not mutually exclusive, as most configuration management tools can do some degree of provisioning and most orchestration tools can do some degree of configuration management. But the focus on configuration management or orchestration means that some of the tools are going to be a better fit for certain types of tasks.

In particular, we’ve found that if you use Docker or Packer, the vast majority of your configuration management needs are already taken care of. With Docker and Packer, you can create images (such as containers or virtual machine images) that have all the software your server needs already installed and configured (for more info on what makes Docker great, see here). Once you have such an image, all you need is a server to run it. And if all you need to do is provision a bunch of servers, then an orchestration tool like Terraform is typically going to be a better fit than a configuration management tool (here’s an example of how to use Terraform to deploy Docker on AWS).
Mutable Infrastructure vs Immutable Infrastructure

Configuration management tools such as Chef, Puppet, Ansible, and SaltStack typically default to a mutable infrastructure paradigm. For example, if you tell Chef to install a new version of OpenSSL, it’ll run the software update on your existing servers and the changes will happen in-place. Over time, as you apply more and more updates, each server builds up a unique history of changes. This often leads to a phenomenon known as configuration drift, where each server becomes slightly different than all the others, leading to subtle configuration bugs that are difficult to diagnose and nearly impossible to reproduce.

If you’re using an orchestration tool such as Terraform to deploy machine images created by Docker or Packer, then every “change” is actually a deployment of a new server (just like every “change” to a variable in functional programming actually returns a new variable). For example, to deploy a new version of OpenSSL, you would create a new image using Packer or Docker with the new version of OpenSSL already installed, deploy that image across a set of totally new servers, and then undeploy the old servers. This approach reduces the likelihood of configuration drift bugs, makes it easier to know exactly what software is running on a server, and allows you to trivially deploy any previous version of the software at any time. Of course, it’s possible to force configuration management tools to do immutable deployments too, but it’s not the idiomatic approach for those tools, whereas it’s a natural way to use orchestration tools.

Procedural vs Declarative

Chef and Ansible encourage a procedural style where you write code that specifies, step-by-step, how to achieve some desired end state.
Terraform, CloudFormation, SaltStack, and Puppet all encourage a more declarative style where you write code that specifies your desired end state, and the IAC tool itself is responsible for figuring out how to achieve that state.

For example, let’s say you wanted to deploy 10 servers (“EC2 Instances” in AWS lingo) to run v1 of an app. Here is a simplified example of an Ansible template that does this with a procedural approach:

- ec2:
    count: 10
    image: ami-v1
    instance_type: t2.micro

And here is a simplified example of a Terraform template that does the same thing using a declarative approach:

resource "aws_instance" "example" {
  count         = 10
  ami           = "ami-v1"
  instance_type = "t2.micro"
}

Now at the surface, these two approaches may look similar, and when you initially execute them with Ansible or Terraform, they will produce similar results. The interesting thing is what happens when you want to make a change. For example, imagine traffic has gone up and you want to increase the number of servers to 15. With Ansible, the procedural code you wrote earlier is no longer useful; if you just updated the number of servers to 15 and reran that code, it would deploy 15 new servers, giving you 25 total! So instead, you have to be aware of what is already deployed and write a totally new procedural script to add the 5 new servers:

- ec2:
    count: 5
    image: ami-v1
    instance_type: t2.micro

With declarative code, since all you do is declare the end state you want, and Terraform figures out how to get to that end state, Terraform will also be aware of any state it created in the past. Therefore, to deploy 5 more servers, all you have to do is go back to the same Terraform template and update the count from 10 to 15:

resource "aws_instance" "example" {
  count         = 15
  ami           = "ami-v1"
  instance_type = "t2.micro"
}

If you executed this template, Terraform would realize it had already created 10 servers and therefore that all it needed to do was create 5 new servers.
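The count example above can be simulated in a few lines of Python: a declarative engine diffs the desired state against the state it recorded, so the same template works for any change. This is a toy model of the idea, not Terraform's actual planning algorithm:

```python
# Toy model of declarative reconciliation (how Terraform-style tools think).
# Not Terraform's real algorithm - just the count-diffing idea from the text.

def plan(desired_count, current_count):
    """Return (to_add, to_destroy) needed to reach the desired count."""
    delta = desired_count - current_count
    return (delta, 0) if delta >= 0 else (0, -delta)

state = 0                        # nothing deployed yet
to_add, _ = plan(10, state)      # first apply: template says count = 10
state += to_add                  # -> 10 servers running

to_add, _ = plan(15, state)      # edit the SAME template: count = 15
assert to_add == 5               # engine adds only the 5 missing servers
state += to_add

_, to_destroy = plan(10, state)  # scaling back down reuses the same template
assert to_destroy == 5
```

A procedural tool has no `current_count`: each script encodes the delta itself, which is exactly why the old scripts stop being useful after the first run.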
In fact, before running this template, you can use Terraform’s “plan” command to preview what changes it would make:

> terraform plan
+ aws_instance.example.11
    ami:           "ami-v1"
    instance_type: "t2.micro"
+ aws_instance.example.12
    ami:           "ami-v1"
    instance_type: "t2.micro"
+ aws_instance.example.13
    ami:           "ami-v1"
    instance_type: "t2.micro"
+ aws_instance.example.14
    ami:           "ami-v1"
    instance_type: "t2.micro"
+ aws_instance.example.15
    ami:           "ami-v1"
    instance_type: "t2.micro"
Plan: 5 to add, 0 to change, 0 to destroy.

Now what happens when you want to deploy v2 of the service? With the procedural approach, both of your previous Ansible templates are again not useful, so you have to write yet another template to track down the 10 servers you deployed previously (or was it 15 now?) and carefully update each one to the new version. With the declarative approach of Terraform, you go back to the exact same template once again and simply change the ami version number to v2:

resource "aws_instance" "example" {
  count         = 15
  ami           = "ami-v2"
  instance_type = "t2.micro"
}

Obviously, the above examples are simplified. Ansible does allow you to use tags to search for existing EC2 instances before deploying new ones (e.g. using the instance_tags and count_tag parameters), but having to manually figure out this sort of logic for every single resource you manage with Ansible, based on each resource’s past history, can be surprisingly complicated (e.g. finding existing instances not only by tag, but also image version, availability zone, etc). This highlights two major problems with procedural IAC tools:

1. When dealing with procedural code, the state of the infrastructure is not fully captured in the code. Reading through the three Ansible templates we created above is not enough to know what’s deployed. You’d also have to know the order in which we applied those templates. Had we applied them in a different order, we might end up with different infrastructure, and that’s not something you can see in the code base itself. In other words, to reason about an Ansible or Chef codebase, you have to know the full history of every change that has ever happened.

2. The reusability of procedural code is inherently limited because you have to manually take into account the current state of the codebase. Since that state is constantly changing, code you used a week ago may no longer be usable because it was designed to modify a state of your infrastructure that no longer exists. As a result, procedural code bases tend to grow large and complicated over time.

On the other hand, with the kind of declarative approach used in Terraform, the code always represents the latest state of your infrastructure. At a glance, you can tell what’s currently deployed and how it’s configured, without having to worry about history or timing. This also makes it easy to create reusable code, as you don’t have to manually account for the current state of the world. Instead, you just focus on describing your desired state, and Terraform figures out how to get from one state to the other automatically. As a result, Terraform codebases tend to stay small and easy to understand.

Of course, there are downsides to declarative languages too. Without access to a full programming language, your expressive power is limited. For example, some types of infrastructure changes, such as a rolling, zero-downtime deployment, are hard to express in purely declarative terms. Similarly, without the ability to do “logic” (e.g. if-statements, loops), creating generic, reusable code can be tricky (especially in CloudFormation).
Fortunately, Terraform provides a number of powerful primitives — such as input variables, output variables, modules, create_before_destroy, count, and interpolation functions — that make it possible to create clean, configurable, modular code even in a declarative language. We’ll discuss these tools more in Part 4, How to create reusable infrastructure with Terraform modules, and Part 5, Terraform tips & tricks: loops, if-statements, and pitfalls.

Client/Server Architecture vs Client-Only Architecture

Chef, Puppet, and SaltStack all use a client/server architecture by default. The client, which could be a web UI or a CLI tool, is what you use to issue commands (e.g. “deploy X”). Those commands go to a server, which is responsible for executing your commands and storing the state of the system. To execute those commands, the server talks to agents, which must be running on every server you want to configure. This has a number of downsides:

- You have to install and run extra software on every one of your servers.
- You have to deploy an extra server (or even a cluster of servers for high availability) just for configuration management.
- You not only have to install this extra software and hardware, but you also have to maintain it, upgrade it, make backups of it, monitor it, and restore it in case of outages.
- Since the client, server, and agents all need to communicate over the network, you have to open extra ports for them, and configure ways for them to authenticate to each other, all of which increases your surface area to attackers.
- All of these extra moving parts introduce a large number of new failure modes into your infrastructure.
When you get a bug report at 3AM, you’ll have to figure out if it’s a bug in your application code, or your IAC code, or the configuration management client software, or the configuration management agent software, or the configuration management server software, or the ports all those configuration management pieces use to communicate, or the way they authenticate to each other, or…

CloudFormation, Ansible, and Terraform use a client-only architecture. Actually, CloudFormation is also client/server, but AWS handles all the server details so transparently that, as an end user, you only have to think about the client code. The Ansible client works by connecting directly to your servers over SSH. Terraform uses cloud provider APIs to provision infrastructure, so there are no new authentication mechanisms beyond what you’re using with the cloud provider already, and there is no need for direct access to your servers. We found this to be the best option in terms of ease-of-use, security, and maintainability.

Conclusion

Putting it all together, below is a table that shows how the most popular IAC tools stack up:

At Gruntwork, what we wanted was an open source, cloud-agnostic orchestration tool that supported immutable infrastructure, a declarative language, and a client-only architecture. From the table above, Terraform is the only tool that meets all of our criteria.

Of course, Terraform isn’t perfect. It’s younger and less mature than all the other tools on the list: whereas Puppet came out in 2005, Chef in 2009, SaltStack and CloudFormation in 2011, and Ansible in 2012, Terraform came out just 2 years ago, in 2014. Terraform is still pre 1.0.0 (latest version is 0.7.4), so there is no guarantee of a stable or backwards compatible API. Bugs are relatively common (e.g. there are over 800 open issues with the label “bug”), although the vast majority are harmless eventual consistency issues that go away when you rerun Terraform.
There are also some issues with how Terraform stores state, although there are effective solutions for those issues that we will discuss in Part 3: How to manage Terraform state. Despite its drawbacks, we find that Terraform’s strengths far outshine its weaknesses, and that no other IAC tool fits our criteria nearly as well. If Terraform sounds like something that may fit your criteria too, head over to Part 2: An Introduction to Terraform, to learn more. For an expanded version of this blog post series, pick up a copy of the book Terraform: Up & Running. If you need help with Terraform, DevOps practices, or AWS at your company, feel free to reach out to us at Gruntwork.
Shaked Yosef
Mar 29, 2019
In DevOps Tools
This great manual is taken from the AWS blog; with this integration you can use Slack ChatOps for code deployment.

Slack is widely used by DevOps and development teams to communicate status. Typically, when a build has been tested and is ready to be promoted to a staging environment, a QA engineer or DevOps engineer kicks off the deployment. Using Slack in a ChatOps collaboration model, the promotion can be done in a single click from a Slack channel. And because the promotion happens through a Slack channel, the whole development team knows what’s happening without checking email.

In this blog post, I will show you how to integrate AWS services with a Slack application. I use an interactive message button and incoming webhook to promote a stage with a single click.

To follow along with the steps in this post, you’ll need a pipeline in AWS CodePipeline. If you don’t have a pipeline, the fastest way to create one for this use case is to use AWS CodeStar. Go to the AWS CodeStar console and select the Static Website template (shown in the screenshot). AWS CodeStar will create a pipeline with an AWS CodeCommit repository and an AWS CodeDeploy deployment for you. After the pipeline is created, you will need to add a manual approval stage. You’ll also need to build a Slack app with webhooks and interactive components, write two Lambda functions, and create an API Gateway API and an SNS topic.

As you’ll see in the following diagram, when I make a change and merge a new feature into the master branch in AWS CodeCommit, the check-in kicks off my CI/CD pipeline in AWS CodePipeline. When CodePipeline reaches the approval stage, it sends a notification to Amazon SNS, which triggers an AWS Lambda function (ApprovalRequester). The Slack channel receives a prompt that looks like the following screenshot. When I click Yes to approve the build promotion, the approval result is sent to CodePipeline through API Gateway and Lambda (ApprovalHandler).
The pipeline continues on to deploy the build to the next environment.

Create a Slack app

For App Name, type a name for your app. For Development Slack Workspace, choose the name of your workspace. You’ll see in the following screenshot that my workspace is AWS ChatOps. After the Slack application has been created, you will see the Basic Information page, where you can create incoming webhooks and enable interactive components.

To add incoming webhooks:

1. Under Add features and functionality, choose Incoming Webhooks.
2. Turn the feature on by selecting Off, as shown in the following screenshot.
3. Now that the feature is turned on, choose Add New Webhook to Workspace. In the process of creating the webhook, Slack lets you choose the channel where messages will be posted.
4. After the webhook has been created, you’ll see its URL. You will use this URL when you create the Lambda function.

If you followed the steps in the post, the pipeline should look like the following.

Write the Lambda function for approval requests

This Lambda function is invoked by the SNS notification. It sends a request that consists of an interactive message button to the incoming webhook you created earlier. The following sample code sends the request to the incoming webhook. WEBHOOK_URL and SLACK_CHANNEL are the environment variables that hold values of the webhook URL that you created and the Slack channel where you want the interactive message button to appear.

# This function is invoked via SNS when the CodePipeline manual approval action starts.
# It will take the details from this approval notification and send an interactive
# message to Slack that allows users to approve or cancel the deployment.

import os
import json
import logging
import urllib.parse

from base64 import b64decode
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

# This is passed as a plain-text environment variable for ease of demonstration.
# Consider encrypting the value with KMS or use an encrypted parameter in
# Parameter Store for production deployments.
SLACK_WEBHOOK_URL = os.environ['SLACK_WEBHOOK_URL']
SLACK_CHANNEL = os.environ['SLACK_CHANNEL']

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
    message = event["Records"][0]["Sns"]["Message"]

    data = json.loads(message)
    token = data["approval"]["token"]
    codepipeline_name = data["approval"]["pipelineName"]

    slack_message = {
        "channel": SLACK_CHANNEL,
        "text": "Would you like to promote the build to production?",
        "attachments": [
            {
                "text": "Yes to deploy your build to production",
                "fallback": "You are unable to promote a build",
                "callback_id": "wopr_game",
                "color": "#3AA3E3",
                "attachment_type": "default",
                "actions": [
                    {
                        "name": "deployment",
                        "text": "Yes",
                        "style": "danger",
                        "type": "button",
                        "value": json.dumps({"approve": True, "codePipelineToken": token, "codePipelineName": codepipeline_name}),
                        "confirm": {
                            "title": "Are you sure?",
                            "text": "This will deploy the build to production",
                            "ok_text": "Yes",
                            "dismiss_text": "No"
                        }
                    },
                    {
                        "name": "deployment",
                        "text": "No",
                        "type": "button",
                        "value": json.dumps({"approve": False, "codePipelineToken": token, "codePipelineName": codepipeline_name})
                    }
                ]
            }
        ]
    }

    req = Request(SLACK_WEBHOOK_URL, json.dumps(slack_message).encode('utf-8'))
    response = urlopen(req)
    response.read()
    return None

Create an SNS topic

Create a topic and then create a subscription that invokes the ApprovalRequester Lambda function. You can configure the manual approval action in the pipeline to send a message to this SNS topic when an approval action is required. When the pipeline reaches the approval stage, it sends a notification to this SNS topic. SNS publishes a notification to all of the subscribed endpoints. In this case, the Lambda function is the endpoint. Therefore, it invokes and executes the Lambda function.
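The ApprovalRequester function digs the approval details out of the SNS envelope: the CodePipeline notification arrives as a JSON string inside `Records[0].Sns.Message`. A standalone sketch of that unwrapping, with made-up sample values (the `extract_approval` helper name is ours, not from the AWS post):

```python
# Minimal sketch of unwrapping a CodePipeline manual-approval notification
# from an SNS-triggered Lambda event. The sample values below are made up.
import json

def extract_approval(event):
    """Return (pipeline_name, approval_token) from an SNS Lambda event."""
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    return message["approval"]["pipelineName"], message["approval"]["token"]

sample_event = {
    "Records": [{
        "Sns": {
            # SNS delivers the notification body as a JSON *string*:
            "Message": json.dumps({
                "approval": {"pipelineName": "my-pipeline",  # hypothetical
                             "token": "1a2b3c4d"}            # hypothetical
            })
        }
    }]
}

print(extract_approval(sample_event))  # -> ('my-pipeline', '1a2b3c4d')
```

The token is what you later hand back to CodePipeline with the approval result, so it has to survive the round trip through Slack's button value.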
For information about how to create an SNS topic, see Create a Topic in the Amazon SNS Developer Guide.

Write the Lambda function for handling the interactive message button

This Lambda function is invoked by API Gateway. It receives the result of the interactive message button, whether or not the build promotion was approved. If approved, an API call is made to CodePipeline to promote the build to the next environment. If not approved, the pipeline stops and does not move to the next stage. The Lambda function code might look like the following. SLACK_VERIFICATION_TOKEN is the environment variable that contains your Slack verification token. You can find your verification token under Basic Information on the Slack manage app page. When you scroll down, you will see App Credentials; the verification token is found under that section.

# This function is triggered via API Gateway when a user acts on the
# Slack interactive message sent by approval_requester.py.

from urllib.parse import parse_qs
import json
import os
import boto3

SLACK_VERIFICATION_TOKEN = os.environ['SLACK_VERIFICATION_TOKEN']

# Triggered by API Gateway
# It kicks off a particular CodePipeline project
def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    body = parse_qs(event['body'])
    payload = json.loads(body['payload'][0])

    # Validate Slack token
    if SLACK_VERIFICATION_TOKEN == payload['token']:
        send_slack_message(json.loads(payload['actions'][0]['value']))
        # This will replace the interactive message with a simple text response.
        # You can implement a more complex message update if you would like.
        return {
            "isBase64Encoded": "false",
            "statusCode": 200,
            "body": "{\"text\": \"The approval has been processed\"}"
        }
    else:
        return {
            "isBase64Encoded": "false",
            "statusCode": 403,
            "body": "{\"error\": \"This request does not include a valid verification token.\"}"
        }

def send_slack_message(action_details):
    codepipeline_status = "Approved" if action_details["approve"] else "Rejected"
    codepipeline_name = action_details["codePipelineName"]
    token = action_details["codePipelineToken"]

    client = boto3.client('codepipeline')
    response_approval = client.put_approval_result(
        pipelineName=codepipeline_name,
        stageName='Approval',
        actionName='ApprovalOrDeny',
        result={'summary': '', 'status': codepipeline_status},
        token=token)
    print(response_approval)

Create the API Gateway API

1. In the Amazon API Gateway console, create a resource called InteractiveMessageHandler.
2. Create a POST method. For Integration type, choose Lambda Function. Select Use Lambda Proxy integration. From Lambda Region, choose a region. In Lambda Function, type a name for your function.
3. Deploy to a stage.

For more information, see Getting Started with Amazon API Gateway in the Amazon API Developer Guide.

Now go back to your Slack application and enable interactive components. To enable interactive components for the interactive message (Yes) button:

1. Under Features, choose Interactive Components.
2. Choose Enable Interactive Components.
3. Type a request URL in the text box. Use the invoke URL in Amazon API Gateway that will be called when the approval button is clicked.

Now that all the pieces have been created, run the solution by checking in a code change to your CodeCommit repo. That will release the change through CodePipeline. When CodePipeline comes to the approval stage, it will prompt your Slack channel to ask if you want to promote the build to your staging or production environment. Choose Yes and then see if your change was deployed to the environment.

Conclusion

That is it!
You have now created a Slack ChatOps solution using AWS CodeCommit, AWS CodePipeline, AWS Lambda, Amazon API Gateway, and Amazon Simple Notification Service. Now that you know how to do this Slack and CodePipeline integration, you can use the same method to interact with other AWS services using API Gateway and Lambda. You can also use Slack’s slash commands to initiate an action from a Slack channel, rather than responding in the way demonstrated in this post.

Original post: https://aws.amazon.com/blogs/devops/use-slack-chatops-to-deploy-your-code-how-to-integrate-your-pipeline-in-aws-codepipeline-with-your-slack-channel/
How to Integrate Your Pipeline in AWS CodePipeline with Your Slack Channel
Shaked Yosef
Sep 17, 2018
In CI CD
In this post I would like to share our automated JIRA workflow and how we built it - how we got to a point where developers no longer have to deal with unnecessary distractions (for example, moving tickets in JIRA and documenting within the JIRA ticket). All the actions that developers used to perform manually in JIRA are now fully automated.

A Short Overview

This is a basic ALM flow: every stage sends JIRA details about the code state (e.g. development, CI, CD, code review, monitoring, etc.). Based on these events, JIRA adds the development details to each ticket and moves the ticket to the relevant status in the project. For example: when a developer creates a branch for his JIRA ticket via Smart Commit, the ticket will move to the "In Progress" status and the branch link will be shown in the ticket details.

Let's continue with a detailed ALM flow. After setting up the automated JIRA workflow, the flow will look like this: every stage in the ALM flow updates the JIRA project via events and provides the right status about your code (e.g. where is your code right now? Code review? CI? Staging tests? Or what is the status of the code review / the CI?). In this workflow, developers do not operate JIRA at all - they just watch their tasks; all the actions (e.g. moving a ticket to the next status) happen automatically, driven by the events.

Event Triggers

Events #1, #2, #3 - received via Smart Commit.
Events #4, #5, #6 - received via API requests.

NOTE: you can receive event #2 in 2 ways:
1. In the CI stage, by the Jenkins JIRA plugin.
2. From GitHub after a merge action or a "status" event (received via the CI webhook in GitHub).

In addition to the JIRA ticket transitions, thanks to Smart Commit you can view updates from GitHub or Bitbucket within the relevant ticket.

HOW-TO?
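Smart Commits work by scanning the commit message for an issue key and a transition command (e.g. `PROJ-123 #in-progress fix login bug`). A toy parser shows the shape of what happens on Atlassian's side - this is an illustration of the mechanism, not Atlassian's implementation, and the issue key is a made-up example:

```python
# Toy parser for JIRA Smart Commit messages: extract the issue key and the
# transition command. Illustrative only - not Atlassian's actual code.
import re

def parse_smart_commit(message):
    """Return (issue_key, transition) from a commit message, if present."""
    key = re.search(r"\b([A-Z][A-Z0-9]+-\d+)\b", message)
    transition = re.search(r"#([\w-]+)", message)
    return (
        key.group(1) if key else None,
        transition.group(1) if transition else None,
    )

print(parse_smart_commit("PROJ-123 #in-progress fix login bug"))
# -> ('PROJ-123', 'in-progress')
```

JIRA matches the key to a ticket and the `#` command to a workflow transition, which is why event #1 (branch creation with the ticket key in the name) can move the ticket without anyone touching JIRA.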
These are the steps you need to take to get this great feature in your JIRA workflow:

1. To get the development details within your tickets and set triggers, enable the Smart Commit feature using this guide: https://confluence.atlassian.com/adminjiracloud/enable-smart-commits-776830276.html (available only on GitHub and Bitbucket). Smart Commit usage guide: https://confluence.atlassian.com/fisheye/using-smart-commits-298976812.html
2. Edit your workflow and set a trigger for each status you want.
3. Add a stage to your CD jobs for the JIRA API request - to move tickets to the “Deployed Staging” status (when the staging deployment has finished) and to release the project version (at the end of the production deployment). JIRA API: https://developer.atlassian.com/cloud/jira/platform/rest/v3/?utm_source=%2Fcloud%2Fjira%2Fplatform%2Frest%2F&utm_medium=302

That's all! These 3 steps will give you a fully automated JIRA workflow! Enjoy!
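The CD-stage API call in step 3 is a POST to JIRA's issue-transitions endpoint (`/rest/api/3/issue/{issueKey}/transitions`). A sketch of building that request - the site URL, issue key, transition id `31`, and the `build_transition_request` helper are all hypothetical; look up your workflow's real transition ids first:

```python
# Sketch of the JIRA REST call a CD job makes to move a ticket to
# "Deployed Staging". The transition id (31) is hypothetical - list your
# workflow's ids via GET /rest/api/3/issue/{key}/transitions.
import json

JIRA_BASE = "https://your-site.atlassian.net"  # hypothetical site

def build_transition_request(issue_key, transition_id):
    """Return (url, json_body) for JIRA's issue-transition endpoint."""
    url = f"{JIRA_BASE}/rest/api/3/issue/{issue_key}/transitions"
    body = {"transition": {"id": str(transition_id)}}
    return url, json.dumps(body)

url, body = build_transition_request("PROJ-123", 31)
print(url)
# The CD job would POST this body (Content-Type: application/json) with
# basic auth (account email + API token).
```

Wiring this into the deploy job means the ticket moves the moment the staging deployment finishes, with no human in the loop.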
Create a fully automatic JIRA workflow
