Forum Posts

fernandoames
Mar 16, 2019
In Cloud
References:
https://aws.amazon.com/eks/pricing/
https://aws.amazon.com/ecs/pricing/
https://aws.amazon.com/fargate/faqs/
https://docs.aws.amazon.com/eks/latest/userguide/service_limits.html
https://medium.com/becloudy/amazon-container-cheat-sheet-d5d05469181c
https://www.reddit.com/r/aws/comments/87r6uo/understanding_amazons_container_services_ecs_eks/

Credit: totalCloud via Medium
ECS Vs. EKS Vs. Fargate: The Good, the Bad, the Ugly
fernandoames
Mar 16, 2019
In DevOps Tools
Red Hat launched OperatorHub.io in collaboration with AWS, Google Cloud and Microsoft. OperatorHub.io is designed to be the public registry for finding Kubernetes Operator-backed services.

Introduced by CoreOS in 2016, and now championed by Red Hat and a large portion of the Kubernetes community, the Operator pattern enables a fundamentally new way to automate infrastructure and application management tasks using Kubernetes as the automation engine. With Operators, developers and Kubernetes administrators can gain the automation advantages of public cloud-like services, including provisioning, scaling, and backup/restore, while keeping those services portable across Kubernetes environments regardless of the underlying infrastructure.

As the Operator concept has experienced growing interest across upstream communities and software providers, the number of Operators available has increased. However, it remains challenging for developers and Kubernetes administrators to find available Operators, including those that meet their quality standards. With the introduction of OperatorHub.io, we are helping to address this challenge by introducing a common registry to publish and find available Operators. At OperatorHub.io, developers and Kubernetes administrators can find curated Operator-backed services that provide a base level of documentation, active communities or vendor backing to show maintenance commitments, basic testing, and packaging for optimized life-cycle management on Kubernetes.

With the introduction of OperatorHub.io, we look forward to continuing to work across the industry to enable the creation of more Operators as well as the evolution of existing ones. We expect the set of Operators that currently reside in OperatorHub.io to be only the start, and anticipate more to be contributed over time.

"At Google Cloud, we have invested in building and qualifying community developed operators, and are excited to see more than 40 percent of Google Kubernetes Engine (GKE) clusters running stateful applications today. Operators play an important role in enabling lifecycle management of stateful applications on Kubernetes," said Aparna Sinha, Group Product Manager, Google Cloud. "The creation of OperatorHub.io provides a centralized repository that helps users and the community to organize around Operators. We look forward to seeing growth and adoption of OperatorHub.io as an extension of the Kubernetes community."

"Use of Kubernetes Operators is growing both inside Microsoft and amongst our customers, and we look forward to working with Red Hat and the broader community on this important technology," said Gabe Monroy, Lead Program Manager, Containers, Microsoft Azure.

What is an Operator?

Operators are a method of packaging, deploying and managing a Kubernetes-native application. We define a Kubernetes application as an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. Operators benefit Kubernetes users in that they can help to automate the sometimes routine, mundane and complex tasks required for an application to run on Kubernetes. An Operator can automate updates, backups and scaling using the Kubernetes CLI, and can scan for things out of place, helping to enable a no-ops experience. The Operator Framework is an open source toolkit that provides an SDK, lifecycle management, metering and monitoring capabilities, enabling developers to build, test and publish Operators.
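To make the pattern concrete: an Operator typically extends the Kubernetes API with a Custom Resource Definition and then reconciles instances of that resource. As an illustration, a custom resource for the etcd Operator (one of the listings mentioned later in this post) looks roughly like the sketch below; the field names follow the etcd Operator's v1beta2 API, and the exact values are only examples.

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3            # the Operator reconciles the cluster to three members
  version: "3.2.13"  # changing this asks the Operator to perform an upgrade

A user only applies this manifest with kubectl; the Operator watches for EtcdCluster objects and carries out provisioning, scaling and upgrades itself.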
Operators can be implemented in several programming and automation languages, including Go, Helm and Ansible. Operators follow a maturity model that ranges from basic functionality to having specific operational logic for an application, and their capabilities differ in sophistication depending on how much intelligence has been built into the Operator itself. Advanced Operators are designed to handle upgrades more seamlessly and react to failures automatically. The set of Operators at OperatorHub.io spans the maturity scale, and we expect others to evolve along that scale over time.

What does listing of an Operator on OperatorHub.io mean?

It means that the launch partners, at the onset, are working together to check the Operators added to OperatorHub.io against a series of basic tests. To be listed, Operators must successfully show cluster lifecycle features, packaging that can be maintained through the Operator Framework's Operator Lifecycle Management, and acceptable documentation for their intended users.

Some examples of Operators that are currently listed on OperatorHub.io include: Amazon Web Services Operator, Couchbase Autonomous Operator, CrunchyData's PostgreSQL, etcd Operator, Jaeger Operator for Kubernetes, Kubernetes Federation Operator, MongoDB Enterprise Operator, Percona MySQL Operator, PlanetScale's Vitess Operator, Prometheus Operator, and Redis Operator.

"We are pleased to have our Couchbase Autonomous Operator included among a set of curated and tested Kubernetes-native Operators for the launch of OperatorHub.io. Couchbase's inclusion represents interest from the community to run complex applications at scale - like our NoSQL data platform - by leveraging the automation of common Couchbase operational tasks," said Anthony Farinha, Senior Director, Strategic Partnerships, Couchbase. "With the Operator Hub listing, this is an easier way for the community to find and make use of Operators that are generally available or in development."

"MongoDB customers continually state they are looking to modernize applications and automate infrastructure management as they digitize their business," said Alan Chhabra, SVP, MongoDB. "We're excited to be included in OperatorHub.io as a supported Operator for customers building cloud-native apps using Kubernetes. The Kubernetes MongoDB Enterprise Operator can allow users to deploy and manage MongoDB clusters from the Kubernetes API, without having to manually configure them."

"Participating in the Operator program championed by Red Hat and others enables Redis Labs to help developers and enterprises more easily orchestrate their Redis Enterprise deployments," said Rod Hamlin, VP of Global Strategic Alliances, Redis Labs. "The launch of OperatorHub.io is particularly valuable as it brings together a curated registry of tested Operators to enable application installs and updates, while helping administrators make services portable and more manageable. As one of the program's early design partners, Redis Labs is able to bring higher quality database services to Kubernetes clusters in different environments."

If you are interested in creating your own Operator, we recommend checking out the Operator Framework to get started.

Want to add your Operator to OperatorHub.io? Follow these steps: if you have an existing Operator, start with the contribution guide.
Each OperatorHub.io entry contains the Custom Resource Definitions (CRDs), access control rules and references to the container image needed to install and more securely run your Operator, plus other information such as a description of its features and supported Kubernetes versions. After testing out the Operator on your own cluster, submit a PR to the community registry with all of your YAML files following this directory structure. At first this will be reviewed manually, but automation is on the way. After it's merged by the maintainers, it will show up on OperatorHub.io for installation.

Operators and the road ahead

An important goal for Red Hat is to lower the barrier for bringing applications to Kubernetes. We believe that Operator-backed services play a critical role in lowering this barrier by enabling application owners to use services that can provide the flexibility of cloud services across Kubernetes environments. We hope that the introduction of OperatorHub.io will further lower this barrier by making it easier for application owners to find the Operator-backed services that they are looking for.

Want to learn more?

- Attend one of the upcoming Kubernetes Operator Framework hands-on workshops at ScaleX in Pasadena on March 7 and at the OpenShift Commons Gathering on Operating at Scale in Santa Clara on March 11
- Listen in on the recorded OpenShift Commons Briefing on "All Things Operators" with Daniel Messer and Diane Mueller
- Join in on the online conversations in the community Kubernetes-Operator Slack Channel and the Operator Framework Google Group
- Finally, read up on how to add your Operator to OperatorHub.io
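On the consumer side, each OperatorHub.io listing also comes with install instructions driven by the Operator Lifecycle Manager. Assuming OLM is already running on the cluster, installing a listed Operator generally comes down to a single command along these lines; the etcd path is only an example, and the exact URL comes from the listing's own install instructions.

$ kubectl create -f https://operatorhub.io/install/etcd.yaml

Once the Operator is running, users interact with it purely by creating its custom resources, as sketched earlier in this post.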
fernandoames
Mar 16, 2019
In DevOps Tools
Objective

This blog post describes the steps required to set up a multi-node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.

Why do we require a multi-node cluster setup?

Multi-node Kubernetes clusters offer a production-like environment, which has various advantages. Even though Minikube provides an excellent platform for getting started, it doesn't provide the opportunity to work with multi-node clusters, which can help solve problems or bugs related to application design and architecture. For instance, Ops can reproduce an issue in a multi-node cluster environment, and testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, which makes them more agile.

Why use Vagrant and Ansible?

Vagrant is a tool that allows us to create a virtual environment easily, and it eliminates pitfalls that cause the works-on-my-machine phenomenon. It can be used with multiple providers such as Oracle VirtualBox, VMware, Docker, and so on. It allows us to create a disposable environment by making use of configuration files.

Ansible is an infrastructure automation engine that automates software configuration management. It is agentless and allows us to use SSH keys for connecting to remote machines. Ansible playbooks are written in YAML and offer inventory management in simple text files.

Prerequisites

Vagrant should be installed on your machine. Installation binaries can be found here. Oracle VirtualBox can be used as a Vagrant provider, or make use of similar providers as described in Vagrant's official documentation. Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.

Setup overview

We will be setting up a Kubernetes cluster that will consist of one master and two worker nodes. All the nodes will run Ubuntu Xenial 64-bit OS, and Ansible playbooks will be used for provisioning.

Step 1: Creating a Vagrantfile

Use the text editor of your choice and create a file named Vagrantfile, inserting the code below. The value of N denotes the number of worker nodes present in the cluster; it can be modified accordingly. In the below example, we are setting the value of N to 2.

IMAGE_NAME = "bento/ubuntu-16.04"
N = 2

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |v|
        v.memory = 1024
        v.cpus = 2
    end

    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.network "private_network", ip: "192.168.50.10"
        master.vm.hostname = "k8s-master"
        master.vm.provision "ansible" do |ansible|
            ansible.playbook = "kubernetes-setup/master-playbook.yml"
        end
    end

    (1..N).each do |i|
        config.vm.define "node-#{i}" do |node|
            node.vm.box = IMAGE_NAME
            node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
            node.vm.hostname = "node-#{i}"
            node.vm.provision "ansible" do |ansible|
                ansible.playbook = "kubernetes-setup/node-playbook.yml"
            end
        end
    end
end
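As a quick sanity check before provisioning anything, Vagrant's standard CLI can confirm that the Vagrantfile parses and list the machines it defines; with N = 2 you should see k8s-master, node-1 and node-2.

$ cd /path/to/Vagrantfile
$ vagrant validate   # checks the Vagrantfile for syntax and configuration errors
$ vagrant status     # lists k8s-master, node-1 and node-2, all "not created" yet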
Step 2: Create an Ansible playbook for the Kubernetes master

Create a directory named kubernetes-setup in the same directory as the Vagrantfile. Create two files named master-playbook.yml and node-playbook.yml in the directory kubernetes-setup. In the file master-playbook.yml, add the code below, step by step.

Step 2.1: Install Docker and its dependent components. We will be installing the following packages, and then adding a user named "vagrant" to the "docker" group:

- docker-ce
- docker-ce-cli
- containerd.io

---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependencies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker

Step 2.2: Kubelet will not start if the system has swap enabled, so we disable swap using the code below.

  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none

  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0

Step 2.3: Install kubelet, kubeadm and kubectl using the code below.

  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - kubelet
        - kubeadm
        - kubectl

Step 2.4: Initialize the Kubernetes cluster with kubeadm using the code below (applicable only on the master node).

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16

Step 2.5: Set up the kube config file for the vagrant user to access the Kubernetes cluster using the code below.

  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
    - mkdir -p /home/vagrant/.kube
    - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
    - chown vagrant:vagrant /home/vagrant/.kube/config

Step 2.6: Set up the container networking provider and the network policy engine using the code below.

  - name: Install calico pod network
    become: false
    command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml

Step 2.7: Generate a kube join command for joining nodes to the Kubernetes cluster, and store the command in the file named join-command.

  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command

  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"

Step 2.8: Set up a handler for checking the Docker daemon using the code below.

  handlers:
    - name: docker status
      service: name=docker state=started

Step 3: Create the Ansible playbook for the Kubernetes nodes

Create a file named node-playbook.yml in the directory kubernetes-setup and add the code below into it.

Step 3.1: Start by adding the code from Steps 2.1 to 2.3.
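Before moving on, it may help to see the overall shape node-playbook.yml takes once those pieces are copied in. The sketch below is only a skeleton, with the task bodies elided; the join task comes from Step 3.2 next.

---
- hosts: all
  become: true
  tasks:
  # tasks from Step 2.1: install Docker and add the vagrant user to the docker group
  # tasks from Step 2.2: disable swap
  # tasks from Step 2.3: install kubelet, kubeadm and kubectl
  # tasks from Step 3.2 (below): copy and run the join command
  handlers:
  # handler from Step 2.8: docker status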
Step 3.2: Join the nodes to the Kubernetes cluster using the code below.

  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh

Step 3.3: Add the handler code from Step 2.8 to finish this playbook.

Step 4: Upon completing the Vagrantfile and playbooks, follow the steps below.

$ cd /path/to/Vagrantfile
$ vagrant up

Upon completion of all the above steps, the Kubernetes cluster should be up and running. We can log in to the master or worker nodes using Vagrant as follows:

$ ## Accessing master
$ vagrant ssh k8s-master
vagrant@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   18m     v1.13.3
node-1       Ready    <none>   12m     v1.13.3
node-2       Ready    <none>   6m22s   v1.13.3

$ ## Accessing nodes
$ vagrant ssh node-1
$ vagrant ssh node-2
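As a final check that scheduling works across both workers, a small test deployment can be run from the master; the nginx image is just a convenient example.

vagrant@k8s-master:~$ kubectl create deployment nginx --image=nginx
vagrant@k8s-master:~$ kubectl scale deployment nginx --replicas=3
vagrant@k8s-master:~$ kubectl get pods -o wide

The -o wide output includes the node each pod was scheduled on; on a healthy cluster, the replicas should land across node-1 and node-2.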
Credit: Naresh L J (Infosys) via Kubernetes blog