When you start working with more than ten containers, you realize that managing each container by hand on every server quickly becomes impractical. Kubernetes was designed to solve exactly these problems.
Although other solutions exist (Docker Swarm, Mesos, Nomad), Kubernetes (or k8s), an open-source project started by Google and hosted on GitHub, has become the de facto standard for scheduling and managing containers. Many companies that were already offering their own orchestration solutions are now integrating Kubernetes as their container engine.
One serious contender worth mentioning is OpenShift. This Red Hat product builds on the Kubernetes engine with additional layers to deliver a full PaaS. While it wins on ease of use (a GUI to pilot the cluster, automatic deployments from the registry), it loses on flexibility (slower Kubernetes updates, harder integration with other tools) and community size.
Kubernetes itself is very versatile and powerful. However, it has a real learning curve: you need to understand its concepts before you can get it running.
AWS offers another solution to manage containers: Elastic Container Service (ECS). It is a proprietary product, but it is simpler and very well integrated with the rest of AWS. Check out my post comparing Kubernetes and ECS here.
Now that you understand the power of Kubernetes, you need to set up your own cluster in order to start running your container workloads.
Here are the different components:
Masters: run the Kubernetes control-plane components for container orchestration; they schedule the tasks, record the workload state in etcd, and expose an API for developers
Nodes: run the actual containers and do the actual work
Each Kubernetes component has its own lifecycle, needs to be upgraded, and may need troubleshooting when bugs occur. For redundancy, you need at least two master servers and at least three etcd members (etcd requires a majority quorum, so an odd number of members is recommended).
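The etcd minimum comes from quorum arithmetic: a cluster of n members stays available only while a majority (floor(n/2) + 1) is up, so it tolerates floor((n-1)/2) failures. A small illustrative Python sketch (not part of any Kubernetes tooling):

```python
def tolerated_failures(members: int) -> int:
    """Number of etcd members that can fail while a majority quorum survives."""
    return (members - 1) // 2

# With 2 members, losing one already loses quorum, so 2 is no better than 1;
# 3 is the smallest fault-tolerant cluster size.
for n in (1, 2, 3, 5):
    print(n, tolerated_failures(n))
```

This is why going from two to three etcd members matters, while going from one to two buys you nothing.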
Building a Kubernetes cluster is not trivial, even though many tools such as kubeadm and kops now automate deployment and teardown. Upgrading the masters to a new major version while workloads are running is quite delicate and needs careful planning and testing. Troubleshooting the cluster can give you headaches too, as there are many moving parts.
If you want to know more about deploying Kubernetes on premise, check out this (older) post.
Now let's introduce EKS, a managed version of Kubernetes on AWS. The cloud provider takes care of all the problems listed above: creating, upgrading, and troubleshooting your cluster (masters + nodes), so you can concentrate on deploying containers/pods into a healthy cluster.
Under the hood, EKS uses Fargate, Auto Scaling groups (ASG), and EC2 to run a scalable Kubernetes cluster in AWS. These compute options differ mainly in how much you manage yourself: with Fargate, AWS runs each pod serverlessly with no instances to administer; with managed node groups, AWS handles the lifecycle of the EC2 workers; with self-managed EC2 nodes, provisioning and upgrades are entirely up to you.
I prepared a demo below to deploy an EKS cluster in AWS, using Terraform. In this cluster, we will deploy a simple helloworld container, with automatic CI/CD from AWS CodePipeline.
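To give an idea of what that Terraform code involves, here is a minimal sketch of an EKS cluster with a managed node group. Resource names, IAM roles, and subnet variables are illustrative placeholders, not the exact code from the demo repo:

```hcl
# Minimal EKS cluster sketch; role and subnet values are placeholders.
resource "aws_eks_cluster" "demo" {
  name     = "demo-cluster"
  role_arn = aws_iam_role.cluster.arn # IAM role letting EKS manage AWS resources

  vpc_config {
    subnet_ids = var.subnet_ids # subnets where the control plane places its ENIs
  }
}

# Managed node group: AWS provisions and replaces the EC2 workers for you.
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "demo-workers"
  node_role_arn   = aws_iam_role.nodes.arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}
```

The real configuration in the repo also wires up the VPC, IAM roles, and security groups that these resources reference.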
A git push from a developer to GitHub launches the whole CI/CD process: the Docker image is built, and the containers in EKS are updated to run the new image without any downtime.
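The zero-downtime part relies on Kubernetes rolling updates. A sketch of what a helloworld Deployment manifest could look like (the image URL and labels are illustrative, not taken from the demo repo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 2                 # at least two pods so one always serves traffic
  selector:
    matchLabels:
      app: helloworld
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0       # never take a pod down before its replacement is ready
      maxSurge: 1             # start one extra pod with the new image first
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: <account>.dkr.ecr.<region>.amazonaws.com/helloworld:latest  # pushed by the pipeline
```

With `maxUnavailable: 0`, Kubernetes brings up a pod running the new image and waits for it to become ready before terminating an old one, which is what makes the update downtime-free.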
Check out the GitHub repo to deploy the infrastructure.
Thank you for reading :-) See you in the next post!