When you start working with more than 10 containers, you quickly realize that managing each container individually on each server is far from optimal:
Kubernetes brings solutions to these problems with the following characteristics:
Even if other solutions exist (Docker Swarm, Mesos, Nomad...), Kubernetes (or k8s) - an all-star open-source project started by Google and hosted on GitHub - is becoming the reference for scheduling and managing containers. Many companies that were already offering their own orchestration solutions are now integrating Kubernetes as their container engine.
As a serious contender, we can cite for example OpenShift. This Red Hat product leverages the Kubernetes engine with additional layers to provide a full PaaS. While it wins on ease of use (a GUI to pilot the cluster, auto-deploy from a registry), it loses on flexibility (slower Kubernetes updates, integration with other solutions) and community.
Kubernetes alone is thus very versatile and powerful. However, it has a real learning curve before you understand the concepts and get it running. This post will help you get a working base setup, on which you can start running your own containers.
More info on k8s: http://kubernetes.io/
This schema explains how it all works together. Let's break down each component:
Kargo: this project uses Ansible playbooks to deploy and upgrade Kubernetes on many types of servers (CoreOS, Ubuntu, CentOS) and clouds (AWS, Azure, OpenStack, bare metal). So, the choice is yours. In this guide, I will deploy the CoreOS servers manually and give their IP info to Kargo. But you could use Kargo to provision AWS or Google servers in one command line.
1 master node running the Kubernetes components for container orchestration; it pilots the cluster and assigns work to the minions
1 master/minion node, to have master redundancy while also running containers
2 (or more) minion nodes running the actual containers and doing the actual work
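As an illustration, a Kargo Ansible inventory matching this topology could look like the sketch below. The hostnames and IP addresses are placeholders - replace them with those of your own CoreOS servers:

```ini
# inventory/inventory.cfg -- example layout for 1 master, 1 master/minion, 2 minions
node1 ansible_ssh_host=10.0.0.1 ip=10.0.0.1
node2 ansible_ssh_host=10.0.0.2 ip=10.0.0.2
node3 ansible_ssh_host=10.0.0.3 ip=10.0.0.3
node4 ansible_ssh_host=10.0.0.4 ip=10.0.0.4

[kube-master]
node1
node2

[kube-node]
node2
node3
node4

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master
```

Note that node2 appears in both the master and node groups, which gives you the master/minion role described above.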
CoreOS: this minimal and secure OS is perfect for running Kubernetes masters and nodes.
EFK (logging): we will send all Kubernetes container logs to an Elasticsearch database via Fluentd, and visualize dashboards with Kibana
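Under the hood, the Fluentd agent on each node typically tails the container log files written by Docker. A minimal source stanza could look like this sketch (the paths and tag are common defaults, not necessarily what this setup ships with):

```
<source>
  type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag kubernetes.*
  format json
  read_from_head true
</source>
```

Because every container on the node logs to that directory, any newly created container is picked up automatically.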
Prometheus (monitoring) will watch over the whole infrastructure, with Grafana dashboards
Kubernetes dashboard addon (not the EFK dashboard), where you can visualize Kubernetes components in a GUI
Service-loadbalancer: the public gateway to access your internal Kubernetes services (Kibana, Grafana). In the setup later, you will have a choice between two load balancers: a static one (HAProxy) and a dynamic one (Traefik)
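For example, with the dynamic load balancer (Traefik), exposing Kibana to the outside could be done with an Ingress rule similar to this sketch. The host and the service name/port are assumptions - match them to your cluster:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-system
spec:
  rules:
  - host: kibana.example.com        # placeholder domain
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging   # service name may differ in your setup
          servicePort: 5601
```

Traefik watches the Kubernetes API, so new Ingress rules like this one are picked up without restarting the load balancer.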
Registry: a private Docker registry deployed inside the Kubernetes cluster
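Once the private registry is up, your pods can pull images from it by name. A hedged sketch, assuming the common pattern where a registry proxy listens on each node at localhost:5000 (the image name is a placeholder):

```yaml
# A pod pulling its image from the in-cluster private registry.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: localhost:5000/myapp:1.0   # image previously pushed to the registry
```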
This schema represents the Kubernetes internal components after the Kargo install. Nearly all of them run as containers. You will be able to adjust the number of masters, minions and etcd members to fit your needs.
Below is a visual preview of what you will get with this setup, using open source tools, which integrate perfectly with Kubernetes.
Logging with EFK:
First you will collect container logs with EFK, so you can see who is very talkative, or which application is in pain and sending errors/timeouts. Once set up, it is all automatic: any newly created container will send its logs to EFK.
Monitoring with Prometheus
Then we will dig deeper into the stats and counters (CPU, RAM, disk, network), where we can investigate bottlenecks, spot memory leaks and plan for capacity management.
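Prometheus decides what to scrape from its configuration file. A minimal fragment using Kubernetes service discovery to scrape the cluster nodes could look like this - a sketch only, as the exact config shipped with your deployment will differ:

```yaml
scrape_configs:
  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
      - role: node              # auto-discover the cluster nodes via the API
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```

With service discovery in place, nodes added to the cluster start being monitored without touching the Prometheus config.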
You can already enjoy the preloaded detailed dashboards, and you can find many more online (thanks to the community!)
Follow this github repo to get a full Kubernetes stack running.
I added to this setup lots of explanations on how to launch services and access them, plus an extra monitoring tool (Heapster), a demo of Gitlab CI/CD and some troubleshooting tips to cure the headaches you may run into.
A previous setup, for a deployment on CloudStack on Exoscale, is available here. It is less flexible, as you can't migrate Kubernetes versions (it doesn't use Kargo), but this version does take care of firewalls.
Now that you have a working setup, it is time to run your own containers in Kubernetes.
Start building your YAML manifest files based on the examples provided. If you want to start from an existing docker-compose.yml file, use the fast and easy converter Kompose.
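If you are writing a manifest by hand, a minimal Deployment plus Service pair is a good starting skeleton. Everything here (names, image, port) is a placeholder to adapt:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2                  # two identical pods, scheduled across the minions
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.11      # replace with your own image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello                 # routes traffic to the pods labeled above
  ports:
  - port: 80
```

Save it as hello.yml, apply it with `kubectl create -f hello.yml`, then watch the pods come up with `kubectl get pods`.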
Some other cool Kubernetes projects to try:
Thank you for reading :-) See you in the next post! Greg