Many developers have turned to Docker containerization to help provide consistency in spinning up new applications in the cloud and on premises, and many applications are dependent on the orchestration of multiple Docker containers. This creates a new set of challenges around spinning up collections of containers in conjunction with all the associated settings and configurations so they can work together.
To address that gap, the software industry has been rallying around Kubernetes -- a container management system -- as an ecosystem of functionality to automate the deployment of collections of containers. At the KubeCon conference in San Francisco experts discussed some of the best practices and tools for automating the deployment of clusters on top of Kubernetes.
Cameron Brunner, director of engineering at Univa, said there are a lot of great tools for creating a Kubernetes cluster, but there is not a great path to automating the deployment of consistent clusters. "We like treating our hardware like cattle," Brunner said. "This is something we hear about all the time with regards to apps. But treating your hardware like pets can lead to nasty internal management problems."
Internally, Univa runs about five Kubernetes clusters, each consisting of multiple nodes. A solid tool chain lets the team quickly generate a Kubernetes cluster that passes conformance tests and can be built and started in a couple of minutes.
Brunner said organizations need to address the following questions to generate a reliable conformant Kubernetes cluster:
- Which base operating system should the cluster start with?
- How will the OS be deployed and configured?
- How will Kubernetes be installed and configured?
Fortunately, a lot of tools in the ecosystem can support this process, Brunner said. Immutable OSes like Atomic and CoreOS have become quite stable. The Preboot Execution Environment (PXE boot) network booting technology can simplify deployment, and Cloud-Init is a good boot-time configuration utility.
Brunner recommends always network booting your hardware, which allows for quick reprovisioning if a machine is misbehaving or needs to be upgraded. PXE boot can simplify this process, but PXE itself does not generate per-machine boot configurations, so it is good practice to pair it with a tool for dynamic PXE configuration generation.
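Dynamic PXE generation can be as simple as a small service that maps a machine's MAC address to a boot script. The sketch below is illustrative only -- the node inventory, boot server URL and file names are hypothetical, and it assumes an iPXE-style boot script rather than any particular vendor's tooling:

```python
# Hypothetical node inventory: MAC address -> machine identity.
NODES = {
    "52:54:00:aa:bb:01": {"hostname": "k8s-master-1", "role": "master"},
    "52:54:00:aa:bb:02": {"hostname": "k8s-node-1", "role": "worker"},
}

# iPXE boot script template; the kernel/initrd URLs and the
# Cloud-Init config endpoint are placeholders.
IPXE_TEMPLATE = """#!ipxe
kernel http://boot.example.com/coreos/vmlinuz \
    coreos.config.url=http://boot.example.com/cloud-init/{hostname}.yaml
initrd http://boot.example.com/coreos/initrd.img
boot
"""

def ipxe_script_for(mac: str) -> str:
    """Return a boot script for a known machine, or a safe fallback."""
    node = NODES.get(mac.lower())
    if node is None:
        # Unknown hardware: exit the boot script rather than boot anything.
        return "#!ipxe\nexit\n"
    return IPXE_TEMPLATE.format(hostname=node["hostname"])
```

A tiny HTTP endpoint serving `ipxe_script_for()` per request is enough to give every machine a boot configuration generated on the fly, which is the kind of dynamic behavior plain PXE lacks.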
It's also a good practice to install an immutable OS. This leads to a consistent set of software and a consistent environment in the data center. Brunner said, "This lessens the overall management complexity and makes it easy to understand what is going on in the cluster."
It is also a good idea to host container images locally; Brunner recommends having a local repository. Another good practice is to manage configurations using Cloud-Init, which keeps everything in one spot while preserving an immutable OS. Cloud-Init can be integrated with systemd for complex operations, such as certificate enrollment and pulling in cryptographic keys, to create a streamlined workflow of system boot operations.
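As a rough illustration of that pattern, a Cloud-Init user-data file can declare a systemd unit that fetches a certificate before the kubelet starts. Everything in this sketch -- the unit name, PKI endpoint and file paths -- is a hypothetical example, not Univa's actual configuration:

```yaml
#cloud-config
# Hypothetical CoreOS-style cloud-config: declare a oneshot systemd
# unit that pulls a CA certificate before kubelet.service runs.
coreos:
  units:
    - name: fetch-certs.service
      command: start
      content: |
        [Unit]
        Description=Pull cluster CA certificate before kubelet starts
        Before=kubelet.service

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/curl -fsS -o /etc/kubernetes/ca.pem \
            https://pki.internal.example.com/ca.pem
```

Because the configuration lives in user data rather than on disk, the OS image itself stays immutable and identical across every node.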
Automating Kubernetes on AWS
Jimmy Cuadra, a programmer in San Francisco, said it can be challenging to deploy a Kubernetes cluster in a consistent manner using Amazon Web Services (AWS) directly. He said, "Operating a production cluster is not something I trust, even though I respect the hard work of the Kubernetes team. For those of us who need to manage our own clusters, we need something more robust."
Cuadra said that Google Container Engine is a good choice when a hosted service is acceptable, but a more robust approach is required for automating the deployment of clusters on AWS. Part of the problem is that most of the information about tools for declarative cluster configuration is outdated or riddled with broken links. Cuadra wanted declarative configuration files that could be checked in to Git and used to generate clusters.
Cuadra recommends using Terraform from HashiCorp, which lets organizations describe cloud resources in a declarative configuration language and converge the state of their infrastructure to match what is checked in to a Git repository.
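A minimal Terraform sketch of that idea might look like the following -- the region, AMI ID, instance type and names are placeholders, not a recommended production setup:

```hcl
# Hypothetical Terraform configuration describing one cluster node.
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "k8s_master" {
  ami           = "ami-0123456789abcdef0" # e.g., a CoreOS image ID
  instance_type = "m4.large"

  tags = {
    Name = "k8s-master-1"
  }
}
```

With a file like this checked in to Git, `terraform plan` shows the difference between the declared and actual infrastructure, and `terraform apply` converges the cloud resources to match the repository.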
Navigating configuration complexity
Many provisioning issues around deploying a Kubernetes cluster are not easy to handle with Terraform alone. Specific configuration settings describe how different containers talk to each other; others specify how to authenticate to the Kubernetes APIs and how to distribute access credentials to people on the team. There is also workflow to manage around where to keep the state for multiple Kubernetes implementations.
To address this gap, Cuadra created an open source tool called KAWS, written in Rust, that makes it easier to specify infrastructure as code with domain name system (DNS) support built in. It also generates and distributes Kubernetes access credentials securely.
The process of automating the Kubernetes deployment begins with the creation of a KAWS repository -- essentially a Git repository that tracks the cluster's state. KAWS also includes a key export command for managing team members' public keys, which makes it possible to manage user access without exposing anyone's private key. Cuadra said this kind of automated approach makes it possible to stage and deploy a new Kubernetes cluster in about 10 minutes.
How has your development team used a Kubernetes cluster effectively? Let us know.