Kubernetes is a popular open source platform for container orchestration — that is, for the management of applications built out of multiple, largely self-contained runtimes called containers. Containers have become increasingly popular since the Docker containerization project launched in 2013, but large, distributed containerized applications can be difficult to coordinate. By making containerized applications dramatically easier to manage at scale, Kubernetes has become a key part of the container revolution.
What is container orchestration?
Containers support VM-like separation of concerns but with far less overhead and far greater flexibility. As a result, containers have reshaped the way people think about developing, deploying, and maintaining software. In a containerized architecture, the different services that constitute an application are packaged into separate containers and deployed across a cluster of physical or virtual machines. But this gives rise to the need for container orchestration—a tool that automates the deployment, management, scaling, networking, and availability of container-based applications.
What is Kubernetes?
Kubernetes is an open source project that has become one of the most popular container orchestration tools around; it allows you to deploy and manage multi-container applications at scale. While in practice Kubernetes is most often used with Docker, the most popular containerization platform, it can also work with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes. And because Kubernetes is open source, with relatively few restrictions on how it can be used, it can be used freely by anyone who wants to run containers, most anywhere they want to run them—on-premises, in the public cloud, or both.
Google and Kubernetes
Kubernetes began life as a project within Google. It’s a successor to—though not a direct descendant of—Google Borg, an earlier container management tool that Google used internally. Google open sourced Kubernetes in 2014, in part because the distributed microservices architectures that Kubernetes facilitates make it easy to run applications in the cloud. Google sees the adoption of containers, microservices, and Kubernetes as potentially driving customers to its cloud services (although Kubernetes certainly works with Azure and AWS as well). Kubernetes is currently maintained by the Cloud Native Computing Foundation, which is itself under the umbrella of the Linux Foundation.
Kubernetes vs. Docker and Kubernetes vs. Docker Swarm
Kubernetes doesn’t replace Docker, but augments it. However, Kubernetes does replace some of the higher-level technologies that have emerged around Docker.
One such technology is Docker Swarm, an orchestrator bundled with Docker. It’s still possible to use Docker Swarm instead of Kubernetes, but Docker Inc. has chosen to make Kubernetes part of the Docker Community and Docker Enterprise editions going forward.
Not that Kubernetes is a drop-in replacement for Docker Swarm. Kubernetes is significantly more complex than Swarm, and requires more work to deploy. But again, the work is intended to provide a big payoff in the long run—a more manageable, resilient application infrastructure. For development work, and smaller container clusters, Docker Swarm presents a simpler choice.
Kubernetes vs. Mesos
Another project you might have heard about as a competitor to Kubernetes is Mesos. Mesos is an Apache project that originally emerged from developers at Twitter; it was actually seen as an answer to the Google Borg project.
Mesos does in fact offer container orchestration services, but its ambitions go far beyond that: it aims to be a sort of cloud operating system that can coordinate both containerized and non-containerized components. To that end, a lot of different platforms can run within Mesos—including Kubernetes itself.
Kubernetes architecture: How Kubernetes works
Kubernetes’s architecture makes use of various concepts and abstractions. Some of these are variations on existing, familiar notions, but others are specific to Kubernetes.
The highest-level Kubernetes abstraction, the cluster, refers to the group of machines running Kubernetes (itself a clustered application) and the containers managed by it. A Kubernetes cluster must have a master, the system that commands and controls all the other Kubernetes machines in the cluster. A highly available Kubernetes cluster replicates the master’s facilities across multiple machines. But only one master at a time runs the job scheduler and controller-manager.
Kubernetes nodes and pods
Each cluster contains Kubernetes nodes. Nodes might be physical machines or VMs. Again, the idea is abstraction: Whatever the app is running on, Kubernetes handles deployment on that substrate. Kubernetes even makes it possible to ensure that certain containers run only on VMs or only on bare metal.
Nodes run pods, the most basic Kubernetes objects that can be created or managed. Each pod represents a single instance of an application or running process in Kubernetes, and consists of one or more containers. Kubernetes starts, stops, and replicates all containers in a pod as a group. Pods keep the user’s attention on the application, rather than on the containers themselves. Details about how Kubernetes needs to be configured, from the state of pods on up, are kept in Etcd, a distributed key-value store.
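To make the pod idea concrete, here is a minimal single-container pod, sketched as the Python dict you would serialize to YAML and apply with kubectl. The names and image here are illustrative, not from any real deployment:

```python
# A minimal Pod manifest, expressed as the dict structure that serializes to
# YAML. One pod wrapping one nginx container; names are illustrative.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "nginx-pod", "labels": {"app": "nginx"}},
    "spec": {
        "containers": [
            {
                "name": "nginx",  # a pod may hold several containers; this one has one
                "image": "nginx:1.25",
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}
```

In practice you would rarely create bare pods like this; as the next paragraphs explain, a controller usually manages pods on your behalf.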
Pods are created and destroyed on nodes as needed to conform to the desired state specified by the user in the pod definition. Kubernetes provides an abstraction called a controller for dealing with the logistics of how pods are spun up, rolled out, and spun down. Controllers come in a few different flavors depending on the kind of application being managed. For instance, the recently introduced “StatefulSet” controller is used to deal with applications that need persistent state. Another kind of controller, the deployment, is used to scale an app up or down, update an app to a new version, or roll back an app to a known-good version if there’s a problem.
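A deployment controller, for instance, wraps a pod template in a spec that declares how many replicas should run. A minimal sketch, again as the dict you would serialize to YAML (names and image are illustrative):

```python
# A minimal Deployment manifest. The controller keeps three replicas of the
# pod template running, replacing pods that die and handling rolling updates.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-deployment"},
    "spec": {
        "replicas": 3,  # desired state: three copies of the pod
        "selector": {"matchLabels": {"app": "web"}},  # which pods it manages
        "template": {  # the pod template to stamp out
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}
```

Note that the selector labels must match the pod template's labels; that is how the controller knows which running pods belong to it.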
Because pods live and die as needed, we need a different abstraction for dealing with the application lifecycle. An application is supposed to be a persistent entity, even when the pods running the containers that comprise the application aren’t themselves persistent. To that end, Kubernetes provides an abstraction called a service.
A service in Kubernetes describes how a given group of pods (or other Kubernetes objects) can be accessed via the network. As the Kubernetes documentation puts it, the pods that constitute the back-end of an application might change, but the front-end shouldn’t have to know about that or track it. Services make this possible.
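As a sketch, a service selects pods by label and maps a stable port to the port the containers actually listen on (the names and ports here are illustrative):

```python
# A minimal Service manifest. Traffic to the service's port 80 is routed to
# port 8080 on any pod currently carrying the label app=web.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-service"},
    "spec": {
        "selector": {"app": "web"},  # matches pods by label, not by name
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```

Because the selector matches labels rather than specific pods, back-end pods can come and go without the front end noticing.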
A few more pieces internal to Kubernetes round out the picture. The scheduler parcels out workloads to nodes so that they’re balanced across resources and so that deployments meet the requirements of the application definitions. The controller manager ensures that the state of the system—applications, workloads, etc.—matches the desired state defined in Etcd’s configuration settings.
It is important to keep in mind that none of the low-level mechanisms used by containers, such as Docker itself, are replaced by Kubernetes. Rather, Kubernetes provides a larger set of abstractions for using these mechanisms for the sake of keeping apps running at scale.
Kubernetes services are thought of as running within a cluster. But you’ll want to be able to access these services from the outside world. Kubernetes has several components that facilitate this with varying degrees of simplicity and robustness, including NodePort and LoadBalancer, but the component with the most flexibility is Ingress. Ingress is an API that manages external access to a cluster’s services, typically via HTTP.
Ingress does require a bit of configuration to set up properly—Matthew Palmer, who wrote a book on Kubernetes development, steps you through the process on his website.
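The shape of an Ingress resource is worth seeing once. This sketch routes HTTP requests for a host to the service from the earlier example; the host name and service name are illustrative assumptions:

```python
# A minimal Ingress manifest: requests for example.com/ are forwarded to
# port 80 of a service named web-service. An ingress controller must be
# running in the cluster for this to take effect.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "web-ingress"},
    "spec": {
        "rules": [
            {
                "host": "example.com",
                "http": {
                    "paths": [
                        {
                            "path": "/",
                            "pathType": "Prefix",
                            "backend": {
                                "service": {"name": "web-service", "port": {"number": 80}}
                            },
                        }
                    ]
                },
            }
        ]
    },
}
```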
One Kubernetes component that helps you keep on top of all of these other components is Dashboard, a web-based UI with which you can deploy and troubleshoot apps and manage cluster resources. Dashboard isn’t installed by default, but adding it isn’t too much trouble.
Related video: What is Kubernetes?
In this 90-second video, learn about Kubernetes, the open-source system for automating the deployment and management of containerized applications, from one of the technology’s inventors, Joe Beda, founder and CTO at Heptio.
Because Kubernetes introduces new abstractions and concepts, and because the learning curve for Kubernetes is high, it’s only normal to ask what the long-term payoffs are for using Kubernetes. Here’s a rundown of some of the specific ways running apps inside Kubernetes becomes easier.
Kubernetes manages app health, replication, load balancing, and hardware resource allocation for you
One of the most basic duties Kubernetes takes off your hands is the busywork of keeping an application up, running, and responsive to user demands. Apps that become “unhealthy,” or don’t conform to the definition of health you describe for them, can be automatically healed.
Another benefit Kubernetes provides is maximizing the use of hardware resources including memory, storage I/O, and network bandwidth. Applications can have soft and hard limits set on their resource usage. Many apps that use minimal resources can be packed together on the same hardware; apps that need to stretch out can be placed on systems where they have room to grow. And again, rolling out updates across a cluster, or rolling back if updates break, can be automated.
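Those soft and hard limits correspond to a container's resource requests and limits. A sketch of how they appear in a container spec (the specific quantities are illustrative):

```python
# Resource requests are the soft floor the scheduler guarantees when placing
# the pod; limits are the hard cap enforced at runtime.
container = {
    "name": "web",
    "image": "nginx:1.25",
    "resources": {
        "requests": {"cpu": "250m", "memory": "128Mi"},  # soft: used for scheduling
        "limits": {"cpu": "500m", "memory": "256Mi"},    # hard: enforced ceiling
    },
}
```

The scheduler uses the requests to pack low-footprint apps together and to place hungry apps on nodes with headroom, which is how Kubernetes maximizes hardware utilization.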
Kubernetes eases the deployment of preconfigured applications with Helm charts
Package managers such as Debian Linux’s APT and Python’s Pip save users the trouble of manually installing and configuring an application. This is especially handy when an application has multiple external dependencies.
Helm is essentially a package manager for Kubernetes. Many popular software applications must run in Kubernetes as a group of interdependent containers. Helm provides a definition mechanism, a “chart,” that describes how an application or service can be run as a group of containers inside Kubernetes.
You can create your own Helm charts from scratch, and you might have to if you’re building a custom app to be deployed internally. But if you’re using a popular application that has a common deployment pattern, there is a good chance someone has already composed a Helm chart for it and published it in the official Helm charts repository. Another place to look for official Helm charts is the Kubeapps.com directory.
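At minimum, a chart carries a small metadata file (Chart.yaml) alongside its templated manifests. A sketch of that metadata, again as the dict you would serialize to YAML; the chart name and versions are illustrative:

```python
# The fields of a minimal Helm 3 Chart.yaml. "version" tracks the chart
# itself; "appVersion" tracks the application the chart deploys.
chart_metadata = {
    "apiVersion": "v2",  # the Helm 3 chart format
    "name": "my-app",
    "description": "An illustrative chart for a web application",
    "version": "0.1.0",
    "appVersion": "1.0.0",
}
```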
Kubernetes simplifies management of storage, secrets, and other application-related resources
Containers are meant to be immutable; whatever you put into them isn’t supposed to change. But applications need state, meaning they need a reliable way to deal with external storage volumes. That’s made all the more complicated by the way containers live, die, and are reborn across the lifetime of an app.
Kubernetes provides abstractions to allow containers and apps to deal with storage in the same decoupled way as other resources. Many common kinds of storage, from Amazon EBS volumes to plain old NFS shares, can be accessed via Kubernetes storage drivers, called volumes. Normally, volumes are bound to a specific pod, but a volume subtype called a “Persistent Volume” can be used for data that needs to live on independently of any pod.
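An application typically requests persistent storage through a persistent volume claim, which Kubernetes satisfies from whatever storage back end is available. A minimal sketch (name and size are illustrative):

```python
# A minimal PersistentVolumeClaim: a request for 10Gi of storage mountable
# read-write by a single node. Kubernetes binds it to a matching
# PersistentVolume, decoupling the app from the underlying storage.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data-claim"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
```

A pod then mounts the claim by name, so the data outlives any individual pod.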
Containers often need to work with “secrets”—credentials like API keys or service passwords that you don’t want hardcoded into a container or stashed openly on a disk volume. While third-party solutions are available for this, like Docker secrets and HashiCorp Vault, Kubernetes has its own mechanism for natively handling secrets, although it does need to be configured with care. For instance, Etcd must be configured to use SSL/TLS when sending secrets between nodes, rather than in plain text.
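A Kubernetes secret stores its values base64-encoded in the manifest, which is encoding, not encryption; that is exactly why the transport and at-rest configuration described above matters. A sketch with an illustrative key:

```python
import base64

# A minimal Secret manifest. Values under "data" are base64-encoded, so
# anyone with read access to the object can decode them; real protection
# comes from access control and securing Etcd.
api_key = "s3cr3t-key"  # illustrative value, not a real credential
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "api-credentials"},
    "type": "Opaque",
    "data": {"api-key": base64.b64encode(api_key.encode()).decode()},
}
```

Pods can then consume the secret as environment variables or mounted files, rather than baking the credential into the container image.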
Kubernetes applications can run in hybrid and multi-cloud environments
One of the long-standing dreams of cloud computing is to be able to run any app in any cloud, or in any mix of clouds public or private. This isn’t just to avoid vendor lock-in, but also to take advantage of features specific to individual clouds.
Kubernetes provides a set of primitives, collectively known as federation, for keeping multiple clusters in sync with one another across multiple regions and clouds. For instance, a given app deployment can be kept consistent between multiple clusters, and different clusters can share service discovery so that a back-end resource can be accessed from any cluster. Federation can also be used to create highly available or fault-tolerant Kubernetes deployments, whether or not you’re spanning multiple cloud environments.
Federation is still relatively new to Kubernetes. Not all API resources are supported across federated instances yet, and upgrades don’t yet have automatic testing infrastructure. But these shortcomings are slated to be addressed in future versions of Kubernetes.
Where to get Kubernetes
Kubernetes is available in so many forms—from open source bits to commercially backed distributions to public cloud services—that the best way to figure out where to get it is by use case.
- If you want to do it all yourself: The source code, and pre-built binaries for most common platforms, can be downloaded from the GitHub repository for Kubernetes.
- If you’re using Docker Community or Docker Enterprise: Docker’s most recent editions come with Kubernetes as a pack-in. This is ostensibly the easiest way for container mavens to get a leg up with Kubernetes, since it comes by way of a product you’re almost certainly already familiar with.
- If you’re deploying on-prem or in a private cloud: Chances are good that any infrastructure you choose for your private cloud has Kubernetes built in. Standard-issue, certified, supported Kubernetes distributions are available from dozens of vendors including Canonical, IBM, Mesosphere, Mirantis, Oracle, Pivotal, Red Hat, Suse, VMware, and many more.
- If you’re deploying in a public cloud: The three major public cloud vendors all offer Kubernetes as a service. Google Cloud Platform offers Google Kubernetes Engine. Microsoft Azure offers the Azure Kubernetes Service. And Amazon offers the Elastic Container Service for Kubernetes (EKS). Managed Kubernetes services are also available from IBM, Nutanix, Oracle, Pivotal, Platform9, Rancher Labs, Red Hat, VMware, and many other vendors.
Now that you’ve got the basics under your belt, are you ready to get started with Kubernetes? There are a variety of tutorials out there that can help you play around with Kubernetes and learn how to use it in your own work. You might want to start off with the simple tutorials on the Kubernetes project site itself; when you’re ready for something more advanced, check out Quick Code’s picks for the top 10 Kubernetes tutorials, which have a little something for everybody.
If you feel like you have a good handle on how Kubernetes works and you want to be able to demonstrate your expertise to employers, you might want to check out the pair of Kubernetes-related certifications offered jointly by the Linux Foundation and the Cloud Native Computing Foundation:
- Certified Kubernetes Administrator, which seeks to “provide assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators,” including application lifecycle management, installation, configuration, validation, cluster maintenance, and troubleshooting.
- Certified Kubernetes Application Developer, which certifies that users “can design, build, configure, and expose cloud native applications for Kubernetes.”
The certification exams are $300 each. There are also accompanying training courses, which can serve as a good, structured way to learn more about Kubernetes for those who are interested.