Kubernetes (or K8s) is an open-source container orchestration system. It is used for application deployment and scaling. It has become very popular recently, partly due to the rise of cloud-native solutions. In this article, I will give a high-level overview of Kubernetes, introduce the terminology, and explain what kinds of problems K8s solves.
What is a container?
To understand what a container orchestration system is, one first has to understand what a container is. A container is an instance of OS-level virtualization, a technique that allows you to run multiple isolated user-space instances on a single kernel. In other words, it offers virtualization (a computer running within a computer) with much less overhead and a much smaller performance penalty. It also does not require special hardware support or even third-party software.
One of the oldest and simplest examples of OS-level virtualization is the chroot command, which should be familiar to Linux users. chroot changes the apparent root directory of the current process and its children on Linux systems. You keep using the same kernel, but within an isolated file system tree. As a bonus, you also get isolated access to hardware, since disks and devices are represented as files in Linux.
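As a minimal sketch (assuming a Debian-based system with debootstrap installed; both commands require root, and the target directory is illustrative):

```shell
# Build a minimal Debian root file system under /srv/demo-root
# (debootstrap downloads a base system into the target directory).
sudo debootstrap stable /srv/demo-root

# Start a shell whose root directory is /srv/demo-root.
# The same kernel keeps running, but the shell and its children
# only see the isolated file system tree.
sudo chroot /srv/demo-root /bin/bash
```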
Another piece of OS-level virtualization software is Docker, which was designed specifically to containerize applications. This approach lets you be confident that your applications will run the same way across your development and production environments. It also simplifies the process of deployment and integration.
In Docker, a container image is described using a Dockerfile, a plain-text configuration file with its own simple syntax (it is not YAML). It specifies which base image the container builds on, the steps for building and starting the container, and which ports and storage locations to expose. Here is a typical Dockerfile that runs an Express.js application:
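A minimal sketch, assuming the application has a standard `package.json` and an entry point at `index.js` (file names and the port are illustrative):

```dockerfile
# Start from an official Node.js base image.
FROM node:18-alpine

# Set the working directory inside the container.
WORKDIR /app

# Install dependencies first, so this layer is cached
# when only the application code changes.
COPY package*.json ./
RUN npm install

# Copy the rest of the application source.
COPY . .

# The port the Express app listens on (illustrative).
EXPOSE 3000

# Command run when the container starts.
CMD ["node", "index.js"]
```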
Why orchestrate the containers?
Now that you know what containers are, let us talk about orchestrating them. Orchestration, as Wikipedia puts it, is “the automated configuration, coordination, and management of computer systems and software”. Why do we need to automate all of this for containers? The answer is closely tied to microservice architecture and scaling.
Microservice architecture is a way to design computer software by breaking it down into small, independent services. For example, the authentication service, the messaging service, the assets service, etc. This achieves two main goals:
- Robustness. If one service shuts down (due to an error or planned maintenance), the rest of the system is not affected.
- Scaling. Microservices make it easy to adapt to changing load. If you notice that your asset service consumes too much CPU while the messaging service is barely used, you can shift resources to the asset service or launch more instances of it.
Both of these goals are served by container orchestration systems such as Kubernetes. It automates running and managing containers for you, restarts containers if they fail, and distributes the workload across several machines.
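To make this concrete, here is a sketch of a Kubernetes Deployment manifest (the image name and port are illustrative) that asks for three replicas; Kubernetes restarts failed containers and spreads the replicas across available nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: asset-service
spec:
  replicas: 3                  # run three instances of the service
  selector:
    matchLabels:
      app: asset-service
  template:
    metadata:
      labels:
        app: asset-service
    spec:
      containers:
        - name: asset-service
          image: example/asset-service:1.0   # illustrative image name
          ports:
            - containerPort: 3000
```

Scaling up is then a one-line change (`replicas: 5`) or a single command: `kubectl scale deployment asset-service --replicas=5`.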
Before moving on any further, we need to know some essential concepts of Kubernetes:
- Pod – a group of containers. This is the smallest possible unit in the Kubernetes space. It is also guaranteed to run on the same machine, which means containers can share resources.
- Service – an abstraction that exposes a group of Pods as a single network endpoint, usually corresponding to a semi-independent part of a larger application.
- Volume – persistent storage for a Pod. By default, a container's data is wiped when it restarts. Volumes are used to persist data across restarts.
- ConfigMaps/Secrets – mechanisms to serve configuration and credentials to containers; ConfigMaps hold non-sensitive settings, while Secrets hold keys and passwords. Both can be injected into the environment of the containers that require them.
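A sketch of how these pieces fit together (all names are illustrative): this Pod mounts a Volume for persistent data and pulls a value from a Secret into its environment.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: example/app:1.0            # illustrative image
      env:
        - name: API_KEY                 # injected from a Secret
          valueFrom:
            secretKeyRef:
              name: app-secrets         # assumes this Secret exists
              key: api-key
      volumeMounts:
        - name: data
          mountPath: /var/lib/app       # data here survives restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data             # assumes this claim exists
```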
There are many more concepts to cover, but they are out of the scope of this article.
Why do you need Kubernetes?
Kubernetes is an excellent fit for managing and maintaining a large-scale, high-throughput application. It also powers many cloud platforms under the hood (Red Hat OpenShift, for example, is built on it), so you may be using it without even realizing it. With K8s you will be able to get enterprise-ready applications running within days.
One thing to keep in mind, though: Kubernetes is hard. It is a very powerful, feature-rich system, and it takes a lot of skill to set up correctly. If you are a developer without system administration experience, it is best to leave the setup to DevOps engineers. Nevertheless, it is still beneficial to know the basics.
Play around with it
If you are running Windows or macOS, you can use Docker Desktop to install both Docker and Kubernetes on your computer. On Linux, it is a bit more involved, but if you are using Linux, you probably know your way around.
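One common route on Linux is minikube, a single-node Kubernetes cluster for local use. A sketch (the download URL is minikube's official release location; the commands need network access, and the `install` step needs root):

```shell
# Download the latest minikube binary and put it on the PATH.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a local single-node cluster (uses Docker if available).
minikube start

# Verify the cluster is up.
kubectl get nodes
```

If you do not have kubectl installed separately, `minikube kubectl -- get nodes` works as well.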
Note: To enable Kubernetes in Docker Desktop, open the Kubernetes tab and check the Enable Kubernetes box:
Thank you for reading, I hope you enjoyed this article. Let me know in the comments how you are using Kubernetes in your projects!