What is Kubernetes?

Kubernetes, or K8s for short, is an open source platform pioneered by Google, which started as a simple container orchestration tool but has grown into a cloud-native platform.

It's one of the most significant advancements in IT since the public cloud came into being, and it has already exceeded 50% adoption among enterprises.


What is container orchestration?

Container orchestration is about managing the lifecycle of containers, particularly in large, dynamic environments. It automates the deployment, networking, scaling, and availability of containerized workloads and services.

Running containers – which are lightweight and usually ephemeral by nature – in small numbers is easy enough to do manually. However, managing them at scale in production environments can be a significant challenge without the automation that container orchestration platforms offer.
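Orchestration is typically expressed declaratively: you describe the desired state and the platform continuously works to achieve it. As an illustrative sketch (the name, labels and image below are hypothetical), a Kubernetes Deployment asking for three identical containerized replicas looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical application name
spec:
  replicas: 3              # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # illustrative container image
```

If a pod crashes or a node fails, the platform schedules a replacement automatically to restore the declared replica count.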

Kubernetes has become the standard for container orchestration in the enterprise world.


Kubernetes impact


  • 65% – Improved maintenance, monitoring, and automation
  • 46% – Modernizing infrastructure
  • 26.6% – Faster time to market


What is a Kubernetes cluster?

A Kubernetes cluster is what you get when you deploy Kubernetes on physical or virtual machines. It consists of two types of machines:


  • Workers: the resources used to run the services needed to host containerized workloads
  • Control plane hosts: used to manage the workers and monitor the health of the entire system

Every cluster has at least one worker, and the control plane services can run on the same machine. In production environments, there is typically a large number of workers, depending on the number of containers to be run, and the control plane is distributed across multiple machines for high-availability and fault-tolerance purposes.


Kubernetes advantages



Kubernetes is popular for its appealing architecture, its large and active community, and its extensibility, which together enable countless development teams to deliver and maintain software at scale by automating container orchestration.



Kubernetes maps out how applications should work and interact with other applications. Due to its elasticity, it can scale services up and down as required, perform rolling updates, switch traffic between different versions of your applications to test features, or rollback problematic deployments.
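Rolling updates, for example, are configured declaratively. As a hedged sketch (field values are illustrative), the fragment below would sit inside a Deployment's spec and tells Kubernetes to replace pods gradually, keeping at most one pod unavailable while allowing one extra pod during the rollout:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down at a time
      maxSurge: 1         # at most one extra pod during the update
```

If the new version misbehaves, the same Deployment can be rolled back to its previous revision.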



Kubernetes has emerged as a leading choice for organizations looking to build their multi-cloud environments. All public clouds have adopted Kubernetes and offer their own distributions, such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS).


Kubernetes history and ecosystem

Kubernetes (from the Greek ‘κυβερνήτης’ meaning 'helmsman') was originally developed by Google. Kubernetes' design has been heavily influenced by Google's 'Borg' project – a similar system used by Google to run much of its infrastructure. Kubernetes has since been donated to the Cloud Native Computing Foundation (CNCF), a collaborative project between the Linux Foundation and Google, Cisco, IBM, Docker, Microsoft, AWS and VMware.

Did you know?


  • The font used in the Kubernetes logo is the Ubuntu font
  • Kubernetes is shortened as the numeronym ‘K8s’, from the first (K) and last (s) characters and the 8 characters in between (ubernete)



What does Kubernetes do?


Kubernetes is a platform for running your applications and services. It manages the full lifecycle of container-based applications by automating tasks, controlling resources, and abstracting infrastructure. Enterprises adopt Kubernetes to cut down operational costs, reduce time to market, and transform their business. Developers like container-based development, as it helps break up monolithic applications into more maintainable microservices. Kubernetes allows their work to move seamlessly from development to production, and results in faster time to market for business applications.

Kubernetes works by:


  • Orchestrating containerized applications across multiple hosts
  • Ensuring that containerized apps behave in the same way in all environments, from testing to production
  • Controlling and automating application deployments and updates
  • Making more efficient use of hardware to minimize resources needed to run containerized applications
  • Mounting and adding storage to run stateful apps
  • Scaling and load balancing containerized applications and their resources on the fly and reacting to changes in the workload
  • Exposing containers to the internet, to other containers and to other clusters
  • Health-checking and self-healing applications with auto-placement, auto-restart, auto-replication and auto-scaling
  • Declaratively managing services, which guarantees that applications are always running as intended
  • Being open source (all Kubernetes code is on GitHub) and maintained by a large, active community
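Several of the points above – efficient hardware use and health-checking in particular – are expressed directly in a workload's manifest. As an illustrative sketch (all names, images and values are hypothetical), a pod can declare both its resource envelope and a liveness probe:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25      # illustrative image
    resources:
      requests:            # used by the scheduler for bin-packing
        cpu: 100m
        memory: 128Mi
      limits:              # hard ceiling enforced at runtime
        cpu: 500m
        memory: 256Mi
    livenessProbe:         # self-healing: restart if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```

Requests let the scheduler place pods efficiently across nodes, while the probe drives automatic restarts of unhealthy containers.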



What Kubernetes is not

Kubernetes enables configuration, automation and management capabilities around containers. It has a vast tooling ecosystem and can address complex use cases, which is why many mistake it for a traditional Platform-as-a-Service (PaaS).

Kubernetes, as opposed to PaaS, does not:


  • Limit the types of supported applications or require a dependency handling framework
  • Require applications to be written in a specific programming language, nor does it dictate a specific configuration language/system
  • Provide application-level services, such as middleware, databases and storage clusters out of the box. Such components can be integrated with k8s through add-ons
  • Provide or dictate specific logging, monitoring and alerting components
  • Deploy source code or build applications, although it can be used to build CI/CD pipelines
  • Manage and provision certificates for the applications running in containers


How does Kubernetes work?

Kubernetes works by joining a group of physical or virtual host machines, referred to as “nodes”, into a cluster. This creates a “supercomputer” to run containerized applications with a greater processing speed, more storage capacity, and increased network capabilities than any single machine would have on its own. The nodes include all necessary services to run “pods”, which in turn run single or multiple containers. A pod corresponds to a single instance of an application in Kubernetes.
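A pod can host one container or several tightly coupled ones that share the same network namespace and storage. As a hedged sketch (names and images are hypothetical), a two-container pod with a sidecar looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app              # main application container
    image: nginx:1.25
  - name: log-agent        # sidecar sharing the pod's network and volumes
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]  # placeholder workload
```

Both containers are scheduled together onto the same node and can reach each other over localhost.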

One (or more for larger clusters, or High Availability) node of the cluster is designated as the “control plane”. The control plane node then assumes responsibility for the cluster as the orchestration layer – scheduling and allocating tasks to the “worker” nodes in a way which optimises the resources of the cluster. All administrative and operational tasks on the cluster are done through the control plane, whether these are changes to the configuration, executing or terminating workloads, or controlling ingress and egress on the network.

The control plane is also responsible for monitoring all aspects of the cluster, enabling it to perform additional useful functions such as automatically reallocating workloads in case of failure, scaling up tasks which need more resources and otherwise ensuring that the assigned workloads are always operating correctly.


Kubernetes glossary


Node

A virtual or physical machine, depending on the cluster setup. Each node is managed by the control plane and contains the services necessary to run pods.


Pod

The smallest unit in the Kubernetes object model that is used to host containers.


Container

A standard unit of software that packages up code and all its dependencies, making it portable across different compute environments.


Control plane

The orchestration layer that provides interfaces to define, deploy, and manage the lifecycle of containers.


Worker nodes

Every worker node can host applications as containers. A Kubernetes cluster usually has multiple worker nodes, but at least one.


API server

The primary control plane component, which exposes the Kubernetes API, enabling communications between cluster components.


Controller-manager

A control plane daemon that monitors the state of the cluster and makes all necessary changes for the cluster to reach its desired state.


Services

A logical abstraction that groups multiple pods and provides network policies that define how the pods can be accessed.
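As an illustrative sketch (names and ports are hypothetical), a Service selects pods by label and gives them a single, stable network identity:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web         # groups all pods carrying this label
  ports:
  - port: 80         # port the Service exposes
    targetPort: 8080 # port the pods actually listen on
```

Clients inside the cluster can then reach the pods via the Service name, regardless of which individual pods come and go.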


Resources

Kubernetes is based on the principles of control theory. It allows users to manage their applications' lifecycle by creating, modifying or deleting resources that are tracked by controllers, thus regulating the state of the entire system. In the Kubernetes API, every resource corresponds to a specific endpoint.


Custom resources

A customization of a particular Kubernetes installation. They allow users to extend the Kubernetes API, beyond its default supported behavior, addressing more complicated use cases and making the platform more modular.
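As a hedged sketch (the group, names and schema below are entirely hypothetical), a CustomResourceDefinition registers a new resource type with the API server:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:       # hypothetical field for this example
                type: string
```

Once applied, users can create `Backup` objects like any built-in resource, typically paired with a custom controller that acts on them.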


Etcd

The control plane database, a reliable key-value store, used to store the state of the cluster, capturing details such as deployment status of containers, pod metrics and logs, network configuration, cluster certificates etc.


Ingress

A way to manage external access to the services in a cluster by exposing network routes through the Kubernetes API.
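As an illustrative sketch (host, paths and service names are hypothetical), an Ingress routes external HTTP traffic to a Service inside the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.com        # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web        # Service receiving the traffic
            port:
              number: 80
```

An ingress controller (such as one based on NGINX) must be running in the cluster for these rules to take effect.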


Kubelet

An agent that runs on each worker node in the cluster and ensures that containers are running in a pod.


Kube-proxy

Enables communication between worker nodes, by maintaining network rules on the nodes.


Container runtime/ CRI

The software responsible for running containers, by coordinating the use of system resources across containers. Kubernetes can use different container runtimes (e.g. containerd, CRI-O) through the container runtime interface (CRI).


CNI

The Container Network Interface is a specification and a set of tools to define networking interfaces between network providers and Kubernetes.


CSI

The Container Storage Interface is a specification for data storage tools and applications to integrate with Kubernetes clusters.


Kubectl

A command-line tool for controlling Kubernetes clusters.



Get your K8s questions answered

Let our Kubernetes experts help you take the next step.