In the previous post, we discussed what Kubernetes is. In this post, let's start with the fundamentals: Kubernetes architecture.
The basic Kubernetes architecture comprises the control plane and the worker nodes, or compute machines. The Kubernetes API server, Kubernetes scheduler, Kubernetes controller manager, and etcd are all control plane components, while the node components include a container runtime engine (such as Docker), a kubelet service, and a Kubernetes proxy service.
Control Plane Components -
So let's start with the control plane components. The control plane is the nerve center of the Kubernetes cluster architecture, housing the cluster's control components. The Kubernetes control plane is in constant contact with the compute machines to ensure that the cluster runs as configured. The control plane components make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (for example, starting a new pod when a Deployment's replicas field is unsatisfied).
1. Kubernetes API Server (kube-apiserver)
The API server, the front end of the Kubernetes control plane, facilitates updates, scaling, and other kinds of lifecycle orchestration by providing APIs for various types of applications. Because the API server is the gateway to the cluster, users must be able to reach it from outside the cluster. Clients use the API server as a tunnel to pods, services, and nodes, and they authenticate through it.
The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
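As a minimal sketch, this is roughly what a kubeconfig file that points a client such as kubectl at the API server looks like; the server address, names, and credential paths below are all hypothetical:

apiVersion: v1
kind: Config
clusters:
- name: example-cluster
  cluster:
    # Hypothetical API server endpoint; kube-apiserver listens on port 6443 by default
    server: https://203.0.113.10:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: example-user
  user:
    client-certificate: /home/user/.kube/client.crt
    client-key: /home/user/.kube/client.key
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context

Every request made with this file, whether listing pods or scaling a deployment, is authenticated by and flows through the API server.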
2. etcd
etcd is a distributed, fault-tolerant key-value store that holds configuration data as well as cluster state information. Although etcd can be hosted externally, it is often run as part of the Kubernetes control plane.
etcd uses the Raft consensus algorithm to store the cluster state. Raft addresses a common problem with replicated state machines: getting multiple servers to agree on values. It defines three roles, leader, candidate, and follower, and achieves consensus by electing a leader.
As a result, etcd serves as the single source of truth (SSOT) for all Kubernetes cluster components, responding to control plane queries and returning various parameters about the state of containers, nodes, and pods. etcd is also used to store configuration information such as ConfigMaps, subnets, and Secrets, alongside cluster state data.
The configuration that ends up in etcd is generally authored as human-readable YAML.
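For example, a simple ConfigMap like the sketch below is authored in YAML, submitted through the API server, and ultimately persisted in etcd; the names and values here are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
  namespace: default
data:
  # Arbitrary key-value configuration that pods can consume at runtime
  LOG_LEVEL: "info"
  DB_HOST: "db.example.internal"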
3. Kubernetes Scheduler (kube-scheduler)
The Kubernetes scheduler tracks resource usage data for each compute node, determines whether a cluster is healthy, and decides whether and where new containers should be deployed. The scheduler considers the cluster's overall health as well as the pod's resource demands, such as CPU or memory. Then it chooses an appropriate compute node and schedules the task, pod, or service, taking into account resource constraints or guarantees, data locality, service quality requirements, anti-affinity and affinity specifications, and other factors.
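The sketch below shows the pod-level fields the scheduler evaluates: resource requests and limits plus an anti-affinity rule; the labels, image, and sizes are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:              # the scheduler uses requests to find a node with spare capacity
        cpu: "250m"
        memory: "256Mi"
      limits:                # hard ceiling enforced on the chosen node
        cpu: "500m"
        memory: "512Mi"
  affinity:
    podAntiAffinity:         # avoid co-locating replicas of the same app on one node
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname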
4. Kubernetes Controller Manager (kube-controller-manager)
Although there are several controller functions in a Kubernetes cluster, they are all compiled into a single binary known as kube-controller-manager. The controller manager, often simply called the controller, is a daemon that manages the Kubernetes cluster by performing these controller functions. (A separate cloud-controller-manager runs the control loops that are specific to a cloud provider.)
The controller watches the objects in the cluster while running the Kubernetes core control loops, tracking their desired and current states through the API server. If the current and desired states of a managed object do not match, the controller takes corrective action to move the object closer to the desired state.
Worker Node Components -
Cluster nodes are machines that run containers and are managed by the control plane. The kubelet, the primary node agent, runs on each node and communicates with the control plane.
The components listed below run on each worker node in your Kubernetes cluster.
1. Nodes
A Kubernetes cluster must have at least one compute node, but it can have many more depending on capacity requirements. Because pods are orchestrated and scheduled to run on nodes, more nodes are required to increase cluster capacity.
2. kubelet
An agent that runs on each cluster node. It ensures that the containers in a pod are running. When the control plane needs a specific action performed on a node, the kubelet receives the pod specifications through the API server and carries out the action. It then ensures that the associated containers are in good working order. Containers that were not created by Kubernetes are not managed by the kubelet.
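Liveness probes are one concrete example of the kubelet keeping containers in good working order: the kubelet runs the probe on its node and restarts the container when the probe keeps failing. A minimal sketch, with a hypothetical health endpoint:

apiVersion: v1
kind: Pod
metadata:
  name: probed-app           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:           # executed by the kubelet on the node, not by the control plane
      httpGet:
        path: /healthz       # hypothetical health endpoint served by the container
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10      # on repeated failures the kubelet restarts the container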
3. kube-proxy
It is a network proxy that runs on every node in your cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on nodes. These rules permit network communication to your pods from network sessions both inside and outside of your cluster. It acts as a network proxy and service load balancer, managing network routing for UDP and TCP packets; in fact, kube-proxy routes traffic for all service endpoints. If an operating system packet filtering layer is available, kube-proxy uses it; otherwise, it forwards the traffic itself.
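For instance, a NodePort Service like the sketch below causes kube-proxy to program rules on every node so that traffic arriving on the node port is routed to a healthy backing pod; the name, labels, and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport         # hypothetical name
spec:
  type: NodePort             # exposes the service on a port of every node
  selector:
    app: web
  ports:
  - port: 80                 # cluster-internal service port
    targetPort: 80           # container port on the backing pods
    nodePort: 30080          # kube-proxy routes traffic hitting this node port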
4. Container runtime
The container runtime is the software that is responsible for running containers. Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
Other Kubernetes Infrastructure Components -
1. Pods
So far, we've covered internal and infrastructure-related concepts. Pods, by contrast, are central to Kubernetes because they are the primary outward-facing construct that developers interact with. A pod is the smallest unit in the Kubernetes object model, representing a single instance of an application. Each pod is made up of a container, or a series of tightly coupled containers that logically belong together, along with rules that govern how the containers run.
Pods have a short lifespan and will eventually die after being upgraded or scaled back down. Pods, despite being ephemeral, can run stateful applications by connecting to persistent storage. Pods can also scale horizontally, which means they can increase or decrease the number of instances running. They are also capable of performing rolling updates and canary deployments.
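Here is a minimal sketch of a pod with two tightly coupled containers that share the same network namespace and lifecycle; the names, images, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar     # hypothetical name
spec:
  containers:
  - name: web                # main application container
    image: nginx:1.25
  - name: log-shipper        # sidecar that runs alongside the main container
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]   # placeholder for a real log-shipping process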
2. Service
Pods are volatile, which means Kubernetes cannot guarantee that a specific physical pod will be kept alive (for instance, the replication controller might kill and start a new set of pods). A service, on the other hand, represents a logical set of pods and serves as a gateway, allowing (client) pods to send requests to the service without having to keep track of which physical pods make up the service.
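A minimal Service sketch: the selector defines the logical set of pods, and clients address the stable service name instead of tracking individual pods; the labels and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: web-svc              # stable name clients use, e.g. http://web-svc
spec:
  selector:
    app: web                 # any pod carrying this label backs the service
  ports:
  - port: 80                 # port the service exposes
    targetPort: 80           # port on the backing pods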
3. Deployments
A way to deploy containerized application pods. A Deployment describes a desired state, and controllers change the actual state of the cluster in an orderly manner to achieve that state.
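A minimal Deployment sketch declaring a desired state of three replicas; the controllers create or replace pods until the observed state matches. The name and image are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy           # hypothetical name
spec:
  replicas: 3                # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:                  # pod template stamped out by the underlying ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25

If one of the three pods dies, the control loop notices the mismatch between desired and current state and starts a replacement.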
4. ReplicaSet
Ensures that a specified number of identical Pods are running at any given point in time.
5. Cluster DNS
Serves DNS records needed to operate Kubernetes services.
6. Volume
A Kubernetes volume is similar to a Docker container volume, except that it applies to an entire pod and can be mounted in all of the pod's containers. Kubernetes ensures that data is retained across container restarts; the volume is removed only when the pod is destroyed. A pod can also be associated with multiple volumes (of potentially different types).
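A sketch of a pod-scoped volume mounted into two containers; an emptyDir volume survives container restarts but is removed with the pod. The names, images, and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod    # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}             # lives exactly as long as the pod does
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - name: shared-data      # same volume, mounted in both containers
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "touch /data/out.log; tail -f /data/out.log"]
    volumeMounts:
    - name: shared-data
      mountPath: /data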
7. Namespace
Namespaces in Kubernetes are a mechanism for isolating groups of resources within a single cluster. Resource names must be unique within a namespace but not across namespaces. Namespace-scoped resources can only reference resources in the same namespace (a pod, for example, can only mount ConfigMaps and Secrets from its own namespace). A namespace can also be given a resource quota so that it does not consume more than its fair share of the physical cluster's overall resources.
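A sketch of a namespace paired with a ResourceQuota capping its share of cluster resources; the names and amounts are illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a               # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # the quota applies inside this namespace only
spec:
  hard:
    requests.cpu: "4"        # total CPU the namespace's pods may request
    requests.memory: 8Gi     # total memory the namespace's pods may request
    pods: "20"               # cap on the number of pods in the namespace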
That covers the essentials of the Kubernetes cluster architecture, and it brings this post to a conclusion.
I hope this was informative.
Thank you for reading!
*** Explore | Share | Grow ***