This article gives a high-level overview of the most frequently used Kubernetes (K8s) components.
As a Software Engineer, I’ve noticed that non-technical people often have trouble understanding how Kubernetes is built and how its components fit together. I hope this article will be helpful for both technical and non-technical folks who want to understand a little bit of K8s.
Before we start, one question we need to answer is: “What is Kubernetes?” It is an open-source container orchestration tool. Let me elaborate a little. You may have heard about containers (usually Docker) and how easy it is to build new services as lightweight containers. That is awesome, but it gets complex when you have hundreds or thousands of those containers serving many microservices. This is where K8s plays a key role: it allows us to monitor and manage (orchestrate) all those containers in an easy way, providing fault tolerance, high availability, disaster recovery, and high performance.
With that said, we can start learning a little bit more about some of the most used Kubernetes components.
A node hosts any number of pods. It is usually a physical server or a virtual machine.
A Pod is the smallest deployable unit in Kubernetes. It is an abstraction over what we know as a container, and it usually runs one application per Pod. One important thing to point out is that each Pod gets its own IP address, and a new IP address is assigned every time a Pod is recreated after it dies. This is usually something we do not want, and it can be solved with a Kubernetes Service. It is also worth mentioning that Pods operate at layer 3 (network) of the OSI model.
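To make this concrete, here is a minimal Pod manifest. The names (my-app) and the image are hypothetical placeholders, not something from a real cluster:

```yaml
# A minimal Pod sketch; "my-app" and the image are example values.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app        # label used later by a Service to find this Pod
spec:
  containers:
    - name: my-app
      image: nginx:1.25   # example container image
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods like this; you let a Deployment or StatefulSet manage them, as described below.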
A Service allows communication between Pods by giving them a permanent IP address: if a Pod dies and is recreated, the Service keeps the same IP, so clients do not need to track the new Pod. It also acts as a load balancer, distributing incoming requests across the matching Pods. It is worth mentioning that Services operate at layer 4 (transport) of the OSI model.
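A minimal Service sketch that fronts the Pods labeled app: my-app (both names are hypothetical):

```yaml
# A Service that load-balances traffic across Pods labeled app: my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app        # matches Pods carrying this label
  ports:
    - port: 80         # stable port exposed by the Service
      targetPort: 80   # container port on the Pods
```

The Service's cluster IP stays stable even as the Pods behind it come and go.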
An Ingress routes external traffic into the cluster. It is usually the first component a request reaches, and it is commonly used for web servers that interact with the end user.
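A minimal Ingress sketch that routes a hypothetical domain to the Service from the previous example (the host name is a placeholder):

```yaml
# Routes HTTP traffic for myapp.example.com to my-app-service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: myapp.example.com   # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```

Note that an Ingress only works if an ingress controller (e.g. NGINX Ingress Controller) is installed in the cluster.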
A ConfigMap is an external configuration file for our application, which we use to configure our Pods. All the Pods can read this configuration without needing to be recreated or rebuilt. Do not store sensitive information here; use Secrets for that.
Config Map — Secrets
A Secret is a configuration object meant to store sensitive information such as passwords. Keep in mind that by default the data is only base64-encoded, not encrypted.
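A minimal sketch of both objects side by side; all names and values are hypothetical:

```yaml
# Non-sensitive configuration lives in a ConfigMap...
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  DATABASE_HOST: my-db-service   # plain-text, non-sensitive settings
---
# ...sensitive values go in a Secret (base64-encoded, not encrypted).
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=     # "password" in base64
```

Pods can consume both as environment variables or mounted files, so configuration changes do not require rebuilding the image.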
Each time a Pod is restarted, the data inside it is deleted. To avoid this, Kubernetes has the Volume component, which provides persistence. You can attach local physical storage or remote storage, so if a database Pod is restarted it will not lose its data.
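The usual way to request persistent storage is through a PersistentVolumeClaim; a minimal sketch (name and size are arbitrary examples):

```yaml
# Requests 1Gi of persistent storage from the cluster's storage provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]   # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
```

A Pod then references this claim in its volumes section, and the data survives Pod restarts.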
Defines the Pods and their replication mechanism.
A Deployment is how we deploy new Pods. It is meant for stateless applications (applications that do not store data), and it can be seen as an abstraction over Pods that lets us replicate and configure them.
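A minimal Deployment sketch that keeps three replicas of the hypothetical my-app Pod running:

```yaml
# Keeps 3 identical stateless Pods running and replaces any that die.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:               # Pod template stamped out for each replica
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # example image
```

If a replica crashes, the Deployment automatically creates a new Pod to restore the desired count.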
A StatefulSet is meant for stateful applications such as databases. It must be used for them because it avoids data inconsistency when replicated Pods access the same storage. How can Kubernetes ensure data consistency? By using the StatefulSet component, which gives each replica a stable identity and keeps track of which one is using the storage at a given point in time. It is not easy to configure.
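A minimal StatefulSet sketch for a hypothetical database; each replica gets a stable name (my-db-0, my-db-1) and its own persistent volume:

```yaml
# Two database replicas with stable identities and dedicated storage.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db
spec:
  serviceName: my-db          # headless Service giving each replica a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
        - name: my-db
          image: postgres:16  # example image; real setups also need env vars, probes, etc.
  volumeClaimTemplates:       # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Unlike a Deployment, replicas are created and terminated in order, which is part of how the StatefulSet keeps stateful workloads consistent.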
This is how it should look in a diagram.
We have two nodes on which we replicate our Pods. Traffic is distributed between those Pods through Services: every request first reaches a Service, which acts as a load balancer (we will take a deeper look at this component in another article). In this particular diagram, the elements highlighted in red are Pods that are unavailable due to a failure. This means we will use the MyWebApp Pod on Node 2, since the one on Node 1 is not available. The same occurs with the database: we will use the one on Node 1, which is managed by a StatefulSet, and when both database Pods are running, the StatefulSet is what guarantees data consistency.
That is it for now. See you in another article.