In this lab, we will explore the architecture of Kubernetes, a powerful container orchestration platform. We'll examine the key components that make up a Kubernetes cluster and learn how they interact to manage containerized applications. This lab is designed for beginners, providing a hands-on introduction to Kubernetes architecture.
Starting Minikube and Exploring Control Plane Components
Let's begin by starting a Kubernetes cluster using Minikube and examining the control plane components.
First, open your terminal. You should be in the /home/labex/project directory by default. If not, navigate there:
cd ~/project
Now, start Minikube with the following command:
minikube start
This command initializes a single-node Kubernetes cluster on your local machine. It may take a few minutes to complete. Don't worry if you see a lot of output; this is normal.
Once Minikube has started, let's explore the control plane components. The control plane is the brain of Kubernetes, responsible for managing the overall state of the cluster. To check the status of these components, run:
kubectl get componentstatuses
You should see output similar to this:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
Let's break down what each of these components does:
The scheduler: This component watches for newly created Pods that have no assigned node and selects a node for them to run on.
The controller manager: This runs controller processes that regulate the state of the system. For example, the replication controller ensures that the desired number of Pod replicas is running.
etcd: This is a distributed key-value store that acts as Kubernetes' backing store for all cluster data.
You may notice the API server is not in this list. That's because kubectl talks to the API server to retrieve this information in the first place; it's the component serving the request, so it doesn't report on itself here.
If all components show "Healthy", your control plane is functioning correctly. If you see any errors, it might be worth restarting Minikube with minikube delete followed by minikube start. (Note that componentstatuses is deprecated in newer Kubernetes releases; it still works for this quick check, though kubectl may print a deprecation warning.)
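You can also script this check. The sketch below parses sample componentstatuses output inlined in a heredoc so it is self-contained; in the lab, you would pipe the real kubectl command into the same logic:

```shell
# Sample `kubectl get componentstatuses` output, inlined for illustration;
# in a live cluster, replace the heredoc with the real command.
output=$(cat <<'EOF'
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
EOF
)

# Skip the header row, then collect any component whose STATUS is not Healthy.
unhealthy=$(echo "$output" | tail -n +2 | awk '$2 != "Healthy" {print $1}')

if [ -z "$unhealthy" ]; then
  echo "control plane healthy"
else
  echo "unhealthy components: $unhealthy"
fi
```

With the sample data above, this prints "control plane healthy".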
Examining Node Components
Now that we've looked at the control plane, let's examine the node components. In Kubernetes, nodes are the worker machines that run your applications. Think of them as the muscles of your cluster, doing the heavy lifting of running containers.
To see the nodes in your cluster, run:
kubectl get nodes
You should see output similar to this:
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   10m   v1.20.0
This output shows that we have one node named "minikube" that serves as both the control plane and a worker, since we're running a single-node cluster. In a production environment, you'd typically have multiple nodes, with the control plane separated from the workers.
The "Ready" status means the node is healthy and ready to accept Pods.
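Because kubectl's plain output is whitespace-separated columns, individual fields are easy to pull apart in the shell. A small sketch using the node row from the output above (in the lab, the row would come from kubectl get nodes itself):

```shell
# One data row from `kubectl get nodes`, as shown above.
line='minikube   Ready   control-plane,master   10m   v1.20.0'

# Let the shell split the row on whitespace into positional fields.
set -- $line
node_name=$1
node_status=$2
node_version=$5

echo "$node_name is $node_status (Kubernetes $node_version)"
# prints: minikube is Ready (Kubernetes v1.20.0)
```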
To get more detailed information about the node, use:
kubectl describe node minikube
This command provides a wealth of information about the node. Don't worry if it seems overwhelming; let's break down some key sections:
Node Conditions: These show the status of various node conditions (e.g., Ready, DiskPressure, MemoryPressure).
Capacity: This shows the total resources on the node (CPU, memory, and the maximum number of Pods).
Allocatable: This shows the portion of that capacity available for Pods to use, after resources reserved for system processes are subtracted.
System Info: This provides information about the node's operating system, kernel version, and container runtime.
The key node components, which you won't see directly but are running on the node, include:
kubelet: This is the primary node agent. It watches for Pods that have been assigned to its node and ensures they're running.
kube-proxy: This maintains network rules on the node, allowing network communication to your Pods from inside or outside of your cluster.
Creating and Examining a Pod
Now that we understand the cluster architecture, let's create a simple pod and examine its components. In Kubernetes, a Pod is the smallest deployable unit; think of it as a single instance of an application.
Create a file named simple-pod.yaml in your project directory:
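Open the file in your editor (for example, nano simple-pod.yaml) and add a manifest along these lines. This is a minimal sketch matching the breakdown that follows: the Pod name nginx-pod and the app: nginx label come from the lab, while the nginx image tag and containerPort 80 are conventional choices assumed here:

```yaml
apiVersion: v1            # Pods belong to the core API group
kind: Pod
metadata:
  name: nginx-pod         # the name referenced throughout this lab
  labels:
    app: nginx            # the label the Service will later select on
spec:
  containers:
    - name: nginx
      image: nginx:latest # assumed tag; any recent Nginx image works
      ports:
        - containerPort: 80
```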
This YAML file defines a Pod named "nginx-pod" that runs an Nginx container. Let's break it down:
apiVersion and kind: These specify that we're creating a Pod object.
metadata: This includes the name of the Pod and any labels. Labels are key/value pairs used to organize and select objects.
spec: This describes the desired state of the Pod, including which containers it should run.
Save the file and exit the editor (in nano, press Ctrl+X, then Y, then Enter).
Now, create the pod by running:
kubectl apply -f simple-pod.yaml
To check the status of your pod, use:
kubectl get pods
You should see output similar to this:
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          30s
The "1/1" under READY means that one container out of one in the Pod is ready. "Running" under STATUS means the Pod is working correctly.
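The READY value is just a "ready/total" string, so you can split it with standard shell parameter expansion. A minimal sketch using the value from the output above:

```shell
# READY column value from the `kubectl get pods` output above.
ready_field="1/1"

ready_count=${ready_field%%/*}   # containers currently ready
total_count=${ready_field##*/}   # containers defined in the Pod

if [ "$ready_count" -eq "$total_count" ]; then
  echo "all containers in the Pod are ready"
fi
```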
To get more detailed information about the pod, use:
kubectl describe pod nginx-pod
This command provides a lot of information, including:
The node the Pod is running on
The Pod's IP address
The containers in the Pod
Recent events related to the Pod
This information is crucial for debugging and understanding the state of your application.
Creating a Service
Now that we have a running pod, let's create a Service to expose it. In Kubernetes, a Service is an abstraction that defines a logical set of Pods and a policy by which to access them. Think of it as a way to expose your application to the network, either within the cluster or externally.
Create a file named nginx-service.yaml in your project directory:
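A manifest matching the breakdown that follows looks like this. The Service name and selector come from the lab; targetPort 80 is assumed to match the Nginx container, and nodePort 30080 matches the port mapping shown later in this lab:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort          # expose the Service on a port of each node
  selector:
    app: nginx            # route traffic to Pods carrying this label
  ports:
    - port: 80            # port the Service listens on inside the cluster
      targetPort: 80      # container port on the Pod (assumed: Nginx default)
      nodePort: 30080     # node port for external access
```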
This YAML file defines a Service named "nginx-service". Let's break down the key fields:
selector: This determines which Pods the Service will send traffic to. In this case, it will select any Pods with the label app: nginx.
ports: This specifies which ports the Service should use.
type: NodePort: This means the Service will be accessible on a port on each node in your cluster.
Save the file and exit the editor.
Now, create the service by running:
kubectl apply -f nginx-service.yaml
To check the status of your service, use:
kubectl get services
You should see output similar to this:
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        1h
nginx-service   NodePort    10.110.126.65   <none>        80:30080/TCP   30s
The nginx-service line shows that your service has been created. The 80:30080/TCP under PORT(S) means that port 80 inside the cluster is mapped to port 30080 on the node.
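That PORT(S) value is a single "port:nodePort/protocol" string, so its pieces can be separated with shell parameter expansion. A sketch using the value from the output above:

```shell
# PORT(S) value for nginx-service, from the `kubectl get services` output above.
ports="80:30080/TCP"

cluster_port=${ports%%:*}     # part before ":"  -> the Service's cluster port
node_part=${ports#*:}         # "30080/TCP"
node_port=${node_part%%/*}    # part before "/"  -> the NodePort on each node
protocol=${node_part#*/}      # part after "/"   -> the protocol

echo "cluster port $cluster_port maps to node port $node_port over $protocol"
# prints: cluster port 80 maps to node port 30080 over TCP
```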
To get more detailed information about the service, use:
kubectl describe service nginx-service
This command provides information about the service's type, IP addresses, ports, and endpoints. The endpoints are the IP addresses of the Pods that the Service is sending traffic to.
Accessing the Application
Now that we have a pod running our application and a service exposing it, let's access the application. This step will show you how all the components we've set up work together to make your application accessible.
First, we need to find out the URL that Minikube has assigned to our service:
minikube service nginx-service --url
This command will output a URL, which should look something like http://192.168.64.2:30080. The IP address might be different on your machine.
To access the application, you can use the curl command followed by the URL:
curl $(minikube service nginx-service --url)
This should return the default Nginx welcome page HTML. If you see HTML output starting with <!DOCTYPE html>, congratulations! You've successfully accessed your application.
Let's break down what just happened:
Your request first hit the NodePort service we created.
The service then forwarded the request to the Pod running the Nginx container.
The Nginx container processed the request and sent back the default welcome page.
This demonstrates how Kubernetes abstracts away the underlying infrastructure, allowing you to focus on your application rather than worrying about which specific machine it's running on.
Summary
In this lab, we explored the architecture of Kubernetes by examining its key components and their interactions. We started a Kubernetes cluster using Minikube, inspected the control plane and node components, created a pod to run an application, exposed the application using a service, and finally accessed the application.
We learned about:
Control plane components like the API server, scheduler, and controller manager
Node components like kubelet and kube-proxy
Pods as the smallest deployable units in Kubernetes
Services as a way to expose applications
This hands-on experience provides a solid foundation for understanding Kubernetes architecture. Remember, Kubernetes is a complex system with many moving parts, and it's okay if you don't understand everything right away. As you continue to work with Kubernetes, these concepts will become more familiar and intuitive.
Next steps in your Kubernetes journey could include learning about Deployments for managing multiple replicas of your application, ConfigMaps and Secrets for managing configuration, and Persistent Volumes for data storage. Keep exploring and happy Kuberneting!