Introduction
In this lab, we will explore the architecture of Kubernetes, a powerful container orchestration platform. We'll examine the key components that make up a Kubernetes cluster and learn how they interact to manage containerized applications. This lab is designed for beginners, providing a hands-on introduction to Kubernetes architecture.
Explore Control Plane Components
Let's begin by starting a Kubernetes cluster using Minikube and examining the control plane components.
First, open your terminal. You should be in the /home/labex/project directory by default. If not, navigate there:
cd ~/project
Now, start Minikube with the following command:
minikube start
This command initializes a single-node Kubernetes cluster on your local machine. It may take a few minutes to complete. Don't worry if you see a lot of output – this is normal.
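Once the command finishes, you can confirm the cluster came up with `minikube status`. As a sketch of what to look for, here is typical output captured as sample text and checked with standard shell tools (exact field names can vary between Minikube versions):

```shell
# Sample `minikube status` output, captured in a heredoc so this check can
# run without a live cluster. Field names may vary slightly by version.
status=$(cat <<'EOF'
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
EOF
)
# A healthy single-node cluster shows host, kubelet, and apiserver as Running.
running=$(echo "$status" | grep -c ': Running')
echo "components running: $running"
```

On your machine, simply running `minikube status` and eyeballing the `Running` entries accomplishes the same thing.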
Once Minikube has started, let's explore the control plane components. The control plane is the brain of Kubernetes, responsible for managing the overall state of the cluster. To check the status of these components, run:
kubectl get componentstatuses
You should see output similar to this (note: the componentstatuses API is deprecated in Kubernetes v1.19 and later, but it still works as a quick health check in many Minikube setups):
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
Let's break down what each of these components does:
- The scheduler: This component watches for newly created Pods with no assigned node, and selects a node for them to run on.
- The controller manager: This runs controller processes, which regulate the state of the system. For example, the replication controller ensures that the right number of Pod replicas are running.
- etcd: This is a distributed key-value store that acts as Kubernetes' backing store for all cluster data.
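If you want to script this health check, the output is plain text you can parse. The sketch below runs against a captured copy of the sample output above (assuming your cluster's output has the same column layout):

```shell
# Sample `kubectl get componentstatuses` output captured in a heredoc,
# so the parsing logic can be tested without a live cluster.
status=$(cat <<'EOF'
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
EOF
)
# Count any component whose STATUS column is not "Healthy" (skip the header).
unhealthy=$(echo "$status" | tail -n +2 | awk '$2 != "Healthy"' | wc -l)
if [ "$unhealthy" -eq 0 ]; then
  echo "control plane OK"
else
  echo "$unhealthy unhealthy component(s)"
fi
```

On a live cluster you would pipe the real command into the same `awk` filter instead of the heredoc.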
If all components show "Healthy", your control plane is functioning correctly. If you see any errors, it might be worth restarting Minikube with minikube delete followed by minikube start.
Examining Node Components
Now that we've looked at the control plane, let's examine the node components. In Kubernetes, nodes are the worker machines that run your applications. Think of them as the muscles of your cluster, doing the heavy lifting of running containers.
To see the nodes in your cluster, run:
kubectl get nodes
You should see output similar to this:
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   10m   v1.20.0
This output shows one node named "minikube" that serves as both the control plane and a worker node, since we're running a single-node cluster. In a production environment, you'd typically have multiple nodes, with the control plane and workers on separate machines.
The "Ready" status means the node is healthy and ready to accept Pods.
To get more detailed information about the node, use:
kubectl describe node minikube
This command provides a wealth of information about the node. Don't worry if it seems overwhelming – let's break down some key sections:
- Node Conditions: These show the status of various node conditions (e.g., Ready, DiskPressure, MemoryPressure).
- Capacity: This shows the total resources available on the node (CPU and memory).
- Allocatable: This shows the resources available for Pods to use.
- System Info: This provides information about the node's operating system, kernel version, and container runtime.
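If you only need one of these values, the output is plain text you can filter. The sketch below pulls CPU and memory out of a captured fragment of the Capacity section (the numbers here are examples, not your node's actual values):

```shell
# A captured fragment of the Capacity section from `kubectl describe node`.
# Values are illustrative examples, not real measurements.
capacity=$(cat <<'EOF'
Capacity:
  cpu:                2
  memory:             3933992Ki
  pods:               110
EOF
)
cpus=$(echo "$capacity" | awk '/^ *cpu:/ {print $2}')
mem=$(echo "$capacity" | awk '/^ *memory:/ {print $2}')
echo "node capacity: ${cpus} CPUs, ${mem} memory"
```

On a live cluster, `kubectl get node minikube -o jsonpath='{.status.capacity.cpu}'` gives you the CPU value directly without any text parsing.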
The key node components, which you won't see directly but are running on the node, include:
- kubelet: This is the primary node agent. It watches for Pods that have been assigned to its node and ensures they're running.
- kube-proxy: This maintains network rules on the node, allowing network communication to your Pods from inside or outside of your cluster.
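While the kubelet runs as a host process you won't see via kubectl, kube-proxy typically runs as a Pod in the kube-system namespace. On a live cluster you could check with `kubectl get pods -n kube-system`; the sketch below filters a captured sample of that output (pod names and ages here are illustrative, not from a real cluster):

```shell
# Sample `kubectl get pods -n kube-system` output; names are illustrative.
pods=$(cat <<'EOF'
NAME                               READY   STATUS    RESTARTS   AGE
etcd-minikube                      1/1     Running   0          10m
kube-apiserver-minikube            1/1     Running   0          10m
kube-proxy-x7b2k                   1/1     Running   0          10m
kube-scheduler-minikube            1/1     Running   0          10m
EOF
)
# Print only the kube-proxy pod(s), skipping the header line.
echo "$pods" | awk 'NR > 1 && $1 ~ /^kube-proxy/ {print $1}'
```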
Creating and Examining a Pod
Before diving in, let's understand how YAML works in Kubernetes:
graph TB
    A[YAML Config File] -->|Declares Desired State| B[Kubernetes API]
    B -->|Creates/Manages| C[Running Containers]
    D[kubectl CLI] -->|Reads| A
YAML files in Kubernetes act as "Infrastructure as Code":
- Think of it as a "menu" telling Kubernetes what you want
- Describes your desired system state in human-readable format
- Can be version controlled for team collaboration
Let's create our first YAML file. Create simple-pod.yaml:
nano ~/project/simple-pod.yaml
Add the following content:
# --- Beginning of YAML file ---
# 1. Tell Kubernetes which API version to use
apiVersion: v1
# 2. Declare what kind of resource we want to create
kind: Pod
# 3. Set metadata for this resource
metadata:
  name: nginx-pod        # Name of the Pod
  labels:                # Labels help us find and organize Pods
    app: nginx
# 4. Define what the Pod should contain
spec:
  containers:            # A Pod can run one or more containers
    - name: nginx              # Name of the container
      image: nginx:latest      # Which container image to use
      ports:                   # Which ports to expose
        - containerPort: 80    # Nginx listens on port 80 by default
The YAML file structure is like a tree:
Pod (root)
├── metadata (branch)
│   ├── name (leaf)
│   └── labels (leaf)
└── spec (branch)
    └── containers (branch)
        └── - name, image, ports (leaves)
Create the Pod:
kubectl apply -f simple-pod.yaml   # -f means read from file
This command will:
- Read your YAML file
- Send it to the Kubernetes API
- Kubernetes will work to achieve your described state
Verify the Pod creation:
kubectl get pods
You should see:
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          30s
The "1/1" under READY means one out of one containers in the Pod is ready. "Running" under STATUS means your first YAML configuration worked!
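You can check these columns programmatically, which is handy in scripts that need to wait for a Pod. This sketch parses a captured sample line (on a live cluster you'd feed it real output, e.g. from `kubectl get pods --no-headers`):

```shell
# A captured sample line from `kubectl get pods --no-headers`.
line='nginx-pod   1/1     Running   0          30s'
ready=$(echo "$line" | awk '{print $2}')   # READY column, e.g. "1/1"
phase=$(echo "$line" | awk '{print $3}')   # STATUS column, e.g. "Running"
if [ "$ready" = "1/1" ] && [ "$phase" = "Running" ]; then
  echo "pod is ready"
fi
```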
💡 Pro Tips:
- Indentation in YAML is crucial: use spaces, not tabs
- Use kubectl explain pod to see field documentation
- Always add comments for better maintainability
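The spaces-not-tabs rule is easy to verify mechanically. This sketch writes a tiny YAML fragment to a temporary file and greps it for tab characters (a pre-commit hook could run the same check over all your manifests):

```shell
# Write a small, correctly indented YAML fragment to a temp file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
metadata:
  name: nginx-pod
EOF
# Search for a literal tab character; printf '\t' produces one portably.
if grep -q "$(printf '\t')" "$tmp"; then
  result="tabs found: fix your indentation"
else
  result="indentation OK: no tabs"
fi
echo "$result"
rm -f "$tmp"
```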
To get detailed information about the pod:
kubectl describe pod nginx-pod
This command provides a lot of information, including:
- The node the Pod is running on
- The Pod's IP address
- The containers in the Pod
- Recent events related to the Pod
This information is crucial for debugging and understanding the state of your application.
Creating a Service
Now that we have a running pod, let's create a Service to expose it. In Kubernetes, a Service is an abstraction that defines a logical set of Pods and a policy by which to access them. Think of it as a way to expose your application to the network, either within the cluster or externally.
Create a file named nginx-service.yaml in your project directory:
nano ~/project/nginx-service.yaml
Add the following content to the file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
Let's break down this YAML file:
- selector: This determines which Pods the Service will send traffic to. In this case, it will select any Pods with the label app: nginx.
- ports: This specifies which ports the Service should use.
- type: NodePort: This means the Service will be accessible on a port on each node in your cluster.
Save the file and exit the editor.
Now, create the service by running:
kubectl apply -f nginx-service.yaml
To check the status of your service, use:
kubectl get services
You should see output similar to this:
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        1h
nginx-service   NodePort    10.110.126.65   <none>        80:30080/TCP   30s
The nginx-service line shows that your service has been created. The 80:30080/TCP under PORT(S) means that port 80 inside the cluster is mapped to port 30080 on the node.
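You can pull those two numbers apart with plain shell parameter expansion; a sketch using the PORT(S) value from the sample output above:

```shell
# The PORT(S) column value from `kubectl get services` for a NodePort service.
ports='80:30080/TCP'
service_port=${ports%%:*}       # everything before the first ":"  -> 80
node_port=${ports#*:}           # everything after the first ":"   -> 30080/TCP
node_port=${node_port%%/*}      # drop the "/TCP" protocol suffix  -> 30080
echo "service port ${service_port} maps to node port ${node_port}"
```

On a live cluster, `kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}'` retrieves the node port directly without text parsing.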
To get more detailed information about the service, use:
kubectl describe service nginx-service
This command provides information about the service's type, IP addresses, ports, and endpoints. The endpoints are the IP addresses of the Pods that the Service is sending traffic to.
Accessing the Application
Now that we have a pod running our application and a service exposing it, let's access the application. This step will show you how all the components we've set up work together to make your application accessible.
First, we need to find out the URL that Minikube has assigned to our service:
minikube service nginx-service --url
This command will output a URL, which should look something like http://192.168.64.2:30080. The IP address might be different on your machine.
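Since the URL is plain text, you can split it into the node IP and NodePort with shell parameter expansion. A sketch using an example URL (your actual IP will differ):

```shell
# Example URL in the shape that `minikube service nginx-service --url` prints.
url='http://192.168.64.2:30080'
hostport=${url#http://}         # strip the scheme -> 192.168.64.2:30080
node_ip=${hostport%%:*}         # part before ":"  -> 192.168.64.2
node_port=${hostport##*:}       # part after ":"   -> 30080
echo "node IP: ${node_ip}, NodePort: ${node_port}"
```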
To access the application, you can use the curl command followed by the URL:
curl $(minikube service nginx-service --url)
This should return the default Nginx welcome page HTML. If you see HTML output starting with <!DOCTYPE html>, congratulations! You've successfully accessed your application.
Let's break down what just happened:
- Your request first hit the NodePort service we created.
- The service then forwarded the request to the Pod running the Nginx container.
- The Nginx container processed the request and sent back the default welcome page.
This demonstrates how Kubernetes abstracts away the underlying infrastructure, allowing you to focus on your application rather than worrying about which specific machine it's running on.
Summary
In this lab, we explored the architecture of Kubernetes by examining its key components and their interactions. We started a Kubernetes cluster using Minikube, inspected the control plane and node components, created a pod to run an application, exposed the application using a service, and finally accessed the application.
graph TB
    subgraph Control Plane
        API[API Server]
        CM[Controller Manager]
        SCH[Scheduler]
        ETCD[etcd]
        API --> ETCD
        API --> CM
        API --> SCH
    end
    subgraph Worker Node
        KL[kubelet]
        KP[kube-proxy]
        CR[Container Runtime]
        subgraph Workloads
            POD1[Pod]
            POD2[Pod]
        end
        SVC[Service]
        KL --> CR
        POD1 --> CR
        POD2 --> CR
        KP --> SVC
        SVC --> POD1
        SVC --> POD2
    end
    API --> KL
    Client[External Client] --> SVC
We learned about:
- Control plane components like the API server, scheduler, and controller manager
- Node components like kubelet and kube-proxy
- Pods as the smallest deployable units in Kubernetes
- Services as a way to expose applications
This hands-on experience provides a solid foundation for understanding Kubernetes architecture. Remember, Kubernetes is a complex system with many moving parts, and it's okay if you don't understand everything right away. As you continue to work with Kubernetes, these concepts will become more familiar and intuitive.
Next steps in your Kubernetes journey could include learning about Deployments for managing multiple replicas of your application, ConfigMaps and Secrets for managing configuration, and Persistent Volumes for data storage. Keep exploring and happy Kuberneting!


