Cluster Management
Kubernetes Cluster Architecture
Kubernetes cluster management involves coordinating multiple nodes so that resources are allocated efficiently and applications are deployed reliably. A cluster consists of control plane (master) nodes and worker nodes, each with specific responsibilities: the control plane exposes the API and schedules workloads, while the worker nodes run the containers themselves.
graph TD
    A[Kubernetes Cluster] --> B[Master Node]
    A --> C[Worker Nodes]
    B --> D[API Server]
    B --> E[Scheduler]
    C --> F[Kubelet]
    C --> G[Container Runtime]
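Once a cluster is running, these components can be verified from the command line. A quick check, assuming kubectl is already configured to talk to the cluster (on kubeadm-style clusters the control plane components run as pods in kube-system):

# List nodes and their roles
kubectl get nodes -o wide

# Inspect the control plane components
kubectl get pods -n kube-system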
Pod Configuration and Deployment
Pods represent the smallest deployable units in Kubernetes. Here's an example of creating a multi-container pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
  labels:
    app: multi-container-pod
spec:
  containers:
  - name: web-container
    image: nginx
  - name: database-container
    image: postgres
    env:
    # the postgres image refuses to start without a superuser password
    - name: POSTGRES_PASSWORD
      value: example
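The pod can then be created and inspected per container; a minimal sketch, assuming the manifest above is saved as multi-container-pod.yaml:

kubectl apply -f multi-container-pod.yaml
kubectl get pod multi-container-pod

# Logs are addressed per container with -c
kubectl logs multi-container-pod -c web-container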
Service Discovery and Networking
Kubernetes provides robust service discovery mechanisms:
| Service Type | Description | Use Case |
| --- | --- | --- |
| ClusterIP | Internal cluster communication | Default service type |
| NodePort | External access through node IP | Development environments |
| LoadBalancer | External load balancing | Production deployments |
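For example, a ClusterIP service for the multi-container pod above could be declared as follows; a minimal sketch that assumes the app: multi-container-pod label shown earlier, with the name web-svc chosen purely for illustration:

apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP
  selector:
    app: multi-container-pod
  ports:
  - port: 80
    targetPort: 80

Inside the cluster, the service is then resolvable via DNS as web-svc.<namespace>.svc.cluster.local.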
Resource Management on Ubuntu 22.04
Managing cluster resources involves defining resource constraints:
# Install kubectl (not in the stock Ubuntu 22.04 apt repositories; snap is the simplest route)
sudo snap install kubectl --classic

# Create a namespace and a deployment, then apply resource limits
kubectl create namespace resource-demo
kubectl create deployment nginx -n resource-demo --image=nginx
kubectl set resources deployment nginx -n resource-demo \
  --limits=cpu=500m,memory=512Mi \
  --requests=cpu=250m,memory=256Mi
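These flags map directly onto the resources stanza of the container spec; a sketch of the fragment that kubectl set resources writes into the deployment's pod template:

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

Running kubectl describe deployment nginx -n resource-demo shows these values under Requests and Limits.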
Scaling Applications
Kubernetes enables dynamic application scaling:
# Create a Horizontal Pod Autoscaler for the nginx deployment
kubectl autoscale deployment nginx -n resource-demo \
  --min=2 \
  --max=10 \
  --cpu-percent=50
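CPU-based autoscaling relies on the metrics pipeline (typically the metrics-server add-on) being installed; without it the HPA reports unknown utilization. Its current state can be checked with:

kubectl get hpa nginx -n resource-demo
kubectl describe hpa nginx -n resource-demo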
This configuration automatically adjusts the replica count between 2 and 10 to keep average CPU utilization near the 50% target, balancing resource consumption against application performance.