## Practical Implementation

### Step-by-Step Node Affinity Configuration

#### Preparing Your Kubernetes Cluster
```mermaid
graph TD
    A[Cluster Preparation] --> B[Label Nodes]
    A --> C[Define Affinity Rules]
    A --> D[Deploy Applications]
```
#### Labeling Nodes
First, label your nodes to enable precise affinity configurations:
```bash
# Add custom labels to nodes
kubectl label nodes worker-node-1 disktype=ssd
kubectl label nodes worker-node-2 performance-tier=high
```
#### Creating an Affinity-Enabled Deployment

The Deployment below combines a required rule (SSD storage) with a preferred rule (high-performance tier):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-cluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app: database-cluster
  template:
    metadata:
      labels:
        app: database-cluster
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype
                    operator: In
                    values:
                      - ssd
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 60
              preference:
                matchExpressions:
                  - key: performance-tier
                    operator: In
                    values:
                      - high
      containers:
        - name: database
          image: postgres:16 # placeholder; substitute your database image
```
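The interplay between the two rule types can be modeled in a few lines of Python. This is a deliberately simplified sketch of the scheduler's behavior, not its actual implementation: required terms filter nodes out entirely, while preferred terms only add their weight to a node's score. The node names and labels below are hypothetical.

```python
# Simplified model of node-affinity scheduling (not the real kube-scheduler).
# Required expressions filter nodes; preferred terms add weight to the score.

def matches(expr, labels):
    """Evaluate a single matchExpression against a node's label dict."""
    key, op = expr["key"], expr["operator"]
    if op == "In":
        return labels.get(key) in expr["values"]
    if op == "Exists":
        return key in labels
    raise ValueError(f"unsupported operator: {op}")

def schedule(nodes, required, preferred):
    """Return candidate node names sorted by descending preference score."""
    # A node qualifies only if every required expression matches.
    candidates = [
        (name, labels) for name, labels in nodes.items()
        if all(matches(e, labels) for e in required)
    ]
    # Each fully-matching preferred term contributes its weight.
    scored = [
        (sum(p["weight"] for p in preferred
             if all(matches(e, labels) for e in p["matchExpressions"])),
         name)
        for name, labels in candidates
    ]
    return [name for _, name in sorted(scored, reverse=True)]

nodes = {
    "worker-node-1": {"disktype": "ssd"},
    "worker-node-2": {"disktype": "ssd", "performance-tier": "high"},
    "worker-node-3": {"disktype": "hdd", "performance-tier": "high"},
}
required = [{"key": "disktype", "operator": "In", "values": ["ssd"]}]
preferred = [{"weight": 60, "matchExpressions": [
    {"key": "performance-tier", "operator": "In", "values": ["high"]}]}]

print(schedule(nodes, required, preferred))
# worker-node-3 is excluded outright (hdd); worker-node-2 ranks first (weight 60)
```

Note that `worker-node-3` never appears in the result despite matching the preferred rule: a preferred term can never rescue a node that fails a required one.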
### Debugging and Verification

#### Checking Node Placement
```bash
# Verify pod scheduling and node placement
kubectl get pods -o wide

# Describe the deployment to inspect scheduling details
kubectl describe deployment database-cluster
```
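When verifying placement across many pods, it helps to summarize the `NODE` column rather than eyeball it. The sketch below parses tabular `kubectl get pods -o wide` output and counts pods per node; the sample text is illustrative, not captured from a real cluster.

```python
# Summarize pod placement from `kubectl get pods -o wide` style output.
# The sample output below is illustrative, not from a real cluster.

from collections import Counter

sample = """\
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE
database-cluster-1  1/1     Running   0          2m    10.244.1.5   worker-node-1
database-cluster-2  1/1     Running   0          2m    10.244.1.6   worker-node-1
database-cluster-3  1/1     Running   0          2m    10.244.2.4   worker-node-2
"""

def pods_per_node(output):
    """Count pods on each node, skipping the header row."""
    lines = output.strip().splitlines()[1:]
    return Counter(line.split()[-1] for line in lines)

print(pods_per_node(sample))
# Counter({'worker-node-1': 2, 'worker-node-2': 1})
```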
### Common Implementation Scenarios

| Scenario                   | Affinity Strategy    | Example Use Case         |
| -------------------------- | -------------------- | ------------------------ |
| High-Performance Workloads | Required + Preferred | Database clusters        |
| Geographic Distribution    | Zone-based Affinity  | Multi-region deployments |
| Hardware-Specific Tasks    | Specific Node Labels | GPU-enabled computing    |
### Advanced Troubleshooting

#### Handling Scheduling Failures
```bash
# Check events for scheduling issues
kubectl get events

# Verify node conditions and labels
kubectl describe nodes
```
### Best Practices

- Always provide fallback scheduling options so pods are not left unschedulable
- Prefer soft rules (`preferredDuringSchedulingIgnoredDuringExecution`) when a hard constraint is not essential
- Regularly audit node labels to catch drift between labels and affinity rules
- Monitor cluster resource utilization
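The label-audit practice above can be sketched as a small script. The node data here is hypothetical; in a real cluster you would fetch it via `kubectl get nodes -o json` or a Kubernetes client library.

```python
# Report nodes missing labels that affinity rules depend on (hypothetical data).

REQUIRED_LABELS = {"disktype", "performance-tier"}

nodes = {
    "worker-node-1": {"disktype": "ssd"},
    "worker-node-2": {"disktype": "ssd", "performance-tier": "high"},
}

def audit(nodes, required):
    """Map each node to the set of required labels it is missing."""
    return {name: required - labels.keys()
            for name, labels in nodes.items()
            if required - labels.keys()}

print(audit(nodes, REQUIRED_LABELS))
# {'worker-node-1': {'performance-tier'}}
```

A node missing a label that a required affinity term depends on silently drops out of the candidate set, so this kind of audit catches scheduling failures before they happen.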
### Real-World Example: Machine Learning Deployment

The following Deployment restricts scheduling to GPU nodes and pins the workload to a specific accelerator type:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-training-job
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ml-training-job
  template:
    metadata:
      labels:
        app: ml-training-job
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: gpu
                    operator: Exists
                  - key: gpu-type
                    operator: In
                    values:
                      - nvidia-tesla-v100
      containers:
        - name: trainer
          image: ml-training:latest # placeholder; substitute your training image
```
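A subtlety this example relies on: expressions inside one `matchExpressions` list must all match (AND), while separate entries under `nodeSelectorTerms` are alternatives (OR). A minimal sketch of that evaluation, using the same simplified model of node labels as a plain dict:

```python
# Illustrates AND-within-a-term vs OR-across-terms in nodeSelectorTerms
# (a simplified model, not the scheduler's actual code).

def expr_matches(expr, labels):
    """Evaluate one matchExpression against a node's label dict."""
    if expr["operator"] == "Exists":
        return expr["key"] in labels
    if expr["operator"] == "In":
        return labels.get(expr["key"]) in expr["values"]
    raise ValueError(f"unsupported operator: {expr['operator']}")

def node_matches(terms, labels):
    """A node matches if ANY term matches; a term matches if ALL its expressions do."""
    return any(all(expr_matches(e, labels) for e in t["matchExpressions"])
               for t in terms)

terms = [{"matchExpressions": [
    {"key": "gpu", "operator": "Exists"},
    {"key": "gpu-type", "operator": "In", "values": ["nvidia-tesla-v100"]},
]}]

print(node_matches(terms, {"gpu": "true", "gpu-type": "nvidia-tesla-v100"}))  # True
print(node_matches(terms, {"gpu": "true"}))  # False: gpu-type expression fails
```

So a node labeled only `gpu=true` is rejected; to accept either of two GPU types you would list both in the `In` values, not add a second expression.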
Use kubectl and cluster monitoring tools to track:
- Node resource utilization
- Pod scheduling efficiency
- Affinity rule impact
With LabEx's advanced Kubernetes training, you can master these practical implementation techniques and optimize your cluster's workload management.