Verifying Metrics-Server Operation
After deploying Metrics-Server, you can verify its operation by checking the availability of resource metrics in the Kubernetes API server.
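Before querying metrics, it can help to confirm that the Metrics-Server deployment is running and that its API service is registered. The commands below assume the standard installation in the kube-system namespace with a deployment named metrics-server; adjust the namespace and name if your installation differs:
kubectl -n kube-system get deployment metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io
The APIService should report AVAILABLE as True; if it does not, the metrics queries in the rest of this section will fail.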
Checking Metrics-Server Availability
You can use the kubectl top command to check whether Metrics-Server is providing resource metrics for your cluster:
kubectl top nodes
This command displays the current CPU and memory usage for each node in the cluster. If Metrics-Server is working correctly, you should see output similar to the following:
NAME    CPU(cores)   MEMORY(bytes)
node1   250m         2Gi
node2   500m         4Gi
node3   300m         3Gi
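As an additional sanity check, you can list per-pod metrics as well. This example uses the kube-system namespace, but any namespace can be substituted:
kubectl top pods -n kube-system
If Metrics-Server is healthy, this prints the current CPU and memory usage for each pod in that namespace.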
Verifying Metrics-Server Data
To further verify the data Metrics-Server is collecting, you can query the Metrics API exposed through the Kubernetes API server directly:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .
This command retrieves the raw NodeMetricsList for every node in the cluster and pipes it through jq for readable JSON output. You can inspect the reported CPU and memory usage for each node, along with the timestamp and measurement window.
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "node1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node1",
        "creationTimestamp": "2023-04-12T12:34:56Z"
      },
      "timestamp": "2023-04-12T12:34:56Z",
      "window": "30s",
      "usage": {
        "cpu": "250m",
        "memory": "2Gi"
      }
    },
    {
      "metadata": {
        "name": "node2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node2",
        "creationTimestamp": "2023-04-12T12:34:56Z"
      },
      "timestamp": "2023-04-12T12:34:56Z",
      "window": "30s",
      "usage": {
        "cpu": "500m",
        "memory": "4Gi"
      }
    },
    {
      "metadata": {
        "name": "node3",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node3",
        "creationTimestamp": "2023-04-12T12:34:56Z"
      },
      "timestamp": "2023-04-12T12:34:56Z",
      "window": "30s",
      "usage": {
        "cpu": "300m",
        "memory": "3Gi"
      }
    }
  ]
}
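Pod-level metrics are available from the Metrics API in the same way, scoped per namespace. For example, to inspect pod metrics in the kube-system namespace (substitute any namespace you are interested in):
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | jq .
This returns a PodMetricsList containing a usage entry for each container in each pod.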
By verifying that Metrics-Server is available and serving accurate resource metrics, you ensure that components such as the Horizontal Pod Autoscaler and Vertical Pod Autoscaler can use this data to make scaling decisions.