How to enable Metrics-Server in Kubernetes?


Introduction

Kubernetes is a powerful container orchestration platform that simplifies the deployment and management of applications at scale. To effectively manage and optimize your Kubernetes infrastructure, it's crucial to have access to performance metrics. In this tutorial, we'll guide you through the process of enabling the Metrics-Server in your Kubernetes cluster, which provides essential data for resource utilization and scaling decisions.



Understanding Metrics-Server

Metrics-Server is a scalable, efficient source of container resource metrics for Kubernetes. It is a cluster-level component that collects resource metrics from the kubelet on each node and exposes them through the Kubernetes API server for use by the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler.

What is Metrics-Server?

Metrics-Server is a lightweight, scalable, and efficient add-on for Kubernetes. It is responsible for collecting resource metrics from the Kubelet API on each node and making them available in the Kubernetes API server. This allows other components, such as the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA), to access the resource usage data and make decisions based on it.

Why use Metrics-Server?

Metrics-Server provides the following benefits:

  1. Resource Monitoring: Metrics-Server collects CPU and memory usage metrics for each container running in the Kubernetes cluster, allowing you to monitor resource utilization.

  2. Horizontal Pod Autoscaling: The Horizontal Pod Autoscaler (HPA) uses the metrics provided by Metrics-Server to automatically scale the number of pods in a deployment based on CPU or memory usage.

  3. Vertical Pod Autoscaling: The Vertical Pod Autoscaler (VPA) uses the metrics provided by Metrics-Server to automatically adjust the CPU and memory requests and limits for each pod based on its actual usage.

  4. Efficient and Scalable: Metrics-Server is designed to be lightweight and scalable, allowing it to handle large Kubernetes clusters with minimal overhead.
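As a concrete illustration of the Horizontal Pod Autoscaling use case above, here is a minimal HorizontalPodAutoscaler manifest sketch that scales on the CPU metrics Metrics-Server provides. The Deployment name my-app and the thresholds are hypothetical examples, not part of the Metrics-Server installation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app ## hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 ## scale out above 70% average CPU
```

Without a working Metrics-Server, an HPA like this would report missing metrics and never scale.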

graph TD
    A[Kubernetes Cluster] --> B[Metrics-Server]
    B --> C[Kubelet API]
    B --> D[Kubernetes API Server]
    D --> E[Horizontal Pod Autoscaler]
    D --> F[Vertical Pod Autoscaler]

Deploying Metrics-Server in Kubernetes

To deploy Metrics-Server in your Kubernetes cluster, follow these steps:

Step 1: Deploy Metrics-Server

You can deploy Metrics-Server using the official YAML manifest provided by the Kubernetes project. First, download the manifest file:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Next, apply the manifest to your Kubernetes cluster:

kubectl apply -f components.yaml

This will create the necessary Deployment, Service, and RBAC resources for Metrics-Server.

Step 2: Verify Metrics-Server Installation

After deploying Metrics-Server, you can verify its installation by checking the status of the Metrics-Server pod:

kubectl get pods -n kube-system | grep metrics-server

You should see the Metrics-Server pod running and in a Ready state.
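If you want to script this readiness check, a minimal sketch is shown below. It assumes the standard kubectl get pods column layout, where the second column is READY in the form ready/total:

```shell
# Return 0 if a metrics-server pod line reports all containers ready (e.g. "1/1"),
# non-zero if the pod is not ready or no metrics-server line is present.
check_ready() {
  awk '/metrics-server/ { found = 1; split($2, r, "/"); if (r[1] == "" || r[1] != r[2]) exit 1; exit 0 }
       END { if (!found) exit 1 }'
}

# Usage:
# kubectl get pods -n kube-system | check_ready && echo "metrics-server ready"
```

You can also confirm that the metrics API itself is registered with kubectl get apiservice v1beta1.metrics.k8s.io, which should show Available as True once the pod is healthy.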

Step 3: Configure Metrics-Server (Optional)

Depending on your Kubernetes cluster setup, you may need to configure Metrics-Server further. For example, if your cluster is running on a cloud provider that requires specific parameters for the Metrics-Server container, you can modify the components.yaml file accordingly.

Here's an example of how you can configure Metrics-Server to skip Kubelet certificate verification, which is often required in local or development clusters where the Kubelet serves a self-signed certificate:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  ## ... other Deployment configuration
  template:
    spec:
      containers:
        - name: metrics-server
          args:
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port

After making any necessary configuration changes, apply the updated YAML file to your cluster:

kubectl apply -f components.yaml

Now, Metrics-Server is deployed and configured in your Kubernetes cluster, ready to provide resource metrics for other components to use.

Verifying Metrics-Server Operation

After deploying Metrics-Server, you can verify its operation by checking the availability of resource metrics in the Kubernetes API server.

Checking Metrics-Server Availability

You can use the kubectl top command to check if Metrics-Server is providing resource metrics for your Kubernetes cluster:

kubectl top nodes

This command will display the current CPU and memory usage for each node in your cluster. If Metrics-Server is working correctly, you should see the resource usage data.

NAME    CPU(cores)   MEMORY(bytes)
node1   250m         2Gi
node2   500m         4Gi
node3   300m         3Gi
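The m suffix in the CPU column means millicores (1000m equals one core). If you need these values as plain core counts in a script, a small conversion sketch, assuming the plain-number-or-m-suffix format shown above, could look like:

```shell
# Convert a Kubernetes CPU quantity ("250m" or "2") into cores.
to_cores() {
  case "$1" in
    *m) awk -v v="${1%m}" 'BEGIN { printf "%.3f\n", v / 1000 }' ;;
    *)  awk -v v="$1"     'BEGIN { printf "%.3f\n", v }' ;;
  esac
}

to_cores 250m   ## prints 0.250
to_cores 2      ## prints 2.000
```

The same kubectl top command also works at the pod level (for example, kubectl top pods --all-namespaces), using the same units.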

Verifying Metrics-Server Data

To further verify that Metrics-Server is providing accurate resource metrics, you can check the raw data available in the Kubernetes API server:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .

This command retrieves the raw metrics data for all nodes in your cluster and displays it as JSON. Piping through jq pretty-prints the output; omit the pipe if jq is not installed. You can inspect the CPU and memory usage data for each node.

{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "node1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node1",
        "creationTimestamp": "2023-04-12T12:34:56Z"
      },
      "timestamp": "2023-04-12T12:34:56Z",
      "window": "30s",
      "usage": {
        "cpu": "250m",
        "memory": "2Gi"
      }
    },
    {
      "metadata": {
        "name": "node2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node2",
        "creationTimestamp": "2023-04-12T12:34:56Z"
      },
      "timestamp": "2023-04-12T12:34:56Z",
      "window": "30s",
      "usage": {
        "cpu": "500m",
        "memory": "4Gi"
      }
    },
    {
      "metadata": {
        "name": "node3",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node3",
        "creationTimestamp": "2023-04-12T12:34:56Z"
      },
      "timestamp": "2023-04-12T12:34:56Z",
      "window": "30s",
      "usage": {
        "cpu": "300m",
        "memory": "3Gi"
      }
    }
  ]
}
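To turn that JSON into a quick per-node CPU summary, a small jq sketch over the same NodeMetricsList shape could look like the following (this assumes jq is installed):

```shell
# Print "<node> <cpu>" for each entry in a NodeMetricsList JSON document read from stdin.
summarize_cpu() {
  jq -r '.items[] | "\(.metadata.name) \(.usage.cpu)"'
}

# Usage:
# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | summarize_cpu
```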

By verifying the availability and accuracy of the resource metrics provided by Metrics-Server, you can ensure that other components, such as the Horizontal Pod Autoscaler and Vertical Pod Autoscaler, can effectively use the data to make scaling decisions.

Summary

You now have a fully operational Metrics-Server in your Kubernetes cluster, providing the performance metrics you need to monitor and optimize your applications. This knowledge empowers you to make informed decisions about resource allocation and scaling, ensuring the efficient and reliable operation of your Kubernetes-based infrastructure.
