How to display pod CPU and memory usage in a namespace?


Introduction

Kubernetes, the powerful container orchestration platform, provides robust monitoring capabilities to help you understand the resource usage of your applications. In this tutorial, we will explore how to display the CPU and memory usage of pods within a specific namespace, empowering you to optimize your Kubernetes deployments and ensure efficient resource utilization.



Understanding Kubernetes Pod Monitoring

Kubernetes is a powerful container orchestration system that provides a robust platform for deploying and managing applications. One of the key aspects of Kubernetes is its ability to monitor the health and performance of the running containers, known as Pods. Monitoring Pods is essential for understanding the resource utilization, identifying bottlenecks, and ensuring the overall stability of your Kubernetes-based applications.

Kubernetes Metrics Server

The Kubernetes Metrics Server is a core component that collects and exposes various metrics about the Pods running in your cluster. It provides an API that allows you to access these metrics, which can be used for a variety of purposes, such as:

  • Horizontal Pod Autoscaling (HPA): The Metrics Server provides the necessary data for the HPA controller to automatically scale your Pods based on CPU and memory usage.
  • Monitoring and Visualization: Alongside the Metrics Server, monitoring stacks such as Prometheus and Grafana can collect, store, and visualize the resource usage of your Pods over time.
  • Command-line Tools: Utilities like kubectl top can be used to quickly view the CPU and memory usage of Pods and nodes in your Kubernetes cluster.
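
All three of these consumers depend on the Metrics Server being installed and its API registered with the cluster. As a quick check (assuming a standard Metrics Server installation), you can verify that the metrics API group is available:

kubectl get apiservices v1beta1.metrics.k8s.io

If the APIService shows AVAILABLE as True, the metrics described in the next section can be queried.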

Metrics Collected by the Metrics Server

The Metrics Server collects the following key metrics for each Pod in your Kubernetes cluster:

  • CPU Usage: The amount of CPU resources consumed by the Pod, measured in millicores (1 core = 1000 millicores).
  • Memory Usage: The amount of memory consumed by the Pod, measured in bytes.

These metrics can be used to understand the resource utilization of your Pods and make informed decisions about scaling, resource allocation, and overall cluster optimization.
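
These usage figures are most meaningful when compared against what a Pod has asked for. One way to view a Pod's configured requests and limits (the pod name and namespace below are placeholders) is:

kubectl describe pod <pod-name> -n <namespace>

The Requests and Limits fields in the Containers section use the same millicore and byte-based units (for example 250m or 128Mi) as the metrics above.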

graph TD
  A[Kubernetes Cluster] --> B[Metrics Server]
  B --> C[Metrics API]
  C --> D[Monitoring Tools]
  C --> E[Command-line Tools]
  D --> F[Prometheus]
  D --> G[Grafana]
  E --> H[kubectl top]

Monitoring Pod CPU and Memory Usage

Accessing Pod Metrics using kubectl

The kubectl top command is a convenient way to view the CPU and memory usage of Pods in your Kubernetes cluster. To use this command, you need to ensure that the Metrics Server is deployed and running in your cluster.
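
If kubectl top reports that the metrics API is not available, the Metrics Server is likely not installed. On most clusters it can be installed from the upstream manifest (verify the release is compatible with your Kubernetes version before applying):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl get deployment metrics-server -n kube-system

The second command confirms the Deployment is running; the default manifest creates it in the kube-system namespace.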

Here's an example of how to use kubectl top to view the CPU and memory usage of Pods in a specific namespace:

kubectl top pods -n <namespace>

This will output the CPU and memory usage for each Pod in the specified namespace, similar to the following:

NAME                    CPU(cores)   MEMORY(bytes)
example-pod-1           100m         64Mi
example-pod-2           200m         128Mi
example-pod-3           150m         96Mi
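
When a namespace contains many Pods, sorting the output helps surface the heaviest consumers. Recent kubectl versions support a --sort-by flag for this (replace <namespace> with your own):

kubectl top pods -n <namespace> --sort-by=cpu
kubectl top pods -n <namespace> --sort-by=memory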

Querying Pod Metrics using the Metrics API

If you need more detailed or programmatic access to Pod metrics, you can use the Metrics API provided by the Kubernetes Metrics Server. This API can be accessed using the kubectl command or directly through HTTP requests.

Here's an example of how to use the Metrics API to retrieve the CPU and memory usage of Pods in a specific namespace:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods" | jq .

This will output the detailed metrics for each Pod in the specified namespace, including CPU and memory usage, in JSON format.
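
The raw response is a PodMetricsList object in which each item carries per-container usage figures. As a minimal sketch, assuming jq is installed and <namespace> is replaced with a real namespace, you can flatten it into a simple table:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods" \
  | jq -r '.items[] | [.metadata.name, .containers[0].usage.cpu, .containers[0].usage.memory] | @tsv'

Note that a Pod may run several containers; this sketch only reads the first container's usage.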

Integrating with Monitoring Tools

To get a more comprehensive and visual understanding of Pod resource usage, you can run a monitoring stack such as Prometheus and Grafana alongside the Metrics Server. Prometheus collects and stores container metrics over time, and Grafana visualizes them, allowing you to build custom dashboards and alerts based on your specific requirements.

The diagram below shows how the Metrics Server, Prometheus, and Grafana fit together to drive CPU and memory dashboards for a Kubernetes cluster:

graph LR
  A[Kubernetes Cluster] --> B[Metrics Server]
  A --> C[Prometheus]
  C --> D[Grafana]
  D --> E[CPU Usage Dashboard]
  D --> F[Memory Usage Dashboard]

By using these tools, you can gain deeper insights into the resource utilization of your Pods, identify performance bottlenecks, and make informed decisions about scaling and resource allocation.

Monitoring Pods in a Namespace

Monitoring Pods in a Specific Namespace

Kubernetes allows you to organize your resources, including Pods, into namespaces. This can be useful for managing different environments, teams, or applications within the same cluster. When monitoring Pod resource usage, you often want to focus on a specific namespace to get a clear picture of the resource consumption in that context.

To monitor Pods in a specific namespace using the kubectl top command, you can use the -n or --namespace flag:

kubectl top pods -n <namespace>

This will display the CPU and memory usage for all Pods in the specified namespace.
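
If you want a cluster-wide view rather than a single namespace, the same command accepts an --all-namespaces (or -A) flag:

kubectl top pods --all-namespaces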

Filtering Pods by Labels

In addition to namespaces, you can also filter Pods by their labels when monitoring resource usage. This can be useful when you want to focus on a specific set of Pods, such as those belonging to a particular application or deployment.

To filter Pods by labels, you can use the --selector or -l flag with the kubectl top command:

kubectl top pods -n <namespace> --selector=app=myapp

This will display the CPU and memory usage for all Pods in the specified namespace that have the label app=myapp.
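
If you are unsure which labels your Pods carry, you can list them first before filtering (again replacing <namespace>):

kubectl get pods -n <namespace> --show-labels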

Monitoring Pods in a Namespace using the Metrics API

If you need more advanced monitoring capabilities, you can use the Kubernetes Metrics API to retrieve Pod metrics for a specific namespace. This can be useful for integrating with external monitoring and alerting systems.

Here's an example of how to use the Metrics API to retrieve the CPU and memory usage of Pods in a specific namespace:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods" | jq .

As before, this returns the detailed metrics for each Pod in the namespace, including CPU and memory usage, in JSON format.
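
For a lightweight, continuously refreshing view of a namespace, you can combine kubectl top with the standard watch utility (assuming watch is available on your machine; the 5-second interval is an arbitrary choice):

watch -n 5 kubectl top pods -n <namespace>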

By combining the use of namespaces, labels, and the Metrics API, you can effectively monitor the resource usage of Pods in your Kubernetes cluster, allowing you to make informed decisions about scaling, resource allocation, and overall application performance.

Summary

In this tutorial, you learned how to monitor pod CPU and memory usage within a Kubernetes namespace. Using built-in tools such as kubectl top and the Metrics API, you can gather and analyze this critical performance data and make informed decisions about resource allocation and scaling your applications on the Kubernetes platform.
