Applying Resource Quota to a Namespace
Creating a Resource Quota
To apply a Resource Quota to a Kubernetes namespace, you create a ResourceQuota object whose metadata.namespace field names the target namespace. Here's an example:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: example-namespace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
In this example, we're creating a ResourceQuota object named compute-resources in the example-namespace namespace. The quota sets the following hard limits on the aggregate resources of all pods in the namespace:
- CPU requests: 1 core
- Memory requests: 1 gigabyte
- CPU limits: 2 cores
- Memory limits: 2 gigabytes
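One practical consequence: once a quota tracks requests.cpu, requests.memory, limits.cpu, or limits.memory, every new pod in the namespace must explicitly declare those values (or inherit defaults from a LimitRange), or the API server will reject it. A minimal compliant pod might look like the following sketch; the pod name and image are illustrative, not part of the quota setup:

apiVersion: v1
kind: Pod
metadata:
  name: quota-demo            # hypothetical name, for illustration only
  namespace: example-namespace
spec:
  containers:
  - name: app
    image: nginx:1.25         # any image works; nginx is a placeholder
    resources:
      requests:
        cpu: 250m             # counts toward requests.cpu (quota: 1)
        memory: 128Mi         # counts toward requests.memory (quota: 1Gi)
      limits:
        cpu: 500m             # counts toward limits.cpu (quota: 2)
        memory: 256Mi         # counts toward limits.memory (quota: 2Gi)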
To apply the Resource Quota, save the manifest to a file (for example, resource-quota.yaml) and use the kubectl command:
kubectl apply -f resource-quota.yaml
This will create the Resource Quota in the specified namespace.
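Note that the target namespace must already exist. If it doesn't, create it first:

kubectl create namespace example-namespace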
Verifying Resource Quota Usage
After applying the Resource Quota, you can verify the current usage and limits by running the following command:
kubectl get resourcequota compute-resources -n example-namespace --output=yaml
This will output the current status of the Resource Quota, including the used and remaining resources.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: example-namespace
status:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.cpu: "1"
    requests.memory: 1Gi
  used:
    limits.cpu: "1"
    limits.memory: 512Mi
    requests.cpu: "500m"
    requests.memory: 256Mi
In this example, you can see that the namespace is currently using 500 millicores of CPU requests, 256 megabytes of memory requests, 1 core of CPU limits, and 512 megabytes of memory limits, which are within the defined Resource Quota limits.
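For a quicker human-readable summary, kubectl describe renders the same numbers as a table. With the usage above, the output looks roughly like this (exact formatting varies by kubectl version):

kubectl describe resourcequota compute-resources -n example-namespace

Name:            compute-resources
Namespace:       example-namespace
Resource         Used   Hard
--------         ----   ----
limits.cpu       1      2
limits.memory    512Mi  2Gi
requests.cpu     500m   1
requests.memory  256Mi  1Gi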
Enforcing Resource Quota
Once the Resource Quota is applied, Kubernetes enforces it at admission time: whenever creating a new pod would push the namespace's aggregate requests or limits past the quota, the API server rejects the creation request with a 403 Forbidden error. Quotas do not evict or terminate pods that are already running; they only gate new resource consumption.
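For example, given the usage shown earlier (500m of the 1-core requests.cpu budget already consumed), a pod requesting 2 CPUs would be rejected with an error along these lines (the file and pod names are illustrative, and exact wording varies by Kubernetes version):

Error from server (Forbidden): error when creating "pod.yaml": pods "big-pod" is forbidden: exceeded quota: compute-resources, requested: requests.cpu=2, used: requests.cpu=500m, limited: requests.cpu=1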
By using Resource Quotas, you can ensure that your Kubernetes namespace stays within the defined resource limits, promoting fairness and efficient resource utilization across your cluster.