Explore the Kubernetes Cluster


Introduction

In this lab, you will explore a local Kubernetes cluster using Minikube. You will start the cluster, verify its setup, and inspect basic cluster resources such as pods and deployments. This hands-on experience will help you understand the fundamental components and commands of a Kubernetes environment, laying the foundation for further exploration and development.

You will begin by setting up a Minikube cluster on your local machine, ensuring that the cluster is running and ready for use. Then, you will verify the cluster's configuration and health using essential kubectl commands, such as kubectl cluster-info and kubectl get nodes. Finally, you will inspect the basic cluster resources, including pods and deployments, to familiarize yourself with the Kubernetes object model and the overall state of the cluster.



Start the Kubernetes Cluster

In this step, you'll start and verify a local Kubernetes cluster using Minikube, which provides a simple way to set up a single-node Kubernetes environment for learning and development.

First, navigate to the project directory:

cd ~/project

Start the Minikube cluster:

minikube start

Example output:

😄  minikube v1.29.0 on Ubuntu 22.04
✨  Automatically selected the docker driver
📌  Using Docker driver with root permissions
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
🚀  Launching Kubernetes ...
🌟  Enabling addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace

Verify the cluster status using multiple commands:

minikube status

Example output:

minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Check the cluster nodes:

kubectl get nodes

Example output:

NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   1m    v1.26.1

These commands confirm that:

  1. Minikube is successfully started
  2. The local Kubernetes cluster is running
  3. The cluster is ready to use
  4. You have a single-node cluster with control plane capabilities

The Minikube cluster provides a complete Kubernetes environment on your local machine, allowing you to develop and test applications without needing a full multi-node cluster.
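As an additional sanity check, you can compare the kubectl client version with the Kubernetes version running in the cluster (the exact version strings will vary with your installation; the guard below is only so the snippet degrades gracefully on machines without kubectl):

```shell
# Print kubectl client and cluster server versions in YAML form.
if command -v kubectl >/dev/null 2>&1; then
  kubectl version -o yaml
else
  echo "kubectl not found; skipping"
fi
```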

Kubernetes Architecture Overview

Kubernetes operates on a client-server model, with a centralized Control Plane governing the cluster’s state and a set of worker Nodes running workloads. At a high level, a user (often a developer) interacts with the Kubernetes cluster through command-line tools or APIs. The Control Plane makes decisions about what should run where, monitors the cluster’s health, and ensures that the desired state is achieved. The worker Nodes host your applications in Pods—groups of one or more containers—and offer the computational and storage resources needed to run them.

Control Plane

This is the “brain” of the cluster, consisting of several components that work together to manage the entire system:

  • kube-apiserver (API): Serves as the cluster’s front door. All administrative commands and resource requests pass through it.
  • etcd (Key Value Store): Stores all configuration data and the current state of the cluster. If you lose etcd data, you lose the state of the cluster.
  • kube-scheduler (SCH): Assigns Pods to Nodes based on resource requirements, constraints, and policies.
  • kube-controller-manager (CTLM): Runs a variety of controllers that continually adjust the cluster’s state, ensuring that the actual state matches the desired state defined by your deployments and configurations.
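On a Minikube node, these control-plane components run as Pods themselves, so you can list them directly. This is a sketch that assumes a running Minikube cluster whose static pods carry the usual kubeadm tier=control-plane label:

```shell
# List control-plane components running as pods in kube-system.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n kube-system -l tier=control-plane
else
  echo "kubectl not found; skipping"
fi
```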

Nodes (Worker Machines)

Nodes are where workloads run. Each Node has:

  • kubelet (KLT): A node-level agent that communicates with the Control Plane. It ensures Pods are running and reports their status back to the Control Plane.
  • Container Runtime (CR): Software that runs and manages containers (e.g., Docker or containerd). It creates and manages containerized applications within Pods.

Pods

A Pod is the smallest deployable unit in Kubernetes, typically representing a single instance of a running application. Pods can contain one or more containers that share the same network namespace and storage volumes.
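To see what fields a Pod object exposes, you can ask the API server for its schema with kubectl explain (output is abbreviated here with head to keep it readable):

```shell
# Show the top-level schema of the Pod resource, then the containers field.
if command -v kubectl >/dev/null 2>&1; then
  kubectl explain pod
  kubectl explain pod.spec.containers | head -n 20
else
  echo "kubectl not found; skipping"
fi
```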

Services

A Service is an abstraction that defines a logical set of Pods and a policy for how to access them. Services provide stable IP addresses, DNS names, and load-balancing, ensuring that external consumers and other cluster components can reliably connect to your applications—even as Pods move between Nodes or are replaced during scaling or rolling updates.
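As an illustration (not part of this lab's required steps), a Deployment can be exposed as a Service with kubectl expose. The my-app name is hypothetical, and the sketch cleans up after itself:

```shell
# Hypothetical sketch: create a deployment, expose it on port 80, then clean up.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create deployment my-app --image=nginx
  kubectl expose deployment my-app --port=80
  kubectl get service my-app
  # Remove the demo resources again.
  kubectl delete service my-app
  kubectl delete deployment my-app
else
  echo "kubectl not found; skipping"
fi
```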

Interacting with the Cluster

  • Developers and administrators interact with the cluster through the kube-apiserver, often using kubectl or other Kubernetes clients.
  • When a new application is deployed, the Control Plane components (scheduler, controllers) work to place the Pods on the appropriate Nodes.
  • The kubelet on each Node ensures Pods are healthy and running as instructed.
  • Services route traffic to the correct Pods, allowing clients to access applications without having to track Pod location changes.
flowchart TB %% User interacting with the cluster User((Developer)) User -->|kubectl CLI| API[kube-apiserver] %% Control Plane Components subgraph ControlPlane[Control Plane] API ETCD[etcd - Key Value Store] SCH[kube-scheduler] CTLM[kube-controller-manager] API --> ETCD API --> SCH API --> CTLM end %% Worker Node 1 subgraph Node1[Worker Node] KLT1[kubelet] CR1[Container Runtime] subgraph Pods1[Pods] P1[Pod] P2[Pod] end KLT1 --> CR1 CR1 --> P1 CR1 --> P2 end %% Worker Node 2 subgraph Node2[Worker Node] KLT2[kubelet] CR2[Container Runtime] subgraph Pods2[Pods] P3[Pod] P4[Pod] end KLT2 --> CR2 CR2 --> P3 CR2 --> P4 end %% Connections between Control Plane and Nodes API --> KLT1 API --> KLT2 %% Service connecting to Pods across different Nodes Service[Service] Service --> P1 Service --> P2 Service --> P3 Service --> P4

In the diagram:

  • The developer interacts with the kube-apiserver (API) through a CLI tool like kubectl.
  • The Control Plane components (API, etcd, Scheduler, Controller Manager) manage the cluster state and orchestrate workloads.
  • Each Worker Node runs a kubelet and a container runtime, hosting multiple Pods.
  • A Service routes external or internal traffic to the correct Pods, providing a stable endpoint that abstracts away the complexity of Pod lifecycles and IP changes.

This mental model helps you understand what you’re seeing when you inspect the cluster’s state, check Node health, list Pods, and query Services—concepts you’ll apply as you continue exploring Kubernetes with kubectl commands.

Verify Cluster Setup

In this step, you'll learn how to verify your Kubernetes cluster's configuration and health using essential kubectl commands. These commands will help you understand the cluster's current state and connectivity.

First, check the cluster information:

kubectl cluster-info

Example output:

Kubernetes control plane is running at https://192.168.49.2:8443
CoreDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'

This command provides details about the Kubernetes control plane and core services like CoreDNS.
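The dump subcommand mentioned in the output above produces a very large diagnostic report covering the whole cluster; piping it through head keeps a first look manageable:

```shell
# Peek at the start of the full cluster diagnostic dump.
if command -v kubectl >/dev/null 2>&1; then
  kubectl cluster-info dump | head -n 20
else
  echo "kubectl not found; skipping"
fi
```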

Next, get a detailed view of the cluster nodes:

kubectl get nodes -o wide

Example output:

NAME       STATUS   ROLES           AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION
minikube   Ready    control-plane   15m    v1.26.1   192.168.49.2   <none>        Ubuntu 22.04 LTS    5.15.0-72-generic

Let's examine the node details more comprehensively:

kubectl describe node minikube

Example output (partial):

Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=86a3b7e45a9a35cdcf8f4c80a4c6a46d20dda00f
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  [Current Timestamp]
Capacity:
  cpu:                2
  ephemeral-storage:  17784212Ki
  memory:             1947748Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  16388876Ki
  memory:             1845348Ki
  pods:               110

Key insights from these commands:

  1. Verify cluster control plane is running
  2. Check node status (Ready/NotReady)
  3. Understand node resources and configuration
  4. Confirm Kubernetes version and node details
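If you only need a specific field from the node object, JSONPath output avoids scrolling through the full describe listing. A small sketch, assuming the node is named minikube as above:

```shell
# Extract just the CPU and memory capacity of the minikube node.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get node minikube -o jsonpath='{.status.capacity.cpu}{"\n"}'
  kubectl get node minikube -o jsonpath='{.status.capacity.memory}{"\n"}'
else
  echo "kubectl not found; skipping"
fi
```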

Inspect Basic Cluster Resources

In this step, you will inspect basic Kubernetes resources—such as Pods, Deployments, and Services—across all namespaces. By using the -A (or --all-namespaces) flag, you will see how resources are organized throughout the entire cluster. This is an excellent opportunity to introduce and understand the concept of Namespaces in Kubernetes.

Namespaces: Resource Isolation

Namespaces are logical partitions within a Kubernetes cluster that help organize and manage resources. They provide a way to group related objects and apply policies, access controls, and resource quotas at a granular level. By separating resources into different namespaces, you can:

  • Improve Organization: Group related workloads (e.g., by project, team, or environment—such as dev, test, and production).
  • Enhance Security and Access Control: Restrict which users or service accounts can view or modify resources in a particular namespace.
  • Simplify Resource Management: Apply resource limits, network policies, and other cluster-wide configurations more effectively.

When you list resources with the -A (or --all-namespaces) flag, you’ll notice that components belonging to the Kubernetes system reside in the kube-system namespace, which is dedicated to cluster-level infrastructure. User-created applications typically reside in the default namespace or other custom namespaces that you define.
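You can list the namespaces that exist in your cluster directly; a fresh Minikube cluster typically shows default, kube-system, kube-public, and kube-node-lease:

```shell
# List all namespaces in the cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get namespaces
else
  echo "kubectl not found; skipping"
fi
```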

Namespaces and Resources

flowchart LR
  %% User interacts with the cluster via kube-apiserver
  User((Developer))
  User -->|kubectl get pods -A| API[kube-apiserver]

  %% Control Plane Subgraph
  subgraph ControlPlane[Control Plane]
    API
    ETCD[etcd]
    SCH[kube-scheduler]
    CTLM[kube-controller-manager]
  end
  API --> ETCD
  API --> SCH
  API --> CTLM

  %% kube-system namespace
  subgraph kube-system[Namespace: kube-system]
    SysDeployment[Deployment: coredns]
    SysPod1[Pod: coredns-xxx]
    SysService[Service: kube-dns]
    SysDeployment --> SysPod1
    SysService --> SysPod1
  end

  %% default namespace (renamed to avoid parse issues)
  subgraph defaultNs[Namespace: default]
    DefDeployment[Deployment: my-app]
    DefPod1[Pod: my-app-pod1]
    DefPod2[Pod: my-app-pod2]
    DefService[Service: my-app-service]
    DefDeployment --> DefPod1
    DefDeployment --> DefPod2
    DefService --> DefPod1
    DefService --> DefPod2
  end

  %% dev namespace
  subgraph dev[Namespace: dev]
    DevDeployment[Deployment: dev-app]
    DevPod[Pod: dev-app-pod]
    DevService[Service: dev-app-service]
    DevDeployment --> DevPod
    DevService --> DevPod
  end

  %% Demonstration of communication
  API --> kube-system
  API --> defaultNs
  API --> dev

In the diagram:

  • The Control Plane manages the entire cluster, communicating with nodes and controlling workloads.
  • Namespaces (such as kube-system, default, and dev) logically separate resources within the cluster.
    • kube-system holds system-level components like CoreDNS and kube-dns.
    • default is commonly used for general workloads, here represented by a my-app deployment.
    • dev might represent a development environment, isolated from production workloads.

By viewing resources across all namespaces, you gain a comprehensive understanding of how these logical partitions help maintain an organized and secure cluster.

Examples:

List all pods across all namespaces:

kubectl get pods -A

Example output:

NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-787d4945fb-j8rhx           1/1     Running   0             20m
kube-system   etcd-minikube                      1/1     Running   0             20m
kube-system   kube-apiserver-minikube            1/1     Running   0             20m
kube-system   kube-controller-manager-minikube   1/1     Running   0             20m
kube-system   kube-proxy-xb9rz                   1/1     Running   0             20m
kube-system   kube-scheduler-minikube            1/1     Running   0             20m
kube-system   storage-provisioner                1/1     Running   1 (20m ago)   20m

Here, you see all system-related pods running in the kube-system namespace. If you had other deployments or services in different namespaces, they would appear in this list as well, each clearly scoped by their namespace.
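To narrow the listing to a single namespace, use -n instead of -A; a field selector can filter further, for example to show only Running pods:

```shell
# Pods in kube-system only, then all pods currently in the Running phase.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n kube-system
  kubectl get pods -A --field-selector=status.phase=Running
else
  echo "kubectl not found; skipping"
fi
```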

List all deployments across all namespaces:

kubectl get deployments -A

Example output:

NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   1/1     1            1           20m

The coredns deployment resides in the kube-system namespace.

Get a comprehensive view of all resources across all namespaces:

kubectl get all -A

This command displays an overview of pods, services, and deployments across different namespaces, helping you understand how these resources are distributed throughout the cluster.

Key Takeaways:

  • Namespaces provide logical isolation and organization within a Kubernetes cluster.
  • Different Kubernetes components and resources are organized into specific namespaces (e.g., kube-system for core services, default for general workloads, and additional namespaces you create).
  • By using -A to view resources across all namespaces, you gain insight into how your cluster is structured and how namespaces serve as boundaries for resource organization and access control.

By understanding how namespaces function as logical environments, you can better navigate, isolate, and manage your workloads and related cluster resources, especially as you scale your deployments and introduce more complexity into your Kubernetes environment.
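To experiment with namespaces yourself, you can create one, run a workload in it, and remove it afterwards. This is a hedged sketch; the dev name and dev-app deployment are illustrative, and deleting the namespace removes everything inside it:

```shell
# Create a namespace, deploy into it, inspect it, then clean up.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create namespace dev
  kubectl create deployment dev-app --image=nginx -n dev
  kubectl get all -n dev
  kubectl delete namespace dev # also deletes the resources inside it
else
  echo "kubectl not found; skipping"
fi
```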

Summary

In this lab, you started and verified a local Kubernetes cluster using Minikube, which provides a simple way to set up a single-node Kubernetes environment for learning and development. You confirmed that Minikube started successfully and that the single-node cluster, with its control plane, was running and ready to use. You then verified the cluster's configuration and health with essential kubectl commands, such as kubectl cluster-info and kubectl get nodes, and finally inspected basic resources like Pods and Deployments across namespaces to understand the cluster's current state and organization.
