Update and Rollback Applications

Introduction

In this lab, you will learn how to update and roll back applications deployed on a Kubernetes cluster. You will start by setting up a local Kubernetes cluster using Minikube and deploying a sample NGINX application. Next, you will update the application's image and verify that the update succeeded. You will then simulate an update failure, diagnose the issue, and roll back to a stable version. Finally, you will adjust the rolling update strategy in the Deployment YAML file.



Start the Kubernetes Cluster

In this step, you'll learn how to start and verify a local Kubernetes cluster using Minikube. This is a crucial first step in setting up your Kubernetes development environment.

First, ensure you're in the project directory:

cd ~/project

Start the Minikube cluster:

minikube start

Example output:

😄  minikube v1.29.0 on Ubuntu 22.04
✨  Automatically selected the docker driver
📌  Using Docker driver with root permissions
🔥  Creating kubernetes in kubernetes cluster
🔄  Restarting existing kubernetes cluster
🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
🚀  Launching Kubernetes ...
🌟  Enabling addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace

Verify the cluster status:

minikube status

Example output:

minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Check the cluster nodes:

kubectl get nodes

Example output:

NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   1m    v1.26.1

Key points about this step:

  1. minikube start creates a local single-node Kubernetes cluster
  2. The cluster uses Docker as the default driver
  3. Kubernetes v1.26.1 is automatically configured
  4. minikube status and kubectl get nodes confirm the cluster's readiness
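
If you want an additional sanity check before moving on (optional for this lab), the following commands print the control-plane endpoint and list the system Pods:

kubectl cluster-info
kubectl get pods -n kube-system

Both commands should return without errors; if they do not, re-run minikube status to inspect the cluster state.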

Deploy a Sample Application

In this step, you'll learn how to create and deploy a simple web application using a Kubernetes Deployment. We'll use an NGINX image as our sample application to demonstrate the deployment process.

First, navigate to the project directory:

cd ~/project
mkdir -p k8s-manifests
cd k8s-manifests

Create a new deployment manifest for a web application:

nano nginx-deployment.yaml

Add the following content to the file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.23.3-alpine
          ports:
            - containerPort: 80

Save the file and exit the nano editor.

Deploy the application using kubectl:

kubectl apply -f nginx-deployment.yaml

Example output:

deployment.apps/web-app created

Verify the deployment:

kubectl get deployments

Example output:

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
web-app   3/3     3            3           30s

Check the created pods:

kubectl get pods -l app=web

Example output:

NAME                      READY   STATUS    RESTARTS   AGE
web-app-xxx-yyy           1/1     Running   0          45s
web-app-xxx-zzz           1/1     Running   0          45s
web-app-xxx-www           1/1     Running   0          45s

Key points about this deployment:

  1. We created a Deployment with 3 replicas of an NGINX web server
  2. Used a specific, stable version of NGINX (1.23.3-alpine)
  3. Exposed container port 80
  4. Used labels to identify and manage the pods
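
Each Pod name contains the hash of the ReplicaSet that owns it, which is why the names above are shown with xxx/yyy placeholders. As an optional check, you can display the pod-template-hash label that Kubernetes adds automatically:

kubectl get pods -l app=web --show-labels

Every Pod should carry the same app=web and pod-template-hash labels, confirming that all three replicas come from the same Deployment revision.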

Update the Application Image in the Deployment YAML

In this step, you'll learn how to update the container image in a Kubernetes Deployment, simulating a real-world application upgrade scenario.

First, ensure you're in the correct directory:

cd ~/project/k8s-manifests

Open the existing deployment manifest:

nano nginx-deployment.yaml

Update the image from nginx:1.23.3-alpine to a newer version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.24.0-alpine
          ports:
            - containerPort: 80

Apply the updated deployment:

kubectl apply -f nginx-deployment.yaml

Example output:

deployment.apps/web-app configured

Watch the deployment update process:

kubectl rollout status deployment web-app

Example output:

Waiting for deployment "web-app" to roll out...
Waiting for deployment spec update to be applied...
Waiting for available replicas to reach desired number...
deployment "web-app" successfully rolled out

Verify the new image version:

kubectl get pods -l app=web -o jsonpath='{.items[*].spec.containers[0].image}'

Example output:

nginx:1.24.0-alpine nginx:1.24.0-alpine nginx:1.24.0-alpine

Key points about image updates:

  1. Use kubectl apply to update deployments
  2. Kubernetes performs a rolling update by default
  3. Pods are replaced gradually to maintain application availability
  4. The update process ensures zero-downtime deployment
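
Editing the manifest and re-running kubectl apply is the declarative way to update. As an optional alternative sketch, the same change can be made imperatively, and you can record a human-readable change cause that rollout history will display (the annotation text below is just an example):

kubectl set image deployment/web-app nginx=nginx:1.24.0-alpine
kubectl annotate deployment web-app kubernetes.io/change-cause="update nginx to 1.24.0-alpine" --overwrite

The kubernetes.io/change-cause annotation is what fills the CHANGE-CAUSE column shown by kubectl rollout history.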

Verify the Successful Update

In this step, you'll learn how to verify the successful update of your Kubernetes deployment by examining pod versions, status, and additional deployment details.

First, list the pods with detailed information:

kubectl get pods -l app=web -o wide

Example output:

NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
web-app-xxx-yyy           1/1     Running   0          3m    10.244.0.5   minikube   <none>           <none>
web-app-xxx-zzz           1/1     Running   0          3m    10.244.0.6   minikube   <none>           <none>
web-app-xxx-www           1/1     Running   0          3m    10.244.0.7   minikube   <none>           <none>

Check the specific image versions of the pods:

kubectl get pods -l app=web -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

Example output:

web-app-xxx-yyy    nginx:1.24.0-alpine
web-app-xxx-zzz    nginx:1.24.0-alpine
web-app-xxx-www    nginx:1.24.0-alpine

Describe the deployment to get more detailed information:

kubectl describe deployment web-app

Example output:

Name:                   web-app
Namespace:              default
CreationTimestamp:      [current timestamp]
Labels:                 app=web
Annotations:            deployment.kubernetes.io/revision: 2
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=web
  Containers:
   nginx:
    Image:        nginx:1.24.0-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable

Verify the rollout history:

kubectl rollout history deployment web-app

Example output:

REVISION  CHANGE-CAUSE
1         <none>
2         <none>

Key points about verification:

  1. All pods are running the new image version
  2. The deployment has 3 available replicas
  3. The rollout strategy ensures zero-downtime updates
  4. The deployment revision has been incremented
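
Another optional way to confirm the rolling update is to inspect the ReplicaSets behind the Deployment; the old revision is scaled down to zero while the new one owns all three Pods:

kubectl get replicasets -l app=web

Expect two ReplicaSets: one with DESIRED 0 (the nginx:1.23.3-alpine revision) and one with DESIRED 3 (the nginx:1.24.0-alpine revision).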

Simulate and Diagnose Update Failures

In this step, you'll learn how to diagnose potential deployment update failures by simulating a problematic image update and using Kubernetes diagnostic tools.

First, navigate to the project directory:

cd ~/project/k8s-manifests

Create a deployment manifest with an invalid image:

nano problematic-deployment.yaml

Add the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: troubleshoot-app
  labels:
    app: troubleshoot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: troubleshoot
  template:
    metadata:
      labels:
        app: troubleshoot
    spec:
      containers:
        - name: nginx
          image: nginx:non-existent-tag
          ports:
            - containerPort: 80

Apply the problematic deployment:

kubectl apply -f problematic-deployment.yaml

Example output:

deployment.apps/troubleshoot-app created

Check the deployment status:

kubectl rollout status deployment troubleshoot-app

Example output:

Waiting for deployment "troubleshoot-app" to roll out...

Press Ctrl+C to exit the rollout status.

Check the pod events and status:

kubectl get pods -l app=troubleshoot

Example output:

NAME                              READY   STATUS             RESTARTS   AGE
troubleshoot-app-6b8986c555-gcjj9   0/1     ImagePullBackOff   0          2m56s
troubleshoot-app-6b8986c555-p29dp   0/1     ImagePullBackOff   0          2m56s
troubleshoot-app-6b8986c555-vpv5q   0/1     ImagePullBackOff   0          2m56s

Examine pod details and logs:

# Capture the name of one of the failing pods
POD_NAME=$(kubectl get pods -l app=troubleshoot -o jsonpath='{.items[0].metadata.name}')
kubectl describe pod $POD_NAME
kubectl logs $POD_NAME

The pod status, events, and logs will all point to the image pull failure, for example:

Failed to pull image "nginx:non-existent-tag"

Troubleshoot the issue by correcting the image:

nano problematic-deployment.yaml

Update the image to a valid tag:

image: nginx:1.24.0-alpine

Reapply the corrected deployment:

kubectl apply -f problematic-deployment.yaml

Check the pod status again:

kubectl get pods -l app=troubleshoot

Example output:

NAME                                READY   STATUS    RESTARTS   AGE
troubleshoot-app-5dc9b58d57-bvqbr   1/1     Running   0          5s
troubleshoot-app-5dc9b58d57-tdksb   1/1     Running   0          8s
troubleshoot-app-5dc9b58d57-xdq5n   1/1     Running   0          6s

Key points about diagnosing failures:

  1. Use kubectl describe to view deployment and pod events
  2. Check pod status for ImagePullBackOff or other error states
  3. Examine pod logs for detailed error information
  4. Verify image availability and tag correctness
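
When a rollout is stuck, scanning cluster events is often faster than describing each Pod individually. An optional sketch of that approach, limited to warnings and sorted by time:

kubectl get events --field-selector type=Warning --sort-by=.lastTimestamp

For the broken deployment above, you should see repeated Failed and BackOff warnings referencing the nginx:non-existent-tag image.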

Rollback to a Stable Version

In this step, you'll learn how to roll back a Kubernetes deployment to a previous stable version using the kubectl rollout undo command.

First, navigate to the project directory:

cd ~/project/k8s-manifests

Check the rollout history of the web application:

kubectl rollout history deployment web-app

Example output:

REVISION  CHANGE-CAUSE
1         <none>
2         <none>

Verify the current deployment details:

kubectl describe deployment web-app | grep Image

Example output:

    Image:        nginx:1.24.0-alpine

Perform the rollback to the previous revision:

kubectl rollout undo deployment web-app

Example output:

deployment.apps/web-app rolled back

Verify the rollback:

kubectl rollout status deployment web-app

Example output:

Waiting for deployment "web-app" to roll out...
deployment "web-app" successfully rolled out

Check the updated image version:

kubectl get pods -l app=web -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

Example output:

web-app-xxx-yyy    nginx:1.23.3-alpine
web-app-xxx-zzz    nginx:1.23.3-alpine
web-app-xxx-www    nginx:1.23.3-alpine

Confirm the rollout history:

kubectl rollout history deployment web-app

Example output:

REVISION  CHANGE-CAUSE
2         <none>
3         <none>

Key points about rollback:

  1. kubectl rollout undo reverts to the previous deployment revision
  2. Kubernetes maintains a history of deployment changes
  3. Rollback is performed with zero downtime
  4. The rollback creates a new revision in the history
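
kubectl rollout undo with no extra flags always returns to the immediately previous revision. If you need a specific point in the history, you can (optionally) inspect a revision and target it directly; revision 2 below is only an illustrative number:

kubectl rollout history deployment web-app --revision=2
kubectl rollout undo deployment web-app --to-revision=2

The first command prints the Pod template recorded for that revision, so you can confirm the image before rolling back to it.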

Adjust Rolling Update Strategy in the Deployment YAML

In this step, you'll learn how to customize the rolling update strategy in a Kubernetes Deployment to control how applications are updated and scaled.

First, navigate to the project directory:

cd ~/project/k8s-manifests

Create a new deployment manifest with custom rolling update strategy:

nano custom-rollout-deployment.yaml

Add the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-custom-rollout
  labels:
    app: web
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.24.0-alpine
          ports:
            - containerPort: 80

Apply the deployment:

kubectl apply -f custom-rollout-deployment.yaml

Example output:

deployment.apps/web-app-custom-rollout created

Verify the deployment status:

kubectl rollout status deployment web-app-custom-rollout

Example output:

Waiting for deployment "web-app-custom-rollout" to roll out...
deployment "web-app-custom-rollout" successfully rolled out

Describe the deployment to confirm the strategy:

kubectl describe deployment web-app-custom-rollout

Example output will include:

StrategyType:           RollingUpdate
RollingUpdateStrategy:  2 max unavailable, 3 max surge

Update the image to trigger a rolling update:

kubectl set image deployment/web-app-custom-rollout nginx=nginx:1.25.0-alpine

Monitor the update process:

kubectl rollout status deployment web-app-custom-rollout

Key points about rolling update strategy:

  1. maxUnavailable: Maximum number of pods that can be unavailable during update
  2. maxSurge: Maximum number of pods that can be created above the desired number
  3. Helps control update speed and application availability
  4. Allows fine-tuning of deployment behavior
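
maxUnavailable and maxSurge also accept percentages of the desired replica count instead of absolute Pod counts. As an optional sketch, the strategy of the live Deployment can be changed without editing the file by patching it (the 25% values below are only an example):

kubectl patch deployment web-app-custom-rollout -p '{"spec":{"strategy":{"rollingUpdate":{"maxUnavailable":"25%","maxSurge":"25%"}}}}'

With these values, later image updates keep at least 75% of the desired replicas available and create at most 25% extra Pods during the rollout.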

Summary

In this lab, you learned how to start and verify a local Kubernetes cluster using Minikube, a crucial first step in setting up your Kubernetes development environment. You also learned how to create and deploy a simple web application using a Kubernetes Deployment, with an NGINX image as the sample application. The deployment process involved writing a deployment manifest file and applying it to the cluster.

After deploying the initial version of the application, you updated the application image in the Deployment YAML, verified the successful update, simulated and diagnosed an update failure, rolled back to a stable version, and adjusted the rolling update strategy in the Deployment YAML.
