How to deploy a web application on Kubernetes


Introduction

This tutorial will provide an overview of Kubernetes, the leading open-source container orchestration platform, and guide you through the process of deploying a simple web application on a Kubernetes cluster. You will learn about the fundamental Kubernetes components and how to configure and scale your application to meet your needs.

Kubernetes Fundamentals

Kubernetes is a powerful open-source container orchestration platform that has become the de facto standard for managing and scaling containerized applications in the cloud-native ecosystem. This section will provide an overview of the fundamental concepts and components of Kubernetes, as well as how to deploy a simple web application on a Kubernetes cluster.

Understanding Kubernetes

Kubernetes is a system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes provides a declarative way to define the desired state of your application, and the system will work to ensure that the actual state matches the desired state.

At the core of Kubernetes are the following key components:

  • Pods: The smallest deployable units in Kubernetes, representing one or more containers running together.
  • Nodes: The physical or virtual machines that make up the Kubernetes cluster and run the containerized applications.
  • Deployments: Declarative configurations that define the desired state of your application, including the number of replicas, container images, and other settings.
  • Services: Abstractions that define a logical set of Pods and a policy by which to access them.
  • Volumes: Persistent storage that can be attached to Pods, allowing data to persist beyond the lifetime of a single container.
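Assuming you have kubectl configured against a running cluster, each of these components maps directly to an object you can list and inspect. The commands below are a quick sketch of that mapping (the pod name in the last command is hypothetical):

```shell
# List the machines (Nodes) that make up the cluster
kubectl get nodes

# List Pods, Deployments, and Services in the current namespace
kubectl get pods
kubectl get deployments
kubectl get services

# Show the full details of a single object (name is hypothetical)
kubectl describe pod my-pod
```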

Deploying a Web Application on Kubernetes

To demonstrate the basics of Kubernetes, let's deploy a simple web application on a Kubernetes cluster. For this example, we'll use a basic Nginx web server.

First, create a Deployment manifest file (e.g., nginx-deployment.yaml) with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

This manifest declares three replicas of the Nginx container. The selector ties the Deployment to Pods carrying the app: nginx label, which the Pod template applies.

To create the Deployment, run the following command:

kubectl create -f nginx-deployment.yaml

Kubernetes will now start creating the Pods and managing the deployment of the Nginx web application.
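You can watch the rollout progress with the following commands (assuming a working cluster context):

```shell
# Block until all three replicas are available
kubectl rollout status deployment/nginx-deployment

# List only the Pods created by this Deployment, filtered by label
kubectl get pods -l app=nginx
```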

Next, let's create a Service to expose the Nginx web application to the outside world:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

This Service manifest defines a LoadBalancer type service that will expose the Nginx web application on port 80.

To create the Service, run the following command:

kubectl create -f nginx-service.yaml

On a cloud provider, Kubernetes will provision an external load balancer and expose the Nginx web application to the internet. (On a local cluster such as minikube, the external IP may stay in a Pending state; a NodePort service or a tool like minikube service is the usual workaround.)

You can now access the Nginx web application by visiting the external IP address of the LoadBalancer service.
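To find that external address (assuming a cloud cluster where the load balancer has been provisioned):

```shell
# The EXTERNAL-IP column shows the address assigned by the load balancer
kubectl get service nginx-service

# Extract just the IP; some providers report a hostname instead of an IP
kubectl get service nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```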

Deploying a More Complex Web Application

Now that we have a basic understanding of Kubernetes, let's dive deeper and learn how to deploy a more complex web application on a Kubernetes cluster. In this section, we'll cover the key Kubernetes concepts and resources needed to deploy and manage a web application, including Pods, Services, and Volumes.

Pods and Containers

In Kubernetes, the fundamental unit of deployment is a Pod, which represents one or more containers running together. Containers within a Pod share the same network namespace and can communicate with each other using localhost. This makes it easy to create multi-container applications, where each container specializes in a specific task, such as a web server, a database, or a message queue.

Here's an example of a Pod manifest that runs a web server and a database container side by side. (Co-locating a database with a web server in one Pod is done here purely for illustration; in practice each would usually get its own Deployment so they can scale independently.)

apiVersion: v1
kind: Pod
metadata:
  name: my-web-app
  labels:
    app: my-web-app   # the Service below selects Pods by this label
spec:
  containers:
  - name: web-server
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: database
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mypassword   # demonstration only; use a Secret in practice

Services and Load Balancing

While Pods provide the basic building blocks for your application, you'll typically want to expose your web application to the outside world using a Kubernetes Service. A Service is an abstraction that defines a logical set of Pods and a policy by which to access them.

Here's an example of a Service manifest that exposes the web server Pod from the previous example:

apiVersion: v1
kind: Service
metadata:
  name: my-web-app-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: my-web-app

This Service manifest creates a LoadBalancer type service that will expose the web server Pod on port 80. The selector field matches the Pods with the app=my-web-app label.

Volumes and Persistent Storage

In addition to the web application itself, you'll often need to persist data, such as user uploads, configuration files, or database data. Kubernetes provides a powerful abstraction called Volumes, which allow you to attach storage to your Pods.

Here's an example of a Pod manifest that uses a Volume to store database data:

apiVersion: v1
kind: Pod
metadata:
  name: my-database
spec:
  containers:
  - name: database
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mypassword
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}

In this example, the emptyDir volume type is used to create a temporary storage volume that persists for the lifetime of the Pod. You can also use other volume types, such as hostPath or persistentVolumeClaim, to provide more durable storage solutions.
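For durable storage that outlives the Pod, the common pattern is a PersistentVolumeClaim. The sketch below shows a minimal claim and how the Pod's volumes section would reference it; the claim name and requested size are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data        # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # assumed size for illustration
---
# In the Pod spec, replace emptyDir with a reference to the claim:
# volumes:
# - name: data
#   persistentVolumeClaim:
#     claimName: mysql-data
```

The cluster's storage provisioner then binds the claim to an actual disk, so the MySQL data survives Pod restarts and rescheduling.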

Configuring and Scaling the Kubernetes Application

Now that we've covered the basics of deploying a web application on Kubernetes, let's explore how to configure and scale the application to meet changing demands and ensure high availability.

Configuring the Application

Kubernetes provides several mechanisms for configuring your application, including:

  1. Environment Variables: You can define environment variables in your Pod or Deployment manifests to pass configuration data to your containers.
  2. ConfigMaps: ConfigMaps allow you to decouple configuration data from your container images, making it easier to manage and update.
  3. Secrets: Secrets are a way to store and manage sensitive information, such as passwords or API keys, in a secure manner.
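For example, the my-config ConfigMap and my-secret Secret referenced in the Deployment below could be defined like this (the keys and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  APP_MODE: production      # illustrative key/value
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  API_KEY: change-me        # illustrative; stored base64-encoded at rest
```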

Here's an example of a Deployment that uses a ConfigMap and a Secret:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
      - name: web-server
        image: nginx:latest
        envFrom:
        - configMapRef:
            name: my-config
        - secretRef:
            name: my-secret

Scaling the Application

One of the key benefits of Kubernetes is its ability to automatically scale your application up or down based on demand. Kubernetes provides several mechanisms for scaling, including:

  1. Horizontal Pod Autoscaling (HPA): HPA automatically scales the number of Pods in a Deployment based on CPU utilization or other custom metrics.
  2. Cluster Autoscaling: Cluster Autoscaling automatically adds or removes nodes from the Kubernetes cluster based on the resource demands of your Pods.
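Both manual and automatic scaling can also be driven imperatively from kubectl (assuming a running cluster):

```shell
# Manually scale the Deployment to five replicas
kubectl scale deployment my-web-app --replicas=5

# Create an HPA imperatively, equivalent to applying an HPA manifest
kubectl autoscale deployment my-web-app --min=3 --max=10 --cpu-percent=50
```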

Here's an example of an HPA manifest that scales the web application based on CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

This HPA manifest will automatically scale the number of Pods in the my-web-app Deployment between 3 and 10 replicas, based on the average CPU utilization across the Pods.

High Availability and Portability

Kubernetes also provides features to ensure high availability and portability of your applications. These include:

  • Readiness and Liveness Probes: Probes allow Kubernetes to determine the health of your containers and automatically restart or replace unhealthy Pods.
  • Namespaces: Namespaces provide a way to isolate and organize your Kubernetes resources, making it easier to manage and scale your applications.
  • Ingress: Ingress provides a way to expose multiple services under a single external IP address, simplifying the management of your application's external access.
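As a sketch, readiness and liveness probes for the Nginx container from earlier could be configured as follows; the paths and timings are illustrative assumptions:

```yaml
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        readinessProbe:          # traffic is withheld until this succeeds
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:           # the container is restarted if this fails
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
```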

By leveraging these Kubernetes features, you can build highly available, scalable, and portable applications that can adapt to changing demands and run consistently across different environments.

Summary

In this tutorial, you have learned the fundamental concepts of Kubernetes, including Pods, Nodes, Deployments, and Services. You have also seen how to deploy a simple Nginx web application on a Kubernetes cluster and how to configure and scale the application to meet your requirements. By understanding these Kubernetes basics, you will be better equipped to manage and scale your own containerized applications in a cloud-native environment.
