Introduction
This comprehensive Docker tutorial provides developers and IT professionals with a deep dive into container technology. From understanding core Docker concepts to practical installation and management techniques, the guide covers everything needed to leverage Docker's powerful containerization capabilities for modern software development and deployment.
Docker Fundamentals
Introduction to Container Technology
Docker is a powerful platform for container technology, enabling developers to package, distribute, and run applications efficiently. Containers provide lightweight, portable environments that encapsulate software and its dependencies.
Core Docker Concepts
graph TD
A[Docker Engine] --> B[Container]
A --> C[Image]
A --> D[Dockerfile]
| Concept | Description |
|---|---|
| Docker Image | Read-only template containing application code and dependencies |
| Container | Runnable instance of a Docker image |
| Docker Engine | Runtime environment for creating and managing containers |
Verifying Docker Installation
Your LabEx environment comes with Docker pre-installed. Let's verify the Docker version to ensure it's ready for use.
docker --version
The output should display the Docker version, for example, Docker version 20.10.21, build baeda1f. This confirms that Docker is correctly installed and accessible.
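Beyond checking the version string, you can optionally confirm that the Docker daemon itself is reachable. This is not part of the original lab steps, but `docker info` is a standard way to do it:

```shell
# 'docker --version' only checks the client binary; 'docker info' queries
# the daemon, so it fails if the Docker service is not actually running.
docker info --format '{{.ServerVersion}}'
```

If this prints a version number, both the client and the daemon are working.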
Pulling Your First Docker Image
A Docker image is a read-only template that contains a set of instructions for creating a container. We will pull the hello-world image, which is a minimal image used to test Docker installations.
docker pull hello-world
This command downloads the hello-world image from Docker Hub to your local machine. You should see output indicating the download progress and confirmation that the image has been pulled.
Running Your First Docker Container
Now that we have an image, let's run a container from it. Running a container means creating an instance of the image.
docker run hello-world
When you run hello-world, Docker performs the following actions:
- It checks if the hello-world image exists locally. If not, it pulls it (which we already did).
- It creates a new container from the image.
- It runs the executable within the container.
- The container prints a "Hello from Docker!" message and then exits.
This demonstrates the basic lifecycle of a Docker container: pull an image, run a container, and the container executes its defined task.
Listing Docker Images
To see the images you have downloaded, use the docker images command.
docker images
This command lists all Docker images stored on your local system, including the hello-world image we just pulled. You will see details like the repository, tag, image ID, creation date, and size.
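As an optional variation, `docker images` also accepts a repository name to narrow the listing, and a `-q` flag to print only image IDs (handy for scripting):

```shell
# List only images whose repository matches hello-world
docker images hello-world

# Print just the image IDs, one per line
docker images -q hello-world
```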
Understanding Container Lifecycle
Containers provide isolated environments with their own filesystem, processes, and network interfaces. They can be started, stopped, moved, and deleted quickly, making them ideal for microservices and cloud-native applications.
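The lifecycle described above can be sketched with a few standard commands. This is an optional illustration, not part of the lab steps; `lifecycle-demo` is an arbitrary container name chosen for the sketch:

```shell
# Create and run a container; hello-world prints its message and exits
docker run --name lifecycle-demo hello-world

# The stopped container still exists and is listed with -a
docker ps -a --filter name=lifecycle-demo

# Restart the same container and attach to its output
docker start -a lifecycle-demo

# Delete the container when you are done with it
docker rm lifecycle-demo
```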
Dockerfile File Management
Dockerfile Basics
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. It defines the environment, dependencies, and configuration for containerized applications.
graph TD
A[Dockerfile] --> B[Build Command]
B --> C[Docker Image]
C --> D[Container]
Creating Your First Dockerfile
Navigate to the docker_app directory, which was created during the setup phase. This will be our working directory for this lab.
cd /home/labex/project/docker_app
Now, let's create a simple Dockerfile named Dockerfile in this directory. This Dockerfile will create an image based on Ubuntu and add a simple text file to it.
nano Dockerfile
Add the following content to the Dockerfile:
## Use an official Ubuntu base image
FROM ubuntu:22.04
## Set the working directory inside the container
WORKDIR /app
## Create a simple text file
RUN echo "Hello from Dockerfile!" > /app/message.txt
## Command to run when the container starts
CMD ["cat", "/app/message.txt"]
- FROM ubuntu:22.04: This instruction specifies the base image for our new image. We are using Ubuntu 22.04.
- WORKDIR /app: This sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, or ADD instructions that follow it in the Dockerfile. If /app doesn't exist, it will be created.
- RUN echo "Hello from Dockerfile!" > /app/message.txt: This instruction executes a command inside the image during the build process. Here, it creates a file named message.txt in the /app directory with the content "Hello from Dockerfile!".
- CMD ["cat", "/app/message.txt"]: This instruction provides the default command for an executing container. When a container is run from this image, it will execute cat /app/message.txt, displaying the content of our message file.
Save the file by pressing Ctrl+S and exit nano by pressing Ctrl+X.
Building Your Docker Image
Now that we have our Dockerfile, let's build the Docker image from it. The docker build command reads the Dockerfile and creates a Docker image.
docker build -t my-ubuntu-app .
- docker build: The command to build a Docker image.
- -t my-ubuntu-app: This tags our image with the name my-ubuntu-app. You can choose any name you like.
- .: This specifies the build context, which is the set of files at the specified PATH or URL. The . indicates that the current directory (/home/labex/project/docker_app) is the build context. Docker will look for a Dockerfile in this directory.
You will see output showing each step of the build process, corresponding to the instructions in your Dockerfile.
Running Your Custom Docker Container
After successfully building the image, let's run a container from it to see the message.txt content.
docker run my-ubuntu-app
This command will create and run a new container from your my-ubuntu-app image. The CMD instruction in your Dockerfile will execute, and you should see "Hello from Dockerfile!" printed to your terminal.
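An optional tip: each `docker run` leaves a stopped container behind. For short-lived runs like this one, the standard `--rm` flag deletes the container automatically when it exits:

```shell
# --rm removes the container after it exits, so repeated test runs
# don't accumulate stopped containers in 'docker ps -a'
docker run --rm my-ubuntu-app
```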
Inspecting the Container's Filesystem
To further understand how files are managed within a container, let's run an interactive session and inspect the file we created.
docker run -it my-ubuntu-app /bin/bash
- -it: This flag allocates a pseudo-TTY and keeps STDIN open, allowing you to interact with the container.
- my-ubuntu-app: The name of the image we want to run.
- /bin/bash: This overrides the CMD instruction in the Dockerfile and instead runs a Bash shell inside the container, giving you a command prompt.
Once inside the container, you will see a new command prompt (e.g., root@<container_id>:/app#). Now, you can list the files and view the content of message.txt.
ls -l
cat message.txt
You should see message.txt listed and its content displayed. To exit the container, simply type exit.
exit
This interactive session demonstrates that the file message.txt was successfully created and is accessible within the container's filesystem.
Advanced Docker Techniques
Multi-Stage Build Strategies
Multi-stage builds are a powerful feature that allows you to use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base image, and each FROM instruction starts a new build stage. This helps in optimizing Dockerfile complexity and reducing the final image size by separating build and runtime environments.
graph TD
A[Build Stage] --> B[Compile/Build]
B --> C[Runtime Stage]
C --> D[Minimal Production Image]
Preparing for a Multi-Stage Build
For this example, we will simulate a simple application that requires a build step. We'll create a build_script.sh and a final_app.txt file.
First, ensure you are in the docker_app directory:
cd /home/labex/project/docker_app
Now, create a simple build script:
nano build_script.sh
Add the following content to build_script.sh:
#!/bin/bash
echo "Running build process..."
echo "This is the final application output." > /build/output/final_app.txt
echo "Build complete."
Save the file (Ctrl+S) and exit (Ctrl+X).
Next, create a placeholder for our final application content. In a real scenario, this would be generated by the build process.
nano final_app.txt
Add the following content to final_app.txt:
This is a placeholder for the final application.
Save the file (Ctrl+S) and exit (Ctrl+X).
Implementing a Multi-Stage Dockerfile
Now, let's modify our Dockerfile to use a multi-stage build. We will have a "builder" stage that runs our build_script.sh and then a "production" stage that only copies the necessary output from the builder stage.
nano Dockerfile
Replace the existing content with the following:
## Stage 1: Builder stage
FROM ubuntu:22.04 AS builder
## Install bash to run the script
RUN apt-get update && apt-get install -y bash
## Set working directory for the builder stage
WORKDIR /build
## Copy the build script and make it executable
COPY build_script.sh .
RUN chmod +x build_script.sh
## Create a directory for output
RUN mkdir -p /build/output
## Run the build script
RUN ./build_script.sh
## Stage 2: Production stage
FROM ubuntu:22.04
## Set working directory for the production stage
WORKDIR /app
## Copy only the necessary artifact from the builder stage
COPY --from=builder /build/output/final_app.txt .
## Command to run when the container starts
CMD ["cat", "final_app.txt"]
- FROM ubuntu:22.04 AS builder: This starts the first stage and names it builder.
- RUN apt-get update && apt-get install -y bash: Installs bash in the builder stage, which is needed to execute our script.
- WORKDIR /build: Sets the working directory for the builder stage.
- COPY build_script.sh .: Copies our build script into the builder stage.
- RUN chmod +x build_script.sh: Makes the script executable.
- RUN mkdir -p /build/output: Creates an output directory in the builder stage.
- RUN ./build_script.sh: Executes the build script, which generates final_app.txt in /build/output.
- FROM ubuntu:22.04: This starts the second stage (the production stage). It uses a fresh ubuntu:22.04 image, meaning it doesn't inherit anything from the builder stage by default.
- WORKDIR /app: Sets the working directory for the production stage.
- COPY --from=builder /build/output/final_app.txt .: This is the key instruction for multi-stage builds. It copies final_app.txt from the /build/output directory of the builder stage to the current directory (/app) of the production stage. This ensures only the final artifact is included, keeping the production image small.
- CMD ["cat", "final_app.txt"]: The command to run when the production container starts, displaying the content of the copied file.
Save the file (Ctrl+S) and exit (Ctrl+X).
Building and Running the Multi-Stage Image
Now, let's build the new image using our multi-stage Dockerfile.
docker build -t multi-stage-app .
Observe the build output. You will see steps for both the builder stage and the final stage.
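If you ever want to inspect just the builder stage, `docker build` supports a `--target` flag that stops the build at a named stage. This is optional here; `multi-stage-app:builder` is simply an illustrative tag:

```shell
# Build and tag only the 'builder' stage, skipping the production stage
docker build --target builder -t multi-stage-app:builder .
```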
After the build completes, run the container to verify that the final_app.txt content is displayed.
docker run multi-stage-app
You should see "This is the final application output." printed, confirming that the multi-stage build successfully copied the artifact from the build stage to the final image.
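To see what the multi-stage approach actually saved, you can optionally compare the final image's size and layer history. Note that the production image contains no bash installation or build script layers:

```shell
# Show the size of the final image
docker images multi-stage-app

# Show the layers that make up the final image; only the
# production-stage instructions appear, not the builder's
docker history multi-stage-app
```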
Cleaning Up Docker Resources
It's good practice to clean up Docker resources (containers and images) that are no longer needed to free up disk space.
First, list all containers (including exited ones):
docker ps -a
You can remove a specific container with docker rm followed by its ID or name. To remove all stopped containers at once:
docker rm $(docker ps -aq)
docker ps -aq lists all container IDs, and docker rm removes them. Any container that is still running must be stopped (docker stop) before it can be removed.
Next, list all images:
docker images
You can remove specific images by their ID or name. Be careful not to remove images that are still in use by running containers.
docker rmi my-ubuntu-app multi-stage-app hello-world ubuntu:22.04
This command removes the images we created and used in this lab. If an image is still used by a container, you will need to remove the container first.
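As an optional alternative to removing images one by one, Docker ships built-in prune commands. Both prompt for confirmation before deleting anything:

```shell
# Remove dangling (untagged) images only
docker image prune

# Remove all stopped containers, unused networks, dangling images,
# and the build cache in one sweep
docker system prune
```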
docker images
This command will show that the images have been removed.
This concludes the lab on Docker fundamentals and advanced techniques. You have learned how to create Dockerfiles, build images, run containers, and optimize image size using multi-stage builds.
Summary
By mastering Docker fundamentals, Dockerfile management, and advanced techniques, developers can create more efficient, portable, and scalable software environments. This tutorial has equipped you with practical skills in container creation, image management, and deployment strategies, enabling you to streamline development workflows and embrace cloud-native application architectures.