Troubleshooting the "docker command not found" Error on Mac and Setting Up Your Environment


Introduction

Docker has revolutionized application development by enabling developers to create, deploy, and run applications in isolated environments called containers. Mac users sometimes encounter the "docker command not found" error, which can be frustrating when starting with containerization. This lab will guide you through understanding Docker concepts, verifying your Docker installation, troubleshooting common issues, and setting up a proper Docker environment for your development work.

Understanding Docker Basics and Verifying Installation

Docker provides a standardized way to package applications and their dependencies into containers, making them portable across different environments. Before troubleshooting any Docker issues, let's ensure we understand the basics and verify our installation.

What is Docker?

Docker is a platform that uses containerization technology to make it easier to create, deploy, and run applications. Unlike virtual machines, Docker containers share the host system's kernel but run in isolated environments, making them lightweight and efficient.

Key components of Docker include:

  • Docker Engine: The runtime that builds and runs containers
  • Docker Images: Read-only templates used to create containers
  • Docker Containers: Running instances of Docker images
  • Docker Registry: A repository for storing and sharing Docker images
  • Dockerfile: A text file containing instructions to build a Docker image
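As a quick sketch (assuming the Docker CLI is already on your PATH), you can see several of these components directly from the command line:

```shell
# Show client and Docker Engine version details
docker version

# Summarize the daemon's state: images, containers, storage driver, etc.
docker info

# List local images (read-only templates) and containers (instances of images)
docker images
docker ps -a
```

These commands are read-only, so they are safe to run at any time while exploring.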

Verifying Docker Installation

Our lab environment already has Docker installed. Let's verify this by checking the Docker version:

docker --version

You should see output similar to:

Docker version 20.10.21, build 20.10.21-0ubuntu1~22.04.3

Now, let's check if the Docker daemon is running:

sudo systemctl status docker

You should see output indicating that Docker is active (running). The output will look similar to:

● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since ...

Press q to exit the status view.

If for any reason Docker is not running, you can start it with:

sudo systemctl start docker
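On systemd-based hosts such as this lab VM, you can also make sure the daemon comes back after a reboot. This is a sketch assuming systemd is the init system (it does not apply to Docker Desktop on a Mac, where the app manages the daemon):

```shell
# Start the daemon now and enable it at boot in one step (systemd hosts only)
sudo systemctl enable --now docker

# Prints "active" if the daemon is running
systemctl is-active docker
```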

Running Your First Container

Let's verify Docker is working correctly by running a simple "hello-world" container:

docker run hello-world

This command downloads the hello-world image if it's not already available locally and runs it in a container. You should see output similar to:

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

The output explains what Docker did to run this container, providing a good introduction to how Docker works.

Checking Running Containers

To see all currently running containers, use:

docker ps

Since the hello-world container exits immediately after displaying its message, you probably won't see it in this list. To see all containers, including those that have stopped, use:

docker ps -a

This shows all containers, their IDs, the images they were created from, when they were created, and their current status.
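When the list grows, the standard `--filter` and `--format` flags of `docker ps` help narrow and reshape it; a sketch:

```shell
# Show only containers that have exited
docker ps -a --filter "status=exited"

# Print a compact table of names, images, and status
docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"
```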

Now you've verified that Docker is installed and working correctly in your environment, and you've run your first container!

Working with Docker Images and Containers

Now that you've verified Docker is working correctly, let's learn how to work with Docker images and containers in more detail.

Understanding Docker Images

Docker images are the blueprints for containers. They contain the application code, libraries, dependencies, tools, and other files needed for an application to run.

Let's explore Docker images using some basic commands:

Listing Available Images

To see all Docker images available on your system:

docker images

You should see output similar to:

REPOSITORY    TAG       IMAGE ID       CREATED         SIZE
hello-world   latest    feb5d9fea6a5   X months ago    13.3kB

Pulling Images from Docker Hub

Docker Hub is a cloud-based registry service where you can find and share Docker images. Let's pull a popular image:

docker pull nginx

This command downloads the latest nginx web server image. You'll see progress output as various layers of the image are downloaded:

Using default tag: latest
latest: Pulling from library/nginx
...
Digest: sha256:...
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

Run docker images again to see the newly downloaded nginx image in your list.
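In real projects you usually avoid the moving `latest` tag and pin a specific version. As a sketch (the tag below is only an example; check Docker Hub for the tags that currently exist):

```shell
# Pull a specific, pinned tag instead of "latest"
docker pull nginx:1.25-alpine   # example tag; verify it exists on Docker Hub

# Images sharing a repository but differing in tag are listed separately
docker images nginx
```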

Working with Containers

Now that we have some images, let's learn how to create and manage containers.

Running a Container

Let's run an nginx container that will serve a web page:

docker run --name my-nginx -p 8080:80 -d nginx

This command does several things:

  • --name my-nginx: Names the container "my-nginx"
  • -p 8080:80: Maps port 8080 on your host to port 80 in the container
  • -d: Runs the container in detached mode (in the background)
  • nginx: Specifies the image to use

Verifying the Container is Running

Check that your container is running:

docker ps

You should see your nginx container in the list of running containers:

CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS                  NAMES
a1b2c3d4e5f6   nginx     "/docker-entrypoint.…"   X seconds ago    Up X seconds    0.0.0.0:8080->80/tcp   my-nginx

Accessing the Web Server

You can access the nginx web server by opening a web browser in your LabEx VM environment and navigating to:

http://localhost:8080

Alternatively, you can use curl from the terminal:

curl http://localhost:8080

You should see the default nginx welcome page HTML.

Viewing Container Logs

To see the logs for your container:

docker logs my-nginx

This shows the access logs for the nginx server.
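For a long-running container, two common variations of `docker logs` are worth knowing; a sketch:

```shell
# Stream new log lines as they arrive (press Ctrl+C to stop following)
docker logs -f my-nginx

# Show only the last 10 lines, prefixed with timestamps
docker logs --tail 10 -t my-nginx
```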

Stopping and Removing Containers

To stop a running container:

docker stop my-nginx

To remove a container (it must be stopped first):

docker rm my-nginx

Verify the container has been removed:

docker ps -a

The container named "my-nginx" should no longer appear in the list.
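For future cleanups, stop-and-remove can be collapsed into a single command; a sketch:

```shell
# Force-remove a running container (sends SIGKILL, then removes it)
docker rm -f my-nginx

# Remove all stopped containers in one pass (asks for confirmation)
docker container prune
```

`docker rm -f` is convenient during development, but in production a graceful `docker stop` first gives the application time to shut down cleanly.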

Now you understand the basics of working with Docker images and containers. You've pulled images from Docker Hub, run containers, mapped ports, viewed logs, and managed container lifecycles.

Creating Your Own Docker Images with Dockerfiles

So far, we've used pre-built Docker images from Docker Hub. Now, let's learn how to create our own custom Docker images using Dockerfiles.

What is a Dockerfile?

A Dockerfile is a text file that contains instructions for building a Docker image. It specifies the base image, adds files, installs software, sets environment variables, and configures the container that will be created from the image.

Creating Your First Dockerfile

Let's create a simple web application using Node.js and package it as a Docker image.

First, create a new directory for your project:

mkdir -p ~/project/node-app
cd ~/project/node-app

Now, create a simple Node.js application. First, create a file named app.js:

nano app.js

Add the following code to the file:

const http = require("http");

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader("Content-Type", "text/plain");
  res.end("Hello World from Docker!\n");
});

const port = 3000;
server.listen(port, () => {
  console.log(`Server running at http://localhost:${port}/`);
});

Save the file by pressing Ctrl+O, then Enter, and exit nano with Ctrl+X.

Next, create a package.json file to define your Node.js application:

nano package.json

Add the following content:

{
  "name": "docker-node-app",
  "version": "1.0.0",
  "description": "A simple Node.js app for Docker",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "author": "",
  "license": "ISC"
}

Save and exit nano.

Now, create a Dockerfile:

nano Dockerfile

Add the following content:

# Use an official Node.js runtime as the base image
FROM node:14-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json (this project has no lock file yet)
COPY package.json ./

# Install dependencies
RUN npm install

# Copy the application code
COPY app.js ./

# Expose the port the app runs on
EXPOSE 3000

# Command to run the application
CMD ["npm", "start"]

Save and exit nano.

Building Your Docker Image

Now that you have a Dockerfile, you can build your Docker image:

docker build -t my-node-app .

This command builds an image from your Dockerfile:

  • -t my-node-app: Tags the image with the name "my-node-app"
  • .: Specifies that the Dockerfile is in the current directory

You'll see output showing the build progress:

Sending build context to Docker daemon  X.XXkB
Step 1/7 : FROM node:14-alpine
 ---> XXXXXXXXXX
Step 2/7 : WORKDIR /usr/src/app
 ---> XXXXXXXXXX
...
Successfully built XXXXXXXXXX
Successfully tagged my-node-app:latest
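Once the build succeeds, you can inspect what went into the image. A sketch using standard subcommands:

```shell
# Each Dockerfile instruction becomes a layer; list them newest-first
docker history my-node-app

# Confirm the tag and size of the finished image
docker images my-node-app
```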

Running Your Custom Docker Image

Now, run a container using your newly built image:

docker run --name node-app-container -p 3000:3000 -d my-node-app

Verify the container is running:

docker ps

You should see your container in the list:

CONTAINER ID   IMAGE         COMMAND         CREATED          STATUS          PORTS                    NAMES
XXXXXXXXXX     my-node-app   "npm start"     X seconds ago    Up X seconds    0.0.0.0:3000->3000/tcp   node-app-container

Testing Your Application

Test the application by making an HTTP request to it:

curl http://localhost:3000

You should see:

Hello World from Docker!

Understanding the Dockerfile

Let's review the key components of our Dockerfile:

  1. FROM node:14-alpine: Specifies the base image to use
  2. WORKDIR /usr/src/app: Sets the working directory inside the container
  3. COPY package.json ./: Copies files from the host to the container
  4. RUN npm install: Runs a command inside the container during the build process
  5. EXPOSE 3000: Documents that the container listens on port 3000
  6. CMD ["npm", "start"]: Specifies the command to run when the container starts
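One practical consequence of CMD is that it is only a default: anything you put after the image name on `docker run` replaces it. A sketch against the image built above:

```shell
# Override the image's CMD: print the Node.js version instead of starting the app
docker run --rm my-node-app node --version

# Or open an interactive shell in the image (alpine-based images ship /bin/sh)
docker run --rm -it my-node-app sh
```

The `--rm` flag removes the container automatically when it exits, which keeps throwaway runs like these from cluttering `docker ps -a`.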

Cleanup

Let's clean up by stopping and removing the container:

docker stop node-app-container
docker rm node-app-container

You've now learned how to create your own Docker images using Dockerfiles, build those images, and run containers based on them. This is a fundamental skill for Docker development.

Managing Data with Docker Volumes

One challenge when working with Docker containers is data persistence. Containers are ephemeral, meaning any data created within a container is lost when the container is removed. Docker volumes solve this problem by providing a way to persist data outside of containers.

Understanding Docker Volumes

Docker volumes are the preferred mechanism for persisting data generated by and used by Docker containers. They are completely managed by Docker and are isolated from the host filesystem's directory structure.

Benefits of using volumes include:

  • Volumes are easier to back up or migrate than bind mounts
  • You can manage volumes using Docker CLI commands
  • Volumes work on both Linux and Windows containers
  • Volumes can be more safely shared among multiple containers
  • Volume drivers let you store volumes on remote hosts, cloud providers, or encrypt the contents of volumes
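For contrast, the other common mount type is a bind mount, which maps an existing host directory instead of a Docker-managed volume. A sketch (the volume name `demo-data` is arbitrary and is created on first use; run the bind-mount example from a directory you are happy to expose):

```shell
# Named volume: Docker manages where the data is stored on the host
docker run --rm -v demo-data:/data alpine sh -c 'echo hello > /data/f && cat /data/f'

# Bind mount: the host's current directory appears inside the container
docker run --rm -v "$(pwd)":/data alpine ls /data
```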

Creating and Using Docker Volumes

Let's create a simple MySQL database container that uses a volume to persist its data.

Creating a Volume

First, create a Docker volume:

docker volume create mysql-data

You can list all volumes with:

docker volume ls

You should see your new volume in the list:

DRIVER    VOLUME NAME
local     mysql-data

Running a Container with a Volume

Now, let's run a MySQL container that uses this volume:

docker run --name mysql-db -e MYSQL_ROOT_PASSWORD=mysecretpassword -v mysql-data:/var/lib/mysql -p 3306:3306 -d mysql:5.7

This command:

  • --name mysql-db: Names the container "mysql-db"
  • -e MYSQL_ROOT_PASSWORD=mysecretpassword: Sets an environment variable to configure MySQL
  • -v mysql-data:/var/lib/mysql: Mounts the "mysql-data" volume to the directory where MySQL stores its data
  • -p 3306:3306: Maps port 3306 on the host to port 3306 in the container
  • -d: Runs the container in detached mode
  • mysql:5.7: Specifies the image to use

Wait a moment for the container to start up, then check that it's running:

docker ps

Interacting with the Database

Let's create a database and a table to demonstrate data persistence. First, connect to the MySQL container:

docker exec -it mysql-db bash

Inside the container, connect to the MySQL server:

mysql -u root -pmysecretpassword

Create a new database and table:

CREATE DATABASE testdb;
USE testdb;
CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255));
INSERT INTO users (name) VALUES ('John'), ('Jane'), ('Bob');
SELECT * FROM users;

You should see the inserted data:

+----+------+
| id | name |
+----+------+
|  1 | John |
|  2 | Jane |
|  3 | Bob  |
+----+------+

Exit the MySQL prompt and the container:

exit
exit

Testing Volume Persistence

Now, let's stop and remove the container, then create a new one using the same volume:

docker stop mysql-db
docker rm mysql-db

Create a new container using the same volume:

docker run --name mysql-db-new -e MYSQL_ROOT_PASSWORD=mysecretpassword -v mysql-data:/var/lib/mysql -p 3306:3306 -d mysql:5.7

Now connect to the new container and check if our data persisted:

docker exec -it mysql-db-new bash
mysql -u root -pmysecretpassword
USE testdb;
SELECT * FROM users;

You should see the same data we inserted earlier:

+----+------+
| id | name |
+----+------+
|  1 | John |
|  2 | Jane |
|  3 | Bob  |
+----+------+

Exit the MySQL prompt and the container:

exit
exit

This demonstrates that the data persisted even after the original container was removed, because it was stored in a Docker volume.
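Because the volume lives outside any one container, a common backup pattern is to mount it read-only into a throwaway helper container and archive it with tar. A sketch (the alpine helper and the archive name `backup.tar.gz` are arbitrary choices):

```shell
# Archive the volume's contents into backup.tar.gz in the current directory
docker run --rm \
  -v mysql-data:/source:ro \
  -v "$(pwd)":/backup \
  alpine tar -czf /backup/backup.tar.gz -C /source .
```

Restoring is the same idea in reverse: mount an empty volume and extract the archive into it.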

Inspecting and Managing Volumes

You can inspect a volume to get more information about it:

docker volume inspect mysql-data

This will show details like the mount point and driver used:

[
  {
    "CreatedAt": "YYYY-MM-DDTHH:MM:SS+00:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/mysql-data/_data",
    "Name": "mysql-data",
    "Options": {},
    "Scope": "local"
  }
]

To clean up, let's stop and remove the container:

docker stop mysql-db-new
docker rm mysql-db-new

If you want to remove the volume as well:

docker volume rm mysql-data

You've now learned how to use Docker volumes to persist data between container lifecycles, which is essential for stateful applications like databases.

Exploring Docker Networking

Docker networking allows containers to communicate with each other and with the outside world. Understanding Docker's networking capabilities is crucial for building multi-container applications.

Docker Network Types

Docker provides several network drivers out of the box:

  • bridge: The default network driver. Containers on the same bridge network can communicate.
  • host: Removes network isolation between the container and the host.
  • none: Disables all networking for a container.
  • overlay: Connects multiple Docker daemons and enables Swarm services to communicate.
  • macvlan: Assigns a MAC address to a container, making it appear as a physical device on the network.
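The driver and addressing can also be set explicitly when a network is created. A sketch (the subnet below is an arbitrary private range chosen for illustration):

```shell
# Create a bridge network with an explicit subnet and gateway
docker network create \
  --driver bridge \
  --subnet 172.28.0.0/16 \
  --gateway 172.28.0.1 \
  demo-net

# Containers attached to it receive addresses from that range
docker network inspect demo-net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# Remove the demo network when done
docker network rm demo-net
```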

Exploring the Default Bridge Network

When you install Docker, it automatically creates a default bridge network. Let's explore it:

docker network ls

You should see output similar to:

NETWORK ID     NAME      DRIVER    SCOPE
XXXXXXXXXXXX   bridge    bridge    local
XXXXXXXXXXXX   host      host      local
XXXXXXXXXXXX   none      null      local

You can inspect the default bridge network:

docker network inspect bridge

This command provides detailed information about the network, including containers connected to it, IP address range, and gateway.

Creating and Using Custom Bridge Networks

Let's create a custom bridge network for better container isolation:

docker network create my-network

Verify that the network was created:

docker network ls

You should see your new network in the list:

NETWORK ID     NAME           DRIVER    SCOPE
XXXXXXXXXXXX   bridge         bridge    local
XXXXXXXXXXXX   host           host      local
XXXXXXXXXXXX   my-network     bridge    local
XXXXXXXXXXXX   none           null      local

Now, let's run two containers on this network and demonstrate communication between them.

First, start an NGINX container on the custom network:

docker run --name web-server --network my-network -d nginx

Next, let's run an Alpine Linux container and use it to test connectivity to the NGINX container:

docker run --name alpine --network my-network -it alpine sh

Inside the Alpine container, install curl and test connectivity to the NGINX container:

apk add --update curl
curl web-server

The output should be the HTML of the NGINX welcome page. This works because Docker provides embedded DNS for containers in custom networks, allowing them to resolve container names to IP addresses.

Type exit to leave the Alpine container:

exit

Connecting Containers to Multiple Networks

Containers can be connected to multiple networks. Let's create another network:

docker network create another-network

Connect the existing web-server container to this new network:

docker network connect another-network web-server

Verify that the container is now connected to both networks:

docker inspect web-server -f '{{json .NetworkSettings.Networks}}' | json_pp

If json_pp is not available on your system, you can pipe the output to python3 -m json.tool instead.

You should see that the container is connected to both my-network and another-network.
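The reverse operation is disconnecting a container from a network; a sketch:

```shell
# Detach the container from one network; its my-network attachment is untouched
docker network disconnect another-network web-server

# Verify: only my-network should remain in the list of attached networks
docker inspect web-server -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}'
```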

Running Containers with Port Publishing

When you want to make a container's service accessible from outside the Docker host, you need to publish its ports:

docker run --name public-web -p 8080:80 -d nginx

This command maps port 8080 on the host to port 80 in the container. You can access the NGINX web server using:

curl http://localhost:8080

You should see the NGINX welcome page.

Cleaning Up

Let's clean up the containers and networks we created:

docker stop web-server alpine public-web
docker rm web-server alpine public-web
docker network rm my-network another-network

Verify that the containers and networks have been removed:

docker ps -a
docker network ls

Understanding Container Communication

This step has demonstrated how Docker networking enables:

  1. Container-to-container communication using custom networks
  2. DNS resolution using container names
  3. Connection of containers to multiple networks
  4. Exposing container services to the outside world using port publishing

These networking capabilities are essential for building complex, multi-container applications where components need to communicate with each other and with external services.

Summary

Congratulations on completing this Docker lab! You've learned essential Docker concepts and skills that form the foundation of container-based development and deployment.

In this lab, you've:

  • Verified the Docker installation and ran your first container
  • Worked with Docker images and containers, including pulling images from Docker Hub and managing container lifecycles
  • Created your own custom Docker image using a Dockerfile
  • Used Docker volumes to persist data between container lifecycles
  • Explored Docker networking to enable container-to-container communication

These skills will enable you to:

  • Package applications and their dependencies in portable containers
  • Create standardized environments for development, testing, and production
  • Implement microservices architectures where each component runs in its own container
  • Ensure data persistence for stateful applications
  • Build complex multi-container applications with proper isolation and communication

Docker has become an essential tool in modern software development, and the knowledge you've gained will be valuable across a wide range of development and operations scenarios.