How to use docker network create command to manage container networks


Introduction

In this lab, you will learn how to effectively manage container networks using the docker network create command. We will explore different network types and configurations to understand how containers communicate with each other and the outside world.

Specifically, you will gain hands-on experience creating basic bridge networks, customizing bridge networks with specific subnets and gateways, and setting up attachable and internal overlay networks for multi-host communication scenarios. By the end of this lab, you will have a solid understanding of how to design and implement various network topologies for your Docker containers.


Skills Graph

This lab exercises the following Docker skills: listing containers, removing containers, executing commands in containers, inspecting containers, and managing networks.

Create a basic bridge network

In this step, we will learn how to create a basic bridge network in Docker. A bridge network is the default network type for containers. Containers connected to the same bridge network can communicate with each other, while being isolated from containers on other bridge networks. Containers on a bridge network can still reach external networks through NAT on the host.

First, let's list the existing Docker networks to see the default ones.

docker network ls

You should see some default networks like bridge, host, and none. The bridge network is the one we will be working with.
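
For reference, the output should look similar to the following; your network IDs will differ:

NETWORK ID     NAME      DRIVER    SCOPE
0f8d7a1b2c3d   bridge    bridge    local
5e6f7a8b9c0d   host      host      local
1a2b3c4d5e6f   none      null      local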

Now, let's create a new bridge network. We will name it my-bridge-network.

docker network create my-bridge-network

This command creates a new bridge network with default settings. Docker automatically assigns a subnet and gateway to this network.

To verify that the network was created successfully, list the Docker networks again.

docker network ls

You should now see my-bridge-network in the list.

Next, let's inspect the newly created network to see its details, including the subnet and gateway assigned by Docker.

docker network inspect my-bridge-network

The output of this command will provide detailed information about the network, such as its ID, driver (which should be bridge), and the subnet and gateway under the IPAM section.
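
If you only need the subnet and gateway, you can pass a Go template to the --format flag instead of reading the full JSON output. This prints just those two fields, which is convenient in scripts:

docker network inspect --format '{{range .IPAM.Config}}Subnet={{.Subnet}} Gateway={{.Gateway}}{{end}}' my-bridge-network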

Now, let's run a container and connect it to our new network. We will use the alpine image for this example. If you don't have the alpine image locally, Docker will pull it automatically.

docker run -d --name container1 --network my-bridge-network alpine sleep infinity

This command runs a container named container1 in detached mode (-d), connects it to my-bridge-network (--network my-bridge-network), and keeps it running by executing the sleep infinity command.

To verify that the container is running and connected to the correct network, you can inspect the container.

docker inspect container1

In the output, look for the Networks section. You should see my-bridge-network listed, along with the IP address assigned to the container within that network.
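
To print just the container's IP address on that network, you can again use a Go template. Because the network name contains hyphens, it must be accessed with the index function:

docker inspect --format '{{(index .NetworkSettings.Networks "my-bridge-network").IPAddress}}' container1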

Finally, let's run another container and connect it to the same network to demonstrate communication between them.

docker run -d --name container2 --network my-bridge-network alpine sleep infinity

Now, both container1 and container2 are connected to my-bridge-network. They should be able to communicate with each other using their container names or IP addresses within the network.

To test communication, we can execute a command inside container1 to ping container2. Alpine's BusyBox already provides a basic ping, but we will install the iputils package in both containers to get the full-featured version of the ping command.

docker exec container1 apk add --no-cache iputils
docker exec container2 apk add --no-cache iputils

Now, ping container2 from container1.

docker exec container1 ping -c 3 container2

You should see successful ping responses, indicating that the two containers on the same bridge network can communicate.
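
Name resolution by container name works because Docker runs an embedded DNS server (reachable inside containers at 127.0.0.11) for user-defined networks. You can confirm this with BusyBox's nslookup, which is included in the alpine image; the output should show the DNS server 127.0.0.11 and the IP address of container2:

docker exec container1 nslookup container2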

Create a bridge network with custom subnet and gateway

In the previous step, we created a bridge network with default settings. Docker automatically assigned a subnet and gateway. In this step, we will learn how to create a bridge network and specify our own subnet and gateway. This gives you more control over the network configuration for your containers.

First, let's remove the containers and the network created in the previous step to start fresh.

docker stop container1 container2
docker rm container1 container2
docker network rm my-bridge-network
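
As a shortcut, docker rm -f stops and removes running containers in one step, so the first two commands above could be replaced with the following. Note that -f kills the containers immediately rather than stopping them gracefully:

docker rm -f container1 container2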

Now, let's create a new bridge network named custom-bridge-network and specify a custom subnet and gateway using the --subnet and --gateway flags. We will use the subnet 172.20.0.0/16 and the gateway 172.20.0.1.

docker network create \
  --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  custom-bridge-network

The --driver bridge flag explicitly specifies the bridge driver, although it's the default. The --subnet flag defines the IP address range for the network, and the --gateway flag sets the gateway IP address for containers connected to this network.
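
One benefit of defining your own subnet is that you can assign a container a static IP address with the --ip flag, which only works on user-defined networks with an explicit subnet. As a quick illustration (static-ip-demo is just an example name, and we remove the container right away):

docker run -d --name static-ip-demo --network custom-bridge-network --ip 172.20.0.100 alpine sleep infinity
docker inspect --format '{{(index .NetworkSettings.Networks "custom-bridge-network").IPAddress}}' static-ip-demo
docker rm -f static-ip-demo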

To verify that the network was created with the specified subnet and gateway, inspect the network.

docker network inspect custom-bridge-network

In the output, look under the IPAM section. You should see the Subnet and Gateway fields matching the values you provided (172.20.0.0/16 and 172.20.0.1).

Now, let's run a container and connect it to our new network. We will use the alpine image again.

docker run -d --name custom-container1 --network custom-bridge-network alpine sleep infinity

This command runs a container named custom-container1 and connects it to custom-bridge-network. Docker will assign an IP address to this container from the specified subnet (172.20.0.0/16).

To verify the container's IP address within the custom network, inspect the container.

docker inspect custom-container1

In the output, under the Networks section for custom-bridge-network, you should see an IPAddress that falls within the 172.20.0.0/16 range.

Let's run another container on the same network.

docker run -d --name custom-container2 --network custom-bridge-network alpine sleep infinity

Now, both custom-container1 and custom-container2 are on the custom-bridge-network and should be able to communicate.

As in the previous step, we need iputils to ping.

docker exec custom-container1 apk add --no-cache iputils
docker exec custom-container2 apk add --no-cache iputils

Test communication by pinging custom-container2 from custom-container1.

docker exec custom-container1 ping -c 3 custom-container2

You should see successful ping responses, confirming communication within the custom bridge network.

Create an attachable overlay network for multi-host communication

Bridge networks are suitable for communication between containers on the same Docker host. However, for communication between containers running on different Docker hosts, you need an overlay network. Overlay networks are created and managed by Docker Swarm.

In this step, we will create an attachable overlay network. An attachable overlay network allows standalone containers (not part of a Swarm service) to connect to it and communicate across different Docker hosts.

First, we need to initialize Docker Swarm on this host. This is required to create and manage overlay networks.

docker swarm init --advertise-addr $(hostname -I | awk '{print $1}')

This command initializes the Swarm and sets the advertise address to the host's IP address. The output will show that the current node is now a Swarm manager.
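
You can confirm the Swarm state by listing the nodes. On a single-host setup there will be exactly one node, marked as Leader:

docker node ls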

Now that Swarm is initialized, we can create an attachable overlay network. We will name it my-overlay-network. The --attachable flag is crucial for allowing standalone containers to connect.

docker network create \
  --driver overlay \
  --attachable \
  my-overlay-network

The --driver overlay flag specifies that we are creating an overlay network. The --attachable flag makes the network available for standalone containers.
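
You can verify that the network is attachable by querying the Attachable field from the network's metadata; this should print true:

docker network inspect --format '{{.Attachable}}' my-overlay-network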

To verify that the overlay network was created, list the Docker networks.

docker network ls

You should see my-overlay-network in the list, and its driver should be overlay.

Now, let's run a standalone container and connect it to our new overlay network. We will use the alpine image.

docker run -d --name overlay-container1 --network my-overlay-network alpine sleep infinity

This command runs a container named overlay-container1 and connects it to my-overlay-network.

To verify that the container is connected to the overlay network, inspect the container.

docker inspect overlay-container1

In the output, look for the Networks section. You should see my-overlay-network listed.

Since this is a single-host environment, we cannot fully demonstrate multi-host communication. However, the network is configured to allow it if you had multiple Swarm nodes.
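
For reference, extending this to a second host would involve joining it to the Swarm as a worker. On the manager, the following prints the join command (including the token) to run on the other host:

docker swarm join-token worker

Once the second host has joined, a standalone container started there with --network my-overlay-network could reach overlay-container1 by name.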

Let's run another container on the same overlay network on this single host.

docker run -d --name overlay-container2 --network my-overlay-network alpine sleep infinity

Now, both overlay-container1 and overlay-container2 are on the my-overlay-network and should be able to communicate.

Install iputils in the containers for ping.

docker exec overlay-container1 apk add --no-cache iputils
docker exec overlay-container2 apk add --no-cache iputils

Test communication by pinging overlay-container2 from overlay-container1.

docker exec overlay-container1 ping -c 3 overlay-container2

You should see successful ping responses, confirming communication within the overlay network on this single host.

Create an internal overlay network

In the previous step, we created an attachable overlay network that allows standalone containers to connect. In this step, we will create an internal overlay network. Internal networks are isolated from external networks, meaning containers on an internal network cannot communicate with the outside world (including the Docker host's network) unless explicitly allowed. This is useful for creating isolated service networks within a Swarm.

First, let's clean up the containers and network from the previous step.

docker stop overlay-container1 overlay-container2
docker rm overlay-container1 overlay-container2
docker network rm my-overlay-network

Now, let's create an internal overlay network named my-internal-network. We use the --internal flag for this.

docker network create \
  --driver overlay \
  --internal \
  my-internal-network

The --internal flag ensures that containers connected to this network cannot communicate with external networks.
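
You can confirm the flag took effect by checking the Internal field in the network's metadata; this should print true:

docker network inspect --format '{{.Internal}}' my-internal-network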

To verify that the internal overlay network was created, list the Docker networks.

docker network ls

You should see my-internal-network in the list with the overlay driver.

Now, let's run a container and connect it to our new internal network. We will use the alpine image.

docker run -d --name internal-container1 --network my-internal-network alpine sleep infinity

This command runs a container named internal-container1 and connects it to my-internal-network.

To verify that the container is connected to the internal network, inspect the container.

docker inspect internal-container1

In the output, look for the Networks section. You should see my-internal-network listed.

Let's run another container on the same internal network.

docker run -d --name internal-container2 --network my-internal-network alpine sleep infinity

Now, both internal-container1 and internal-container2 are on the my-internal-network. They should be able to communicate with each other, but not with the outside world.

Install iputils in the containers for ping.

docker exec internal-container1 apk add --no-cache iputils
docker exec internal-container2 apk add --no-cache iputils

Test communication by pinging internal-container2 from internal-container1.

docker exec internal-container1 ping -c 3 internal-container2

You should see successful ping responses, confirming communication within the internal overlay network.

Now, let's try to ping an external address, like google.com, from internal-container1.

docker exec internal-container1 ping -c 3 google.com

This ping should fail, either with a DNS resolution error or with lost packets, because the internal network is isolated from external networks.
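
For contrast, a container on a regular (non-internal) network can reach external addresses, assuming the host itself has internet access. The following container uses the default bridge network, which is connected to the outside world through NAT:

docker run --rm alpine ping -c 3 google.com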

Summary

In this lab, we learned how to use the docker network create command to manage container networks. We started by creating a basic bridge network, which is the default network type for containers, and verified its creation and details using docker network ls and docker network inspect. We then demonstrated how to connect a container to this newly created network.

Building upon the basic bridge network, we explored creating a bridge network with a custom subnet and gateway to gain more control over the network's IP addressing. Finally, we delved into creating overlay networks: an attachable overlay network for multi-host communication and an internal overlay network for isolated communication within a Swarm, showcasing the versatility of Docker networking for different deployment scenarios.