Introduction
This lab expands your monitoring to an external service. You will set up Node Exporter, a Prometheus exporter for hardware and OS metrics. The setup builds on the running Prometheus container from the previous lab: you will run a Node Exporter container and add it as a new scrape target in your prometheus.yml. By the end of this lab, you will be able to query host-level system metrics in the Prometheus UI.
Pull Node Exporter Docker Image
In this step, you will download the official Docker image for Node Exporter. Node Exporter is a Prometheus exporter that exposes a wide variety of hardware and kernel-related metrics from the host machine.
To get started, pull the prom/node-exporter image from Docker Hub. Open a terminal and run the following command:
docker pull prom/node-exporter
This command contacts the Docker Hub registry and downloads the latest version of the Node Exporter image to your local machine. You will see output showing the download progress for each layer of the image.
Expected output:
Using default tag: latest
latest: Pulling from prom/node-exporter
Digest: sha256:a5579e72377a6053359058893b80f4f47c55d761457d685343b8e797859a159b
Status: Image is up to date for prom/node-exporter:latest
docker.io/prom/node-exporter:latest
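If you want to double-check what was actually downloaded, you can inspect the image locally. A quick sketch (the digest stored on your machine will differ from the one above as new versions of "latest" are published):

```shell
# Image name used throughout this lab
IMAGE=prom/node-exporter

# Show the locally stored repo digest for the image; the exact digest
# changes whenever a new "latest" tag is published upstream.
docker image inspect "$IMAGE" --format '{{index .RepoDigests 0}}'
```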
Run Node Exporter Container on Port 9100
Now that you have the image, let's run the Node Exporter as a container. We will expose its metrics on port 9100, which is the default port for Node Exporter.
Execute the following command to start the container and place it on the same Docker network as Prometheus:
docker run -d -p 9100:9100 --name node-exporter --network monitoring prom/node-exporter
Let's break down this command:
- -d: Runs the container in detached mode, meaning it runs in the background.
- -p 9100:9100: Maps port 9100 of the host to port 9100 of the container.
- --name node-exporter: Assigns a memorable name to the container for easy reference.
- --network monitoring: Attaches the container to the monitoring network, so Prometheus can reach it by name.
- prom/node-exporter: The image to use for creating the container.
You can verify that the container is running with the docker ps command:
docker ps
You should see node-exporter in the list of running containers. Optionally, confirm the network attachment with:
docker inspect node-exporter --format '{{.HostConfig.NetworkMode}}'
The output should be monitoring.
Expected output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
... prom/node-exporter "/bin/node_exporter" A few seconds ago Up a few seconds 0.0.0.0:9100->9100/tcp, :::9100->9100/tcp node-exporter
... prom/prometheus "/bin/prometheus --c…" About a minute ago Up about a minute 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp prometheus
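Before wiring the exporter into Prometheus, you can confirm from the host that it is actually serving metrics. A minimal check, assuming the 9100:9100 port mapping above:

```shell
# Default Node Exporter port, mapped to the host in the docker run command
NODE_EXPORTER_PORT=9100

# The exporter serves plaintext metrics over HTTP; the first few lines are
# HELP/TYPE comments followed by metric samples.
curl -s "http://localhost:${NODE_EXPORTER_PORT}/metrics" | head -n 5
```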
Update prometheus.yml to Add Node Exporter Target
In this step, you will configure Prometheus to scrape metrics from the newly running Node Exporter container. This is done by editing the prometheus.yml configuration file to add a new scrape job.
First, open the prometheus.yml file located in your project directory using the nano editor:
nano ~/project/prometheus.yml
Now, add a new job configuration for the Node Exporter under the scrape_configs section. Your final file should look like this. Make sure the indentation is correct, as YAML is sensitive to it.
global:
scrape_interval: 15s
scrape_configs:
- job_name: "prometheus"
static_configs:
- targets: ["prometheus:9090"]
- job_name: "node_exporter"
static_configs:
- targets: ["node-exporter:9100"]
Here's what the new block does:
- job_name: "node_exporter": Gives a name to this scrape job, which will be used to label the metrics collected.
- targets: ["node-exporter:9100"]: Tells Prometheus where to find the Node Exporter. We use the container name because Docker provides internal DNS resolution between containers on the same network. The localhost hostname is scoped to each container, so it cannot be used to reach other containers.
After editing, save the file and exit nano by pressing Ctrl+X, then Y, and then Enter.
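Since YAML indentation mistakes are easy to make, it can be worth validating the file before restarting anything. The prom/prometheus image ships with promtool, so one way to check it is a sketch like the following (the /bin/promtool path and the mount point are assumptions based on the official image layout):

```shell
# Path to the config file edited above
CONFIG=~/project/prometheus.yml

# Run promtool from the prom/prometheus image against the mounted file;
# it reports success when the config parses cleanly.
docker run --rm \
  -v "$CONFIG":/etc/prometheus/prometheus.yml \
  --entrypoint /bin/promtool \
  prom/prometheus check config /etc/prometheus/prometheus.yml
```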
Restart Prometheus Container with Updated Config
For Prometheus to load the new configuration, you need to restart its container. The volume mount we configured earlier ensures that the container will see the updated prometheus.yml file upon restart.
Run the following command to restart the Prometheus container:
docker restart prometheus
This command gracefully stops and then starts the container named prometheus. After a few seconds, Prometheus will be running with the new configuration.
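Besides the UI check below, you can also inspect the targets from the command line via the Prometheus HTTP API. A sketch, assuming the 9090:9090 port mapping from the previous lab (plain grep keeps it dependency-free):

```shell
# Prometheus UI/API address on the host
PROM_URL=http://localhost:9090

# List the job labels Prometheus is scraping; after the restart this
# should include both "prometheus" and "node_exporter".
curl -s "${PROM_URL}/api/v1/targets" | grep -o '"job":"[^"]*"' | sort -u
```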
Now, let's verify the change in the Prometheus UI.
- In the LabEx interface, click the + (New Tab) button, choose Web Service, and enter 9090 for the port.
- When the new tab opens, you should see the Prometheus Expression Browser landing page.
- Click on the "Status" menu in the top navigation bar, and then select "Targets".

You should now see two targets listed: prometheus and node_exporter. Both should have a state of "UP", indicating that Prometheus is successfully scraping metrics from both itself and the Node Exporter.
Query node_cpu_seconds_total Metric in UI
The final step is to confirm that Prometheus is successfully ingesting metrics from Node Exporter by running a query.
Navigate back to the main "Graph" page in the Prometheus UI by clicking the "Graph" link in the navigation bar. In the "Expression" input field, type the following metric name:
node_cpu_seconds_total
This metric represents the total time in seconds that the CPU has spent in various modes (e.g., idle, user, system).

Click the "Execute" button. If the connection is successful, you will see a table of results below the graph. Each result corresponds to a different CPU core and mode. Seeing these results confirms that your entire monitoring pipeline is working correctly, from data collection by Node Exporter to ingestion and storage by Prometheus.
You can switch between the "Table" and "Graph" views to visualize the data.
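Because node_cpu_seconds_total is a counter, its raw value only ever grows. For a readable CPU-usage number you typically wrap it in rate(). A sketch of one common expression, which you can paste into the same Expression field (the 1m window is an arbitrary choice):

```shell
# Percent CPU busy per instance, derived from the idle-mode counter
QUERY='100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1m])))'

# The same query can also be sent to the HTTP API instead of the UI
curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode "query=${QUERY}"
```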
Summary
Congratulations! You have successfully expanded your monitoring setup by adding an external service. In this lab, you learned how to run the official Prometheus Node Exporter in a Docker container, configure a Prometheus instance to scrape metrics from this new target, and verify the data collection by querying for host-level metrics. This is a fundamental skill for building a comprehensive observability stack, allowing you to gain deep insights into the performance and health of your systems.