Configure NFS Client Access in RHEL


Introduction

In this lab, you will learn the essential skills for configuring NFS client access on a Red Hat Enterprise Linux (RHEL) system. You will start by manually mounting a network share using the mount command to understand the fundamental process. Following this, you will configure a persistent mount in /etc/fstab to ensure the NFS share is automatically available after a system reboot, providing a foundational understanding of static network file system integration.

Building on these core concepts, you will advance to a more dynamic and efficient method by setting up the automounter. This involves installing and enabling the autofs service, then creating both indirect maps for on-demand directory mounting and direct maps for static mount points. You will conclude the lab by verifying that both direct and indirect automounts function correctly for different users, solidifying your ability to manage robust NFS client configurations.

Manually Mount an NFS Share Using the mount Command

In this step, you will learn how to manually access a network-shared directory using the Network File System (NFS) protocol. NFS allows a client system to access files over a computer network in a manner similar to how local storage is accessed. For this exercise, we will simulate both an NFS server and a client on your local machine to practice the necessary commands.

An NFS server has been pre-configured on your system to export (share) the directory /srv/nfs/shared_data. Your task is to mount this shared directory to a local folder, verify access, and then unmount it.

Step 1.1: Create a Local Mount Point

To access the shared NFS directory, you need a local directory to serve as a "mount point." This is essentially an empty folder on your client system where the contents of the remote share will appear once mounted. All operations will be performed within your ~/project directory.

Create a directory named nfs_mount inside your project folder:

mkdir ~/project/nfs_mount

You can verify that the directory was created by listing the contents of your project folder:

ls -F ~/project
nfs_mount/

Step 1.2: Mount the NFS Share

Now you can use the mount command to attach the remote NFS share to your newly created mount point. The command requires sudo privileges because mounting filesystems is a system-level operation.

The basic syntax is mount -t nfs <server>:<remote_directory> <local_mount_point>.

  • -t nfs: Specifies that the filesystem type is NFS.
  • localhost:/srv/nfs/shared_data: The source, which is the server and the path it's exporting.
  • ~/project/nfs_mount: The destination, which is your local mount point.

Run the following command to mount the share:

sudo mount -t nfs localhost:/srv/nfs/shared_data ~/project/nfs_mount

This command will not produce any output if it is successful.
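If you want a scriptable success check rather than relying on silence, the mountpoint utility (part of util-linux) reports through its exit status whether a directory is an active mount point. A minimal sketch; the helper name is_mounted is just for illustration:

```shell
#!/bin/sh
# mountpoint -q prints nothing and signals the result purely
# through its exit status: 0 = mounted, non-zero = not mounted.
is_mounted() {
    mountpoint -q "$1"
}

if is_mounted "$HOME/project/nfs_mount"; then
    echo "NFS share is mounted"
else
    echo "NFS share is NOT mounted"
fi
```

The same check is handy in scripts that must not accidentally write into an empty mount point directory.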

Step 1.3: Verify the Mount and Interact with the Share

After running the mount command, you should verify that the share is correctly mounted. You can do this in a few ways.

First, use the mount command piped to grep to filter for NFS mounts:

mount | grep nfs
localhost:/srv/nfs/shared_data on /home/labex/project/nfs_mount type nfs4 (rw,relatime,vers=4.2,rsize=...,wsize=...,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=...,local_lock=none,addr=...)

Next, check the contents of your mount point. It should now display the files from the remote /srv/nfs/shared_data directory.

ls -l ~/project/nfs_mount
total 4
-rw-r--r--. 1 root root 32 Nov 10 14:30 welcome.txt

You can now interact with this directory as if it were a local folder. Note that in this lab environment, the files are owned by root due to the NFS server configuration with no_root_squash; in production environments, you might see nobody as the owner, depending on the NFS server settings.

Now create a new file inside the mounted share. Since the NFS share may be owned by root, you need to use sudo with the tee command to write files:

echo "My test file" | sudo tee ~/project/nfs_mount/my_file.txt > /dev/null

Verify that your new file exists alongside the original one:

ls -l ~/project/nfs_mount
total 8
-rw-r--r--. 1 root  root  13 Nov 10 14:35 my_file.txt
-rw-r--r--. 1 root  root  32 Nov 10 14:30 welcome.txt

Step 1.4: Unmount the NFS Share

When you are finished using a network share, it is important to unmount it cleanly using the umount command. This ensures all data is synchronized and the connection is properly closed. You only need to specify the mount point.

sudo umount ~/project/nfs_mount

To confirm that the share has been unmounted, list the contents of the ~/project/nfs_mount directory. It should now be empty again.

ls -l ~/project/nfs_mount
total 0

Configure a Persistent NFS Mount in /etc/fstab

In the previous step, you learned how to manually mount an NFS share using the mount command. However, such mounts are temporary and will not survive a system reboot. To make a mount permanent, you need to add an entry to the /etc/fstab file (short for "file systems table"). This file contains a list of filesystems and devices that are mounted automatically when the system boots.

In this step, you will configure the same NFS share to be mounted persistently by adding an entry to /etc/fstab.

Step 2.1: Prepare the Environment

First, ensure the mount point from the previous step, ~/project/nfs_mount, exists and is empty. If you are continuing directly from the last step, it should already be there.

If the directory does not exist, create it now:

mkdir -p ~/project/nfs_mount

Also, ensure that nothing is currently mounted to this directory. You can run the umount command, which will report an error if it's not mounted, and that's perfectly fine.

sudo umount ~/project/nfs_mount

Step 2.2: Edit the /etc/fstab File

Now, you will add a new line to the /etc/fstab file to define the persistent NFS mount. You must use sudo to edit this system configuration file. We will use the nano editor.

Open the file with the following command:

sudo nano /etc/fstab

Navigate to the bottom of the file and add the following line. Be very careful with the syntax, as errors in this file can cause system boot issues.

localhost:/srv/nfs/shared_data /home/labex/project/nfs_mount nfs defaults,_netdev 0 0

Let's break down this line:

  • localhost:/srv/nfs/shared_data: This is the device to mount. It specifies the NFS server (localhost) and the exported directory (/srv/nfs/shared_data).
  • /home/labex/project/nfs_mount: This is the local mount point where the share will be accessible.
  • nfs: This specifies the filesystem type.
  • defaults,_netdev: These are the mount options. defaults includes a standard set of options (like rw for read-write). _netdev is crucial for network filesystems; it tells the system to wait until the network is active before attempting to mount this share.
  • 0: This is the dump field, which is used by the dump backup utility. A value of 0 disables it.
  • 0: This is the pass field, used by the fsck utility to determine the order of filesystem checks at boot. A value of 0 means the filesystem will not be checked.

After adding the line, save the file and exit nano by pressing Ctrl+X, then Y, and then Enter.
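An fstab record is simply six whitespace-separated fields, so you can pull the entry apart with awk to double-check what you typed. A small sketch using the exact line added above:

```shell
#!/bin/sh
# Split an fstab entry into its six fields: device, mount point,
# filesystem type, options, dump flag, and fsck pass number.
entry='localhost:/srv/nfs/shared_data /home/labex/project/nfs_mount nfs defaults,_netdev 0 0'

echo "$entry" | awk '{
    print "device:      " $1
    print "mount point: " $2
    print "fs type:     " $3
    print "options:     " $4
    print "dump:        " $5
    print "fsck pass:   " $6
}'
```

On RHEL you can also run sudo findmnt --verify to sanity-check the whole /etc/fstab before a reboot; it reports unknown filesystem types, missing mount points, and similar mistakes.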

Step 2.3: Test the /etc/fstab Entry

You don't need to reboot to test your new /etc/fstab entry. The mount command is smart enough to read /etc/fstab. If you provide only the mount point, mount will look up the corresponding entry in /etc/fstab and use the information it finds there.

Mount the share using only the mount point:

sudo mount ~/project/nfs_mount

If the command completes without any errors, your /etc/fstab entry is correct.

Step 2.4: Verify the Mount

Verify that the share is now mounted by checking the output of the mount command and listing the contents of the directory.

mount | grep nfs_mount
localhost:/srv/nfs/shared_data on /home/labex/project/nfs_mount type nfs4 (rw,relatime,vers=4.2,rsize=...,wsize=...,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=...,local_lock=none,addr=...,_netdev)

Now, check the contents. You should see the files from the share.

ls -l ~/project/nfs_mount
total 4
-rw-r--r--. 1 root root 32 Nov 10 14:30 welcome.txt

This mount is now persistent and would be automatically re-established after a reboot.

Step 2.5: Clean Up the Environment

To avoid conflicts with later exercises, you should now reverse the changes. First, unmount the share, and then remove the line you added from /etc/fstab.

Unmount the directory:

sudo umount ~/project/nfs_mount

Open /etc/fstab again to remove the entry:

sudo nano /etc/fstab

Use the arrow keys to navigate to the line you added (localhost:/srv/nfs/shared_data ...), and press Ctrl+K to delete the entire line. Then, save and exit by pressing Ctrl+X, Y, and Enter.

This leaves the system in a clean state for the next part of the lab.

Set Up the Automounter by Installing and Enabling autofs

In the previous steps, you explored manual and persistent mounting. While /etc/fstab is great for permanent mounts, it has a drawback: it tries to mount everything at boot time. If a network share is unavailable, it can slow down or even halt the boot process. The automounter, provided by the autofs service, solves this problem by mounting network filesystems on-demand, only when they are first accessed.

The autofs service uses a set of configuration files called "maps" to determine which remote shares to mount and where. In this step, you will prepare your system to use the automounter by installing the necessary package and starting its service.

Step 3.1: Install the autofs Package

The autofs functionality is not included in the default RHEL installation. You need to install it using the dnf package manager. This requires sudo privileges.

Run the following command to install the autofs package. The -y flag automatically answers "yes" to the confirmation prompt, which is convenient for this lab.

sudo dnf install -y autofs

The command will download and install the autofs package and any required dependencies. You will see output similar to the following:

Last metadata expiration check: ...
Dependencies resolved.
================================================================================
 Package       Architecture    Version                Repository           Size
================================================================================
Installing:
 autofs        x86_64          1:5.1.7-50.el9         ...                  ...
...

Transaction Summary
================================================================================
Install  1 Package

Total download size: ...
Installed size: ...
...
Complete!

Step 3.2: Start the autofs Service

On a standard RHEL system, you would use systemctl to start and enable services. However, this lab runs in a containerized environment where systemctl is not available. Instead, we will start the autofs daemon directly using its command, automount.

This command starts the automounter daemon, which will run in the background and monitor for attempts to access directories that are configured in its maps.

Execute the following command to start the service:

sudo automount

If successful, this command will not produce any output. It simply starts the daemon process.

Step 3.3: Verify the Service is Running

Since you cannot use systemctl status autofs to check the service, you can verify that the automount process is running using the ps command. The ps aux command lists all running processes, and you can pipe its output to grep to filter for the automount process.

ps aux | grep automount

You should see at least one line for the automount process itself. The second line showing grep automount is just the grep command you ran, which can be ignored.

root      ...  0.0  0.0 ...      ?        Ssl  15:30   0:00 /usr/sbin/automount
labex     ...  0.0  0.0 ...      pts/0    S+   15:31   0:00 grep --color=auto automount

Seeing the /usr/sbin/automount process confirms that the service is running and ready to handle on-demand mounts. In the next steps, you will configure the maps that tell autofs what to do.
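As an aside, pgrep gives a cleaner version of the same check: it matches process names directly, so the extra grep line never appears in the output. A small sketch:

```shell
#!/bin/sh
# pgrep -x matches the exact process name and prints matching PIDs;
# its exit status is 0 when at least one process matched.
if pgrep -x automount > /dev/null; then
    echo "automount is running"
else
    echo "automount is NOT running - start it with: sudo automount"
fi
```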

Create an Indirect Automount Map for Dynamic Directories

In this step, you will configure your first automount rule using an indirect map. An indirect map is the most common type of automount configuration. It works by associating a single base directory (like /home or /net) with a map file. When a user tries to access a subdirectory within that base directory, autofs looks up the subdirectory's name in the map file and mounts the corresponding remote share on-demand.

This is extremely useful for mounting user home directories or a collection of shared project folders without having to mount all of them at once. We will configure an indirect map to dynamically mount project directories located under a new base directory called /project_shares.

Step 4.1: Create the NFS Server Exports

First, let's prepare the directories on our simulated NFS server that we want to share. We will create two project directories, design and testing, inside /srv/nfs/.

Create the directories and place a sample file in each one:

sudo mkdir -p /srv/nfs/{design,testing}
sudo sh -c 'echo "Design documents" > /srv/nfs/design/README'
sudo sh -c 'echo "Testing scripts" > /srv/nfs/testing/README'

Next, we need to tell the NFS server to export these directories. We do this by adding entries to the /etc/exports file.

Open the file with nano:

sudo nano /etc/exports

Add the following lines to the file. These lines tell the NFS server to share the design and testing directories with any client (*) with read-write (rw) permissions.

/srv/nfs/design *(rw,sync,no_root_squash)
/srv/nfs/testing *(rw,sync,no_root_squash)

Save the file and exit (Ctrl+X, Y, Enter).

Finally, apply the changes to the NFS server by re-exporting all directories:

sudo exportfs -ra
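Each non-comment line in /etc/exports must start with an absolute path, followed by one or more client(options) specifications. As a rough pre-check before running exportfs (a sketch for illustration, not a replacement for exportfs's own validation), a few lines of awk can flag entries whose path is not absolute:

```shell
#!/bin/sh
# Flag non-comment, non-empty lines whose first field is not an
# absolute path; exit non-zero if any such line was found.
check_exports() {
    awk '!/^[[:space:]]*(#|$)/ && $1 !~ /^\// {
        print "suspicious line " NR ": " $0
        bad = 1
    } END { exit bad }'
}

# Run the check against a sample with the same entries as this step.
cat <<'EOF' | check_exports && echo "exports look OK"
# sample exports file
/srv/nfs/design *(rw,sync,no_root_squash)
/srv/nfs/testing *(rw,sync,no_root_squash)
EOF
```

A malformed path makes check_exports exit non-zero, so the && suppresses the OK message.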

Step 4.2: Create the Master Map Entry

The autofs configuration starts with the master map file, /etc/auto.master. The best practice is not to edit this file directly, but to add new configuration files to the /etc/auto.master.d/ directory.

Create a new master map file for our project shares:

sudo nano /etc/auto.master.d/shares.autofs

Add the following single line to this file:

/project_shares /etc/auto.shares

This line tells autofs: "For any access under the /project_shares directory, consult the map file located at /etc/auto.shares for instructions."

Save and exit the editor.

Step 4.3: Create the Indirect Map File

Now, create the indirect map file /etc/auto.shares that you just referenced in the master map.

sudo nano /etc/auto.shares

Add the following lines to this file:

design  -fstype=nfs,rw,sync   localhost:/srv/nfs/design
testing -fstype=nfs,rw,sync   localhost:/srv/nfs/testing

Let's break down a line:

  • design: This is the "key." It corresponds to the subdirectory name under /project_shares. When a user accesses /project_shares/design, this line is triggered.
  • -fstype=nfs,rw,sync: These are the mount options, specifying the filesystem type, read-write access, and synchronous writes.
  • localhost:/srv/nfs/design: This is the remote NFS share location to be mounted.

Save and exit the editor.
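Conceptually, when a user touches /project_shares/<key>, autofs looks that key up in the map file and mounts the location it finds there. The lookup itself can be mimicked with awk (a sketch for illustration only; autofs performs this internally):

```shell
#!/bin/sh
# Mimic an indirect-map lookup: given a key, print the NFS location
# from a map file in "key  -options  location" format.
lookup_key() {
    awk -v k="$1" '$1 == k { print $3 }' "$2"
}

# Sample map with the same entries created in this step.
cat > /tmp/auto.shares.sample <<'EOF'
design  -fstype=nfs,rw,sync   localhost:/srv/nfs/design
testing -fstype=nfs,rw,sync   localhost:/srv/nfs/testing
EOF

lookup_key design /tmp/auto.shares.sample   # prints localhost:/srv/nfs/design
```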

Step 4.4: Reload autofs and Test the Mount

For the autofs service to recognize your new map files, you must reload its configuration. Since systemctl is not available, we send the HUP (hangup) signal to the automount process, which causes it to re-read its configuration.

sudo killall -HUP automount

Now, let's test it. First, try to list the contents of the base directory /project_shares. It will appear empty because nothing has been mounted yet.

ls -l /project_shares
total 0

Next, attempt to access one of the subdirectories. This is the trigger that causes autofs to perform the mount.

ls -l /project_shares/design
total 4
-rw-r--r--. 1 root root 17 Nov 10 16:10 README

Success! The design share was mounted automatically. Now, if you list the base directory again, you will see the design directory because it is an active mount point.

ls -l /project_shares
total 0
dr-xr-xr-x. 2 root root 0 Nov 10 16:12 design

Do the same for the testing directory to confirm it also works:

ls -l /project_shares/testing
total 4
-rw-r--r--. 1 root root 16 Nov 10 16:10 README

You have successfully configured and tested an indirect automount map.

Create a Direct Automount Map for Static Mount Points

In this step, you will learn about the second type of automount configuration: a direct map. Unlike an indirect map, which groups multiple mounts under a common base directory, a direct map defines specific, individual mount points anywhere in the filesystem. Each entry in a direct map corresponds to a single, absolute path.

Direct maps are useful for mounting a small number of shares at fixed, well-known locations, such as mounting a shared tools directory at /usr/local/tools. We will configure a direct map to mount a shared common_data directory to /mnt/common.

Step 5.1: Prepare the NFS Server Export

As before, we first need to set up the directory on our simulated NFS server that we want to share. We will create a directory named common_data.

Create the directory and a sample file inside it:

sudo mkdir -p /srv/nfs/common_data
sudo sh -c 'echo "Common shared data" > /srv/nfs/common_data/info.txt'

Now, add an entry to /etc/exports to make this directory available via NFS.

sudo nano /etc/exports

Add the following new line to the file. This will share the /srv/nfs/common_data directory.

/srv/nfs/common_data *(rw,sync,no_root_squash)

Save the file and exit (Ctrl+X, Y, Enter).

Apply the changes to the NFS server by re-exporting all directories:

sudo exportfs -ra

Step 5.2: Create the Master Map Entry for the Direct Map

To use a direct map, you must first reference it from the master map configuration. The special mount point /- is used to indicate that the associated map file is a direct map.

Create a new master map file for our direct mount:

sudo nano /etc/auto.master.d/direct.autofs

Add the following single line to this file:

/- /etc/auto.direct

This line tells autofs: "Consult the file /etc/auto.direct for a list of direct mounts. The mount points are absolute paths defined within that file."

Save and exit the editor.

Step 5.3: Create the Direct Map File

Now, create the direct map file /etc/auto.direct that you just referenced.

sudo nano /etc/auto.direct

Add the following line to this file. The format is slightly different from an indirect map.

/mnt/common -fstype=nfs,rw,sync   localhost:/srv/nfs/common_data

Let's analyze this line:

  • /mnt/common: This is the "key," but for a direct map, the key is the full, absolute path of the mount point.
  • -fstype=nfs,rw,sync: These are the mount options, same as before.
  • localhost:/srv/nfs/common_data: This is the remote NFS share location.

Save and exit the editor.

Step 5.4: Reload autofs and Test the Direct Mount

Just as you did for the indirect map, you must reload the autofs configuration to make it aware of the new direct map.

sudo killall -HUP automount

Now, let's test the direct mount. Unlike an indirect map, the mount point /mnt/common does not exist on the filesystem until you try to access it.

Attempt to access the directory /mnt/common. This will trigger autofs to create the mount point and mount the share.

ls -l /mnt/common
total 4
-rw-r--r--. 1 root root 19 Nov 10 17:00 info.txt

Success! The direct mount was created on-demand. You can also verify it with the mount command:

mount | grep common
localhost:/srv/nfs/common_data on /mnt/common type nfs4 (rw,relatime,vers=4.2,rsize=...,wsize=...,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=...,local_lock=none,addr=...)

You have now successfully configured both an indirect map for dynamic subdirectories and a direct map for a static, absolute mount point.

Verify Direct and Indirect Automounts as Different Users

In this final step, you will verify how the automounter works in a multi-user environment. Automounting makes a share available, but it's the underlying filesystem permissions on the NFS server that control who can actually read or write to the files. You will create a couple of test users, assign them ownership of the respective NFS shares, and then test their access to both the indirect and direct maps.

This exercise demonstrates a real-world scenario where different teams (e.g., design and testing) have ownership of their respective shared directories, with read access available to other users but write access restricted to the owner.

Step 6.1: Create Test Users and Set Permissions

First, you need to create two new users: designer1 and tester1. You will also set a simple password for them so you can switch to their accounts.

Use the useradd command to create the users. The -m flag creates a home directory for them.

sudo useradd -m designer1
sudo useradd -m tester1

Next, set a password for each user. For simplicity in this lab, we'll use the password labex.io for both accounts.

sudo passwd designer1
## New password: labex.io
## Retype new password: labex.io
## passwd: all authentication tokens updated successfully.

sudo passwd tester1
## New password: labex.io
## Retype new password: labex.io
## passwd: all authentication tokens updated successfully.

Now, change the ownership of the shared directories on the "server" side (/srv/nfs/*) to grant access to these new users.

sudo chown -R designer1:designer1 /srv/nfs/design
sudo chown -R tester1:tester1 /srv/nfs/testing

The /srv/nfs/common_data directory will remain owned by root, making it read-only for regular users.
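Because the automounter only makes a share visible, access is decided by the server-side ownership you just set, and it is worth confirming it. stat prints owner, group, and mode in one line; a small sketch (the show_perms helper is just for illustration):

```shell
#!/bin/sh
# Print owner, group, permission bits, and name for each path given.
show_perms() {
    stat -c '%U:%G %a %n' "$@"
}

# On the lab machine you would run:
#   show_perms /srv/nfs/design /srv/nfs/testing /srv/nfs/common_data
# Illustration with a throwaway directory instead:
d=$(mktemp -d)
show_perms "$d"
rmdir "$d"
```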

Step 6.2: Test Access as the designer1 User

Switch to the designer1 user account using the su (substitute user) command. The - ensures you get the user's full login environment.

su - designer1
## Password: labex.io

Your command prompt will change to [designer1@host ~]$.

First, test access to the design share via the indirect map. This should succeed.

ls -l /project_shares/design
total 4
-rw-r--r--. 1 designer1 designer1 17 Jun 16 16:12 README

Now, try to write a file to this directory. This should also succeed.

echo "My design file" > /project_shares/design/design_file.txt
ls -l /project_shares/design
total 8
-rw-r--r--. 1 designer1 designer1 15 Jun 16 16:18 design_file.txt
-rw-r--r--. 1 designer1 designer1 17 Jun 16 16:12 README

Next, attempt to access the testing share. You can see the contents, but cannot write to it since it's owned by tester1.

ls -l /project_shares/testing
total 4
-rw-r--r--. 1 tester1 tester1 16 Jun 16 16:12 README

Finally, test the direct-mapped share. designer1 should be able to read it but not write to it.

cat /mnt/common/info.txt
Common shared data
echo "test" > /mnt/common/new_file.txt
-bash: /mnt/common/new_file.txt: Permission denied

Exit the designer1 session to return to the labex user.

exit

Step 6.3: Test Access as the tester1 User

Now, perform similar tests as the tester1 user.

su - tester1
## Password: labex.io

Access the design share. You can see the contents including the file created by designer1, but cannot write to it.

ls -l /project_shares/design
total 8
-rw-r--r--. 1 designer1 designer1 15 Jun 16 16:18 design_file.txt
-rw-r--r--. 1 designer1 designer1 17 Jun 16 16:12 README

Now, access and write to the testing share. This should succeed since tester1 owns this directory.

ls -l /project_shares/testing
total 4
-rw-r--r--. 1 tester1 tester1 16 Jun 16 16:12 README
echo "My test script" > /project_shares/testing/test_script.sh
ls -l /project_shares/testing
total 8
-rw-r--r--. 1 tester1 tester1 16 Jun 16 16:12 README
-rw-r--r--. 1 tester1 tester1 15 Jun 16 16:19 test_script.sh

Exit the tester1 session.

exit

Step 6.4: Clean Up the Environment

To finish the lab and restore the system to its original state, remove the test users you created. The userdel -r command removes the user and their home directory.

sudo userdel -r designer1
sudo userdel -r tester1

This concludes the lab on managing NFS with autofs.

Summary

In this lab, you learned to configure NFS client access on a RHEL system. You began by performing a manual mount, first creating a local mount point and then using the mount command to connect to the NFS share. After establishing the manual connection, you configured a persistent mount by creating an entry in the /etc/fstab file, ensuring the share is automatically mounted at boot.

Furthermore, the lab covered configuring on-demand mounting with the autofs service. This involved installing and enabling the service, then defining how shares are mounted using two different methods: an indirect map for dynamically mounting directories and a direct map for mounting shares at static, pre-defined locations. You concluded by verifying that both direct and indirect automounts function correctly for different users.