Introduction
In this lab, you will learn how to configure and manage advanced storage solutions in a Linux environment. You will work with two powerful tools: the Logical Volume Manager (LVM) for flexible volume management, and mdadm for software-based Redundant Array of Independent Disks (RAID). This lab provides practical, hands-on experience in setting up a robust and scalable storage infrastructure from the command line, using loop devices to simulate physical disks.
You will start by initializing LVM physical volumes and creating a volume group. Next, you will create, format, mount, and resize a logical volume to understand its dynamic nature. You will then proceed to build and mount a RAID 1 (mirroring) array for data redundancy. To conclude the lab, you will ensure these storage configurations are persistent across system reboots by modifying the /etc/fstab and mdadm.conf files.
Initialize LVM with pvcreate and vgcreate
In this step, you will begin working with the Logical Volume Manager (LVM). LVM is a powerful tool for managing storage devices on Linux. It adds a layer of abstraction between your physical hard drives and the filesystems, allowing for more flexible configurations like resizing volumes on the fly.
The basic building blocks of LVM are:
- Physical Volumes (PVs): These are your block devices, such as hard drive partitions or, in our case, simulated disks.
- Volume Groups (VGs): These are pools of storage created by grouping one or more Physical Volumes together.
- Logical Volumes (LVs): These are the "virtual partitions" you create from the space available in a Volume Group. You will create filesystems on these LVs.
First, let's ensure the necessary tools, lvm2 and mdadm, are installed.
sudo apt-get update && sudo apt-get install -y lvm2 mdadm
Since we don't have spare physical hard drives in this environment, we will simulate them using loop devices. A loop device allows a file to be treated as a block device. Let's start by creating two 256MB files in your project directory that will act as our disk images.
truncate -s 256M disk1.img disk2.img
Now, verify that the files have been created with the correct size.
ls -lh
You should see an output similar to this:
total 0
-rw-r--r-- 1 labex labex 256M Jan 1 12:00 disk1.img
-rw-r--r-- 1 labex labex 256M Jan 1 12:00 disk2.img
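The `total 0` above is not a mistake: truncate creates sparse files, which report their full apparent size while occupying no disk blocks until data is actually written. A quick way to see this with a throwaway file (no root needed; the /tmp path is just for illustration):

```shell
# A sparse file's apparent size differs from the blocks actually allocated.
truncate -s 256M /tmp/sparse-demo.img
stat -c 'apparent size: %s bytes' /tmp/sparse-demo.img
stat -c 'blocks allocated: %b' /tmp/sparse-demo.img
rm /tmp/sparse-demo.img
```

On most filesystems the allocated block count starts at (or near) zero, which is why `ls -lh` shows 256M files but `total 0`.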
Next, associate these image files with loop devices. We will use /dev/loop20 and /dev/loop21.
sudo losetup /dev/loop20 disk1.img
sudo losetup /dev/loop21 disk2.img
Now that we have our "disks" (/dev/loop20 and /dev/loop21), we can initialize them as LVM Physical Volumes using the pvcreate command.
sudo pvcreate /dev/loop20 /dev/loop21
The output confirms that the PVs were successfully created:
Physical volume "/dev/loop20" successfully created.
Physical volume "/dev/loop21" successfully created.
You can display a summary of the Physical Volumes with pvs or get a more detailed view with pvdisplay.
sudo pvs
PV VG Fmt Attr PSize PFree
/dev/loop20 lvm2 --- 256.00m 256.00m
/dev/loop21 lvm2 --- 256.00m 256.00m
With our Physical Volumes ready, the next step is to create a Volume Group named labvg that combines the storage from both PVs. We'll use the vgcreate command for this.
sudo vgcreate labvg /dev/loop20 /dev/loop21
The successful output will be:
Volume group "labvg" successfully created
Finally, let's inspect our new Volume Group using vgs for a summary or vgdisplay for details.
sudo vgs
The output shows our labvg group, which has a total size of approximately 512MB (256MB from each PV).
VG #PV #LV #SN Attr VSize VFree
labvg 2 0 0 wz--n- 512.00m 512.00m
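In scripts, vgs output is easier to consume with `--noheadings` and an explicit column list (for example, `sudo vgs --noheadings -o vg_name,vg_size,vg_free labvg`). A root-free sketch of parsing such a line, using the sample values from the output above:

```shell
# Simulated `vgs --noheadings -o vg_name,vg_size,vg_free` output line.
vgs_line='  labvg 512.00m 512.00m'
set -- $vgs_line            # split on whitespace into positional parameters
echo "name=$1 size=$2 free=$3"
```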
You have now successfully initialized two devices as Physical Volumes and combined them into a single Volume Group, setting the stage for creating flexible Logical Volumes.
Create and Mount a Logical Volume with lvcreate and mkfs
In this step, you will use the Volume Group labvg you created previously to create a Logical Volume (LV). An LV is the LVM equivalent of a partition. Once created, you can format it with a filesystem and mount it to make it accessible for storing data.
First, let's create a 200MB Logical Volume named lablvm from the labvg storage pool. We use the lvcreate command, specifying the size with the -L flag and the name with the -n flag.
sudo lvcreate -L 200M -n lablvm labvg
You will see a confirmation message:
Logical volume "lablvm" created.
You can now view your new LV with the lvs command, which provides a summary of all Logical Volumes.
sudo lvs
The output will show your new lablvm within the labvg group.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lablvm labvg -wi-a----- 200.00m
The new LV, which is accessible as a device at /dev/labvg/lablvm, is currently a raw, unformatted block device. To store files on it, you must first create a filesystem. We will use the mkfs.ext4 command to format it with the common ext4 filesystem.
sudo mkfs.ext4 /dev/labvg/lablvm
The command will output details about the filesystem it has created:
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done
Creating filesystem with 51200 4k blocks and 51200 inodes
Filesystem UUID: 28796151-bd37-4cae-a17f-071db8795919
Superblock backups stored on blocks:
32768
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
Next, you need a directory to serve as a "mount point." This is an empty directory where the LV's filesystem will be attached to the main directory tree. Let's create a directory named /lablvm in the root directory.
sudo mkdir /lablvm
Finally, use the mount command to attach the filesystem on your LV (/dev/labvg/lablvm) to the mount point (/lablvm).
sudo mount /dev/labvg/lablvm /lablvm
To confirm that the volume is successfully mounted and to check its available space, use the df -h (disk free, human-readable) command.
df -h /lablvm
The output shows that the device is mounted and has approximately 200MB of space available.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/labvg-lablvm 194M 2.6M 179M 2% /lablvm
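Notice that df reports noticeably less than the LV's full 200MB. Part of the gap is filesystem metadata (the journal and inode tables), and ext4 also reserves 5% of blocks for the superuser by default (adjustable with `tune2fs -m`). A rough estimate of that reserved share, assuming a formatted size of about 190MB:

```shell
# ext4 reserves 5% of blocks for root by default.
fs_mb=190    # assumed approximate formatted size of the 200MB LV
echo "reserved for root: about $((fs_mb * 5 / 100)) MB"
```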
You have now successfully created, formatted, and mounted a Logical Volume, making it ready for use.
Resize an LVM Logical Volume with lvresize
In this step, you will explore one of the most powerful features of LVM: the ability to resize a logical volume and its filesystem while it is online and in use. This flexibility is a major advantage over traditional static partitions.
First, let's re-confirm the current size of your mounted logical volume using the df -h command.
df -h /lablvm
You will see the volume is approximately 200MB in size.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/labvg-lablvm 194M 2.6M 179M 2% /lablvm
Now, imagine an application storing data on /lablvm is running out of space, and you need to increase the volume's capacity from 200MB to 300MB. You can do this with the lvresize command. The -r flag is important: it tells lvresize to also resize the filesystem contained within the logical volume. Without it, the filesystem would remain at its original size and the new space would be unusable.
sudo lvresize -r -L 300M /dev/labvg/lablvm
The output shows that both the logical volume and the filesystem are being resized.
Size of logical volume labvg/lablvm changed from 200.00 MiB (50 extents) to 300.00 MiB (75 extents).
Logical volume labvg/lablvm successfully resized.
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/mapper/labvg-lablvm is mounted on /lablvm; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/labvg-lablvm is now 76800 (4k) blocks long.
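You can cross-check the figure resize2fs reports: 300 MiB expressed in 4 KiB filesystem blocks is exactly 76800.

```shell
# 300 MiB * 1024 KiB/MiB, divided by 4 KiB per block.
echo $((300 * 1024 / 4))
```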
Check the disk space again with df -h to verify the change.
df -h /lablvm
The volume is now approximately 300MB.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/labvg-lablvm 293M 2.6M 275M 1% /lablvm
Sometimes you don't want to set a new absolute size, but rather add a fixed amount of space. lvresize supports this with a leading + sign (for example, -L +100M adds 100MB). Let's grow the volume by another 100MB, to 400MB; for this demonstration, we'll specify the absolute size to ensure a predictable result.
sudo lvresize -r -L 400M /dev/labvg/lablvm
If you encounter a filesystem error during resize operations, don't worry; this can happen occasionally with rapid consecutive resizes. In such cases, you can recover by unmounting the filesystem, running a filesystem check, and then remounting:
# If you get a filesystem error, run these recovery commands:
# sudo umount /lablvm
# sudo e2fsck -f /dev/labvg/lablvm
# sudo mount /dev/labvg/lablvm /lablvm
Finally, run df -h one last time to see the final result.
df -h /lablvm
The volume is now approximately 400MB, and the free space in your labvg Volume Group has been reduced accordingly.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/labvg-lablvm 392M 2.7M 369M 1% /lablvm
You have now successfully resized a live LVM volume twice, demonstrating how easily you can manage storage allocation without downtime.
Build and Mount a RAID 1 Array with mdadm
In this step, you will shift your focus from LVM to another powerful storage technology: RAID (Redundant Array of Independent Disks). You will use the mdadm utility to create a RAID 1 array, also known as a mirror. In a RAID 1 configuration, data is written identically to two disks, providing redundancy. If one disk fails, the data is still safe on the other.
First, we need two more simulated disks for our RAID array. Let's create two new 256MB disk image files, disk3.img and disk4.img, in your ~/project directory.
truncate -s 256M disk3.img disk4.img
Next, associate these new image files with unused loop devices, /dev/loop22 and /dev/loop23.
sudo losetup /dev/loop22 disk3.img
sudo losetup /dev/loop23 disk4.img
Now you are ready to build the RAID 1 array. We will use the mdadm command to create a new RAID device named /dev/md0 using our two loop devices.
The key options are:
- --create /dev/md0: Creates a new RAID device named /dev/md0.
- --level=1: Specifies the RAID level; in this case, RAID 1 (mirroring).
- --raid-disks=2: Specifies that the array will consist of two disks.
- /dev/loop22 /dev/loop23: The component devices for the array.
Execute the following command:
sudo mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/loop22 /dev/loop23
The system will ask for confirmation before proceeding. Type y and press Enter to continue.
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
You can check the status of your new RAID array by viewing the /proc/mdstat file.
cat /proc/mdstat
The output shows that /dev/md0 is active and using /dev/loop23 and /dev/loop22. You might also see the array being synchronized (resync), which is normal. The array is usable even while this process completes.
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 loop23[1] loop22[0]
261120 blocks super 1.2 [2/2] [UU]
[>....................] resync = 0.4% (1088/261120) finish=0.1min speed=21760K/sec
unused devices: <none>
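The `[UU]` marker means both mirror halves are up; `[_U]` or `[U_]` would indicate a degraded array with one failed member. In scripts you can watch for it (in real use, something like `grep '\[UU\]' /proc/mdstat`). A root-free sketch using a simulated status line:

```shell
# Simulated /proc/mdstat status line for a healthy two-disk mirror.
mdstat_line='261120 blocks super 1.2 [2/2] [UU]'
case "$mdstat_line" in
  *'[UU]'*) echo "mirror healthy" ;;
  *)        echo "mirror degraded" ;;
esac
```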
Just like the LVM volume, the new RAID device /dev/md0 needs a filesystem. Let's format it with ext4.
sudo mkfs.ext4 /dev/md0
Next, create a mount point for the RAID array.
sudo mkdir /labraid
Finally, mount the RAID device onto the new directory.
sudo mount /dev/md0 /labraid
Verify that the RAID array is mounted correctly using df -h.
df -h /labraid
The output confirms that the /dev/md0 device, with a total size of about 256MB (since it's a mirror), is mounted and ready for use.
Filesystem Size Used Avail Use% Mounted on
/dev/md0 249M 2.6M 234M 2% /labraid
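The ~256MB figure follows directly from how RAID 1 works: usable capacity equals one member disk (minus superblock metadata and filesystem overhead), whereas striping the same two disks as RAID 0 would yield roughly double the space but no redundancy. The arithmetic, using this lab's disk size:

```shell
# Usable capacity for two equal disks (metadata overhead ignored).
disk_mb=256
echo "RAID 1 (mirror): ~${disk_mb} MB usable"
echo "RAID 0 (stripe): ~$((disk_mb * 2)) MB usable, but no redundancy"
```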
You have successfully created and mounted a RAID 1 array, providing data redundancy for the /labraid mount point.
Persist Mounts and RAID Configuration with /etc/fstab and mdadm.conf
In this final step, you will make your LVM and RAID configurations permanent. Currently, if you were to reboot the system, the RAID array would not be automatically reassembled, and neither the LVM volume nor the RAID array would be mounted. To fix this, you need to update two key configuration files: /etc/mdadm/mdadm.conf for the RAID array and /etc/fstab for the mount points.
First, let's address the RAID array. The system needs to know how to reassemble /dev/md0 at boot time. The mdadm utility can generate the necessary configuration line for you.
Run the following command to scan the active array and print its configuration:
sudo mdadm --detail --scan
The output is a single line that describes your array.
ARRAY /dev/md0 metadata=1.2 name=<hostname>:0 UUID=<some-uuid>
Now, let's append this configuration to the mdadm configuration file, which is located at /etc/mdadm/mdadm.conf. We will pipe the output of the scan command directly into the file using tee.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
You can verify that the line was added by viewing the contents of the file:
cat /etc/mdadm/mdadm.conf
Next, you need to tell the system to automatically mount your filesystems at boot. This is done by adding entries to /etc/fstab (the file system table). Each line in this file defines a mount point.
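Each fstab entry consists of six whitespace-separated fields: the device, the mount point, the filesystem type, the mount options, the dump flag, and the fsck pass number. A quick way to see the fields of the kind of entry we are about to add:

```shell
# Split an fstab line into its six fields.
entry='/dev/labvg/lablvm /lablvm ext4 defaults 0 0'
echo "$entry" | awk '{printf "device=%s mountpoint=%s type=%s options=%s dump=%s pass=%s\n", $1, $2, $3, $4, $5, $6}'
```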
Let's add the entry for your LVM volume first. We will use echo to create the line and tee -a to append it to /etc/fstab with sudo.
echo '/dev/labvg/lablvm /lablvm ext4 defaults 0 0' | sudo tee -a /etc/fstab
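We pipe into `sudo tee -a` rather than writing `sudo echo ... >> /etc/fstab` because the `>>` redirection would be performed by your unprivileged shell, not by sudo, and would fail with a permission error. A root-free demo of the pattern, using a throwaway file:

```shell
# tee -a appends its stdin to the file and also echoes it to stdout.
echo 'demo entry' | tee -a /tmp/fstab-demo
tail -n 1 /tmp/fstab-demo
rm /tmp/fstab-demo
```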
Now, do the same for the RAID array mount point.
echo '/dev/md0 /labraid ext4 defaults 0 0' | sudo tee -a /etc/fstab
You can check that both lines were added correctly by viewing the last two lines of the /etc/fstab file.
tail -n 2 /etc/fstab
You should see the two lines you just added:
/dev/labvg/lablvm /lablvm ext4 defaults 0 0
/dev/md0 /labraid ext4 defaults 0 0
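Device names like /dev/md0 can occasionally change between boots on some systems; a more robust variant references the filesystem by its UUID, which you can obtain with `sudo blkid /dev/md0`. A sketch of what such an entry would look like (the UUID shown is a placeholder, not a real value):

```
UUID=<filesystem-uuid> /labraid ext4 defaults 0 0
```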
To test that your /etc/fstab entries are correct without rebooting, you can unmount the filesystems and then use the mount -a command, which mounts all filesystems listed in /etc/fstab.
First, unmount both volumes:
sudo umount /lablvm
sudo umount /labraid
Now, run mount -a to have the system read /etc/fstab and mount everything.
sudo mount -a
Finally, verify that they are mounted again using df -h.
df -h /lablvm /labraid
The output should show both filesystems mounted, just as they were before.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/labvg-lablvm 392M 2.7M 369M 1% /lablvm
/dev/md0 249M 2.6M 234M 2% /labraid
Congratulations! You have successfully configured your system to automatically assemble your RAID array and mount both your LVM and RAID filesystems on boot.
Summary
In this lab, you learned the fundamentals of advanced storage management in Linux using LVM and software RAID. You began by simulating physical disks with truncate and losetup, then proceeded to initialize them as LVM Physical Volumes (PVs) with pvcreate. These PVs were then aggregated into a Volume Group (VG) using vgcreate. From this storage pool, you created a flexible Logical Volume (LV) with lvcreate, formatted it with an ext4 filesystem using mkfs, and mounted it to the system. A key LVM feature was demonstrated by dynamically resizing the LV with lvresize and expanding the filesystem to utilize the new space with resize2fs.
Furthermore, you configured a software RAID 1 (mirror) array using the mdadm utility to provide data redundancy. After building the array from two simulated disks, you formatted and mounted it similarly to the LVM volume. The lab concluded by ensuring the persistence of these configurations across reboots. This was accomplished by adding the mount points for both the LVM volume and the RAID array to /etc/fstab, and by saving the RAID array's configuration details to /etc/mdadm/mdadm.conf.