If everyone is using it, there must be some good reason, right? Actually, there are several. Let’s see how we can take advantage of LVM.
Physical volumes are disks or partitions. UEFI firmware needs a small EFI System Partition (or ESP), and boot loaders are best kept away from the complexity of LVM, so your first disk will have a small ESP partition, a small partition for /boot, and a large third partition. That third partition plus all the other disks are your physical volumes.
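As a sketch, assuming the large third partition is /dev/sda3 and a second whole disk is /dev/sdb (your device names will differ), initializing them as physical volumes looks like this:

```shell
# Mark the big partition on the first disk and the whole second
# disk as LVM physical volumes. Device names are examples only.
pvcreate /dev/sda3 /dev/sdb

# List the physical volumes to verify.
pvs
```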
A volume group is a collection of physical volumes lumped together in a pool of storage. When you add another physical volume you expand this volume group.
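A minimal sketch of building and later growing the pool, with a hypothetical volume group name vg0 and example device names:

```shell
# Create a volume group named "vg0" from two physical volumes.
vgcreate vg0 /dev/sda3 /dev/sdb

# Later, expand the pool by adding another disk.
pvcreate /dev/sdc
vgextend vg0 /dev/sdc

# Show the volume group and its total size.
vgs
```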
A logical volume is then a large virtual storage device created out of that volume group (and thus based on the collection of physical volumes). Logical volumes can be used just as if they were disks or partitions; the only difference is that they have names under /dev/mapper/* instead of kernel-assigned device names.
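For example, assuming a volume group named vg0 (the sizes, names, and mount point here are illustrative):

```shell
# Carve a 200 GB logical volume named "projects" out of vg0.
lvcreate --size 200G --name projects vg0

# The device appears as /dev/mapper/vg0-projects
# (with /dev/vg0/projects as an equivalent path).
mkfs.xfs /dev/mapper/vg0-projects
mount /dev/mapper/vg0-projects /srv/projects
```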
If your head is spinning from the concepts and terminology, check out the Linux server administration course where we learn this a piece at a time and apply it to flexible storage.
The least ambitious answer is that using LVM is the path of least resistance: many distributions use it by default. Maybe you just put up with the longer device names when you run commands like mount and otherwise ignore LVM.
The reason the distributions do it is that it makes some things easier for the users, which makes for happier users, which is very attractive to companies like Red Hat who want customers to pay hundreds to thousands of dollars per year for support for a free operating system that really isn’t all that hard to support yourself.
It’s easy to expand a file system under LVM. Adding a disk and growing a file system onto it takes just four commands: pvcreate, vgextend, lvextend, and either resize2fs for Ext4 or xfs_growfs for XFS. That’s easy!
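Here is what that expansion might look like, assuming a volume group vg0, a logical volume named projects, and a new disk at /dev/sdd (all hypothetical names):

```shell
# Add the new disk to the pool.
pvcreate /dev/sdd
vgextend vg0 /dev/sdd

# Grow the logical volume by 100 GB...
lvextend --size +100G /dev/vg0/projects

# ...then grow the file system to fill it.
resize2fs /dev/vg0/projects      # for Ext4 (takes the device)
# xfs_growfs /srv/projects       # for XFS (takes the mount point)
```

All of this can happen while the file system is mounted and in use.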
You might like the ability to name the volumes. A volume named projectX makes a lot more sense in command output than a raw kernel device name.
Logical volumes make some enterprise tasks much more practical. Modern journaling file systems shouldn’t lose files, but it’s still a good idea to run fsck (on Ext4) or xfs_repair -n (on XFS; the older xfs_check command has been removed from xfsprogs) once in a while.
But your enterprise has no downtime: there’s never an acceptable time to say “We need to take the server down for a few hours for some file system maintenance.”
With enough unused storage available in the volume group, you can create a snapshot of a live file system. Do your checks on that snapshot. Even with fast disks, today’s enormous file systems can take quite a while to check. But that’s fine, the live file system is continuously in use as we do the checks on the snapshot. If the snapshot checks out OK, then make your backup of the snapshot.
If the snapshot is not OK, if it has any inconsistency, that means that your file system needs repair work as soon as possible. Now we really need to take the system down overnight or at some least-bad time to do the repair work. But with journaling file systems that should happen very infrequently indeed, and meanwhile our production system has been up 24/7 as we were doing the consistency checking and backups on snapshots supported through LVM.
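A sketch of that snapshot workflow, again with hypothetical names (vg0, projects) and an illustrative 10 GB of copy-on-write space:

```shell
# Snapshot the live volume; the 10G is space in the volume group
# reserved for copy-on-write changes while the snapshot exists.
lvcreate --snapshot --size 10G --name projects-snap /dev/vg0/projects

# Check the frozen copy while production stays mounted and busy.
xfs_repair -n /dev/vg0/projects-snap   # read-only check for XFS
# fsck -n /dev/vg0/projects-snap       # read-only check for Ext4

# Back up from the snapshot, then discard it.
# (nouuid is needed to mount an XFS snapshot alongside the original)
mount -o ro,nouuid /dev/vg0/projects-snap /mnt/snap
tar -C /mnt/snap -czf /backup/projects.tar.gz .
umount /mnt/snap
lvremove /dev/vg0/projects-snap
```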
The Linux Optimization & Troubleshooting course has an exercise in which you use LVM snapshots for file system check and repair; check out that course to get some hands-on experience with it.
Containers are another great application for LVM; next week I’ll explain what that’s about!