You Set Up LVM During Install and Never Touched It Again
Don’t worry, most people are in this camp. LVM (Logical Volume Manager) is one of those Linux features that gets configured during OS installation and then just… works, invisibly, while you go about your life. The abstraction layer does its job, your filesystem has space, everything is fine.
But LVM has capabilities that most people never discover because the tutorial they read five years ago covered lvcreate and lvextend and called it a day.
Snapshots and thin provisioning are the features worth learning. Snapshots let you freeze a point-in-time copy of a logical volume — useful for backups, testing upgrades, or just having an “undo” before you do something potentially regrettable. Thin provisioning lets you allocate more storage than you physically have, which sounds like a bug but is actually a legitimate feature with real use cases.
Quick LVM Recap
LVM has three layers:
- Physical Volumes (PV): actual disks or partitions (/dev/sda, /dev/nvme0n1p3, etc.)
- Volume Groups (VG): a pool of storage assembled from one or more PVs
- Logical Volumes (LV): virtual partitions carved out of a VG — these are what you format and mount
The workflow when you need more space: add a disk → create a PV → extend the VG → extend the LV → resize the filesystem. LVM handles the abstraction so you’re not dealing with disk geometry.
# Check your current setup
pvs # show physical volumes
vgs # show volume groups
lvs # show logical volumes
# More detailed view
pvdisplay
vgdisplay
lvdisplay
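The grow-the-pool workflow described above, end to end. A sketch assuming a new disk at /dev/sdb, a volume group named vg0, and an ext4 filesystem on /dev/vg0/mydata (all placeholders):

```shell
# 1. Initialize the new disk as a physical volume
pvcreate /dev/sdb

# 2. Add it to the volume group
pvcreate /dev/sdb && vgextend vg0 /dev/sdb

# 3. Grow the logical volume into the new free space
lvextend -l +100%FREE /dev/vg0/mydata

# 4. Grow the filesystem to match (ext4 supports online resize)
resize2fs /dev/vg0/mydata
```

lvextend's -r flag combines steps 3 and 4: `lvextend -r -l +100%FREE /dev/vg0/mydata` calls the right filesystem resize tool for you.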
LVM Snapshots: Instant Undo Buttons
A snapshot is a copy-on-write (COW) point-in-time image of a logical volume. At the moment you create the snapshot, no data is actually copied — LVM just records what the LV looks like at that instant. As you write new data to the original volume, LVM copies the old blocks to the snapshot before overwriting them. The snapshot preserves the old state; the original continues changing.
Creating a Snapshot
# Syntax: lvcreate -L <size> -s -n <snapshot-name> <source-lv-path>
lvcreate -L 5G -s -n mydata_snap /dev/vg0/mydata
The snapshot needs its own space — this is where people get tripped up. The size you specify (-L 5G here) is not the size of the snapshot image. It’s the amount of COW space allocated to track changes. If you write 5GB of new data to the original volume before you’re done with the snapshot, the snapshot fills up and becomes invalid.
Rule of thumb: size your snapshot at 10-20% of the original LV if you’re using it for a quick backup, or larger if you expect heavy writes during the backup window.
# Check snapshot usage and health
lvs -o +lv_snapshot_invalid,snap_percent
The snap_percent column tells you how full the COW space is. Watch this number. A snapshot at 100% is a dead snapshot.
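That check is easy to wire into cron. A minimal sketch, assuming a snapshot at vg0/mydata_snap and a 90% warning threshold (both placeholders); the comparison is split into a function so the logic is testable without root:

```shell
#!/bin/bash
# Warn when a snapshot's COW space crosses a threshold.
snap_over_threshold() {   # usage: snap_over_threshold <percent> <threshold>
    awk -v p="$1" -v t="$2" 'BEGIN { exit !(p + 0 >= t + 0) }'
}

# Real invocation (needs root):
# pct=$(lvs --noheadings -o snap_percent vg0/mydata_snap | tr -d ' ')
# snap_over_threshold "$pct" 90 &&
#     echo "WARNING: mydata_snap at ${pct}% of its COW space"
```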
Mounting a Snapshot for Backup
The snapshot is a readable block device. Mount it to access the frozen filesystem state:
# Mount the snapshot read-only
mkdir /mnt/snapshot
mount -o ro /dev/vg0/mydata_snap /mnt/snapshot
# XFS note: a snapshot carries the origin's filesystem UUID, so XFS
# refuses to mount it alongside the origin unless you add nouuid:
# mount -o ro,nouuid /dev/vg0/mydata_snap /mnt/snapshot
# Now rsync or tar from here — the data won't change while you copy
rsync -av /mnt/snapshot/ /backup/destination/
# When done, unmount
umount /mnt/snapshot
This is much better than trying to back up a live filesystem. The snapshot gives you a consistent point-in-time image. Your backup tool isn’t racing against active writes.
For databases, you still want to quiesce writes or use application-level snapshot hooks before creating the LVM snapshot — MySQL’s FLUSH TABLES WITH READ LOCK, PostgreSQL’s pg_start_backup() (renamed pg_backup_start() in PostgreSQL 15), etc. LVM gives you filesystem-level consistency, not application-level consistency.
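For MySQL, the tricky part is that FLUSH TABLES WITH READ LOCK only holds while the session that took it stays open. One classic workaround is to run lvcreate from inside that same session via the mysql client's SYSTEM command (Unix only). A sketch, with vg0/mysqldata as a hypothetical LV holding the datadir:

```shell
# Lock, snapshot, unlock, all in one client session so the
# read lock stays held while lvcreate runs
mysql -u root -e "FLUSH TABLES WITH READ LOCK; \
SYSTEM lvcreate -L 5G -s -n mysql_snap /dev/vg0/mysqldata; \
UNLOCK TABLES;"
```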
Merging a Snapshot Back
If you created a snapshot before an upgrade and the upgrade went badly, you can roll back by merging the snapshot into the original:
# The volume must be unmounted (or the system should be in single-user mode)
umount /mount/point
# Merge snapshot into origin
lvconvert --merge /dev/vg0/mydata_snap
After merge, the snapshot disappears and the origin LV is restored to the snapshot’s state. If the LV was the root filesystem, you’ll need to do this from a live USB or by setting it up to merge on next boot:
# Schedule merge on next boot (for root LV)
lvconvert --merge /dev/vg0/root_snap
# Reboot — merge happens automatically
Removing a Snapshot When Done
# Just remove it like any LV
lvremove /dev/vg0/mydata_snap
Don’t leave old snapshots sitting around indefinitely. They consume COW space and every write to the origin LV is slightly slower because LVM has to check if it needs to copy blocks to active snapshots.
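A quick way to audit what's lingering: traditional snapshot LVs are the ones whose lv_attr string starts with s (or S, which means the snapshot has gone invalid). A sketch with the filter split out so it is testable; vg0 is a placeholder:

```shell
#!/bin/bash
# List traditional snapshot LVs and their COW usage.
is_snapshot_attr() {   # true if an lv_attr string denotes a snapshot
    case "$1" in
        [sS]*) return 0 ;;
        *)     return 1 ;;
    esac
}

# Real invocation (needs root):
# lvs --noheadings -o lv_name,lv_attr,snap_percent vg0 |
# while read -r name attr pct; do
#     is_snapshot_attr "$attr" && printf '%s\t%s%%\n' "$name" "$pct"
# done
```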
Thin Provisioning: Lying About Your Storage (Productively)
Traditional LVM is “thick provisioned” — when you create a 50GB logical volume, you immediately consume 50GB from the volume group, whether or not you’ve written a single byte to it. This works fine but it means your total allocated LV sizes can never exceed your physical storage.
Thin provisioning flips this. You create a thin pool, then create thin volumes that are backed by the pool. A thin volume reports to the OS that it has X GB available, but only consumes space from the pool as data is actually written.
This lets you over-provision — create more logical space than you have physical space — which is useful for:
- VM and container storage where many volumes will never actually fill up
- Development environments where you allocate space generously but most of it stays empty
- Multi-tenant environments where you want to give each tenant a quota without pre-reserving it
Setting Up a Thin Pool
# Create a thin pool LV inside a volume group
# This example creates a 100GB thin pool on vg0
lvcreate -L 100G --thinpool mypool vg0
Creating Thin Volumes
# Create a thin LV backed by the pool
# Note: -V specifies the virtual size — more than the pool can hold is fine
lvcreate -V 50G --thin -n vm1_disk vg0/mypool
lvcreate -V 50G --thin -n vm2_disk vg0/mypool
lvcreate -V 50G --thin -n vm3_disk vg0/mypool
# Total virtual: 150GB, pool: 100GB. Over-provisioned, and fine as
# long as actual written data stays under 100GB
# Check thin pool usage
lvs -o +data_percent,metadata_percent vg0/mypool
Watch data_percent. When it approaches 100%, your thin volumes will run out of backing space even though they think they have room. This is the “lying about storage” part — eventually reality catches up.
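The same cron idea as with snapshot COW space works here, but check metadata_percent too: exhausting pool metadata can wedge the pool even while data space remains. A sketch assuming vg0/mypool and an 85% threshold (both placeholders):

```shell
#!/bin/bash
# Flag a thin pool whose data or metadata usage crosses a threshold.
pool_needs_attention() {   # usage: pool_needs_attention <data%> <meta%> <threshold>
    awk -v d="$1" -v m="$2" -v t="$3" \
        'BEGIN { exit !(d + 0 >= t + 0 || m + 0 >= t + 0) }'
}

# Real invocation (needs root):
# read -r data meta < <(lvs --noheadings -o data_percent,metadata_percent vg0/mypool)
# pool_needs_attention "$data" "$meta" 85 &&
#     echo "thin pool vg0/mypool is filling up"
```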
Thin Pool Monitoring and Autoextend
LVM can automatically extend the thin pool when it gets full, as long as the VG has free space:
# Edit /etc/lvm/lvm.conf
# Find the activation section and set:
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 20
This says: when the thin pool hits 80% full, automatically extend it by 20%. Combined with monitoring alerts at 70%, you’ll have breathing room.
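Two caveats worth checking: autoextend only fires if dmeventd monitoring is enabled (monitoring = 1 in lvm.conf, the default on most distros), and the values actually in effect can be read back with lvmconfig:

```shell
# Confirm the settings LVM is actually using
lvmconfig activation/thin_pool_autoextend_threshold
lvmconfig activation/thin_pool_autoextend_percent
lvmconfig activation/monitoring
```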
Thin Snapshots
Thin volumes support their own snapshot mechanism that’s more efficient than traditional LVM snapshots:
# Create a snapshot of a thin volume
lvcreate -s --name vm1_disk_snap vg0/vm1_disk
Thin snapshots don’t require pre-allocated COW space — they share the pool and only consume space for the actual changed blocks. You can create multiple snapshots without the “how much COW space do I need” calculation. This is the mechanism that most VM hypervisors and container storage drivers use under the hood.
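One gotcha: thin snapshots are created with the "activation skip" flag set (the k attribute in lvs output), so a plain activation ignores them. Mounting one for inspection looks roughly like this, reusing the vm1_disk_snap from above:

```shell
# -K overrides the activation-skip flag for a one-off activation
lvchange -ay -K vg0/vm1_disk_snap

mkdir -p /mnt/thin_snap
mount -o ro /dev/vg0/vm1_disk_snap /mnt/thin_snap
# ...inspect or copy files...
umount /mnt/thin_snap
lvchange -an vg0/vm1_disk_snap
```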
Practical Use Cases
VM Disk Snapshots Before Updates
Before updating a VM’s OS or applying a major config change:
# Snapshot the VM's disk LV (thick-snapshot syntax shown; for a thin
# LV, drop the -L size and let the pool back the snapshot)
lvcreate -L 10G -s -n vm1_before_upgrade /dev/vg0/vm1_disk
# Do the upgrade in the VM
# If it goes wrong:
lvconvert --merge /dev/vg0/vm1_before_upgrade
# If it goes right:
lvremove /dev/vg0/vm1_before_upgrade
Database Backup Pipeline
#!/bin/bash
# Backup script with LVM snapshot
DB_LV="/dev/vg0/pgdata"
SNAP_NAME="pgdata_backup_snap"
SNAP_SIZE="5G"
BACKUP_DEST="/backup/postgres"
# Enter backup mode. Exclusive backups work through PostgreSQL 14;
# PostgreSQL 15 removed them, so there pg_backup_start/pg_backup_stop
# must run inside a single session rather than two psql calls.
psql -U postgres -c "SELECT pg_start_backup('lvm_snap', true);"
# Create snapshot
lvcreate -L $SNAP_SIZE -s -n $SNAP_NAME $DB_LV
# Resume PostgreSQL writes immediately
psql -U postgres -c "SELECT pg_stop_backup();"
# Mount and back up from snapshot
mkdir -p /mnt/pg_snap
mount -o ro /dev/vg0/$SNAP_NAME /mnt/pg_snap
rsync -av /mnt/pg_snap/ $BACKUP_DEST/
# Cleanup
umount /mnt/pg_snap
lvremove -f /dev/vg0/$SNAP_NAME
The PostgreSQL freeze window is as short as it can possibly be — just the time to create the snapshot, which is nearly instant. The actual backup runs against the snapshot, not the live database.
Common LVM Commands Reference
# Physical volumes
pvcreate /dev/sdb # Initialize a disk as PV
pvs # List PVs
pvremove /dev/sdb # Wipe the PV label (vgreduce it out of its VG first)
# Volume groups
vgcreate myvg /dev/sdb # Create VG from PV
vgextend myvg /dev/sdc # Add PV to VG
vgs # List VGs
vgdisplay myvg # Detailed VG info
# Logical volumes
lvcreate -L 20G -n mydata myvg # Create 20G LV
lvcreate -l 100%FREE -n mydata myvg # Use all free space
lvextend -L +10G /dev/myvg/mydata # Extend by 10G
lvextend -l +100%FREE /dev/myvg/mydata # Extend using all free VG space
resize2fs /dev/myvg/mydata # Resize ext4 filesystem after extending
xfs_growfs /mount/point # Resize XFS filesystem after extending
# Snapshots
lvcreate -L 5G -s -n mysnap /dev/myvg/mydata # Create snapshot
lvconvert --merge /dev/myvg/mysnap # Merge snapshot back
lvremove /dev/myvg/mysnap # Remove snapshot
# Thin provisioning
lvcreate -L 100G --thinpool mypool myvg # Create thin pool
lvcreate -V 20G --thin -n vol1 myvg/mypool # Create thin volume
LVM rewards the people who take time to understand it. Snapshots turn “I hope this upgrade works” into a recoverable situation. Thin provisioning turns “how much disk do I give this VM” into a question you can answer generously and correct later. These aren’t exotic features — they’re the parts of LVM that make it worth using instead of just partitioning your disk directly.