SumGuy's Ramblings

ZFS vs Btrfs: Choosing a Filesystem That Won't Eat Your Data

Your Data Is Not Immortal (But It Can Come Close)

Hard drives lie. SSDs lie quieter, but they still lie. One day your NAS is happily humming along storing your Plex library, your homelab configs, and three years of irreplaceable family photos — and the next day a bit flips somewhere and you get a corrupted file with no warning, no fanfare, just silent data rot doing its thing.

This is called bitrot, and it is the nightmare scenario that keeps homelab nerds up at night. The good news: modern filesystems can detect and fix it automatically. The bad news: you have to actually use one of them.

Enter ZFS and Btrfs — the two heavyweights of data-integrity-focused filesystems that home labbers, NAS builders, and self-hosters argue about on Reddit with the same passion other people reserve for vim vs emacs.

Both use copy-on-write (CoW) architecture. Both support snapshots. Both do checksums. And both will make you feel like you finally have a grown-up storage setup, even if you’re running it on recycled hardware you bought off a Facebook Marketplace listing that said “works great, selling because upgrading.”

Let’s break down what each does well, where each falls on its face, and — most importantly — which one you should actually use.


What Even Is Copy-on-Write?

Before diving in, a quick explainer for the “I thought RAID was a backup” crowd (no shame — we all start somewhere).

Traditional filesystems overwrite data in-place. When you save a file, it writes over the old blocks. If the power cuts out halfway through, you get corruption. Fun!

Copy-on-write filesystems never overwrite existing data. Instead, they write new data to fresh blocks and update the metadata to point at the new location. The old blocks stick around until they’re explicitly freed. This means:

- A crash mid-write leaves the old version intact: you get either the complete old file or the complete new one, never a half-written mess.
- Snapshots come nearly free: keep the old metadata pointer around and you have a point-in-time copy of the data.

It’s the difference between editing a document in-place versus always writing a new draft. Wasteful? Slightly. Safe? Very.


ZFS: The Paranoid Enterprise Kid

ZFS was born at Sun Microsystems in 2005. It was designed to be the “last word in filesystems” — and in many ways, it still is. It combines the filesystem and volume manager into one, which either sounds elegant or horrifying depending on your background.

What ZFS Does Well

Checksums on everything. ZFS checksums every block of data and every piece of metadata. On read, it verifies the checksum. If something doesn’t match, it knows the data is corrupt. If you have redundancy (RAIDZ or mirrors), it will silently fix it. This is called self-healing, and it is exactly as cool as it sounds.

RAIDZ — RAID done properly. ZFS has its own RAID implementation built in. RAIDZ1 is like RAID5, RAIDZ2 like RAID6 — but without the RAID5 write hole that makes traditional RAID a silent corruption factory. RAIDZ2 keeps your data safe through two simultaneous disk failures, no hardware RAID card required.

The ARC cache. ZFS uses RAM aggressively for its Adaptive Replacement Cache. It’s intelligent caching that learns your access patterns. ZFS will eat your RAM like it’s an all-you-can-eat buffet — and then politely ask for more. On a dedicated NAS this is a feature. On a low-RAM box it is a problem.
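If the ARC’s appetite is a problem, you can cap it. A sketch, assuming OpenZFS on Linux and a 4 GiB cap (pick a size that fits your box):

```shell
# Cap the ZFS ARC at 4 GiB. The module parameter takes a value in bytes.
# Runtime change (does not survive a reboot):
echo $((4 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Persistent change: add this line to /etc/modprobe.d/zfs.conf
#   options zfs zfs_arc_max=4294967296
```

The ARC shrinks under memory pressure anyway, but an explicit cap keeps it from crowding out VMs or containers on a shared host.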

Snapshot send/receive. You can send incremental ZFS snapshots over SSH to another machine. This is the backbone of serious homelab backup strategies.

Proven track record. TrueNAS (formerly FreeNAS), Proxmox Backup Server, and every serious NAS appliance that cares about data integrity use ZFS. It has decades of production use.

Setting Up a ZFS Pool

# Create a simple mirror (2-disk)
zpool create mypool mirror /dev/sdb /dev/sdc

# Create a RAIDZ1 pool (3 disks, 1 parity)
zpool create mypool raidz1 /dev/sdb /dev/sdc /dev/sdd

# Check pool status
zpool status mypool

# Run a scrub (verifies all data against checksums)
zpool scrub mypool

# Check scrub results
zpool status mypool

ZFS Snapshots: The Time Machine You Actually Want

# Take a snapshot
zfs snapshot mypool/data@2026-04-02

# List snapshots
zfs list -t snapshot

# Roll back to a snapshot
zfs rollback mypool/data@2026-04-02

# Send snapshot to another machine (incremental)
zfs send -i mypool/data@yesterday mypool/data@today | ssh backup-server zfs receive backuppool/data

ZFS Weaknesses

RAM requirements. The old “1GB RAM per 1TB storage” rule is outdated but the spirit is right — ZFS wants RAM, and it will use all you give it. For a Proxmox host doubling as a NAS, this can get painful fast.

Licensing weirdness on Linux. ZFS uses the CDDL license, which is incompatible with the GPL. This means ZFS can’t be merged into the Linux kernel. You install it via the OpenZFS project (zfs-dkms on Debian; Ubuntu ships the module prebuilt), and it works great — but it’s a separate kernel module, and the DKMS version needs rebuilding on every kernel update. Not a dealbreaker, but it’s a thing.
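In practice the install is a one-liner on the big distros; a sketch for Debian and Ubuntu:

```shell
# Debian: DKMS module built against your kernel (the contrib repo must be enabled)
sudo apt install linux-headers-amd64 zfs-dkms zfsutils-linux

# Ubuntu: the module ships prebuilt with the kernel, so the tools are all you need
sudo apt install zfsutils-linux
```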

Learning curve. ZFS has a vocabulary all its own. Pools, datasets, zvols, properties, recordsize, ashift — there’s real depth here. You can get started in twenty minutes, but mastering it takes months.
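You don’t need to master all of it on day one. A sketch of a couple of the knobs you’ll meet early (the pool and dataset names here are made up):

```shell
# ashift is the log2 of the sector size; ashift=12 means 4096-byte sectors,
# the safe choice for nearly all modern drives:
echo $((1 << 12))    # 4096

# Set it at pool creation -- it cannot be changed afterwards:
# zpool create -o ashift=12 mypool mirror /dev/sdb /dev/sdc

# recordsize tunes the maximum block size per dataset; 1M suits large media files:
# zfs set recordsize=1M mypool/media

# Compression is cheap on modern CPUs and almost always worth enabling:
# zfs set compression=lz4 mypool
```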


Btrfs: Linux’s Native CoW Filesystem

Btrfs (pronounced “butter-eff-ess” or “better-eff-ess” depending on who you ask and how much they want an argument) has been in the Linux kernel since 2009. It was designed as a modern replacement for ext4 with CoW features built in from the start.

SUSE and Fedora ship Btrfs as the default filesystem. It’s mainstream.

What Btrfs Does Well

Built into the kernel. No out-of-tree modules, no DKMS rebuilds, no licensing gymnastics. It’s just there. On any modern Linux system, you already have it.

Subvolumes. Btrfs subvolumes are like lightweight namespaces inside the filesystem. You can snapshot individual subvolumes, set quotas, and mount them independently. It’s how Snapper and Timeshift work their magic on desktop Linux.
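A sketch of what that looks like in practice (the paths and device name are assumptions):

```shell
# Mount a single subvolume somewhere else, independently of its parent
sudo mount -o subvol=documents /dev/sdb1 /mnt/docs

# Enable quota tracking on the filesystem, then cap a subvolume at 50 GiB
sudo btrfs quota enable /data
sudo btrfs qgroup limit 50G /data/documents
```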

Snapshots are stupid easy.

# Create a subvolume
btrfs subvolume create /data/documents

# Take a snapshot (read-write)
btrfs subvolume snapshot /data/documents /data/documents-snap-2026-04-02

# Take a read-only snapshot (better for backups)
btrfs subvolume snapshot -r /data/documents /data/documents-ro-snap

# List subvolumes and snapshots
btrfs subvolume list /data

# Delete a snapshot
btrfs subvolume delete /data/documents-snap-2026-04-02

Scrubbing works. Like ZFS, Btrfs can scrub your data and verify checksums.

# Scrub a Btrfs filesystem
btrfs scrub start /data

# Check scrub status
btrfs scrub status /data

Sending snapshots for backups.

# Send a read-only snapshot to another location
btrfs send /data/documents-ro-snap | btrfs receive /backup/

# Incremental send (much faster after first run)
btrfs send -p /data/documents-ro-snap-old /data/documents-ro-snap | btrfs receive /backup/

Lower RAM overhead. Btrfs doesn’t have an ARC cache with RAM hunger. It uses the normal Linux page cache. On a machine with 8GB RAM doing many things at once, this matters.

Btrfs Weaknesses

RAID5 and RAID6 are still broken. This has been “almost fixed” for roughly a decade. The Btrfs maintainers themselves mark RAID5/6 as not production-ready due to a write hole. If you were thinking of using Btrfs RAID5 for your NAS, please don’t. Use RAID1 (mirrors) or stick to single/no-redundancy and handle redundancy at another layer.
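The good news: the Btrfs RAID1 path is solid and dead simple. A sketch with two assumed drives:

```shell
# Mirror both data and metadata across two drives
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mount either member; Btrfs finds the other one itself
sudo mount /dev/sdb /data

# Verify which profiles are in use
sudo btrfs filesystem df /data
```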

Less battle-tested at scale. Btrfs works great on desktops and small setups. At NAS scale with many terabytes and heavy write loads, there are edge cases and horror stories. ZFS has simply been running in more production environments for longer.

Recovery tools are weaker. If a ZFS pool has problems, there are established recovery paths. Btrfs recovery is… more of an adventure.


Head-to-Head Comparison

Feature | ZFS | Btrfs
Data checksums | Yes, all blocks | Yes
Self-healing | Yes (with redundancy) | Limited
Snapshots | Yes, instant | Yes, instant
Built-in RAID | RAIDZ1/2/3, mirrors | RAID0/1/10 (avoid RAID5/6)
RAM usage | High (ARC cache) | Normal (page cache)
Linux kernel integration | External (OpenZFS/DKMS) | Native
Mature/battle-tested | Very (20+ years) | Moderate (15+ years, desktop-proven)
Send/receive backups | Excellent | Good
Desktop use | Overkill for most | Great (Snapper, Timeshift)
NAS/server use | Excellent | Acceptable (no RAID5/6)
Licensing on Linux | CDDL (no kernel merge) | GPL (native)

What Should You Actually Use?

You’re building a NAS or homelab storage server

Use ZFS. Full stop. TrueNAS Scale runs it. Proxmox Backup Server runs it. Every serious NAS OS runs it. The RAM cost is worth it for RAIDZ2 and self-healing. Your data is the one thing you don’t want to YOLO.

Minimum: 8GB RAM, preferably 16GB+. Use ECC RAM if your budget allows — ZFS will detect corruption it can’t fix without redundancy, but ECC prevents corruption at the RAM level.

You’re running Proxmox

Use ZFS for VM storage if you have the RAM. Proxmox has native ZFS integration. You can create zvols for VM disks, take snapshots, replicate to offsite backup servers with zfs send. It’s genuinely great.
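Proxmox drives all of this through its storage layer, but under the hood it boils down to something like the following (the pool and volume names are made up, and in real life Proxmox names the zvols for you):

```shell
# A sparse 32G zvol: a block device backed by ZFS
zfs create -s -V 32G mypool/vm-100-disk-0

# It appears under /dev/zvol/ and snapshots like any other dataset
ls /dev/zvol/mypool/
zfs snapshot mypool/vm-100-disk-0@pre-upgrade
```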

If RAM is tight (under 16GB and you’re running many VMs), use ext4 or xfs on LVM and sleep fine. Proxmox Backup Server handles dedup and versioning at the backup layer anyway.

You’re running a desktop or laptop on Linux

Btrfs is excellent here. Fedora and openSUSE default to it for good reason. Install snapper, set up automatic snapshots before every dnf or zypper update, and you have a rollback superpower. When a kernel update borks your system — and it will — you boot the snapshot from GRUB and carry on with your day.

# Create a snapper config for the root filesystem (install snapper first via dnf/zypper)
sudo snapper -c root create-config /

# Create a manual snapshot
sudo snapper -c root create --description "before chaos"

# List snapshots
sudo snapper -c root list

# Revert file changes back to the state of snapshot 3
sudo snapper -c root undochange 3..0

You’re on a single-disk server or VPS

Use ext4 or xfs. Without redundancy, ZFS and Btrfs can detect corruption but not repair it, so most of the integrity payoff evaporates. Save the complexity for when you have multiple disks.

You have 8GB or less RAM for a NAS

This is the one case where Btrfs with RAID1 might make more sense than ZFS. Set up Btrfs with two mirrored drives, enable scrubbing on a schedule, and accept that you’re trading some enterprise-grade guarantees for a leaner memory footprint.
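Scheduling that scrub can be as simple as a cron entry; a sketch assuming the filesystem is mounted at /data:

```shell
# -B runs the scrub in the foreground, so cron captures any errors in the job output.
# Example crontab entry for a weekly scrub, Sundays at 3am:
#   0 3 * * 0 /usr/bin/btrfs scrub start -B /data
sudo btrfs scrub start -B /data
```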


The Backup Rule Still Applies

Neither ZFS nor Btrfs is a backup. Snapshots are not backups. RAID is not a backup. If all your copies live in the same physical location and the same filesystem, you have a single point of failure with extra steps.

The 3-2-1 rule: 3 copies of data, 2 different media types, 1 offsite. ZFS send/receive and Btrfs send/receive are both excellent tools for the “1 offsite” part of that equation. Use them.


Pick One and Stop Overthinking It

If you’re standing up a real NAS or homelab storage system with multiple drives: use ZFS. It’s been protecting data in production environments for two decades. It’s what TrueNAS uses. It’s what the paranoid data hoarders use. And for good reason.

If you’re on a Linux desktop or single-drive workstation and want modern filesystem features without the RAM tax: use Btrfs. It’s fast, native, and Snapper integration on Fedora/openSUSE is genuinely delightful.

Both are excellent answers to “how do I stop my filesystem from silently eating my data.” The wrong answer is “eh, ext4 is fine, I’ll just be careful.” Careful doesn’t survive a power outage.

Your future self, staring at a corrupted drive at 2am, will thank you for reading this far.

