Your Disk Is the Bottleneck and You Already Own the Fix
Here’s a situation you’ve been in: you’re running builds, tests, or some pipeline that writes a ton of temporary files, and everything grinds to a halt on disk I/O. You’ve got 32GB of RAM and you’re using 8GB. The other 24GB is just sitting there, theoretically cooling itself.
That’s where memory-backed filesystems come in. Tmpfs and ramfs both let you mount a filesystem that lives entirely in RAM. No spinning platters. No NAND wear. No controller queue depth nonsense. Just your CPU and memory talking directly to each other at full bus speed.
The difference between them matters, and getting it wrong can bite you hard. Let’s sort it out.
What Tmpfs Is
Tmpfs is a virtual filesystem backed by memory (RAM and swap). It’s been in the Linux kernel since 2.4 and is what /tmp runs on by default on most modern distros.
Key properties:
- Size-limited: you set a max size, and it can’t grow past that
- Swap-backed: if you’re under memory pressure, tmpfs pages can be swapped to disk
- Dynamic size: only uses as much RAM as data actually stored, up to the limit
- Reclaimable under memory pressure: the kernel can evict idle tmpfs pages to swap when it needs the RAM elsewhere
# Check if /tmp is already tmpfs
mount | grep tmpfs
df -h /tmp
# Mount a tmpfs manually
sudo mount -t tmpfs -o size=2G tmpfs /mnt/fast-tmp
# Verify
df -h /mnt/fast-tmp
What Ramfs Is
Ramfs is older, simpler, and more dangerous. It’s a pure RAM filesystem with no size limits and no swap backing.
Key properties:
- No size limit: it will grow until it eats all your RAM and your system dies
- Never swapped: always in RAM, no exceptions
- Slightly faster than tmpfs in theory (no swap accounting overhead)
- Root access only: you’d be a maniac to let unprivileged users near it
# Mount ramfs (be careful — no size limit)
sudo mount -t ramfs -o size=512m ramfs /mnt/ramfs
# Note: the size= option is parsed but NOT enforced on ramfs
# Don't trust it as a safety limit
Honestly, for most use cases, ramfs is the wrong choice. The lack of a real size cap means a runaway process can OOM your entire system. Tmpfs does everything ramfs does and adds actual safety guardrails.
The Real Difference: Size Limits and Swappability
| Feature | tmpfs | ramfs |
|---|---|---|
| Size limit enforced | Yes | No (option parsed, not enforced) |
| Can use swap | Yes | No |
| Dynamic allocation | Yes | Yes |
| Performance | Very fast | Marginally faster |
| Safe for unprivileged use | Yes (with limits) | No |
| Default /tmp on modern Linux | Yes | No |
The swap behavior of tmpfs is worth understanding: if your system is under memory pressure and Linux needs RAM, it can swap out tmpfs pages to disk. This means your “memory filesystem” might briefly touch disk under extreme load. If you absolutely cannot tolerate disk I/O (cryptographic operations, timing-sensitive code), use mlockall() or ramfs in a controlled environment.
For 99% of use cases — builds, temp files, dev caches — tmpfs is what you want.
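Before mounting anything, check what you already have, since many distros ship /tmp on tmpfs out of the box. A minimal sketch, assuming GNU coreutils stat:

```shell
# Print the filesystem type backing a path (GNU stat's -f queries the fs, not the file)
fs_type() {
  stat -f -c %T "$1"
}

fs_type /tmp       # prints "tmpfs" on distros that mount /tmp in RAM
fs_type /dev/shm   # almost always "tmpfs" on Linux
```

If `fs_type /tmp` already says tmpfs, half the advice in this article is already working for you.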
Mounting Syntax
# Tmpfs with a 1GB limit
sudo mount -t tmpfs -o size=1G,mode=1777 tmpfs /mnt/scratch
# With specific uid/gid for a service
sudo mount -t tmpfs -o size=512m,uid=1000,gid=1000 tmpfs /home/user/cache
# Tmpfs as shared memory (already exists as /dev/shm)
ls -la /dev/shm
# /dev/shm is tmpfs, used for POSIX shared memory (shm_open, mmap)
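The mapping between POSIX shared memory and /dev/shm is direct: on Linux, `shm_open("/name", ...)` creates the file /dev/shm/name. That means you can inspect (or create) shm objects straight from the shell. The name `demo` below is an arbitrary example:

```shell
# shm_open("/demo", ...) in C would create exactly this file:
echo "hello" > /dev/shm/demo
ls -l /dev/shm/demo    # a regular file that lives in RAM
cat /dev/shm/demo
rm /dev/shm/demo       # equivalent to shm_unlink("/demo")
```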
Making It Permanent: /etc/fstab
# /etc/fstab entries
tmpfs /tmp tmpfs defaults,size=2G,mode=1777 0 0
tmpfs /var/cache tmpfs defaults,size=4G,uid=0,gid=0 0 0
tmpfs /mnt/build tmpfs defaults,size=8G,mode=0755 0 0
# Mount all fstab entries without reboot
sudo mount -a
# Verify it took
mount | grep tmpfs
df -hT | grep tmpfs
The mode=1777 for /tmp is important — that’s the sticky bit plus world-writable, which prevents users from deleting each other’s files.
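You can see what 1777 means on any directory without mounting anything:

```shell
# The sticky bit (the trailing 't' in the listing) is what makes a
# world-writable directory safe to share between users
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%a %A' "$d"   # prints: 1777 drwxrwxrwt
rmdir "$d"
```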
Performance: Actually Measuring It
Before you do anything, benchmark so you have real numbers:
# Write speed test — disk (pick a path you know is disk-backed; /var/tmp usually is,
# while /tmp is often already tmpfs)
dd if=/dev/zero of=/var/tmp/test.dat bs=1M count=1024 conv=fdatasync
# Write speed test — tmpfs
dd if=/dev/zero of=/mnt/fast-tmp/test.dat bs=1M count=1024
# Random I/O with fio (more realistic)
sudo apt install fio
fio --name=test --rw=randrw --bs=4k --size=512M --numjobs=4 \
--runtime=30 --directory=/mnt/fast-tmp --group_reporting
You’ll typically see 10-50x better random I/O performance on tmpfs vs SSD, and potentially 100x vs spinning disk. For sequential writes the gap is smaller since modern SSDs are competitive, but random small files? RAM wins hard.
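dd and fio cover raw throughput; for the many-small-files workload that dominates builds, a rough sketch like this makes the gap tangible. The directory choices and file count are arbitrary, and the numbers will vary wildly by hardware:

```shell
# Time creating many tiny files, the workload where tmpfs shines
bench_dir() {
  local dir=$1 n=2000 work start end
  work=$(mktemp -d -p "$dir")
  start=$(date +%s%N)                 # nanoseconds (GNU date)
  for i in $(seq "$n"); do echo data > "$work/f$i"; done
  end=$(date +%s%N)
  rm -rf "$work"
  echo "$dir: $(( (end - start) / 1000000 )) ms for $n files"
}

bench_dir /dev/shm    # tmpfs on virtually every Linux system
bench_dir /var/tmp    # usually disk-backed, for comparison
```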
Use Cases That Actually Make Sense
Build Caches
# Mount build cache on tmpfs
sudo mount -t tmpfs -o size=8G tmpfs /home/builder/.cache/bazel
# Or for a JS project's node_modules (vanishes on reboot, so be ready to reinstall)
sudo mount -t tmpfs -o size=4G tmpfs /home/dev/node_modules
Build tools write thousands of small files. Tmpfs turns a 10-minute build into a 4-minute build in some cases.
CI/CD Pipelines
# GitLab CI runner config (config.toml)
[[runners]]
  [runners.docker]
    tmpfs = {"/tmp" = "rw,exec,size=2g"}
Parallel test runners doing file I/O will absolutely love you for this. Disk contention during parallel CI is a quiet murderer of pipeline speed.
Session/State Storage
# Store browser session data in tmpfs (auto-clears on reboot — privacy win)
sudo mkdir -p /mnt/browser-tmp
sudo mount -t tmpfs -o size=1G tmpfs /mnt/browser-tmp
chromium --user-data-dir=/mnt/browser-tmp/profile
/dev/shm for IPC
/dev/shm already exists as tmpfs on Linux. It’s used by:
- PostgreSQL for shared memory buffers (via shm_open)
- Video applications for frame buffers
- Any code using POSIX shared memory
# Check current size
df -h /dev/shm
# Resize if needed (usually half of RAM by default)
sudo mount -o remount,size=8G /dev/shm
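To see where your current /dev/shm limit sits relative to physical RAM, a quick sketch assuming GNU df's --output flag:

```shell
# Compare /dev/shm's size limit against total RAM (default is typically 50%)
shm_kb=$(df --output=size -k /dev/shm | tail -n 1 | tr -d ' ')
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "/dev/shm limit: ${shm_kb} kB (~$(( 100 * shm_kb / ram_kb ))% of ${ram_kb} kB RAM)"
```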
Docker Tmpfs Mounts
Docker supports tmpfs mounts natively — great for containers that write sensitive temp data:
# Run a container with tmpfs at /tmp
docker run --tmpfs /tmp:rw,noexec,nosuid,size=256m nginx
# Or in docker-compose
# docker-compose.yml
services:
  app:
    image: myapp
    tmpfs:
      - /tmp:size=512m,mode=1777
      - /run:size=64m
This is excellent for:
- Containers handling sensitive data (keys, tokens) that shouldn’t touch disk
- Test containers that write temp files and you want guaranteed cleanup
- High-throughput containers doing lots of small file I/O
# More explicit long-form syntax with mount options
services:
  worker:
    image: worker
    volumes:
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 536870912  # 512MB in bytes
Caveats: The Part You’ll Forget Until 2 AM
Data disappears on reboot. This is obvious when you write it out, but you will accidentally put something important in a tmpfs-backed path at some point. Performance-tuning setups often point ~/.cache at tmpfs. Don't put anything you want to keep there.
Hibernation is complicated. When you hibernate, RAM is written to disk. Tmpfs contents should survive hibernation on most configurations, but don’t bet your database on it.
Size limits are per-mount. Two 1GB tmpfs mounts = 2GB RAM potentially used. Each mount is independent.
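Since every mount carries its own independent limit, it's worth totaling them occasionally. A sketch using GNU df:

```shell
# Sum the size limits of every tmpfs mount: the worst-case RAM commitment
df -k -t tmpfs --output=size | tail -n +2 |
  awk '{ total += $1 } END { printf "tmpfs limits total %.1f GiB\n", total / 1048576 }'
```

Note this is an upper bound; tmpfs only consumes RAM for data actually stored.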
OOM behavior. If a process fills your tmpfs to its size limit, the write fails with ENOSPC. If you’re using ramfs and a runaway process escapes — you get OOM kills system-wide. Stick with tmpfs.
Swap interaction. If you don't want tmpfs pages going to swap (to keep things truly in RAM), mount with the noswap option (Linux 6.4+) or run without swap entirely. The default allows swapping.
# Check what's eating your tmpfs space
du -sh /tmp/*
du -sh /dev/shm/*
# Remove a tmpfs mount when done
sudo umount /mnt/fast-tmp
Quick Decision Guide
- Temporary build artifacts, test output, CI scratch space → tmpfs
- Shared memory for IPC between processes → /dev/shm (already tmpfs)
- Docker container temp storage → Docker tmpfs mount
- Need absolute guarantee data never hits disk → ramfs (carefully, with monitoring)
- Anything that needs to survive reboot → neither, use actual disk
Your RAM is already paid for. Might as well make it work.