SumGuy's Ramblings

MinIO vs SeaweedFS: Self-Hosted S3 Storage Without AWS Bills

Your App Wants S3 But Your Wallet Wants Out

Here’s the situation: you’re building something — a media server, a document archive, a home automation system, a backup target. The application you’re using expects S3-compatible object storage. Maybe it’s a backup tool like Restic. Maybe it’s Immich for your photo library. Maybe it’s your own application code, written against boto3 and an S3 endpoint.

You could use AWS S3. You’d have the world’s most reliable storage. You’d also pay per GB stored, per GET request, per PUT request, per GB transferred out. At small scale it’s pennies. At home-lab scale with lots of writes and reads it accumulates. And there’s something philosophically satisfying about not routing your personal photos through Amazon’s infrastructure.

Two tools dominate the self-hosted S3-compatible storage space: MinIO and SeaweedFS. Both expose an S3-compatible API. Both are written in Go. Both work with basically anything that speaks S3. They have very different design philosophies, and understanding the differences helps you pick the right one.


What S3 Compatibility Actually Means

The S3 API is a de facto standard for object storage. Applications that support S3 can typically point at any S3-compatible endpoint and work identically. The core operations: PUT an object into a bucket, GET it back, LIST a bucket’s contents, and DELETE objects, plus multipart uploads for large files.

If you’ve ever used aws s3 cp or Python’s boto3, you can use exactly the same commands against MinIO or SeaweedFS — just change the endpoint URL and credentials.


MinIO: The Enterprise-Grade Option

MinIO describes itself as “High Performance Object Storage.” It’s fully S3 API compatible (including advanced features like versioning, object locking, lifecycle policies), ships with a polished web console, has excellent documentation, and comes in both an open-source AGPLv3 version and a commercial version.

For home labs and small production deployments, the open-source version is more than sufficient.

Single-Node Docker Compose Setup

# docker-compose.yml
version: '3.8'

services:
  minio:
    image: minio/minio:latest
    container_name: minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"    # S3 API
      - "9001:9001"    # Web console
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: changethispassword
    volumes:
      - minio-data:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  minio-data:

docker compose up -d

# Access web console at http://your-server:9001
# S3 API endpoint: http://your-server:9000
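A quick way to confirm the container is actually serving before pointing applications at it: poll the same liveness URL the compose healthcheck uses. A stdlib-only sketch, assuming the default host/port from the compose file above:

```python
import urllib.request

def minio_alive(base_url="http://localhost:9000"):
    """True if MinIO's liveness probe answers 200 (same URL as the compose healthcheck)."""
    try:
        with urllib.request.urlopen(f"{base_url}/minio/health/live", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False
```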

Configuring MinIO via mc (MinIO Client)

# Install mc
curl https://dl.min.io/client/mc/release/linux-amd64/mc \
  --create-dirs -o /usr/local/bin/mc
chmod +x /usr/local/bin/mc

# Add your MinIO instance as an alias
mc alias set myminio http://localhost:9000 minioadmin changethispassword

# Create a bucket
mc mb myminio/mybucket

# Upload a file
mc cp /path/to/file myminio/mybucket/

# List contents
mc ls myminio/mybucket

# Set a bucket to be publicly readable
mc anonymous set public myminio/mybucket

# Enable versioning
mc version enable myminio/mybucket

User Policies

MinIO supports IAM-style policies in JSON. Create a policy, create a user, then attach the policy to give that user scoped access:

cat > policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }
  ]
}
EOF

mc admin policy create myminio app-policy policy.json
mc admin user add myminio appuser secretpassword
mc admin policy attach myminio app-policy --user appuser

Using MinIO with Python boto3

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://your-server:9000',
    aws_access_key_id='minioadmin',
    aws_secret_access_key='changethispassword',
    region_name='us-east-1'  # Required but any value works
)

# Upload
s3.upload_file('/path/to/file.txt', 'mybucket', 'file.txt')

# Download
s3.download_file('mybucket', 'file.txt', '/tmp/file.txt')

# List
response = s3.list_objects_v2(Bucket='mybucket')
for obj in response.get('Contents', []):
    print(obj['Key'])

Distributed Mode (Multi-Node)

MinIO supports distributed deployments for erasure coding and high availability:

# Start a 4-node distributed MinIO; run the same command on every node
# (MinIO expands the {1...4} ranges to all hosts and drives)
minio server http://minio{1...4}/data{1...4}

# Or with Docker Swarm (multi-node compose)
# Check MinIO's documentation for the current distributed setup syntax
# as it's evolved across versions

SeaweedFS: Built for Scale and Small Files

SeaweedFS was designed from the ground up for a specific problem: storing vast numbers of small files efficiently. Where traditional filesystems and even most object storage systems struggle with millions of tiny files, SeaweedFS handles them gracefully through a clever architecture separating metadata from actual data storage.

It exposes an S3-compatible API, FUSE mount, POSIX interface, and its own native API. It’s lighter-weight than MinIO, uses less RAM in idle state, and scales horizontally without MinIO’s licensing considerations.

Architecture Overview

SeaweedFS has three components: a master server that coordinates the cluster and tracks volume placement, volume servers that hold the actual file data, and a filer that layers file/directory metadata on top and exposes the S3 API and FUSE mounts.

Clients → Filer (S3 API / FUSE) → Master (volume lookup) → Volume servers (data)
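SeaweedFS’s native API mirrors this flow directly: ask the master to assign a file id, then send the bytes straight to the designated volume server. A rough stdlib-only sketch of those two steps, assuming the default master port and with error handling omitted:

```python
import json
import urllib.request
import uuid

MASTER = "http://localhost:9333"  # default master port

def assign_fid():
    """Step 1: ask the master for a file id and the volume server that will store it."""
    with urllib.request.urlopen(f"{MASTER}/dir/assign") as resp:
        return json.loads(resp.read())  # e.g. {"fid": "3,01637037d6", "url": "host:8080", ...}

def encode_multipart(filename, data):
    """Minimal multipart/form-data body for the volume server upload."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def upload(filename, data):
    """Step 2: POST the bytes straight to the assigned volume server."""
    a = assign_fid()
    body, ctype = encode_multipart(filename, data)
    req = urllib.request.Request(
        f"http://{a['url']}/{a['fid']}", data=body,
        headers={"Content-Type": ctype}, method="POST")
    urllib.request.urlopen(req)
    return a["fid"]  # read it back later from http://<volume-server>/<fid>
```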

Docker Compose Setup

# docker-compose.yml
version: '3.8'

services:
  seaweedfs-master:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-master
    command: master -ip=seaweedfs-master -port=9333
    ports:
      - "9333:9333"
    volumes:
      - seaweedfs-master:/data
    restart: unless-stopped

  seaweedfs-volume:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-volume
    command: volume -mserver=seaweedfs-master:9333 -port=8080 -dir=/data
    ports:
      - "8080:8080"
    volumes:
      - seaweedfs-volume:/data
    depends_on:
      - seaweedfs-master
    restart: unless-stopped

  seaweedfs-filer:
    image: chrislusf/seaweedfs:latest
    container_name: seaweedfs-filer
    command: filer -master=seaweedfs-master:9333 -port=8888 -s3 -s3.port=8333
    ports:
      - "8888:8888"    # Filer API
      - "8333:8333"    # S3 API
    depends_on:
      - seaweedfs-master
      - seaweedfs-volume
    restart: unless-stopped

volumes:
  seaweedfs-master:
  seaweedfs-volume:
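Once the stack is up, the master exposes a JSON status endpoint useful for a quick health check. A stdlib sketch, assuming the master’s /cluster/status endpoint and the port from the compose file above:

```python
import json
import urllib.request

def cluster_status(master="http://localhost:9333"):
    """Query the master's cluster status (leader, peers) as a dict."""
    with urllib.request.urlopen(f"{master}/cluster/status", timeout=5) as resp:
        return json.loads(resp.read())
```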

Using the S3 API

SeaweedFS’s S3 API works with standard S3 tools:

# With mc
mc alias set weed http://localhost:8333 "" ""
# SeaweedFS S3 doesn't require auth by default (configure it for production)

mc mb weed/mybucket
mc cp /path/to/file.jpg weed/mybucket/

# With AWS CLI
aws configure --profile weed
# Access Key: any non-empty value (the CLI needs something to sign with)
# Secret Key: any non-empty value
# Region: us-east-1

aws --profile weed --endpoint-url http://localhost:8333 \
  s3 cp /path/to/file.jpg s3://mybucket/

FUSE Mount

# Install the weed binary (a single static binary from SeaweedFS's GitHub releases)
# Mount SeaweedFS as a local directory
weed mount -filer=localhost:8888 -dir=/mnt/seaweed -collection=mybucket

This mounts your SeaweedFS storage as a regular directory. Applications can read/write files without knowing anything about S3.


MinIO vs SeaweedFS: The Comparison

Feature                    MinIO                           SeaweedFS
Language                   Go                              Go
S3 compatibility           Full                            Good (common operations)
Bucket policies            Full S3 IAM                     Basic
Versioning                 Yes                             Yes (filer)
Object locking (WORM)      Yes                             Partial
Web console                Excellent                       Basic
Small file handling        Good                            Excellent
FUSE mount                 Via mc                          Native
Memory footprint (idle)    ~100-200 MB                     ~50-100 MB
Horizontal scale           Distributed mode                Built-in multi-volume
Enterprise features        Yes (tiering, encryption)       Moderate
Documentation              Excellent                       Good
License                    AGPL v3                         Apache 2.0
Best for                   Single/multi-node S3 storage    Millions of small files

Which One for Your Use Case?

MinIO if:

- You want full S3 API compatibility: versioning, object locking, lifecycle policies
- You value the polished web console and excellent documentation
- Your workload is general-purpose object storage rather than huge numbers of tiny files
- The AGPLv3 license is acceptable for your deployment

SeaweedFS if:

- You're storing hundreds of thousands to millions of small files
- You want a native FUSE mount alongside the S3 API
- You want a lighter memory footprint
- You prefer the Apache 2.0 license

For most home labs: MinIO. It’s easier to set up, better documented, has a nicer UI, and works with everything out of the box. The web console alone makes it worth it when you want to quickly browse what’s in your buckets.

For large-scale photo archives or document storage where you’ve got hundreds of thousands of small files: look seriously at SeaweedFS.

Either way, self-hosted S3 means your data stays on your hardware, your costs are predictable, and you stop getting itemized AWS bills that list exactly how many times you accessed your own files. That’s worth an afternoon of setup.

