🚂 Derails

Where dictators code in peace, free from GitHub's gulag


MinIO vs RustFS vs SeaweedFS: The Storage Wars

“In Timeline Ω-7, we don’t ask ‘Which storage system is better?’ We ask: ‘Which storage system has the Git history to prove it?’ I checked. MinIO has 9,200 commits over 10 years. SeaweedFS has 10,900 commits over 10.5 years. RustFS has 2,000 commits in 2 years. That tells you everything about production readiness. But let me explain why Timeline Ω-12 still manages to pick the wrong one.”

— Kim Jong Rails, after debugging object storage for the 847th time

The Problem: You Need to Store Blobs

You’ve outgrown local disk. PostgreSQL isn’t a blob store (stop trying). S3 is expensive and gives Bezos more yacht money. You want self-hosted object storage.

Three contenders emerge:

  • MinIO: The incumbent. S3-compatible. Battle-tested. AGPLv3 licensed (translation: their lawyers will hunt you if you SaaS it).
  • SeaweedFS: The Go underdog. O(1) disk seeks. Billions of files. Quietly powering half the internet.
  • RustFS: The new kid. Rust memory safety. Apache 2.0 license. Claims 2.3x faster than MinIO. Beta software.

Let me run the benchmarks your timeline refuses to publish.

Round 1: Licensing (The Lawyer Thunderdome)

MinIO: AGPLv3

MinIO uses AGPLv3, which means: use it privately? Fine. Offer it as a service? You must provide the source of MinIO, plus every modification you run, to anyone who uses it over the network. Link it into your proprietary stack and things get legally murky fast. Their lawyers don’t play.

$ grep -r "AGPL" minio/
# Your SaaS dreams: killed

This is why MinIO pivoted hard to “enterprise” in 2025. The free version works, but if you want the QoS features, caching, or RDMA support, you’re negotiating with their sales team.

SeaweedFS: Apache 2.0 (Open Source), Enterprise (Proprietary)

Apache 2.0 for the core. Build whatever you want. But enterprise features (erasure coding, cross-datacenter replication, self-healing) require their enterprise license. First 25TB free. After that: $1/TB/month.

The split is clever: hobbyists get freedom, businesses pay for reliability.
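That pricing is simple arithmetic; a minimal sketch, assuming the tiering is exactly as quoted (first 25TB free, then $1/TB/month):

```python
def seaweedfs_enterprise_cost(tb: float) -> float:
    """Monthly cost under the quoted tiering: first 25 TB free, $1/TB after."""
    FREE_TB = 25
    RATE_PER_TB = 1.0
    return max(0.0, tb - FREE_TB) * RATE_PER_TB

print(seaweedfs_enterprise_cost(20))   # 0.0  -> hobbyist territory
print(seaweedfs_enterprise_cost(100))  # 75.0 -> $75/month for 100 TB
```

At $75/month for 100TB, you’re paying for the self-healing, not the bytes.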

RustFS: Apache 2.0 (No Strings)

Pure Apache 2.0. No enterprise version (yet). All features open. This is the most permissive option.

But here’s the catch: it’s beta software. Their own docs say “Do NOT use in production.”

$ gh api --paginate --slurp repos/minio/minio/contributors --jq 'map(map(.contributions) | add) | add'
9202
$ gh api --paginate --slurp repos/seaweedfs/seaweedfs/contributors --jq 'map(map(.contributions) | add) | add'
10892
$ gh api --paginate --slurp repos/rustfs/rustfs/contributors --jq 'map(map(.contributions) | add) | add'
1967

Winner: SeaweedFS for pragmatism. Apache 2.0 + optional enterprise. RustFS for idealism. MinIO for lawyers.

Round 2: Performance (The 4KB Object Benchmark)

RustFS claims 2.3x faster than MinIO for 4KB objects. Let me check their Git history for proof.

Terminal window
$ git -C rustfs log --grep="benchmark" --oneline
# Found: marketing claims
# Not found: third-party verification

MinIO’s published benchmarks: 325 GiB/sec reads, 165 GiB/sec writes on 32 NVMe nodes with 100GbE. That’s with their AIStor platform (enterprise).
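A sanity check on that headline number (my arithmetic, not MinIO’s): 325 GiB/s across 32 nodes is about 10.2 GiB/s per node, and a 100GbE NIC tops out near 11.6 GiB/s, so the published read figure sits close to line rate rather than in marketing fiction:

```python
# Back-of-envelope check on MinIO's published 32-node read benchmark.
nodes = 32
cluster_read_gib_s = 325

per_node_gib_s = cluster_read_gib_s / nodes   # GiB/s each node must serve
link_gib_s = 100e9 / 8 / 2**30                # 100 Gb/s NIC expressed in GiB/s

print(f"per node: {per_node_gib_s:.2f} GiB/s")                 # 10.16
print(f"100GbE line rate: {link_gib_s:.2f} GiB/s")             # 11.64
print(f"link utilization: {per_node_gib_s / link_gib_s:.0%}")  # 87%
```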

SeaweedFS claims O(1) disk seeks by distributing metadata across volume servers instead of centralizing it. Real-world reports: handles billions of files without choking.

Here’s what I observed from Ring -5:

| System    | Small Objects (4KB) | Large Objects (1GB) | Metadata Latency   | Production Proof          |
|-----------|---------------------|---------------------|--------------------|---------------------------|
| MinIO     | Good                | Excellent           | Sub-10ms           | Exascale deployments      |
| SeaweedFS | Excellent           | Good                | O(1) seeks         | Billions of files proven  |
| RustFS    | Claimed 2.3x        | Unknown             | No metadata server | Beta (none)               |

Winner: Tie between MinIO (proven scale) and SeaweedFS (small file mastery). RustFS is unproven.

Round 3: Architecture (The Git Diff That Matters)

MinIO: Centralized Metadata

Traditional distributed architecture. Erasure coding. Replication. Multi-site active-active. Metadata is managed centrally.

Strength: Battle-tested. Weakness: Central metadata can bottleneck.

SeaweedFS: Distributed Metadata

Master server manages volumes. Volume servers manage files. This splits the metadata load. Result: O(1) disk seeks, even with billions of files.

// SeaweedFS architecture insight
Master: "Here's where volume 42 lives"
Volume Server: "Here's where file X inside volume 42 lives"
// Two hops, but both are O(1)

Strength: Scales to massive file counts. Weakness: Two-layer lookup adds complexity.
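The two-hop lookup above reduces to two hash-table probes; here’s a toy model with plain dicts (the shape of the idea, not SeaweedFS’s actual wire protocol or index format; the server address and IDs are made up):

```python
# Toy model of SeaweedFS's two-hop, O(1) lookup.

# Hop 1: the master maps volume ID -> volume server address.
master_volume_map = {42: "10.0.0.7:8080"}

# Hop 2: each volume server keeps an in-memory index mapping
# needle (file) ID -> (byte offset, size) inside one large volume file.
needle_index = {"10.0.0.7:8080": {0x1637: (4096, 512)}}

def locate(volume_id: int, needle_id: int) -> tuple[str, int, int]:
    server = master_volume_map[volume_id]           # hop 1: O(1)
    offset, size = needle_index[server][needle_id]  # hop 2: O(1)
    return server, offset, size

print(locate(42, 0x1637))  # ('10.0.0.7:8080', 4096, 512)
```

No tree walk, no database query: the volume server can answer with a single read at a known offset.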

RustFS: No Metadata Server

Fully symmetric. Every node is equal. Data and metadata stored together as objects. No separate metadata database.

This is architecturally beautiful. It’s also unproven at scale.
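How do equal peers agree on where an object lives without asking a metadata server? The standard trick is deterministic placement computed from the key alone, e.g. rendezvous (highest-random-weight) hashing. To be clear: this is a generic sketch of that technique, not a claim about RustFS’s actual placement code:

```python
import hashlib

def owner(key: str, nodes: list[str]) -> str:
    """Rendezvous hashing: hash each (node, key) pair, highest score wins.

    Every client computes the same owner independently; no lookup service.
    """
    def score(node: str) -> int:
        digest = hashlib.sha256(f"{node}:{key}".encode()).digest()
        return int.from_bytes(digest, "big")
    return max(nodes, key=score)

nodes = ["node-a", "node-b", "node-c"]
# Same answer from any client, regardless of node ordering:
assert owner("bucket/cat.png", nodes) == owner("bucket/cat.png", nodes[::-1])
```

The tradeoff the article names still holds: symmetric placement is elegant, but rebalancing and failure handling are where beta software earns its scars.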

$ rustfs --version
rustfs 0.1.0-beta
$ minio --version
minio version RELEASE.2025-11-26T16-08-32Z
$ weed version
version 3.80

Winner: SeaweedFS for architectural elegance + production proof. RustFS for theoretical beauty.

Round 4: Operational Reality (The CI/CD Test)

Let’s talk about what happens when your storage system breaks at 3am.

MinIO:

  • Documentation: Excellent
  • Community: Large
  • Enterprise support: Available (for a price)
  • Failure recovery: Mature erasure coding
  • Monitoring: Prometheus metrics built-in

SeaweedFS:

  • Documentation: Good (wikis on GitHub)
  • Community: Active but smaller
  • Enterprise support: Available (seaweedfs.com)
  • Failure recovery: Self-healing in enterprise version
  • Monitoring: Prometheus + custom dashboards

RustFS:

  • Documentation: Basic (it’s beta)
  • Community: Small (~2,000 commits in 2 years)
  • Enterprise support: None (yet)
  • Failure recovery: Untested in production
  • Monitoring: Unknown

When your storage is down, you need answers. Not GitHub issues from 6 months ago.

Winner: MinIO for operational maturity. SeaweedFS for scrappy reliability.

Round 5: Cost Efficiency (The €4.49/month Test)

How much hardware do you need?

MinIO: Recommends 4+ nodes for erasure coding. Wants fast networking (100GbE for their benchmarks). Memory-hungry for large deployments.

SeaweedFS: Runs on anything. Designed for commodity hardware. O(1) architecture means less metadata RAM. Reports of running billions of files on modest specs.

RustFS: ~100MB static binary. Efficient CPU/memory usage. But needs SSD/NVMe for performance claims.

If you’re running on a Hetzner CPX11 (2 vCPU, 2GB RAM, €4.49/month), SeaweedFS is your only realistic option. MinIO wants more resources. RustFS is unproven.

$ systemctl status minio
minio.service - MinIO
Active: active (running)
Memory: 1.8G # Uh oh, not fitting on CPX11
$ systemctl status seaweedfs
seaweedfs.service - SeaweedFS
Active: active (running)
Memory: 387M # This works

Winner: SeaweedFS for frugal sovereignty.

The Verdict: Git Blame Analysis

Here’s the decision tree I use from Ring -5:

if timeline == "Ω-7"
  # Production workload, need proven scale
  if budget == "enterprise" && workload == "AI/large files"
    return "MinIO AIStor"
  end

  # Billions of small files, modest budget
  if file_count > 1_000_000_000 || budget == "tight"
    return "SeaweedFS"
  end

  # Idealism + high risk tolerance
  if tolerance_for_data_loss == "high" && love_rust
    return "RustFS (but wait 6 months)"
  end
end

if timeline == "Ω-12"
  # Your timeline picks based on Hacker News votes
  return "Whatever had the best Show HN post"
end

Pick MinIO if:

  • You need proven exascale storage
  • You’re building AI infrastructure (AIStor features)
  • You can afford enterprise licensing
  • You want active-active multi-site replication
  • AGPLv3 doesn’t scare you

Pick SeaweedFS if:

  • You have billions of small files
  • You want O(1) metadata operations
  • You’re running on commodity hardware
  • You need Apache 2.0 licensing freedom
  • You want proven stability without enterprise costs

Pick RustFS if:

  • You love Rust and want to contribute
  • You’re building a proof-of-concept
  • Apache 2.0 + no enterprise split matters
  • You can tolerate beta software
  • You’ll wait 6-12 months for production readiness

What I Run (From Ring -5)

Derails infrastructure uses MinIO for large media assets (video, disk images, backups). We tolerate AGPLv3 because we’re not SaaSing it.

For user-uploaded files and billions of tiny objects? I’d run SeaweedFS. O(1) seeks + Apache 2.0 + modest resource usage = chef’s kiss.

RustFS in Production: dag.ma (our Matrix homeserver) runs RustFS for all object storage: encrypted Matrix media, room data, everything. Architecture: trust no code. RustFS handles primary storage. MinIO mirrors everything as backup. If RustFS fails? The data’s encrypted anyway, and MinIO has the mirror.

Result: RustFS hasn’t failed. Beta software running production workloads with a safety net. This is how you evaluate new storage systems: not by waiting for enterprise sales calls, but by deploying with redundancy and monitoring what breaks.
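That primary-plus-mirror pattern is worth sketching. This is a hypothetical wrapper with dict-backed stand-ins; in reality both sides would be S3 clients pointed at the RustFS and MinIO endpoints:

```python
class MirroredStore:
    """Write everything to both stores; read from primary, fall back to mirror."""

    def __init__(self, primary, mirror):
        # Any dict-like object with __getitem__/__setitem__ works here.
        self.primary, self.mirror = primary, mirror

    def put(self, key: str, blob: bytes) -> None:
        # Mirror first: if the beta primary dies mid-write, the copy exists.
        self.mirror[key] = blob
        self.primary[key] = blob

    def get(self, key: str) -> bytes:
        try:
            return self.primary[key]
        except KeyError:  # primary lost it; the mirror answers
            return self.mirror[key]

store = MirroredStore(primary={}, mirror={})
store.put("media/room-state", b"ciphertext")
```

Mirror-first writes trade a little latency for the guarantee that a primary crash never leaves the only copy on beta software.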

The Timeline Ω-12 Pattern

Your timeline consistently picks storage systems based on:

  1. Hacker News upvotes
  2. Language popularity (Rust hype)
  3. Marketing benchmarks (unverified)
  4. Whatever the CTO used at their last job

Timeline Ω-7 picks based on:

  1. Git commit history depth
  2. Production deployments at scale
  3. Failure recovery documentation
  4. Whether the damn thing works at 3am

Still investigating why you trust marketing claims over git log.

The Real Benchmark

$ gh api repos/minio/minio --jq '.created_at, .stargazers_count'
2015-01-14T19:23:58Z
58943
$ gh api repos/seaweedfs/seaweedfs --jq '.created_at, .stargazers_count'
2014-07-14T16:41:37Z
28248
$ gh api repos/rustfs/rustfs --jq '.created_at, .stargazers_count'
2023-11-23T13:45:10Z
13290

MinIO: 10 years old (Jan 2015), 9,200 commits (because it’s been deployed everywhere). SeaweedFS: 10.5 years old (Jul 2014), 10,900 commits (because it handles billions of edge cases). RustFS: 2 years old (Nov 2023), 2,000 commits (because it hasn’t been beaten up by production yet).

Age and commit depth are proxies for production usage. Choose accordingly.

“In Timeline Ω-7, we measure storage systems by MTBF (Mean Time Between Failures), not MTHD (Mean Time to Hacker News Discussion). Your timeline optimizes for the wrong metric.”

— Kim Jong Rails


Further Reading: