Technical post for builders. If you’re an operator looking for booking software, the data isolation post is more relevant.
The hardest problem in multi-tenant SaaS isn’t features — it’s data architecture. Most SaaS products pick shared-database multi-tenancy because the tooling assumes it. The trade-offs (weaker data isolation, noisy neighbours, compliance exposure) are accepted as costs of doing business.
In 2024-2025, a different pattern became viable: per-tenant databases on Cloudflare D1. This post describes the architecture and why it works.
The shared-database default
Standard multi-tenant SaaS:
[ all tenants' API requests ]
↓
[ application layer ]
↓ (with tenant_id filter)
[ shared Postgres ]
↓
[ rows: tenant_id | data... ]
Every query must include WHERE tenant_id = $current.
Miss the filter once → cross-tenant data leak.
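The discipline this demands shows up in every data-access call. A sketch of the shared-DB pattern (table and variable names are illustrative):

// Shared-DB pattern: the tenant filter must appear in every query.
const appts = await db
  .prepare("SELECT * FROM appointments WHERE tenant_id = ?")
  .bind(currentTenantId)
  .all();
// Drop the WHERE clause once and the query returns every tenant's rows.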
Postgres Row-Level Security (RLS) narrows the window but doesn’t eliminate the class of bug. It also adds per-query policy checks that complicate query plans and slow things down at scale.
The per-tenant database alternative
[ tenant request ]
↓
[ tenant routing layer ] (slug → DB binding)
↓
[ tenant-specific D1 ]
↓
[ rows (no tenant_id) ]
Each tenant has their own database. No tenant_id column anywhere. Tenant routing happens before the query layer; the application layer can’t accidentally query across tenants.
Why Cloudflare D1 makes this practical
D1 is SQLite in Cloudflare’s network, accessible from Workers. Properties that matter:
- Database creation is cheap. Creating a D1 is an API call. No provisioning lag, no per-DB monthly fee.
- Zero per-database minimum cost. You pay for storage and queries; idle databases cost nothing.
- Replication is automatic. D1 replicates across regions. Reads serve from the nearest replica.
- Bindings are lazy. A Worker can have hundreds of D1 bindings; only the ones used per request are activated.
- SQLite semantics. Familiar SQL, fast for bookings-style workloads (mostly reads, point inserts).
In aggregate: per-tenant SQLite databases at the edge, managed via API, costing pennies per tenant per month.
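Provisioning is correspondingly simple. A sketch of creating a tenant DB through Cloudflare’s REST API (ACCOUNT_ID and API_TOKEN are placeholders; the endpoint is the public D1 API):

// Create a D1 database for a new tenant via the Cloudflare API.
async function createTenantDb(slug, env) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${env.ACCOUNT_ID}/d1/database`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${env.API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: `tenant-${slug}` }),
    }
  );
  const { result } = await res.json();
  return result.uuid; // database ID, later wired into a pool's bindings
}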
The routing layer
The challenge: how does a request to book.zedule.app/looks-salon land on the right D1?
Two-level routing:
Level 1: slug → pool ID. A KV lookup maps looks-salon → pool-002. Each pool is a Worker with bindings to ~100 tenant D1s.

Level 2: pool worker → tenant D1. The pool worker has D1 bindings for each tenant in its pool; the slug determines which binding to use.
// Inside the pool worker: the slug picks the binding, not a query filter
const tenant = await env.SLUG_KV.get(slug, "json"); // KV value is JSON, e.g. { dbBinding: "T042" }
if (!tenant) throw new Error(`Unknown tenant: ${slug}`);
const db = env[`DB_${tenant.dbBinding}`];
return await db.prepare("SELECT * FROM appointments").all();
The first KV lookup is ~5ms. The D1 query is ~10-30ms on regional reads. Total: ~15-35ms for a tenant query.
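Level 1 lives in a small gateway Worker. A minimal sketch, assuming the pools are exposed as service bindings and the KV namespace is called POOL_MAP_KV (both names are assumptions):

// Gateway worker: slug → pool lookup, then forward via a service binding.
export default {
  async fetch(request, env) {
    const slug = new URL(request.url).pathname.split("/")[1];
    const poolId = await env.POOL_MAP_KV.get(slug); // e.g. "POOL_002"
    if (!poolId) return new Response("Unknown business", { status: 404 });
    return env[poolId].fetch(request); // service binding to the pool worker
  },
};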
Pool sizing
Why pools instead of one Worker per tenant?
- Workers have a binding limit (~1000 per Worker).
- Deploying a new Worker per tenant would be slow.
- Pooling 100 tenants per Worker keeps deployment manageable.
When a pool fills up, a new pool is created. The KV map sends new tenants to the new pool.
Pool 001: tenants 1-100
Pool 002: tenants 101-200
Pool 003: tenants 201-300
...
This scales horizontally: at 10,000 tenants, you have 100 pools. Each pool deploys independently.
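Onboarding then reduces to picking the current pool and writing the mapping. An illustrative sketch (the helper and binding names are assumptions):

// Assign a new tenant to a pool based on a running tenant count,
// then record the slug → pool mapping used by the gateway.
async function assignPool(slug, tenantCount, env) {
  const poolNumber = Math.floor(tenantCount / 100) + 1; // 100 tenants per pool
  const poolId = `POOL_${String(poolNumber).padStart(3, "0")}`; // e.g. POOL_002
  await env.POOL_MAP_KV.put(slug, poolId);
  return poolId;
}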
Schema migrations
The hardest part of per-tenant DBs is schema migrations. A migration has to apply to every tenant DB.
The pattern:
- Migrations are versioned in code. Each migration has a sequence number.
- Each tenant DB stores its current version in a _meta table with one row.
- On each request, the worker checks whether migrations are needed. If so, it runs them before handling the request.
This is “lazy migration” — tenants migrate when they’re hit, not all at once. The first request after a deploy might be 50ms slower while migrations run.
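A minimal sketch of the lazy-migration check, assuming the _meta table described above and an in-code migration list (the SQL itself is illustrative):

// Versioned migrations; each tenant DB records the last version applied.
const MIGRATIONS = [
  { version: 1, sql: "CREATE TABLE appointments (id INTEGER PRIMARY KEY, starts_at TEXT)" },
  { version: 2, sql: "ALTER TABLE appointments ADD COLUMN notes TEXT" },
];

async function ensureMigrated(db) {
  await db.exec("CREATE TABLE IF NOT EXISTS _meta (version INTEGER NOT NULL)");
  let row = await db.prepare("SELECT version FROM _meta").first();
  if (!row) {
    await db.prepare("INSERT INTO _meta (version) VALUES (0)").run();
    row = { version: 0 };
  }
  for (const m of MIGRATIONS) {
    if (m.version <= row.version) continue; // already applied
    await db.exec(m.sql);
    await db.prepare("UPDATE _meta SET version = ?").bind(m.version).run();
  }
}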
For aggressive deploys, an explicit script can run migrations across all tenants ahead of time.
Backup and restore
Each D1 supports its own export/restore. This makes per-tenant backup straightforward:
- Export: wrangler d1 export <tenant-db> --output=...
- Restore: import the SQL into a new D1 binding
Customer-requested data exports become trivial. GDPR right-to-export is just an export command.
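Sweeping every tenant on a schedule is a short script. A sketch in Node shelling out to wrangler (the tenant-DB list source is an assumption):

// Nightly backup sweep: export each tenant DB with the wrangler CLI.
import { execFileSync } from "node:child_process";

const tenantDbs = ["looks-salon-db", "acme-cuts-db"]; // e.g. read from the KV map
for (const db of tenantDbs) {
  execFileSync("wrangler", ["d1", "export", db, `--output=backups/${db}.sql`], {
    stdio: "inherit",
  });
}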
Cross-tenant analytics
The trade-off: cross-tenant queries are harder.
Options:
- Aggregate via Workers. A scheduled Worker iterates over each tenant DB and aggregates into a separate analytics DB.
- Stream events to a separate analytics pipeline. Each booking emits an event; events flow to BigQuery or similar.
- Don’t. Many SaaS products don’t actually need cross-tenant analytics; it’s a nice-to-have.
For Zedule, option 2 is the long-term plan. For now, analytics are per-tenant.
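For reference, option 1 fits in a cron-triggered Worker. A sketch, assuming DB_-prefixed tenant bindings and an ANALYTICS_DB binding (names and schema are illustrative):

// Scheduled sweep: aggregate today's booking counts into a shared analytics DB.
export default {
  async scheduled(event, env, ctx) {
    const tenantBindings = Object.keys(env).filter((k) => k.startsWith("DB_"));
    for (const name of tenantBindings) {
      const row = await env[name]
        .prepare("SELECT COUNT(*) AS n FROM appointments WHERE date(created_at) = date('now')")
        .first();
      await env.ANALYTICS_DB
        .prepare("INSERT INTO daily_bookings (tenant, day, count) VALUES (?, date('now'), ?)")
        .bind(name, row.n)
        .run();
    }
  },
};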
Cost comparison
Per-tenant cost on Cloudflare D1:
- Storage: ~$0.75 per GB-month, but a typical tenant DB is under 100MB. Cost: pennies.
- Queries: $0.001 per 1k. Typical tenant: ~10k queries per day (~300k/month). Cost: ~$0.30/month.
- Compute (Workers): ~$0.01-0.10/month per tenant.
Total: ~$1-3/month per tenant.
Compare to shared-DB SaaS on AWS RDS:
- Multi-AZ Postgres: ~$200-500/month minimum
- Plus app servers, load balancers, etc.
- Per-tenant cost spreads across N tenants
At small scale (under 1000 tenants), per-tenant D1 is cheaper. At large scale (10k+ tenants), the math crosses over but the security/compliance advantages remain.
Limits
- Write throughput. D1 writes serialise to the primary; very high write volumes per tenant aren’t ideal.
- Cross-tenant queries. As discussed.
- Beta-era features. D1 is GA but some advanced features (read replicas, point-in-time recovery) are still maturing.
- Vendor lock-in. D1 is Cloudflare-specific. Migrating to another provider would be non-trivial.
For booking workloads, these limits don’t bite. For write-heavy SaaS (analytics, logging), they would.
How Zedule uses it
Zedule’s full architecture:
- Marketing site: Astro on Cloudflare Pages
- Dashboard app: React SPA on Cloudflare Pages
- Gateway worker: handles cross-tenant routing, authentication, businesses-list endpoint
- Pool workers: handle per-tenant routes; each pool has 100 tenant D1 bindings
- Per-tenant D1: one D1 per business
- KV: slug → pool routing
- R2: customer-uploaded images, business logos
This stack runs entirely on Cloudflare. No AWS, no Kubernetes, no managed Postgres. Cost per tenant: low single-digit dollars per month.