One application instance, many customers, logically isolated data — the architecture pattern every SaaS uses by default, and the three flavors solo founders actually have to choose between.
Research-based overview. This article synthesizes public documentation, pricing pages, and user reports.
Almost every modern SaaS app is multi-tenant. When you sign up for Linear, Notion, or Stripe, you do not get your own server, your own database, or your own deployed copy of the app. You get an account inside the same shared system that thousands of other customers share — and the system arranges things so that you only ever see your own data. The shape of how that arrangement is implemented is what people mean when they argue about multi-tenant architecture.
The alternative to multi-tenancy is single-tenancy: every customer gets their own dedicated instance of the application and database. That model is still alive in regulated industries and high-end enterprise sales, but it loses on three axes that matter for solo founders: cost (an infrastructure floor per customer), operational load (every change deployed once per customer), and onboarding speed (a new customer means provisioning infrastructure, not inserting a row).
The combined effect is that solo founders building SaaS in 2026 default to multi-tenant architecture without thinking about it. The interesting question is not whether to be multi-tenant. It is how to isolate tenants inside the shared system — and that is where the three architectural patterns differ.
Multi-tenant data isolation lives on a spectrum from “everything shared” to “almost nothing shared.” The three named patterns sit at recognizable points on that spectrum.
One database, one set of tables, one schema. Every row in every table carries a tenant_id (or organization_id, or workspace_id) column. Every query filters by it. Every write sets it. The tenant boundary is purely logical — enforced by application code or, more reliably, by Postgres Row Level Security policies that treat tenant_id as a non-negotiable filter.
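A minimal sketch of the shared-schema shape, with illustrative table and column names (the `invoices` table and the literal UUID are hypothetical):

```sql
-- Shared-schema pattern: every row carries the tenant it belongs to.
CREATE TABLE invoices (
  id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id  uuid NOT NULL,
  amount     integer NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Every read filters by tenant; every write sets it.
SELECT id, amount
FROM invoices
WHERE tenant_id = '00000000-0000-0000-0000-000000000001';
```

The `WHERE tenant_id = …` clause is the entire tenant boundary in this pattern, which is why the article later leans on RLS to make the database enforce it rather than trusting every query to include it.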
This is what roughly 95% of solo SaaS apps actually use. Supabase’s entire opinionated stack is built around this pattern. Stripe, Notion, and Linear at startup scale all started here. The reason is straightforward: it is the cheapest, the simplest to deploy, and — with RLS in place — secure enough for the threat models most B2B SaaS faces.
One Postgres instance, but every tenant gets their own schema (tenant_acme.invoices, tenant_globex.invoices). The tables have the same structure but live in separate namespaces. The application connects to the right schema per request, typically by setting a search_path at session start.
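A sketch of the schema-per-tenant shape, using the article's example tenant names (`tenant_acme`, `tenant_globex` are illustrative):

```sql
-- Each tenant gets its own namespace with identically shaped tables.
CREATE SCHEMA tenant_acme;
CREATE SCHEMA tenant_globex;
CREATE TABLE tenant_acme.invoices   (id uuid PRIMARY KEY, amount integer NOT NULL);
CREATE TABLE tenant_globex.invoices (id uuid PRIMARY KEY, amount integer NOT NULL);

-- Per request, point the session at the right tenant's schema...
SET search_path TO tenant_acme;

-- ...so unqualified table names resolve inside that namespace.
SELECT * FROM invoices;  -- reads tenant_acme.invoices
```

Note that `SET search_path` is session state: with a connection pooler in front of Postgres, it must be set (or reset) on every checkout or you risk serving one tenant's request against another tenant's schema.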
This pattern is rarer than it used to be. Salesforce famously used a variant of it. The appeal is stronger isolation than shared-schema (a missed WHERE clause cannot leak across schemas) and the ability to back up or restore individual tenants more easily. The cost is migration pain — every schema change must be applied to every schema, which is fine at 10 tenants and operationally hellish at 10,000.
Solo founders rarely pick this pattern in 2026. RLS on a shared schema gives you most of the isolation benefit without the per-schema migration overhead.
Every tenant gets their own database (sometimes their own Postgres cluster, sometimes their own logical database inside a shared cluster). Connection routing happens at the application or proxy layer based on which tenant is making the request.
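Under this pattern, onboarding a tenant is a provisioning step rather than a row insert. A sketch, assuming one logical database per tenant inside a shared cluster (names and the migration file path are hypothetical):

```sql
-- Provision a new tenant: a whole database, not an INSERT.
CREATE DATABASE tenant_acme;

-- Then, connected to the new database, apply the full migration set,
-- e.g. from a shell:
--   psql -d tenant_acme -f migrations/001_init.sql
```

The application or proxy layer then maps each authenticated tenant to its connection string before any query runs.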
This is the most isolated pattern: a tenant’s data is in a physically different database, not just a different schema or a different row. It is also the most expensive to operate, the most painful to query across (cross-tenant analytics now require ETL into a separate warehouse), and the slowest to onboard new tenants (you provision a database, not just an INSERT).
Per-tenant databases show up in three contexts: regulated industries where tenant data must be physically segregated (healthcare, finance), very large enterprise customers who pay enough to justify the dedicated infrastructure, and self-hosted deployments where the customer runs the database on their own hardware.
| Pattern | Cost at small scale | Operational complexity | Isolation strength | Cross-tenant queries |
|---|---|---|---|---|
| Shared schema + tenant_id + RLS | Lowest | Lowest — one schema, one migration path | Strong if RLS is enforced; brittle if app-only | Trivial — same schema |
| Shared DB, separate schemas | Low | Medium — migrate every schema on every change | Strong — namespace boundary | Possible but awkward; cross-schema joins |
| Database per tenant | High — per-tenant compute floor | High — per-tenant provisioning, backups, monitoring | Strongest — physical separation | Hard — needs ETL or read replica fanout |
The trade is roughly: the more isolated the pattern, the more expensive and the harder to query across tenants. The less isolated, the cheaper but the more discipline you need around the application code or the database’s policy layer to keep tenants from seeing each other.
The shared-schema pattern only works because the database itself can enforce the tenant_id filter. If isolation depends on every developer remembering to write WHERE tenant_id = $current in every query, you are one missed WHERE clause from a data breach. That is a bad foundation for any product, and a worse one in 2026 when AI coding assistants generate SQL faster than a human can review it.
Postgres Row Level Security solves this by attaching policies to tables. Once enabled, every query against the table is rewritten by the database to include the policy expression, and the application cannot bypass it without a privileged role. The pattern looks like this:
- The application runs `SET LOCAL app.tenant_id = '...'` based on the authenticated user's tenant.
- Each policy filters with `USING (tenant_id = current_setting('app.tenant_id')::uuid)`.
- A Postgres superuser bypasses RLS, so your app role must not have `BYPASSRLS`.

The deep version of this is in our explainers on what RLS is and how Row Level Security actually works. The hands-on Supabase variant is in how to set up Supabase RLS for multi-tenant SaaS. The short version: with RLS in place, the shared-schema pattern is the right default for solo SaaS founders building on Postgres.
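Put together, a minimal version of the setup might look like this (the `invoices` table, policy name, and literal UUID are illustrative):

```sql
-- Enable RLS; FORCE makes even the table owner subject to the policies.
ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
ALTER TABLE invoices FORCE ROW LEVEL SECURITY;

-- One policy gates reads (USING) and writes (WITH CHECK) on the
-- session's tenant setting.
CREATE POLICY tenant_isolation ON invoices
  USING (tenant_id = current_setting('app.tenant_id')::uuid)
  WITH CHECK (tenant_id = current_setting('app.tenant_id')::uuid);

-- At the start of each request's transaction, the app sets the tenant;
-- SET LOCAL scopes the setting to this transaction only.
BEGIN;
SET LOCAL app.tenant_id = '00000000-0000-0000-0000-000000000001';
SELECT * FROM invoices;  -- Postgres rewrites this to filter on tenant_id
COMMIT;
```

Because the filter lives in the database, a query that forgets its `WHERE tenant_id` clause returns only the current tenant's rows instead of leaking everyone's.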
The other downside of shared infrastructure is that one tenant’s workload can affect everyone else. A single customer running a 10-million-row export at 9am can saturate the database connection pool and slow every other tenant’s requests. This is the noisy neighbor problem, and it is real at every shared-resource pattern (less so at database-per-tenant, where the noise is contained to one customer’s own database).
Basic mitigations that solo founders should have on day one:
- `statement_timeout` at the role or database level, so a runaway query gets cut off rather than running for ten minutes.
- An index on every tenant column, because a missing index on `tenant_id` turns every query into a sequential scan.
- `pg_stat_statements` with a tenant-id tag in queries, which lets you find the noisy neighbor before they show up as a P0.

None of these mitigations require leaving the shared-schema pattern. They are operational hygiene that any multi-tenant Postgres app needs.
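The first two mitigations are one-liners; the third is a query against the statistics view. A sketch, assuming the application connects as a role named `app_user` (hypothetical) and that `pg_stat_statements` is in `shared_preload_libraries`:

```sql
-- Cap query runtime for the application role.
ALTER ROLE app_user SET statement_timeout = '5s';

-- Index every tenant column so per-tenant queries don't scan the table.
CREATE INDEX ON invoices (tenant_id);

-- Surface the heaviest queries (extension must be preloaded by the server).
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```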
The case for going to pattern 3 is narrow but real:

- A compliance regime that requires tenant data to be physically segregated (healthcare, finance).
- An enterprise contract large enough to pay for the dedicated infrastructure.
- A customer who needs to self-host the database on their own hardware.
None of these are typical for a solo SaaS founder building their first product. The default should be shared schema with RLS, and the move to per-tenant databases should be triggered by an explicit business or compliance requirement — not a vague unease about “isolation.”
The economic argument for shared-schema multi-tenancy is sharpest at the bottom of the curve. Consider three scenarios for a SaaS at 100 paying customers: a shared schema runs on a single managed Postgres instance for roughly $25/month; separate schemas share that same instance, so the bill is similar and the cost shows up as migration labor instead; database-per-tenant, at even a $5–$10/month floor per tenant, is already $500–$1,000/month.
At 1,000 customers the gap widens to $25 vs $5,000–$10,000/month. This is the math that locks solo founders into the shared-schema pattern early and makes the transition to per-tenant databases something you only do when a specific customer is paying you enough to absorb the cost.
Auth and tenant context tie into this directly. Whatever auth library you use needs to issue a token that carries the tenant ID, and your database session needs to read that token to set the RLS policy variable. The mechanics of that token (typically a JWT) are covered in what is a JWT; the database-side patterns for shaping multi-tenant tables are in SaaS database schema patterns. If your customers are going to ask for SOC 2 attestations, that is an orthogonal compliance question covered in what is SOC 2 compliance — and the answer is mostly about your processes, not your tenancy model.
Multi-tenancy is the default architecture for SaaS, and solo founders should default to the shared-schema variant with a tenant_id column on every table and Row Level Security policies enforcing the isolation at the database. The other two patterns — separate schemas, separate databases — exist for specific reasons (regulated data, very large enterprise contracts, self-hosted deployments) and should only be reached for when those reasons apply.
The risk in the shared-schema pattern is sloppy isolation. The mitigation is RLS, and the rest is operational hygiene: indexes on tenant columns, connection pooling, query timeouts, and per-tenant rate limits. With those in place, a Postgres-backed SaaS can serve thousands of tenants from a single $25/month database with isolation strong enough for almost any B2B threat model.