Six PostgreSQL patterns that show up in nearly every multi-tenant SaaS — with real DDL, performance considerations, and the failure modes each one prevents.
Research-based overview. This article synthesizes patterns from PostgreSQL documentation, Designing Data-Intensive Applications, and public SaaS engineering posts.
Three classic options exist for multi-tenancy: a tenant_id column on every table (shared schema), a separate Postgres schema per tenant, or a separate database per tenant. For solo founders, the shared-schema approach is almost always the right answer — it keeps operations simple, costs predictable, and queries efficient.
The pattern is to add workspace_id to every tenant-scoped table, foreign-key it to a workspaces table, and use it as the leading column on indexes:
CREATE TABLE workspaces (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE memberships (
  workspace_id UUID NOT NULL REFERENCES workspaces(id) ON DELETE CASCADE,
  user_id UUID NOT NULL, -- references auth.users in Supabase
  role TEXT NOT NULL CHECK (role IN ('owner','admin','member')),
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  PRIMARY KEY (workspace_id, user_id)
);

CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  workspace_id UUID NOT NULL REFERENCES workspaces(id) ON DELETE CASCADE,
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Critical: workspace_id leads every tenant-scoped index
CREATE INDEX idx_projects_workspace ON projects (workspace_id, created_at DESC);
The big caveat: every query against a tenant-scoped table must filter by workspace_id. The robust way to enforce this is Row Level Security — the database itself rejects cross-tenant queries even if your application code forgets the filter.
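As a minimal sketch of that enforcement, assuming your backend binds a per-request session variable named app.current_workspace (Supabase setups would instead write the policy against auth.uid() and the memberships table):

```sql
-- Enable RLS; with no matching policy, the table returns zero rows
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

-- Hypothetical policy: only expose rows for the workspace bound to this session
CREATE POLICY tenant_isolation ON projects
  USING (workspace_id = current_setting('app.current_workspace')::uuid);
```

The application would set the variable per connection, e.g. SELECT set_config('app.current_workspace', $1, true) inside each transaction; a query that forgets the workspace_id filter then simply returns nothing rather than leaking another tenant's rows.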
Soft delete keeps the row but sets a deleted_at timestamp. Queries filter by deleted_at IS NULL; recovery is a single UPDATE away. The pattern works at any scale but has two cost-conscious nuances.
First, partial indexes are critical. A naive index on (workspace_id, created_at) includes deleted rows. Use WHERE deleted_at IS NULL on the index so it only stores live rows:
ALTER TABLE projects ADD COLUMN deleted_at TIMESTAMPTZ;

-- Active rows only — small, fast index
CREATE INDEX idx_projects_active ON projects (workspace_id, created_at DESC)
  WHERE deleted_at IS NULL;

-- For unique columns, a UNIQUE constraint can't take a WHERE clause;
-- use a partial unique index instead
CREATE UNIQUE INDEX unique_project_name_per_workspace
  ON projects (workspace_id, name)
  WHERE deleted_at IS NULL;
Second, soft-deleted rows still hit your row-count limits in tools that bill per row (early Hyperdrive variants, some Supabase add-ons). Add a periodic hard-delete job for rows with deleted_at < NOW() - INTERVAL '90 days' to keep storage bounded.
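The cleanup job itself can be a single statement run on a schedule (via pg_cron or an external worker); the 90-day window here matches the retention above:

```sql
-- Permanently remove rows that were soft-deleted more than 90 days ago
DELETE FROM projects
WHERE deleted_at IS NOT NULL
  AND deleted_at < NOW() - INTERVAL '90 days';
```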
The big mistake: forgetting the WHERE deleted_at IS NULL filter in a query. Defend by either using a view (CREATE VIEW projects_active AS SELECT * FROM projects WHERE deleted_at IS NULL) or by enforcing it in your data access layer.
The minimum is four columns on every meaningful table: created_at, updated_at, created_by, updated_by. Combine them with a trigger that auto-updates updated_at on every change.
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

ALTER TABLE projects
  ADD COLUMN updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  ADD COLUMN created_by UUID,
  ADD COLUMN updated_by UUID;

CREATE TRIGGER projects_updated_at
  BEFORE UPDATE ON projects
  FOR EACH ROW EXECUTE FUNCTION set_updated_at();
For full audit (the “who changed what when” question), add an immutable events table:
CREATE TABLE audit_events (
  id BIGSERIAL PRIMARY KEY,
  workspace_id UUID NOT NULL,
  actor_id UUID,
  entity_type TEXT NOT NULL,
  entity_id UUID NOT NULL,
  action TEXT NOT NULL, -- 'create', 'update', 'delete'
  changes JSONB, -- old/new values for updated fields
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_audit_entity ON audit_events (entity_type, entity_id, created_at DESC);
CREATE INDEX idx_audit_workspace ON audit_events (workspace_id, created_at DESC);
Write to audit_events from your application layer (or via row-level triggers if you want database-level enforcement). Never UPDATE or DELETE rows in the audit table — that’s the whole point.
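If you do want database-level enforcement, a trigger-based variant for projects might be sketched like this (the function and trigger names are illustrative, not from the schema above):

```sql
CREATE OR REPLACE FUNCTION log_audit_event() RETURNS TRIGGER AS $$
DECLARE
  rec RECORD;
BEGIN
  -- OLD is only populated for UPDATE/DELETE, NEW for INSERT/UPDATE
  IF TG_OP = 'DELETE' THEN
    rec := OLD;
  ELSE
    rec := NEW;
  END IF;

  INSERT INTO audit_events (workspace_id, entity_type, entity_id, action, changes)
  VALUES (
    rec.workspace_id,
    TG_TABLE_NAME,
    rec.id,
    CASE TG_OP WHEN 'INSERT' THEN 'create'
               WHEN 'UPDATE' THEN 'update'
               ELSE 'delete' END,
    CASE WHEN TG_OP = 'UPDATE'
         THEN jsonb_build_object('old', to_jsonb(OLD), 'new', to_jsonb(NEW))
    END
  );
  RETURN NULL; -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER projects_audit
  AFTER INSERT OR UPDATE OR DELETE ON projects
  FOR EACH ROW EXECUTE FUNCTION log_audit_event();
```

Note that actor_id can't be filled from inside the database unless the application passes the user id through a session variable; that limitation is one reason many teams write audit rows from the application layer instead.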
The classic Rails-style polymorphic relation uses two columns: parent_type and parent_id. This is convenient but loses foreign-key integrity — the database can’t enforce that parent_id actually exists.
The Postgres-friendly version uses a discriminator column with multiple nullable foreign-key columns, plus a CHECK constraint that enforces exactly one is set:
CREATE TABLE comments (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  workspace_id UUID NOT NULL REFERENCES workspaces(id) ON DELETE CASCADE,
  body TEXT NOT NULL,
  -- exactly one of these is set
  project_id UUID REFERENCES projects(id) ON DELETE CASCADE,
  task_id UUID REFERENCES tasks(id) ON DELETE CASCADE,
  document_id UUID REFERENCES documents(id) ON DELETE CASCADE,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  CHECK (
    (project_id IS NOT NULL)::int +
    (task_id IS NOT NULL)::int +
    (document_id IS NOT NULL)::int = 1
  )
);
Foreign keys cascade properly, joins are simple, and you can add new parent types incrementally. The cost is a slightly wider table; the benefit is referential integrity the database actually enforces.
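Reads stay simple because each parent type gets its own column: fetching a project's comments is a plain indexed lookup with no type dispatch. A sketch, where the supporting partial index is an assumption rather than part of the schema above:

```sql
-- Index only the rows that actually have a project parent
CREATE INDEX idx_comments_project ON comments (project_id, created_at)
  WHERE project_id IS NOT NULL;

-- $1 = workspace id, $2 = project id
SELECT id, body, created_at
FROM comments
WHERE workspace_id = $1
  AND project_id = $2
ORDER BY created_at;
```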
The pattern is two tables: a thin row that points at the current version, and a versions table that stores every revision. Reads of current state hit the thin row; history queries scan the versions table.
CREATE TABLE documents (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  workspace_id UUID NOT NULL,
  title TEXT NOT NULL,
  current_revision_id UUID,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE document_revisions (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  document_id UUID NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
  body TEXT NOT NULL,
  created_by UUID,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  parent_revision_id UUID REFERENCES document_revisions(id)
);

ALTER TABLE documents ADD CONSTRAINT fk_current_revision
  FOREIGN KEY (current_revision_id) REFERENCES document_revisions(id);

CREATE INDEX idx_revisions_document ON document_revisions (document_id, created_at DESC);
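A sketch of the save path under this schema: one transaction inserts the new revision and repoints the document, so readers never observe a dangling pointer ($1 = document id, $2 = new body, $3 = author id):

```sql
BEGIN;

-- Insert the revision, chaining it to whatever was current
WITH new_rev AS (
  INSERT INTO document_revisions (document_id, body, created_by, parent_revision_id)
  SELECT id, $2, $3, current_revision_id
  FROM documents
  WHERE id = $1
  RETURNING id
)
-- Then advance the document's current pointer
UPDATE documents
SET current_revision_id = (SELECT id FROM new_rev),
    updated_at = NOW()
WHERE id = $1;

COMMIT;
```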
The parent_revision_id column lets you reconstruct branching history if you support it (think Git for docs). For most SaaS, linear history is enough.
Storage cost is real — every save creates a new row. Mitigate with a periodic compaction job that prunes revisions older than 30 days, except the most recent N per document.
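One way to sketch that compaction job, where keeping the newest 10 revisions per document is an illustrative choice and the current revision is always excluded:

```sql
DELETE FROM document_revisions
WHERE id IN (
  -- Rank revisions newest-first within each document
  SELECT id FROM (
    SELECT id, created_at,
           ROW_NUMBER() OVER (
             PARTITION BY document_id ORDER BY created_at DESC
           ) AS rn
    FROM document_revisions
  ) ranked
  WHERE rn > 10
    AND created_at < NOW() - INTERVAL '30 days'
)
AND id NOT IN (
  SELECT current_revision_id FROM documents
  WHERE current_revision_id IS NOT NULL
);
```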
Two cases come up constantly. First, the soft-delete case (above): names must be unique among non-deleted rows but you want to allow re-using a name after deletion. Second, the “only one default” case: every workspace can have many addresses, but only one can be marked is_default.
Both are partial unique indexes:
-- Only one default address per workspace
CREATE UNIQUE INDEX one_default_per_workspace ON addresses (workspace_id)
  WHERE is_default = TRUE;

-- Only one active subscription per user (with billing system)
CREATE UNIQUE INDEX one_active_subscription ON subscriptions (user_id)
  WHERE status IN ('active', 'trialing');

-- Soft-delete-aware uniqueness on slugs
CREATE UNIQUE INDEX unique_active_slug ON projects (workspace_id, slug)
  WHERE deleted_at IS NULL;
The constraint is enforced at the database level: of two concurrent inserts that would violate it, one succeeds and the other fails with a clean unique-violation error the application can handle. This is significantly more robust than a SELECT-then-INSERT check in application code, which has a race condition.
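Partial unique indexes also work as ON CONFLICT targets, which gives an atomic insert-or-skip without the SELECT-then-INSERT race. A sketch, assuming a slug column on projects as in the index above:

```sql
INSERT INTO projects (workspace_id, name, slug)
VALUES ($1, $2, $3)
-- Postgres infers the partial unique index from this column list + predicate
ON CONFLICT (workspace_id, slug) WHERE deleted_at IS NULL
DO NOTHING;
```

Swapping DO NOTHING for DO UPDATE turns the same statement into an upsert against only the live rows.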
Most of these add 2-4 columns per table or one extra index per table. At <100 GB of data — which covers most solo SaaS for years — the cost is negligible. The real cost is the consistency: you have to apply each pattern to every table where it’s relevant, not just the first ones you create.
If you’re using Prisma or Drizzle, encode these patterns as schema mixins or shared model fragments so applying them is one line per table rather than copy-paste.
This article covers the patterns every SaaS needs; a few were deliberately skipped because they're only relevant at higher scale.
If you’re setting up a Postgres backend right now, the Supabase RLS tutorial walks through enforcing the multi-tenancy pattern at the database level. The RLS deep dive covers the internals of how that enforcement actually works. And if you’re still picking your database, Supabase vs Neon covers the tradeoffs — both support every pattern in this article.
For builders working with Claude or another AI tool, the six-step SaaS build framework includes prompt templates that incorporate these patterns directly into the data model phase.
Cited references: PostgreSQL documentation, Designing Data-Intensive Applications by Martin Kleppmann, Supabase database guides, and public schema posts from Linear, Notion, and Plaid engineering blogs.