Schema definition, content versioning, delivery API caching, publish webhooks, and image transforms — how to build a Sanity alternative for a niche the giants ignore.
Research-based methodology. This guide pulls from public Contentful, Sanity, Strapi, and Hygraph (formerly GraphCMS) architecture writeups, the Payload CMS open-source codebase, JSON Schema standards, Vercel cache documentation, and our own builds with Claude. Where we have first-person experience we say so; otherwise we’re working from public sources. How we research.
A “general-purpose headless CMS” is one of the worst categories you could pick in 2026. Contentful is established, Sanity has the developer mind share, Strapi owns the open-source story, Payload is winning the “just give me a good DX” segment, and Hygraph and Storyblok are well-funded competitors fighting over the same enterprise customers. If you ship another one, you will be invisible.
And yet: under all of those tools, there are entire industries with content modeling needs that the generalists handle badly. Restaurants need menu items with prices, dietary tags, photo, multi-location availability, and seasonal scheduling. Small ecommerce shops need product collections with variants, related products, and SKU sync to Shopify. Real estate agencies need listing fields with maps, photo galleries, and price-history tracking. Each of those is a CMS — with a vertical schema baked in, opinionated content modeling, and one deep integration. That’s the wedge for a solo founder in 2026: not a CMS, but a CMS for a thing.
This guide is for someone who wants to ship a paid niche-CMS product in 5–8 weeks using Claude as their primary thinking partner. We’ll spend most of our time on the parts that determine whether the product feels like a generic CRUD tool or like a category-defining vertical app: the schema editor UX, the auto-generated entry forms, and the delivery API that has to be fast forever. If you want the broader build playbook first, our How to build a SaaS with Claude guide covers the general scaffolding workflow this one builds on top of.
A CMS looks like “a database with a UI.” It is not. It is a database with a runtime-generated UI that has to satisfy three audiences whose needs collide.
If your customer can’t define their content types in 10 minutes, they will leave. If they can but the schema editor produces a mess that’s painful to migrate, they’ll leave at month three. The schema editor is the single most important UX surface in a CMS. Treat it like a real product, not a settings page.
The CMS dashboard is a write-heavy app with two users. The CMS delivery API is a read-heavy global API with millions of pageviews. These have completely different scaling profiles. If you serve both from the same Postgres without caching, your customer’s production website goes down when their cron job hits your API at peak.
Editors will break the homepage. They will need to roll back. If your data model has only the current state, you cannot offer rollback, and the first significant outage your customer causes themselves will end the relationship. Content versioning has to be in the schema from day one.
The point of a headless CMS is that someone else owns the rendering. The CMS’s job at publish time is to tell the rendering layer (Next.js, Astro, Hugo, Shopify) that something changed. Webhook delivery has to be reliable, signed, and replayable. A CMS that doesn’t fan out reliable publish events is a database with a slower UI.
The schema design that matters most is the meta-schema — how you store the customer’s definition of their own content types. There are two viable patterns: a strict relational model (one DB table per content type, generated dynamically) or a JSON-Schema-shaped definition stored alongside content_entries.data jsonb. The second pattern wins for a SaaS because it’s multi-tenant friendly and migrations are application-level.
I'm building a headless CMS SaaS targeted at small restaurant chains
(2-10 locations). Eventually it'll generalize, but the schema needs to
be content-type-flexible from day one.
Core requirement: each workspace defines its own content types and
fields. Content entries are stored as JSON validated against the
content type's schema.
Design Postgres tables:
- workspaces: id, slug, name, owner_id, plan, created_at,
api_quota_per_month
- content_types: id, workspace_id, slug, name, description,
schema jsonb (JSON-Schema-shaped), preview_template text,
ordering_field text, created_at, updated_at
Example schema for "menu_item":
{
  "fields": [
    { "id": "name", "type": "string", "required": true,
      "ui": { "widget": "text" } },
    { "id": "price", "type": "number", "required": true,
      "ui": { "widget": "currency", "currency": "USD" } },
    { "id": "description", "type": "string",
      "ui": { "widget": "textarea" } },
    { "id": "photo", "type": "media",
      "ui": { "widget": "image", "max_size_kb": 5000 } },
    { "id": "dietary", "type": "array",
      "items": { "type": "string", "enum": ["vegan", "gluten_free", "nut_free"] },
      "ui": { "widget": "multi_select" } },
    { "id": "available_at_locations", "type": "reference",
      "reference_to": "location", "many": true }
  ]
}
- content_entries: id, workspace_id, content_type_id, slug, locale
default 'en', status (draft, published, archived), data jsonb,
created_at, created_by, updated_at, updated_by, published_at
- content_versions: id, entry_id, version int (incrementing per entry),
data jsonb (snapshot at this version), status, published_at,
created_at, created_by, change_summary text
Index: (entry_id, version DESC)
- media: id, workspace_id, url, original_filename, mime, width, height,
size_bytes, alt_text, caption, uploaded_by, created_at
- webhooks: id, workspace_id, url, events text[] (entry.published,
entry.unpublished, entry.deleted, content_type.changed),
secret text (for HMAC), enabled boolean
- api_keys: id, workspace_id, name, key_hash, scope (read, read_write),
last_used_at, created_at, revoked_at
- webhook_deliveries: id, webhook_id, event_type, payload jsonb,
attempts int, status (pending, success, failed), response_status,
response_body, next_retry_at, delivered_at
Index for cron: (status, next_retry_at)
Reference resolution: when a field is a "reference" to another
content_type, store it as { "_ref": "<entry-id>" } in the data jsonb. The
delivery API will resolve these on read with configurable depth.
Validation function: Postgres function `validate_entry_data(content_type_id,
data jsonb)` returns an array of error objects. Called from a BEFORE
INSERT OR UPDATE trigger on content_entries (a plain CHECK constraint
can't run the cross-table lookup of the content type's schema).
RLS:
- Workspace members read/write their workspace data
- Public delivery API reads happen via a separate service-role pool
with workspace_id scoping enforced by the API layer (faster than RLS
for hot path)
Output one SQL file.
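The Postgres function stays the source of truth, but the same checks are worth mirroring application-side so the editor can show errors before a round trip. A minimal sketch in TypeScript rather than SQL, assuming the field-schema shape from the prompt above (type names and messages are illustrative):

```typescript
// Application-side mirror of validate_entry_data: checks required fields,
// basic types, and enum membership against the JSON-Schema-shaped field list.
type Field = {
  id: string;
  type: "string" | "number" | "boolean" | "array" | "media" | "reference";
  required?: boolean;
  items?: { type: string; enum?: string[] };
};

type ValidationError = { field: string; message: string };

function validateEntryData(
  fields: Field[],
  data: Record<string, unknown>
): ValidationError[] {
  const errors: ValidationError[] = [];
  for (const f of fields) {
    const value = data[f.id];
    if (value === undefined || value === null) {
      if (f.required) errors.push({ field: f.id, message: "required" });
      continue; // optional fields may simply be absent
    }
    switch (f.type) {
      case "string":
        if (typeof value !== "string")
          errors.push({ field: f.id, message: "expected string" });
        break;
      case "number":
        if (typeof value !== "number")
          errors.push({ field: f.id, message: "expected number" });
        break;
      case "array":
        if (!Array.isArray(value)) {
          errors.push({ field: f.id, message: "expected array" });
        } else if (f.items?.enum) {
          for (const v of value)
            if (!f.items.enum.includes(String(v)))
              errors.push({ field: f.id, message: `invalid option: ${v}` });
        }
        break;
      // media / reference values are shape-checked elsewhere in this sketch
    }
  }
  return errors;
}
```

For the menu_item example, an entry missing its price comes back with a single `{ field: "price", message: "required" }` error.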
One push-back you’ll need to do with Claude: it will often suggest a single data jsonb column on content_entries and skip the versions table. Insist on content_versions. Without it, you cannot offer rollback, audit, or scheduled publishing. With it, those features are nearly free. For database choice trade-offs at this scale, our Supabase vs Neon comparison covers the read-replica and connection-pool concerns directly relevant to a CMS.
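With the versions table in place, rollback is a pure, append-only operation: restoring version N means writing a new snapshot row, never mutating history. A sketch of that invariant, with shapes that loosely mirror the content_versions table above (the land-as-draft behavior is an assumption):

```typescript
// "Revert to version N" creates a NEW version whose data is copied from the
// old snapshot, so the history stays append-only and auditable.
type Version = {
  version: number;
  data: Record<string, unknown>;
  status: "draft" | "published";
  changeSummary?: string;
};

function revertToVersion(history: Version[], target: number): Version {
  const snapshot = history.find((v) => v.version === target);
  if (!snapshot) throw new Error(`version ${target} not found`);
  // Next version number is one past the highest seen so far.
  const nextVersion = history.reduce((max, v) => Math.max(max, v.version), 0) + 1;
  return {
    version: nextVersion,
    data: { ...snapshot.data }, // copy the snapshot, never share it
    status: "draft",            // assumption: a revert lands as a draft for review
    changeSummary: `Reverted to version ${target}`,
  };
}
```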
This is where most CMS startups die. They build a JSON-editor disguised as a UI. Customers want a form-builder for fields: drag a field type onto a content type, set the label, set required, save. The editor must produce valid JSON-Schema under the hood without ever exposing JSON to the customer.
Build a schema editor at /workspaces/[slug]/types/[type-slug] using
Next.js App Router and dnd-kit for drag-and-drop.
Layout:
- Left sidebar: a palette of field types. Each is a card you can drag.
  Field types: Text, Long Text, Rich Text, Number, Currency, Boolean,
  Date, Date+Time, Single Select, Multi Select, Image, File, Reference,
  Geo Point, Color, Slug, JSON.
- Center: the field list for this content type. Each row shows field
  label, type, required badge, and a settings cog. Reorder by drag.
  Click a row to expand the inline settings panel.
- Right sidebar: live preview of the auto-generated entry form, updated
  as the schema changes.
Field settings (depend on field type):
- All: label, field id (auto-slugged from label, manually editable
  before first save), help text, required, default value
- Text: min/max length, regex validation, widget (text, slug, color,
  email, url)
- Number: min/max, integer or decimal, currency code if currency widget
- Single Select / Multi Select: options list (label + value pairs),
  display style (dropdown, radio buttons, pill buttons)
- Image: max size, aspect ratio constraint (free, 1:1, 16:9, 4:3),
  alt text required (for accessibility)
- Reference: target content type, allow many, sortable
Save behavior:
- POST /api/types/:id/schema validates the JSON-Schema with Ajv
- If existing entries exist with this content type, run a dry-run
  validation to flag entries that would now be invalid. Show a migration
  confirmation step before committing.
- Lock field IDs after first save (label is editable, ID is not) unless
  the user explicitly confirms a "rename and migrate" action
Niche-specific defaults for the restaurant vertical:
- New "menu_item" type pre-seeded with: name, price (currency), photo,
  description, dietary tags, available_at_locations
- New "location" type pre-seeded with: name, address, phone, hours
  (a structured weekly hours object), photo
Use Tailwind. Optimistic updates on field reorder. Save is debounced
at 800ms after the last edit.
The migration dry-run is what saves your customer at month two. Without it, they remove a required field and silently invalidate 200 entries. With it, they get a “3 entries will become invalid — review” modal before saving. This is the kind of detail that separates a tool that gets adopted from one that gets abandoned.
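The dry-run itself is cheap: re-validate every existing entry against the proposed field list and report the ones that would break. A minimal sketch that only checks the newly-required-field case (a real implementation re-runs full validation; the shapes here are illustrative):

```typescript
// Dry-run a schema change: return the ids of entries that would become
// invalid if the proposed field list were committed. Only required-field
// violations are checked in this sketch.
type ProposedField = { id: string; required?: boolean };
type Entry = { id: string; data: Record<string, unknown> };

function dryRunMigration(fields: ProposedField[], entries: Entry[]): string[] {
  const requiredIds = fields.filter((f) => f.required).map((f) => f.id);
  return entries
    .filter((e) =>
      requiredIds.some((id) => e.data[id] === undefined || e.data[id] === null)
    )
    .map((e) => e.id);
}
```

The returned list is exactly what the "N entries will become invalid — review" modal needs to show.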
Every content entry edit page is a form generated from the content type’s schema. This is the second-most-important UX surface in a CMS. The form must feel native to the content type, not like a generic JSON editor.
Build a content entry editor at /workspaces/[slug]/entries/[entry-id]
that auto-generates the form from the content type's schema.
Architecture:
- Server component fetches: the entry, its content type's schema, and
the latest content_versions row
- Client component renders fields based on schema field type + ui.widget
- Each field is a small renderer component:
TextField, LongTextField, RichTextField (Tiptap), NumberField,
CurrencyField, BooleanField, DateField, DateTimeField, SingleSelect,
MultiSelect, ImageField (with crop + alt), FileField, ReferenceField
(search + select linked entries), GeoField (Mapbox or MapTiler),
SlugField (auto from another field with edit override)
Behavior:
- Form state managed by react-hook-form with Zod schema generated from
the JSON-Schema (use a runtime converter)
- Auto-save draft every 5 seconds to a `content_versions` row with
status='draft' (versioned drafts, not just one)
- "Publish" button:
- Validates against the content type schema strictly
- Creates a new content_versions row with status='published',
increments version
- Sets the content_entries.published_at = now() and
content_entries.data = the new version's data
- Triggers webhook fan-out (queue insert into webhook_deliveries)
- "Revert to version N" action shows version history side panel,
preview of any version, restore button
Reference field UX:
- Search a content type with debounced autocomplete
- "Open in new tab" link to edit the referenced entry
- Drag-to-reorder for many references
- Show inline preview (title + thumbnail) of the referenced entry
Image field UX:
- Drop or click to upload
- Upload directly to Supabase Storage with a signed URL (don't proxy
through the API)
- Show transformed preview (crop, resize) inline
- Alt text input is REQUIRED to enforce accessibility
Niche-specific UX (restaurant menu_item):
- Price field defaults to currency=workspace.default_currency
- "Add to all locations" quick action on available_at_locations field
Use Tailwind. Mobile-friendly editor (tablet at minimum) since
operations staff often edit menus on iPad.
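One detail of that prompt worth pinning down is how a field definition maps to a renderer component. A plain lookup table keyed by `ui.widget` with a fallback by `type` keeps the dispatch boring and testable. A sketch: the component names come from the prompt above, but the widget-first resolution order and the JsonField fallback are assumptions:

```typescript
// Widget → renderer dispatch: try the specific ui.widget first, then fall
// back to the field's base type, then to a raw JSON editor.
type FieldDef = { type: string; ui?: { widget?: string } };

const byWidget: Record<string, string> = {
  textarea: "LongTextField",
  currency: "CurrencyField",
  image: "ImageField",
  multi_select: "MultiSelect",
};

const byType: Record<string, string> = {
  string: "TextField",
  number: "NumberField",
  boolean: "BooleanField",
  reference: "ReferenceField",
  media: "FileField",
};

function resolveRenderer(field: FieldDef): string {
  const widget = field.ui?.widget;
  if (widget && byWidget[widget]) return byWidget[widget];
  return byType[field.type] ?? "JsonField"; // unknown types degrade to raw JSON
}
```

Keeping this as data rather than a switch statement means a niche vertical can register its own widgets (say, a weekly-hours editor) without touching the dispatch logic.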
Two non-obvious wins: alt text required on image fields (legal accessibility lift, plus better SEO for the customer), and version history visible by default (customers feel the safety of rollback before they ever need it, which is when they decide to renew).
This is the API your customer’s frontend hits in production. It serves more traffic than your dashboard by orders of magnitude, and it has to be fast (under 100ms p95) and cheap to serve (because you’re bundling it into a $19–$49 plan).
Build the read-only delivery API at /api/v1/[workspace-id]/...
Endpoints:
- GET /content/:type-slug
Lists published entries of a content type. Query params:
?limit=, ?offset=, ?locale=, ?order=, ?fields= (sparse fieldset),
?include=referenceField (resolve references one level deep)
- GET /content/:type-slug/:slug-or-id
Fetches one entry. Same ?include= param.
- GET /content/:type-slug?filter[fieldName]=value
Simple equality filter on top-level fields.
- GET /media/:id
Returns the media record + signed URL to the asset.
- GET /search?q=
Full-text search across published entries (Postgres tsvector index
on data jsonb's text fields).
Authentication:
- Required: X-Api-Key header. Look up api_keys table by hash. Reject
if revoked or scope insufficient.
- Workspace_id from the api_key, NOT the URL (defense in depth).
Caching strategy:
- Set Cache-Control: public, s-maxage=60, stale-while-revalidate=300
- Set Vercel CDN cache key including api_key hash (so revoked keys
don't keep serving)
- ETag based on max(updated_at) of all entries in the response
- Return 304 if If-None-Match matches
Reference resolution (?include=):
- Default depth 0 (return references as { _ref: "id" })
- include=fieldName resolves that field's references to full entry
objects in a single batched query
- Maximum depth 2; refuse depth 3+ with a 400 error explaining why
Response shape:
{
  "data": [...] | {...},
  "meta": {
    "total": number,
    "limit": number,
    "offset": number,
    "next_cursor": string | null
  },
  "included": { "type-slug": { "id": entry, ... } }
}
Rate limiting:
- Per api_key: 100 requests per minute on starter plan, 1000 on pro
- Use Upstash Redis with sliding window; respond 429 with Retry-After
header
Important: this endpoint should hit a read-replica or a dedicated
read-pool, NEVER the same connection pool as the dashboard. Document
this clearly for whoever is operating the deployment.
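The `?include=` resolution in that prompt hinges on one discipline: gather every `_ref` id across the page first, then fetch them in a single batched query instead of one query per entry. A sketch of the gathering step, with illustrative shapes:

```typescript
// Collect every _ref id for one field across a page of entries, deduplicated,
// so the resolver can run a single WHERE id = ANY($1) query instead of N+1.
type Ref = { _ref: string };
type EntryData = Record<string, unknown>;

function collectRefIds(entries: EntryData[], fieldId: string): string[] {
  const ids = new Set<string>();
  for (const entry of entries) {
    const value = entry[fieldId];
    // Reference fields may be single ({ _ref }) or many ([{ _ref }, ...]).
    const refs = Array.isArray(value) ? value : value ? [value] : [];
    for (const r of refs) {
      if (r && typeof r === "object" && "_ref" in r) ids.add((r as Ref)._ref);
    }
  }
  return [...ids];
}
```

The resolved entries then land under the `included` key of the response shape above, keyed by id, so repeated references cost nothing extra.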
The cache headers and ETag handling are what let you serve the customer’s production traffic from the Vercel CDN edge instead of your origin. Get this right and a $19/mo customer with 1M monthly pageviews costs you almost nothing to serve. For deployment-specific cache behavior, our Vercel vs Railway comparison covers the edge-cache differences in detail.
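The ETag handshake is small enough to sketch end to end. Per the prompt above, the tag derives from the newest `updated_at` in the response; everything else here (hash choice, truncation) is an implementation assumption:

```typescript
import { createHash } from "node:crypto";

// Derive an ETag from the newest updated_at in the response, then answer
// 304 Not Modified when the client's If-None-Match already matches it.
function computeEtag(updatedAts: string[]): string {
  // ISO-8601 timestamps sort lexicographically, so the max is the newest.
  const newest = updatedAts.slice().sort().at(-1) ?? "empty";
  const hash = createHash("sha256").update(newest).digest("hex").slice(0, 16);
  return `"${hash}"`; // ETags are quoted strings per HTTP semantics
}

function shouldReturn304(ifNoneMatch: string | null, etag: string): boolean {
  return ifNoneMatch !== null && ifNoneMatch === etag;
}
```

Because the tag depends only on the newest timestamp, any write to any entry in the response invalidates it, and unchanged responses cost a header comparison instead of a body.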
When an editor hits Publish, three things should happen within seconds: the delivery API serves the new content, the customer’s website cache invalidates, and any third-party integration (algolia search index, social preview cache, etc.) updates. Webhooks are how you make that happen reliably.
Build the webhook delivery system.
On a content_versions row insert with status='published':
1. Insert one row into webhook_deliveries for EACH enabled webhook on
the workspace whose `events` array includes 'entry.published'
2. The deliveries row has status='pending', next_retry_at=now(),
   payload={
     event: 'entry.published',
     timestamp: ISO,
     workspace_id, content_type, entry_id, entry_slug, version,
     data: <the published version's data snapshot>
   }
3. A worker process (Vercel Cron + Edge Function, or Railway worker)
polls webhook_deliveries WHERE status='pending' AND next_retry_at <= now()
ORDER BY next_retry_at ASC LIMIT 100, with FOR UPDATE SKIP LOCKED.
For each pending delivery:
4. POST the payload to webhook.url with these headers:
- Content-Type: application/json
- X-CMS-Event: entry.published
- X-CMS-Signature: sha256=<hex HMAC-SHA256 of the raw payload, keyed
  by the webhook's secret>
- X-CMS-Delivery-Id: <the webhook_deliveries row id>
5. Timeout: 5 seconds. Read response status.
6. On 2xx: status='success', delivered_at=now(), response_status set
7. On non-2xx or timeout: increment attempts. If attempts >= 6,
status='failed'. Otherwise schedule next_retry_at with exponential
backoff: 30s, 2m, 10m, 1h, 6h, 24h.
Built-in webhook recipes (one-click setup in dashboard):
- Vercel: POSTs to a Vercel deploy hook URL, no auth needed
- Next.js Revalidation: POSTs to /api/revalidate with the affected paths
- Algolia: POSTs to a thin proxy we run that translates to algolia
index updates (we host this, customer just enables the feature)
- Slack: POSTs to a Slack webhook with editor name + entry title +
Open in CMS button (Block Kit)
UI:
- Webhooks list page with status, last delivery, last error
- Per-webhook delivery log: last 100 deliveries, request/response,
one-click re-deliver button
- Test fire: send a fake entry.published payload to verify the
endpoint before going live
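The retry schedule from step 7 above reduces to a lookup table: attempt count in, next delay out, null when the table is exhausted. A sketch (using the attempt count as a zero-based index into the schedule is my assumption):

```typescript
// Exponential-backoff schedule from the prompt: 30s, 2m, 10m, 1h, 6h, 24h.
// Returns the next retry time, or null once the table is exhausted, at which
// point the delivery should be marked status='failed'.
const BACKOFF_MS = [
  30_000,      // 30s
  120_000,     // 2m
  600_000,     // 10m
  3_600_000,   // 1h
  21_600_000,  // 6h
  86_400_000,  // 24h
];

function nextRetryAt(attempts: number, now: Date): Date | null {
  if (attempts >= BACKOFF_MS.length) return null; // give up after 6 attempts
  return new Date(now.getTime() + BACKOFF_MS[attempts]);
}
```

The worker just writes the returned timestamp back into `next_retry_at`; the polling query's `next_retry_at <= now()` condition does the rest.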
The HMAC signature is what makes the receiver trust the payload — without it, anyone can forge an event. The exponential backoff and retry log are what save you the day a customer’s endpoint goes down for an hour: their content stays consistent, the deliveries replay, and no data is lost.
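On the receiving end, verification is a constant-time comparison of the recomputed digest against the header. A sketch of the receiver-side check, assuming the `sha256=<hex>` header format from the prompt above:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify X-CMS-Signature on the receiver: recompute the HMAC over the RAW
// request body (before any JSON parsing) and compare in constant time.
function verifySignature(rawBody: string, header: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const received = header.replace(/^sha256=/, "");
  const recBuf = Buffer.from(received, "hex");
  const expBuf = Buffer.from(expected, "hex");
  if (recBuf.length !== expBuf.length) return false; // timingSafeEqual requires equal lengths
  return timingSafeEqual(recBuf, expBuf);
}
```

Documenting this snippet for customers pays off directly: a webhook feature nobody can verify is a webhook feature nobody trusts in production.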
Per-workspace pricing with content-entry and API-request limits is the dominant pattern in this category, and it works well.
Avoid free-forever plans with 25 entries — they attract only the freelancers building demos who never convert. A 14-day full-feature trial converts dramatically better. For the dev-tools and AI-coding-assistant choices that determine your build velocity, our vibe coding tools roundup covers what to actually use day-to-day.
You will not out-feature Contentful. Contentful has 600+ engineers and 12 years of polish. You will not out-price Strapi either — Strapi is genuinely free and self-hostable. You will not out-DX Sanity, which basically invented the modern schema-editor UX. Solo founders win in three places, and only three: a vertical schema baked in, opinionated content modeling for one niche, and one deep integration the generalists won’t prioritize.
Each of these maps to a real micro SaaS idea. Pick the niche — restaurants, real estate, course creators, podcasters, photographers — talk to twenty potential customers, and ship the schema-plus-delivery-API loop end-to-end before you build the team-management page.
A headless CMS SaaS that nails the schema editor, generates clean entry forms, serves a fast cached delivery API, and ships with vertical-specific schemas pre-seeded already beats most generic CMS tools for the niche customer. Pick the vertical, ship the entry-to-delivery loop first.