Subscribers, posts, sending, opens, deliverability and the unglamorous DKIM/SPF work that keeps you out of the spam folder — mapped step by step.
Research-based methodology. This guide draws from the Resend and SendGrid docs, public Beehiiv and ConvertKit architecture writeups, the IETF DMARC spec (RFC 7489), and our own builds with Claude. Where we cite specific behavior we link the source.
Let’s start with what nobody else writing a build guide will say: this is one of the hardest SaaS categories to compete in. Beehiiv has eaten the creator newsletter market. Substack owns paid newsletters with a built-in audience graph. ConvertKit (now Kit) owns the creator-marketing tier. Mailchimp still owns the SMB long tail. None of those companies are going away in the next 36 months, and all of them are well-capitalized.
If your plan is “build a Beehiiv competitor,” close this tab and read our Beehiiv vs Substack comparison instead, then go publish on one of them. You’ll get to revenue faster.
If your plan is to build a specific newsletter SaaS for a vertical or workflow that those tools can’t serve well — B2B internal newsletters with deep CRM integration, paid newsletters for niche industry verticals, transactional digests where the content is generated rather than written, internal-comms newsletters with read-receipt enforcement — then keep reading. The technical challenges are real, but the market positioning is what kills most attempts, not the code.
Two viable wedges in 2026: a vertical-specific publication tool like the examples above (B2B internal comms, niche paid industry publications, generated transactional digests), or a workflow layer on top of an existing ESP's API, which is the path the end of this guide recommends.
If you still want to build the underlying ESP yourself, here’s the path. Just know that Sections 1–6 are the easy parts and the deliverability section is where most builds die.
You need workspaces (the publication), subscribers (with status and source), posts (the issue), sending_jobs (a queued send), opens and clicks (events), plus suppressions (bounces, complaints, unsubscribes — legally critical). The model is small but the indexes determine whether you can serve a list of 500,000 subscribers without a 30-second segmentation query.
I'm building a newsletter SaaS for niche B2B publishers (lists of
1k–500k subscribers). I need a Postgres schema designed for these
hot queries:
1. Look up a subscriber by email within a workspace (login, opt-in
   confirm, click-tracking redirect)
2. List subscribers in a workspace filtered by tag(s), status, and
   created_at range (segmentation builder)
3. Insert open and click events at high write volume (10k/min during
   a send) without slowing down reads
4. List posts for a workspace with their aggregated open and click
   rates (dashboard)
Tables I need: workspaces, subscribers, subscriber_tags (many-to-many),
posts, sending_jobs, sent_emails (one row per recipient per post),
opens, clicks, suppressions (bounces, complaints, unsubs).
For each table give me:
- Columns + types + constraints
- Indexes (including partial indexes for status='active' subscribers,
  GIN on tags, BRIN on opens.created_at if appropriate)
- Why each index is shaped that way for the workload above
- Foreign keys with ON DELETE CASCADE where it's safe
Then write Supabase RLS so workspace members only see their own data,
and the public click-redirect endpoint can write to opens/clicks via
an RPC without authentication (using a signed token in the URL).
Output as one runnable SQL file.
The non-obvious detail Claude will sometimes miss: sent_emails needs a unique constraint on (post_id, subscriber_id) so retries on the send worker can’t double-send. Push back if it isn’t in the first draft.
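A minimal sketch of the fan-out insert that constraint enables, written as a parameterized-SQL string of the kind the prompt asks for. Table and column names follow the schema described above and are illustrative:

```typescript
// Assumes sent_emails has UNIQUE (post_id, subscriber_id). With that
// constraint in place, re-running the dispatch step after a crash is a
// no-op for recipients that already have a row: no double-sends.
const FANOUT_SQL = `
  INSERT INTO sent_emails (post_id, subscriber_id, status)
  SELECT $1, s.id, 'queued'
  FROM subscribers s
  WHERE s.workspace_id = $2 AND s.status = 'active'
  ON CONFLICT (post_id, subscriber_id) DO NOTHING`;
```

Run it with the post id and workspace id as `$1`/`$2`; the `ON CONFLICT ... DO NOTHING` clause is what makes the dispatch step safely re-runnable.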
Single opt-in (“type your email, you’re subscribed”) is dead in 2026. ESPs like Resend and Postmark enforce double opt-in for new domains, Gmail’s 2024 sender requirements (one-click unsubscribe, low complaint rate, authenticated mail) effectively force it, and the GDPR/CASL legal posture in most of your customers’ markets requires affirmative consent. Build double opt-in from day one or rebuild it from day 90.
I'm using Supabase (Postgres + Edge Functions) and Resend for
transactional email.
Build a double opt-in flow:
1. POST /api/subscribe accepts email + workspace_slug. It must:
- Validate the email format and check it's not in suppressions
- Insert (or upsert) a subscribers row with status='pending'
- Generate a single-use opt-in token (signed JWT, 24h expiry,
stored hashed in opt_in_tokens table)
- Send a Resend transactional with subject the workspace owner
configured (default: "Confirm your subscription to [name]")
- Return 200 with a generic "check your inbox" message even if
the email was already subscribed (anti-enumeration)
2. GET /api/subscribe/confirm?token=... must:
- Verify the token signature, expiry, and that it hasn't been used
- Mark the subscriber status='active', record confirmed_at and
the IP for compliance audit
- Mark the token used
- Redirect to the workspace's "thanks" page
3. POST /api/subscribe/unsubscribe must work both via a one-click
List-Unsubscribe header URL (POST, no body needed, RFC 8058) AND
via a clickable unsubscribe link in the email footer.
For both confirm and unsubscribe, write the audit row (timestamp,
IP, user agent) into a subscriber_events table.
Use Resend's React Email for the confirmation template. Output as
TypeScript Edge Functions plus the SQL for the supporting tables.
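The prompt stores opt-in tokens hashed rather than raw. A minimal sketch of that hash-at-rest pattern using node:crypto; the function names are illustrative, not from any SDK:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Generate a single-use opt-in token. Only the SHA-256 hash goes into
// opt_in_tokens, so a leaked database can't be replayed as valid
// confirmation links; the raw token only ever appears in the email URL.
function newOptInToken(): { token: string; tokenHash: string } {
  const token = randomBytes(32).toString("base64url");
  return { token, tokenHash: hashToken(token) };
}

// Deterministic hash: the confirm endpoint hashes the incoming token
// and looks that hash up, then marks the row used.
function hashToken(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}
```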
The List-Unsubscribe header (one-click unsubscribe) is no longer optional. Gmail and Yahoo both require it for senders over 5,000 messages/day. Implement it on day one.
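What the header pair looks like in practice under RFC 8058; the domain and token are placeholders:

```
List-Unsubscribe: <https://news.example.com/unsubscribe/one-click?token=TOKEN>, <mailto:unsubscribe@example.com>
List-Unsubscribe-Post: List-Unsubscribe=One-Click
```

The HTTPS endpoint must process the POST without any confirmation page, because mailbox providers fire it programmatically when the user clicks "unsubscribe" in their UI.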
The post editor is where every newsletter SaaS spends six months and ships a worse Beehiiv. Don't. Build on Tiptap or Lexical with a small, opinionated set of blocks (heading, paragraph, image, link, button, divider, embed). Render twice: once for the web archive page (modern HTML, fine), once for the email body (table-based layout, inline styles, MSO conditionals for Outlook).
The email-safe render is the part to get right. Outlook 2016+ on Windows still uses Word as the rendering engine. Apple Mail strips half your CSS. Gmail clips messages over 102KB and shows “[Message clipped]” with a “view entire message” link — brutal for engagement. Aim for under 80KB of HTML per send, no JavaScript, no <style> in <head> if you can avoid it (inline everything), and test with Litmus or Email on Acid before you trust your output.
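A cheap guard for the clipping problem: measure the rendered HTML's byte size before queueing the send. The thresholds come from the numbers above; the function name is illustrative:

```typescript
// Gmail clips messages at roughly 102KB of HTML; the guide's budget is
// 80KB. Check the rendered email before it ever reaches the send queue.
const GMAIL_CLIP_BYTES = 102 * 1024;
const BUDGET_BYTES = 80 * 1024;

function checkEmailSize(html: string): {
  bytes: number;
  overBudget: boolean; // warn the author in the editor
  willClip: boolean; // block the send, or at least demand confirmation
} {
  const bytes = Buffer.byteLength(html, "utf8");
  return {
    bytes,
    overBudget: bytes > BUDGET_BYTES,
    willClip: bytes >= GMAIL_CLIP_BYTES,
  };
}
```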
This is the load-bearing piece of the entire product. A send job is “take this post, find the matching subscribers, generate one personalized email per subscriber, hand it to the ESP at a rate that doesn’t get throttled.” If you do this in a Vercel function, you will hit timeout limits and your sends will be partial. Use a long-running worker on Railway, Fly, or a dedicated queue (Inngest, QStash, or a Postgres-based queue with PgBoss).
I'm building a send worker that takes a sending_job row and emails a
post to a list of 50,000 subscribers via Resend. Constraints:
- Resend's API caps at ~10 requests/sec per account by default
- I need partial-progress resilience: if the worker crashes at email
  35,000 of 50,000, restarting must not re-send to the first 35k
- Each subscriber gets a personalized email (merge tags: first_name,
  unsubscribe_url with their token, tracking pixel URL, click-wrap
  URLs for every link in the post)
- Bounces and complaints come back via Resend webhooks asynchronously
  and need to update suppressions
Architecture:
- A `dispatch_send_job` function fans the job into sent_emails rows
  with status='queued' (one row per recipient)
- A worker process polls for queued sent_emails, claims a batch of N
  using FOR UPDATE SKIP LOCKED, sends them via Resend at a controlled
  rate, marks them 'sent' or 'failed' with the Resend message_id
- A retry policy: 'failed' rows with retryable error retry up to 3
  times with exponential backoff; permanent failures (invalid_email,
  hard_bounce) write to suppressions
- A separate webhook handler /api/webhooks/resend updates sent_emails
  on delivered/bounced/complained events
Write the worker in TypeScript (Node) with pg and the Resend SDK.
Include the SQL for the FOR UPDATE SKIP LOCKED claim query and the
retry policy.
The FOR UPDATE SKIP LOCKED pattern is the unlock here. It lets multiple worker processes safely claim disjoint batches of work from one queue table without coordinating, which means you can scale horizontally just by running more workers when your largest customer schedules a 500k send.
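A sketch of the claim query and retry backoff from the prompt above, assuming the sent_emails schema described earlier; the `claimed_at` and `sending_job_id` column names are illustrative:

```typescript
// Illustrative claim query: each worker atomically grabs a disjoint
// batch. SKIP LOCKED means a concurrent worker silently skips rows
// another worker has already locked, instead of blocking on them.
const CLAIM_SQL = `
  UPDATE sent_emails SET status = 'sending', claimed_at = now()
  WHERE id IN (
    SELECT id FROM sent_emails
    WHERE sending_job_id = $1 AND status = 'queued'
    ORDER BY id
    LIMIT $2
    FOR UPDATE SKIP LOCKED
  )
  RETURNING id, subscriber_id`;

// Plain exponential backoff for retryable failures: 1s, 2s, 4s, ...
// capped at 60s, matching the prompt's "up to 3 retries" policy.
function nextRetryDelayMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```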
Every email gets a tracking pixel (a 1x1 transparent GIF served from your domain) and every link is rewritten to https://yourdomain.com/c/<signed-token> which redirects to the original URL. You record the open or click, then 302 to the destination. Open tracking is unreliable (Apple Mail Privacy Protection has been pre-fetching pixels since 2021, inflating opens by ~40%); click tracking is the metric that actually means something.
Build two Next.js Route Handlers for tracking.
1. GET /t/o/[token] (open pixel)
- Decode the token (signed, contains sent_email_id)
- Insert an opens row (sent_email_id, ip, user_agent, ts) without
blocking the response
- Return a 1x1 transparent GIF with Cache-Control: no-store
- Must respond in under 50ms (use waitUntil for the DB write)
2. GET /t/c/[token] (click redirect)
- Decode the token (signed, contains sent_email_id and the
original target URL)
- Insert a clicks row (sent_email_id, target_url, ip, ts)
- 302 redirect to the target URL
- Block obviously-bot clicks (no UA, datacenter IP ranges) by
marking is_bot=true rather than rejecting (we still log them)
Token format: HMAC-SHA256 signed JSON, base64url, with a version byte
so we can rotate the secret. No DB lookup required to verify.
Add Apple Mail Privacy Protection detection: if the open's IP is in
Apple's known proxy ranges OR the user_agent matches Apple's MPP
fingerprint, mark is_mpp=true. Filter these out of "real opens"
metrics in the dashboard but keep them for audit.
Output the route handlers + the token signing util + a test for the
MPP detection.
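A minimal sketch of the versioned HMAC token the prompt describes, using node:crypto; the function names and payload shape are illustrative:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Token layout: <version>.<base64url JSON payload>.<base64url HMAC>.
// The version byte is what lets you rotate the secret later without
// invalidating links already sitting in delivered emails.
const VERSION = "1";

function signToken(payload: object, secret: string): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const mac = createHmac("sha256", secret).update(VERSION + "." + body).digest("base64url");
  return `${VERSION}.${body}.${mac}`;
}

// Verification is pure computation: no DB lookup, so the redirect and
// pixel endpoints stay fast.
function verifyToken(token: string, secret: string): object | null {
  const [version, body, mac] = token.split(".");
  if (version !== VERSION || !body || !mac) return null;
  const expected = createHmac("sha256", secret).update(version + "." + body).digest("base64url");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  // timingSafeEqual guards against timing attacks on the MAC comparison
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
}
```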
The segment builder is the feature that turns your tool from “a sender” into “a marketing platform.” Customers want to send to “subscribers tagged ‘customer’ who haven’t opened in 30 days and are in the EU.” You need a query DSL that compiles to safe parameterized SQL.
Design a JSON-based segment DSL for a newsletter SaaS that compiles
to a parameterized Postgres query. The DSL must support:
- Tag membership: { "tag_in": ["customer", "trial"] }
- Tag exclusion: { "tag_not_in": ["unsubscribed_from_marketing"] }
- Subscribed-at range: { "subscribed_at": { "gte": "2025-01-01" } }
- Last-opened recency: { "last_opened_at": { "gte": "now-30d" } }
- Click history: { "clicked_post": { "post_id": "...", "within": "90d" } }
- Custom field equals: { "custom.country": "DE" }
- Boolean composition: { "and": [...] }, { "or": [...] }, { "not": ... }
For each leaf, output the SQL fragment + the parameter array (no
string interpolation, ever — everything goes through $1, $2,
parameters).
Also output:
1. A `compile_segment(dsl_json)` TypeScript function
2. A `count_segment(workspace_id, dsl_json)` SQL helper that returns
the matching subscriber count (the dashboard shows this in real
time as the user builds the segment)
3. A test suite covering each operator and one nested boolean case
Show me how you'd index the subscribers + opens + clicks tables to
keep the count query under 500ms on a 500k-subscriber workspace.
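To make the compile-to-parameterized-SQL idea concrete, here is a minimal sketch handling just `tag_in` and `and`, with placeholder renumbering. It assumes the subscriber_tags table from the schema section and is illustrative, not the full compiler the prompt asks for:

```typescript
// A compiled fragment: SQL text with $n placeholders plus its params.
type Fragment = { sql: string; params: unknown[] };

// offset = number of parameters already emitted by earlier fragments,
// so nested nodes continue the $n numbering instead of restarting at $1.
function compileLeaf(node: Record<string, unknown>, offset: number): Fragment {
  if ("tag_in" in node) {
    // Tag membership via EXISTS against the many-to-many table;
    // the tag array is passed as a single parameter, never interpolated.
    return {
      sql: `EXISTS (SELECT 1 FROM subscriber_tags st WHERE st.subscriber_id = s.id AND st.tag = ANY($${offset + 1}))`,
      params: [node.tag_in],
    };
  }
  if ("and" in node) {
    const parts: string[] = [];
    const params: unknown[] = [];
    for (const child of node.and as Record<string, unknown>[]) {
      const frag = compileLeaf(child, offset + params.length);
      parts.push(frag.sql);
      params.push(...frag.params);
    }
    return { sql: `(${parts.join(" AND ")})`, params };
  }
  throw new Error("unsupported operator");
}
```

Each remaining operator (`tag_not_in`, date ranges, click history) slots in as another branch emitting its own fragment; the boolean combinators only concatenate fragments and merge parameter arrays.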
Every newsletter SaaS founder finds out the hard way that the product is not the editor or the dashboard — the product is “your email shows up in the inbox.” If 30% of your sends go to spam, your customers churn no matter how good the UI is.
Three DNS records and one operational habit:
- SPF: a TXT record carrying your ESP's include: mechanism (e.g., include:_spf.resend.com).
- DKIM: the signing-key records your ESP generates for the sending domain.
- DMARC: start at p=none with a reporting address (rua=) so you can see what's breaking, then move to p=quarantine after two weeks of clean reports.
The habit is actually reading those DMARC reports. If you're self-hosting the SMTP, this is your full-time job. If you're using Resend, SendGrid, or Postmark, they handle the IP-warming side and you only need to handle per-customer domain auth (each customer authenticates their own sending domain via DNS records you give them).
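Illustrative versions of those records; example.com is a placeholder, and the exact DKIM host and value come from your ESP's dashboard:

```
example.com.                        TXT  "v=spf1 include:_spf.resend.com ~all"
<selector>._domainkey.example.com.  (the DKIM record your ESP generates)
_dmarc.example.com.                 TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```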
Three viable models for a newsletter SaaS: flat monthly tiers by subscriber count (the Mailchimp/Kit model), usage-based pricing per email sent, or taking a percentage of paid-subscription revenue (the Substack model).
Whichever you pick, our Lemon Squeezy vs Stripe comparison covers the merchant-of-record vs direct-Stripe tradeoff that matters most for a publisher-facing tool.
Here’s the honest read after working through the build above: the deliverability layer alone is a 6–12 month project for a solo founder. The send worker is fine. The editor is fine. Open tracking is fine. But every founder who ships an ESP from scratch ends up running a deliverability operations function (managing IP pools, complaint rates per customer, DMARC reports, ESP relationships) that has nothing to do with their actual product idea.
If your product idea is “a newsletter for X niche with Y workflow,” build it as a thin wrapper over Beehiiv’s API or ConvertKit’s API. They handle deliverability, you handle the workflow your customers actually pay for. You ship in 4 weeks instead of 4 months. You’ll lose some control over the editor and tracking, but you gain something more valuable: every email you send shows up in the inbox.
The framework: solve a workflow problem (curation, automation, gating, integration) on top of an existing ESP. Charge for the workflow. Let Beehiiv eat the deliverability cost. This is the same playbook our general SaaS build guide recommends: do less, charge for the part that matters.
A from-scratch newsletter SaaS is a deliverability operations company in disguise. The win for solo founders is a niche workflow on top of Beehiiv’s API: AI-curated industry digests, B2B internal newsletters with CRM gating, paid niche publications. Charge for the workflow, let your ESP charge for the bytes.