OAuth into a third-party API, snapshot the data, render charts that don’t suck, and alert when things move — the full architecture for a B2B dashboard product.
Research-based methodology. This guide draws on the Stripe, Shopify, GA4, and PostHog API docs, Vercel and Railway scheduling docs, and our own Claude-assisted builds. First-person testing is called out where it exists. How we research.
Analytics dashboards are the inverse of habit trackers. The technical work is non-trivial — OAuth, rate limits, scheduled syncs, multi-tenancy, charts — but the willingness to pay is genuinely there. A B2B operator who would never pay $5/mo for a habit app will happily pay $99/mo for a dashboard that pulls their Shopify and Stripe data into a single view their team can see without logging into three separate consoles.
The 2026 opening is that the existing players (Mixpanel, Amplitude, Looker, Geckoboard) are too expensive and too generic for the long tail of vertical-specific use cases. A focused dashboard for “Shopify + Klaviyo for DTC brands under $5M ARR” is a real product. A focused dashboard for “Stripe + Webflow for solo creators” is a real product. The general “visualize anything” market is closed; the specific “here is the dashboard for [vertical]” market is wide open.
This guide assumes you’re building a focused, opinionated dashboard for a specific buyer. If you’ve read the broader build-a-SaaS-with-Claude playbook, this is the dashboard-shaped extension of it.
Five problems will eat your time if you don’t plan for them:
Every third-party API has a rate budget. Stripe’s is generous (100 req/sec); Shopify’s GraphQL has a leaky-bucket cost model; GA4’s Data API has daily quotas; PostHog has a per-project budget. Multi-tenant means: you have one OAuth token per customer, and each token has its own rate budget. You don’t pool them, but you do need to respect each one. The PostHog review covers what its API can and can’t do at the limits.
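A minimal sketch of what "respect each token's budget" looks like in practice: track recent request timestamps per integration and pause before exceeding a configured ceiling. The numbers below are illustrative placeholders, not the providers' published limits, and the in-memory map only works within a single worker process.

```typescript
// Per-integration rate limiter sketch. Limits are placeholders -- read each
// provider's current docs before hardcoding anything.
const BUDGETS: Record<string, { maxRequests: number; perMs: number }> = {
  stripe: { maxRequests: 80, perMs: 1_000 },   // stay under the published ceiling
  shopify: { maxRequests: 2, perMs: 1_000 },   // leaky bucket; headers are the real source of truth
  ga4: { maxRequests: 10, perMs: 60_000 },     // daily quotas also apply
};

// In-memory, per-process: fine for a single long-running worker,
// not shared across serverless invocations.
const windows = new Map<string, number[]>(); // integrationId -> request timestamps

export async function withRateBudget<T>(
  integrationId: string,
  provider: string,
  call: () => Promise<T>
): Promise<T> {
  const { maxRequests, perMs } = BUDGETS[provider] ?? { maxRequests: 1, perMs: 1_000 };
  const now = Date.now();
  const recent = (windows.get(integrationId) ?? []).filter((t) => now - t < perMs);
  if (recent.length >= maxRequests) {
    // Sleep until the oldest request falls out of the window.
    await new Promise((r) => setTimeout(r, perMs - (now - recent[0])));
  }
  recent.push(Date.now());
  windows.set(integrationId, recent);
  return call();
}
```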
Customers think “real-time.” You’ll deliver “updated every 15 minutes” or even “updated hourly,” because every other approach is too expensive in API calls. The product needs to make freshness visible (“Last synced 8 minutes ago. Refresh.”) so customers calibrate their expectations instead of filing support tickets.
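Making freshness visible can be as small as a relative-timestamp helper next to a refresh button. A sketch, assuming the integration's last_synced_at value is available client-side:

```typescript
// Turns last_synced_at into the "Last synced 8 minutes ago" copy from above.
export function syncFreshnessLabel(lastSyncedAt: string | null): string {
  if (!lastSyncedAt) return "Not synced yet";
  const minutes = Math.floor((Date.now() - new Date(lastSyncedAt).getTime()) / 60_000);
  if (minutes < 1) return "Last synced just now";
  if (minutes < 60) return `Last synced ${minutes} minute${minutes === 1 ? "" : "s"} ago`;
  const hours = Math.floor(minutes / 60);
  return `Last synced ${hours} hour${hours === 1 ? "" : "s"} ago`;
}
```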
One bad RLS policy and customer A sees customer B’s revenue numbers. This is a company-killing event. Postgres RLS is the right primitive; everything else is wrong.
Recharts is the workhorse for React. Tremor is opinionated and faster to ship with but loses some flexibility. Visx is powerful but has a steeper curve. Chart.js looks dated by default. Avoid d3 directly unless your differentiator is a custom visualization.
If you charge per-workspace and each workspace connects to N integrations that each have their own pull cost, your variable costs scale faster than your revenue at the high end unless you cap data volumes per tier.
The non-negotiable data shape: organizations as the tenant, users belong to organizations via memberships, integrations hold encrypted OAuth credentials per org, metric_definitions describe what to compute, metric_snapshots store denormalized timeseries, and alerts hold threshold rules.
I'm building a multi-tenant B2B analytics dashboard. Each customer is an
organization. Users belong to organizations and can be admin or member.
Each organization connects N integrations (Stripe, Shopify, GA4, etc.)
which each return data we snapshot into a timeseries table.
Design the Postgres schema:
- organizations (id, name, slug, plan, created_at, trial_ends_at)
- memberships (org_id, user_id, role: 'admin'|'member', created_at,
  primary key (org_id, user_id))
- integrations (id, org_id, provider: 'stripe'|'shopify'|'ga4'|'posthog',
  account_label, encrypted_access_token, encrypted_refresh_token,
  expires_at, last_synced_at, sync_status, sync_error, created_at)
- metric_definitions (id, org_id, integration_id, slug, name,
  aggregation: 'sum'|'count'|'avg', source_field, filter_jsonb)
- metric_snapshots (metric_def_id, captured_on date, value numeric,
  primary key (metric_def_id, captured_on))
- alerts (id, org_id, metric_def_id, condition: 'above'|'below'|
  'pct_change_above'|'pct_change_below', threshold numeric, notify_email,
  last_triggered_at)
Encryption note: tokens stored using pgcrypto pgp_sym_encrypt with the
key from a vault env var, never plaintext.
Generate the complete SQL and matching Supabase RLS policies enforcing:
- A user can read/write rows belonging to organizations they're a member
  of (use a SECURITY DEFINER helper function `is_org_member(org_id)`)
- Only role='admin' can write to integrations and alerts
- metric_snapshots is server-side write only (anon/authenticated cannot
  INSERT; the sync worker uses service role)
- A user cannot SELECT raw encrypted_access_token even from their own
  org — only the server-side worker can decrypt
Output: full SQL, then a paragraph explaining the threat model the
policies defend against.
The most common error here is letting authenticated users read the encrypted token columns. Even encrypted, you don’t want a leaked dump to give an attacker something to work on offline. Restrict those columns to service-role queries only.
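In application code this split usually looks like two Supabase clients: the browser client only ever selects the safe columns, and the sync worker's service-role client is the only thing that can reach the decrypted token. A sketch, assuming a hypothetical decrypt_integration_token Postgres function that wraps pgp_sym_decrypt server-side:

```typescript
import { createClient } from "@supabase/supabase-js";

// Browser/client-side: safe columns only. RLS limits rows to the user's orgs.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function listIntegrations(orgId: string) {
  return supabase
    .from("integrations")
    .select("id, provider, account_label, last_synced_at, sync_status")
    .eq("org_id", orgId);
}

// Sync worker only: service role bypasses RLS and calls the (hypothetical)
// decrypt_integration_token() SQL function that wraps pgp_sym_decrypt.
const admin = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);

export async function getAccessToken(integrationId: string): Promise<string> {
  const { data, error } = await admin.rpc("decrypt_integration_token", {
    integration_id: integrationId,
  });
  if (error) throw error;
  return data as string;
}
```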
OAuth is the single most error-prone part of a dashboard build. Each provider does it slightly differently. Stripe Connect uses a different flow than the standard Stripe API. Shopify wants you to declare scopes in your app config. GA4 OAuth has a separate “Data API” scope you have to ask for explicitly. Get the redirect URI registration wrong and you’ll burn an afternoon.
Build the OAuth integration flow for a multi-tenant analytics
dashboard. Start with two providers: Stripe (read-only) and Shopify.
For each provider:
1. A "Connect [Provider]" button on /settings/integrations that:
- Generates a CSRF state token bound to (org_id, provider, user_id)
and stores it briefly server-side (signed JWT in a cookie + DB row
with 10-min expiry)
- Redirects to the provider's authorize URL with the right scopes:
- Stripe: scope=read_only (not read_write), or use Stripe's
  restricted-key flow for read-only data access
- Shopify: read_orders, read_products, read_customers
- Redirect URI: /api/oauth/[provider]/callback
2. The callback handler:
- Verifies the state token, ties to (org_id, provider)
- Exchanges code for access_token + refresh_token
- Encrypts tokens with pgcrypto and inserts/updates the integrations
row
- Triggers an initial sync job
- Redirects back to /settings/integrations with a success flash
3. A token refresh helper that runs before any worker call:
- If expires_at < now() + 5min, refresh the token
- Retries with exponential backoff on 5xx
- On invalid_grant, marks integration sync_status='reauth_required'
and sends an email to the org admin
Reference the official Stripe and Shopify OAuth docs for exact endpoint
URLs and parameter names. Use the latest API version for each.
One detail that’ll save you a day: Shopify OAuth requires that the redirect URI exactly match what’s in your Partner Dashboard config — including trailing slashes. Configure the redirect URI as the very first thing.
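The state-token step in part 1 of the prompt is the piece most often done sloppily. A sketch of a signed, expiring state value using Node's crypto; OAUTH_STATE_SECRET is an assumed env var, and the one-time-use DB row the prompt describes is not shown here:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "crypto";

const SECRET = process.env.OAUTH_STATE_SECRET!; // any strong server-side secret

// Encode org/provider/user plus an expiry, signed so the callback can trust it.
export function createOAuthState(orgId: string, provider: string, userId: string): string {
  const payload = JSON.stringify({
    orgId,
    provider,
    userId,
    nonce: randomBytes(8).toString("hex"),
    exp: Date.now() + 10 * 60 * 1000, // 10-minute expiry, matching the prompt
  });
  const body = Buffer.from(payload).toString("base64url");
  const sig = createHmac("sha256", SECRET).update(body).digest("base64url");
  return `${body}.${sig}`;
}

export function verifyOAuthState(state: string) {
  const [body, sig] = state.split(".");
  const expected = createHmac("sha256", SECRET).update(body).digest("base64url");
  if (!sig || sig.length !== expected.length ||
      !timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) {
    throw new Error("bad signature");
  }
  const payload = JSON.parse(Buffer.from(body, "base64url").toString());
  if (payload.exp < Date.now()) throw new Error("state expired");
  return payload as { orgId: string; provider: string; userId: string };
}
```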
You have two reasonable options for scheduled syncs in 2026: Vercel Cron (cheap, simple, capped at 60s per invocation on Hobby and 300s on Pro) or a Railway worker (longer-running, full control, slightly more setup). For most dashboards under 100 customers, Vercel Cron is enough. Past that, a Railway worker pulling work from a queue is more reliable. The Vercel vs Railway comparison covers when to switch.
I need a scheduled sync system. Each integration row should be synced
on its own cadence (e.g., Stripe every 15 min, GA4 hourly, Shopify
every 30 min).
Design and build:
1. A `sync_jobs` table:
- id, integration_id, scheduled_for timestamptz, status: 'queued'|
'running'|'success'|'failed', attempt int default 0, error text,
started_at, finished_at
2. A Vercel Cron route /api/cron/enqueue-syncs that runs every 5 min:
- Finds integrations where last_synced_at + cadence_minutes < now()
and there is no queued/running job
- Inserts a sync_jobs row with status='queued'
- Idempotent: do not enqueue duplicate work
3. A second Vercel Cron route /api/cron/run-syncs that runs every minute:
- Pulls the next queued job (FOR UPDATE SKIP LOCKED) up to a small
batch
- Marks status='running', started_at=now()
- Calls a per-provider syncer (stripe.ts, shopify.ts, ga4.ts) that:
a) Refreshes the OAuth token if needed
b) Pulls only data since the last successful sync (incremental)
c) Aggregates into metric_snapshots rows (UPSERT on
metric_def_id+captured_on)
      d) Respects the provider's rate limit (e.g., for Shopify REST, parse
         the X-Shopify-Shop-Api-Call-Limit header; for GraphQL, read the
         throttleStatus cost info in the response, and back off)
- On success: status='success', finished_at, integrations.last_synced_at
- On failure: status='failed', error, integrations.sync_status,
and re-enqueue with attempt+1 if attempt < 3
4. A monitoring view in the admin dashboard showing the last 50 sync
jobs with status + duration + error.
Discuss when this design needs to move off Vercel Cron and onto a real
worker (Railway, Fly, Trigger.dev) -- specifically the customer count
and per-job duration thresholds.
Two things that bite people here: forgetting FOR UPDATE SKIP LOCKED on the queue dequeue (which causes two cron invocations to grab the same job), and not tracking per-provider rate limit headers (which causes the next sync to fail because you’ve been throttled).
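Both points are easy to get right once you've seen them. A sketch of the dequeue query and the REST header parse using the node-postgres Pool; table and column names follow the sync_jobs design above, and the 80% backoff threshold is an arbitrary choice:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Claim up to `batch` queued jobs atomically. SKIP LOCKED means a second
// cron invocation running at the same time skips rows this one holds.
export async function claimJobs(batch = 5) {
  const { rows } = await pool.query(
    `UPDATE sync_jobs
        SET status = 'running', started_at = now()
      WHERE id IN (
        SELECT id FROM sync_jobs
         WHERE status = 'queued'
         ORDER BY scheduled_for
         FOR UPDATE SKIP LOCKED
         LIMIT $1
      )
      RETURNING id, integration_id`,
    [batch]
  );
  return rows;
}

// Shopify REST responses include "X-Shopify-Shop-Api-Call-Limit: 32/40".
// Back off when the bucket is nearly full instead of waiting for a 429.
export function shopifyBackoffMs(res: Response): number {
  const header = res.headers.get("x-shopify-shop-api-call-limit");
  if (!header) return 0;
  const [used, max] = header.split("/").map(Number);
  return used / max > 0.8 ? 1_000 : 0; // crude: pause 1s when >80% of the bucket is used
}
```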
For a 2026 React dashboard, two reasonable choices: Tremor (faster to ship, opinionated visual style, good defaults) or Recharts (more control, more code). If you’re building something where the chart is the product, Recharts. If you’re building a dashboard where the chart is one of many components on a page, Tremor.
Build the chart components for a B2B analytics dashboard using Recharts
and Tailwind. Three components:
1. <MetricCard />
- Props: title, value, prevValue, format ('number'|'currency'|'percent')
- Renders a card with the big number, a small green/red delta vs prev,
and a small sparkline of the last 30 days
- The sparkline is a Recharts <LineChart> with no axes/grid, just
a 1px stroke
2. <TimeseriesChart />
- Props: data (array of { date, value }), label, format
- Recharts area chart with a soft gradient fill
- X-axis tick formatter shows MMM D for <= 90 days, YYYY-MM otherwise
- Y-axis tick formatter follows the format prop
- Tooltip shows the exact date and value formatted
- Empty state: a centered "Connect [provider] to see data" with a
button
3. <ComparisonBarChart />
- Props: data (array of { label, current, previous })
- Side-by-side bars comparing this period to the previous period
- Hover shows both values and the % delta
- Uses Recharts <BarChart> with two <Bar> children
All components must:
- Be SSR-safe (no chart on first render -- use 'use client' and
Suspense boundary)
- Be responsive (use ResponsiveContainer from Recharts)
- Have proper a11y (role='img', aria-label with the data summary)
- Follow a consistent palette: primary #1D9E75, neutral #6B7280
Use the official Recharts docs for prop names. Don't invent props.
The often-overlooked piece is the empty state. New customers will hit your dashboard before any sync has run. A blank chart looks broken; a clear “Connect Stripe to see this” CTA looks intentional.
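To make that concrete, here's a trimmed sketch of the <TimeseriesChart /> wrapper from the prompt: render the CTA when there's no data yet, otherwise the Recharts area chart. The Recharts component and prop names are real; the styling and props around them are illustrative.

```tsx
"use client";

import { ResponsiveContainer, AreaChart, Area, XAxis, YAxis, Tooltip } from "recharts";

type Point = { date: string; value: number };

export function TimeseriesChart({
  data,
  label,
  provider,
}: { data: Point[]; label: string; provider: string }) {
  // Empty state: a new workspace hits this before the first sync finishes.
  if (data.length === 0) {
    return (
      <div className="flex h-64 flex-col items-center justify-center rounded-lg border border-dashed">
        <p className="text-sm text-gray-500">No data yet for {label}</p>
        <a href="/settings/integrations" className="mt-2 text-sm font-medium text-[#1D9E75]">
          Connect {provider} to see data
        </a>
      </div>
    );
  }

  return (
    <div role="img" aria-label={`${label}, ${data.length} days of data`} className="h-64">
      <ResponsiveContainer width="100%" height="100%">
        <AreaChart data={data}>
          <XAxis dataKey="date" tickLine={false} />
          <YAxis width={48} tickLine={false} />
          <Tooltip />
          <Area type="monotone" dataKey="value" stroke="#1D9E75" fill="#1D9E75" fillOpacity={0.15} />
        </AreaChart>
      </ResponsiveContainer>
    </div>
  );
}
```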
Alerts are where dashboards become indispensable. A static dashboard is something the customer remembers to check. An alerting dashboard is something the customer trusts to interrupt them when it matters. The combination of “visual on demand + email when threshold crossed + weekly summary” is what justifies the price.
Build two systems on top of the metric_snapshots table.
A. Threshold alerts:
- A Vercel Cron route /api/cron/check-alerts that runs hourly
- For each row in `alerts`:
- Fetches the latest metric_snapshot for that metric_def_id
- Evaluates the condition (above/below threshold, or pct_change
compared to N days ago)
- If triggered AND last_triggered_at is older than 24h (no spam):
- Sends a Resend email to alert.notify_email with:
subject "[Provider] [Metric] [direction] threshold"
body shows current value, threshold, the most relevant
timeseries data, and a deep link to the dashboard
- Updates last_triggered_at
- Idempotent: rerunning the cron does not double-fire alerts
B. Weekly digest:
- A Vercel Cron route /api/cron/weekly-digest that runs Mondays 9am UTC
- For each organization with active integrations:
- Pulls the last 7 days vs prior 7 days for every metric_definition
- Renders an HTML email with a "wins / losses / things to watch"
layout (top 3 movers up, top 3 movers down, plus any alerts that
fired this week)
- Sends to all org members via Resend (one email per member, BCC
not allowed because each member should be able to unsubscribe
individually)
- Stores a row in `digests_sent` for audit + dedupe
Use Resend's React Email templates for both emails. Reference the
official Resend docs for batching limits.
The weekly digest is the hidden retention lever. Customers who get a useful Monday-morning email open the dashboard 2–3x more often than customers who don’t. This single feature is often the difference between $30 ARPU and $80 ARPU.
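The condition check in part A is plain arithmetic once the snapshots are in one table. A sketch of the evaluation step, covering the four condition types from the alerts schema; fetching the snapshots and sending the email are assumed to happen elsewhere, and the pct_change threshold is interpreted as a percentage magnitude:

```typescript
type AlertCondition = "above" | "below" | "pct_change_above" | "pct_change_below";

// Decide whether an alert should fire given the latest snapshot value and,
// for pct_change conditions, the comparison value from N days ago.
export function alertTriggered(
  condition: AlertCondition,
  threshold: number,
  current: number,
  previous?: number
): boolean {
  if (condition === "above") return current > threshold;
  if (condition === "below") return current < threshold;
  if (previous === undefined || previous === 0) return false; // nothing to compare against
  const pctChange = ((current - previous) / Math.abs(previous)) * 100;
  // pct_change_below means "dropped by at least `threshold` percent".
  return condition === "pct_change_above" ? pctChange > threshold : pctChange < -threshold;
}

// The 24h anti-spam gate from the prompt.
export function cooledDown(lastTriggeredAt: string | null): boolean {
  return !lastTriggeredAt || Date.now() - new Date(lastTriggeredAt).getTime() > 24 * 60 * 60 * 1000;
}
```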
For a multi-tenant dashboard, three-tier pricing is standard. Cap data volumes (events/snapshots per month) at each tier so a single high-volume customer can't blow your unit economics, and make the cap visible in-product so it becomes a natural upgrade prompt instead of a billing surprise.
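A sketch of that in-product cap check, assuming hypothetical tier names and per-plan limits, and a monthly snapshot count you already track:

```typescript
// Hypothetical per-plan caps on stored snapshots per month -- tune to your own tiers.
const SNAPSHOT_CAPS: Record<string, number> = {
  starter: 50_000,
  growth: 250_000,
  scale: 1_000_000,
};

export function usageStatus(plan: string, snapshotsThisMonth: number) {
  const cap = SNAPSHOT_CAPS[plan] ?? SNAPSHOT_CAPS.starter;
  const pct = Math.min(100, Math.round((snapshotsThisMonth / cap) * 100));
  return {
    pct,
    // Surface the upgrade prompt before the cap is hit, not after.
    showUpgradeNudge: pct >= 80,
    overCap: snapshotsThisMonth >= cap,
  };
}
```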
For the database itself, Postgres on Supabase or Neon both work. For data-heavy dashboards, the Supabase vs Neon comparison matters — Neon’s branching helps with sync-script testing, Supabase’s integrated auth + RLS saves time.
Generic “dashboard for everyone” is closed; niche dashboards are wide open. The patterns we’ve seen work in 2026 all share one shape, and each is a real AI SaaS idea worth pursuing: pick a buyer who currently logs into 3–5 tools every Monday morning, give them one URL that combines those tools with one opinion about what matters, and charge $99–$249/mo.
A focused dashboard for a specific buyer is one of the highest-leverage products a solo founder can build in 2026. The plumbing (OAuth, scheduling, RLS, charts) is well-trodden ground that Claude can walk you through; the differentiation is your taste in what to show and what to alert on.