An eight-step tutorial covering the vercel.json schedule, Hobby vs Pro limits, securing the endpoint with CRON_SECRET, idempotency, and the common scheduled patterns most SaaS apps end up shipping.
Methodology. This tutorial synthesizes the Vercel Cron documentation as of May 2026 from vercel.com/docs/cron-jobs and the Vercel Functions runtime docs at vercel.com/docs/functions/runtimes. Limits and pricing change — verify against vercel.com/docs before shipping. For broader Vercel context see the Vercel pricing breakdown and deploy a Next.js SaaS to Vercel tutorial.
Every SaaS eventually needs scheduled work. A daily digest email, a weekly database cleanup, a Stripe dunning sweep that retries failed charges, a sitemap regeneration. The classic answer was a server with a crontab; the modern serverless answer is Vercel Cron Jobs — a scheduling layer that lives next to your Next.js deployment, takes a JSON schedule, and invokes one of your existing API route handlers on a cadence you define.
The setup looks deceptively simple: drop a crons array into vercel.json, build the matching route handler, and ship. The places founders trip are downstream — an endpoint left publicly callable, a job that takes 90 seconds when the Pro plan caps you at 60, a duplicate run because Vercel retried a timeout. This guide walks the canonical pattern from the docs, adds the production-grade hardening, and ends with the four scheduled jobs almost every SaaS ends up shipping.
Vercel Cron Jobs is the right pick when your scheduled work is short, runs against your own database, and is co-located with your Next.js app. It is the wrong pick when your jobs are long-running, need step-level retries, fan-out across thousands of items, or have to run against external infrastructure that doesn’t care your web app exists.
The honest comparison: for a typical solo-founder SaaS — one Next.js app, one Postgres, a few scheduled hygiene jobs — Vercel Cron is enough. Once a single job needs to process 10K records or run for more than a minute, move it to Inngest or Trigger.dev. The two coexist fine: Vercel Cron handles the small stuff, the durable platform handles the heavy stuff.
Cron jobs are declared in vercel.json at the root of your repo. Each entry has a path (the route Vercel will hit) and a schedule (a standard five-field cron expression). Vercel parses this on every deployment and registers the schedule in their internal scheduler.
```json
{
  "crons": [
    {
      "path": "/api/cron/daily-digest",
      "schedule": "0 14 * * *"
    },
    {
      "path": "/api/cron/dunning-sweep",
      "schedule": "*/15 * * * *"
    },
    {
      "path": "/api/cron/weekly-cleanup",
      "schedule": "0 3 * * 0"
    },
    {
      "path": "/api/cron/sitemap-rebuild",
      "schedule": "0 */6 * * *"
    }
  ]
}
```
Cron expressions are evaluated in UTC. The example above runs the daily digest at 14:00 UTC (10am ET during daylight saving time), the dunning sweep every 15 minutes, the weekly cleanup on Sundays at 03:00 UTC, and the sitemap rebuild every six hours. There is no timezone field; if you need a local-time digest, compute the UTC offset and adjust seasonally for daylight saving time, or accept that the send time drifts by an hour across the year.
Two gotchas worth flagging early. First, the path must be an absolute path on the same Vercel deployment — you cannot point Vercel Cron at an external URL. Second, the schedule field accepts standard five-field cron syntax, including */N intervals and comma-separated lists, but not seconds (the optional sixth field some cron implementations support). The finest granularity is one minute on Pro and once per day on Hobby.
The path you declare in vercel.json is just a Next.js route. In the App Router, scheduled work lives in a route handler at app/api/cron/[task]/route.ts. Vercel sends a GET request on the schedule, so export a GET handler. The handler runs in the Vercel Functions runtime, has access to your environment variables, and can read and write your database the same way any other route can.
```ts
// app/api/cron/daily-digest/route.ts
import { NextResponse } from 'next/server';
import { sendDigests } from '@/lib/digest';

export const dynamic = 'force-dynamic';
export const maxDuration = 60; // Pro plan ceiling in seconds

export async function GET(req: Request) {
  const startedAt = Date.now();
  try {
    const result = await sendDigests();
    return NextResponse.json({
      ok: true,
      sent: result.sent,
      skipped: result.skipped,
      durationMs: Date.now() - startedAt
    });
  } catch (err) {
    console.error('[cron:daily-digest] failed', err);
    return NextResponse.json(
      { ok: false, error: String(err) },
      { status: 500 }
    );
  }
}
```
Two settings deserve a callout. export const dynamic = 'force-dynamic' tells Next.js the route is never cached — without it, the App Router can serve a stale 200 to the cron invocation and never run your code. export const maxDuration = 60 opts the function into the full 60-second ceiling on Pro; the default is shorter on some runtimes.
An unsecured cron route is a public endpoint that runs whatever your job does, on demand, for anyone who finds the URL. If your job sends emails, that’s a spam vector. If it issues refunds, that’s a fraud vector. The canonical fix is the CRON_SECRET pattern documented by Vercel: set an environment variable, and Vercel automatically attaches it as a Bearer token on every cron invocation. Your handler verifies the header and rejects everything else.
```ts
// app/api/cron/_lib/auth.ts
export function isAuthorizedCron(req: Request): boolean {
  const auth = req.headers.get('authorization');
  if (!auth) return false;
  const expected = `Bearer ${process.env.CRON_SECRET}`;
  return auth === expected;
}
```

```ts
// app/api/cron/daily-digest/route.ts
import { NextResponse } from 'next/server';
import { isAuthorizedCron } from '../_lib/auth';
import { sendDigests } from '@/lib/digest';

export const dynamic = 'force-dynamic';
export const maxDuration = 60;

export async function GET(req: Request) {
  if (!isAuthorizedCron(req)) {
    return NextResponse.json({ error: 'unauthorized' }, { status: 401 });
  }
  const result = await sendDigests();
  return NextResponse.json({ ok: true, ...result });
}
```
Set CRON_SECRET in the Vercel project settings (Settings → Environment Variables) using a long random string — openssl rand -hex 32 is fine. Vercel injects this token on the platform side; you do not have to wire the header yourself. The Vercel docs are explicit that any cron invocation Vercel makes will carry the matching Authorization: Bearer ${CRON_SECRET} header. If your handler doesn’t see that header, the request didn’t come from Vercel Cron.
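To exercise the secured route in development, replay the same header yourself. A minimal sketch, assuming a local dev server on port 3000 and CRON_SECRET exported in your shell (the script path is hypothetical):

```ts
// scripts/trigger-cron.ts (hypothetical). Run with: npx tsx scripts/trigger-cron.ts
const res = await fetch('http://localhost:3000/api/cron/daily-digest', {
  headers: { authorization: `Bearer ${process.env.CRON_SECRET}` }
});
console.log(res.status, await res.json());
```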
Vercel will sometimes retry a cron invocation. Network timeouts, function cold starts that exceed the limit, deployment swaps mid-execution — any of these can cause the same scheduled tick to fire twice. If your job assumes single-execution, you get duplicate emails, duplicate Stripe charges, duplicate database mutations. The fix is to make every cron-driven action idempotent.
Two patterns cover almost every case. First, a database lock keyed on the run window:
```ts
// lib/cron-lock.ts
import { sql } from '@/lib/db';

export async function acquireCronLock(
  jobName: string,
  windowKey: string
): Promise<boolean> {
  // Returns true if this invocation is the first to claim the window.
  const result = await sql`
    INSERT INTO cron_runs (job_name, window_key, started_at)
    VALUES (${jobName}, ${windowKey}, NOW())
    ON CONFLICT (job_name, window_key) DO NOTHING
    RETURNING window_key
  `;
  return result.length > 0;
}
```

```ts
// app/api/cron/daily-digest/route.ts (excerpt)
const today = new Date().toISOString().slice(0, 10); // 2026-05-07
const acquired = await acquireCronLock('daily-digest', today);
if (!acquired) {
  return NextResponse.json({ ok: true, skipped: 'already-ran' });
}
await sendDigests();
```
The unique index on (job_name, window_key) guarantees that only one row is inserted per (job, window) tuple. The second invocation gets an empty result from RETURNING window_key, returns early, and does no work.
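For completeness, a sketch of the backing table. The column types are assumptions; the composite unique constraint is what the ON CONFLICT clause depends on:

```ts
// One-off migration sketch. Only (job_name, window_key) uniqueness matters.
await sql`
  CREATE TABLE IF NOT EXISTS cron_runs (
    job_name   text        NOT NULL,
    window_key text        NOT NULL,
    started_at timestamptz NOT NULL DEFAULT NOW(),
    UNIQUE (job_name, window_key)
  )
`;
```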
The second pattern is semantic deduplication at the work-item level. For a digest email, the natural dedup key is (user_id, digest_date); record "sent at" before sending and skip if already set. For a Stripe charge retry, the dedup key is the Stripe idempotency key on the charge call itself — pass idempotencyKey: `dunning-${invoiceId}-${attemptNumber}` and Stripe will answer the duplicate call with the original response instead of charging twice.
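With the Stripe Node SDK, the idempotency key travels in the request options rather than the payload. A sketch, assuming invoiceId and attemptNumber come from your dunning table:

```ts
import Stripe from 'stripe';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Stripe replays the original response when it sees a repeated key,
// so a duplicated cron tick cannot charge the customer twice.
async function retryInvoice(invoiceId: string, attemptNumber: number) {
  return stripe.invoices.pay(
    invoiceId,
    {},
    { idempotencyKey: `dunning-${invoiceId}-${attemptNumber}` }
  );
}
```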
Vercel Functions have a hard duration ceiling. On Hobby, it’s 10 seconds. On Pro, the default is 10 seconds but you can opt into up to 60 seconds with export const maxDuration = 60. On Enterprise, longer durations are negotiable. If your cron exceeds the ceiling, the function is killed mid-execution and Vercel returns an error — usually after some of the work has happened, which is exactly the case where idempotency saves you.
The pragmatic split is the queue-table pattern. Instead of doing all the work inline, the cron handler enqueues work into a table and a separate worker drains the queue:
```ts
// app/api/cron/daily-digest/route.ts (queue pattern)
import { NextResponse } from 'next/server';
import { isAuthorizedCron } from '../_lib/auth';
import { sql } from '@/lib/db';
// Your own helpers: processOne is sketched in the digest section below,
// dequeueBatch right after this step.
import { dequeueBatch, processOne } from '@/lib/digest';

export const dynamic = 'force-dynamic';
export const maxDuration = 60;

export async function GET(req: Request) {
  if (!isAuthorizedCron(req)) {
    return NextResponse.json({ error: 'unauthorized' }, { status: 401 });
  }

  // Step 1: enqueue one row per active user.
  const enqueued = await sql`
    INSERT INTO digest_queue (user_id, scheduled_for, status)
    SELECT id, CURRENT_DATE, 'pending'
    FROM users
    WHERE digest_enabled = true
    ON CONFLICT (user_id, scheduled_for) DO NOTHING
    RETURNING user_id
  `;

  // Step 2: drain a batch within the function duration budget.
  const deadline = Date.now() + 50_000; // leave 10s headroom
  let processed = 0;
  while (Date.now() < deadline) {
    const batch = await dequeueBatch(50);
    if (batch.length === 0) break;
    await Promise.all(batch.map(processOne));
    processed += batch.length;
  }

  return NextResponse.json({
    ok: true,
    enqueued: enqueued.length,
    processed
  });
}
```
Two benefits fall out of this pattern. The cron run is bounded — it always returns inside the 60-second window because the loop checks the deadline. And the queue itself becomes the durable state: if the function is killed, the next cron tick (or a separate drain endpoint) picks up the unprocessed rows. For very high-volume work, a second cron entry that does nothing but call the drain step every minute keeps the queue flowing without enlarging the digest run.
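This guide leaves dequeueBatch to you. One reasonable sketch, assuming Postgres and the digest_queue table above, claims rows with FOR UPDATE SKIP LOCKED so concurrent drains never grab the same work:

```ts
// lib/digest.ts (sketch; dequeueBatch is a name this guide assumes)
import { sql } from '@/lib/db';

export async function dequeueBatch(limit: number): Promise<string[]> {
  // SKIP LOCKED lets a concurrent drain pass over rows that another
  // invocation has already claimed inside its own transaction.
  const rows = await sql`
    UPDATE digest_queue
    SET status = 'processing'
    WHERE (user_id, scheduled_for) IN (
      SELECT user_id, scheduled_for
      FROM digest_queue
      WHERE status = 'pending'
      LIMIT ${limit}
      FOR UPDATE SKIP LOCKED
    )
    RETURNING user_id
  `;
  return rows.map((r: { user_id: string }) => r.user_id);
}
```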
Four scheduled jobs cover almost every SaaS’s needs. Each is a real production pattern, not a toy example.
Send each active user a daily digest of activity. The hard parts are timezone handling (group users by their local timezone, send near 8am local), digest content generation (often the slowest step per user), and the queue pattern from Step 6 if you have more than a few hundred users.
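For the timezone part, one common approach (a sketch, assuming users store an IANA timezone string) is to schedule the cron hourly and pick out the users whose local clock just hit 8am:

```ts
// Schedule this route "0 * * * *" and filter per tick.
// Assumes a users.timezone column holding names like "America/New_York".
function localHour(timeZone: string, now = new Date()): number {
  return Number(
    new Intl.DateTimeFormat('en-US', {
      timeZone,
      hour: 'numeric',
      hourCycle: 'h23'
    }).format(now)
  );
}

const users = await sql`
  SELECT id, timezone FROM users WHERE digest_enabled = true
`;
const due = users.filter((u) => localHour(u.timezone) === 8);
```

The per-user send then looks like this: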
```ts
// lib/digest.ts (excerpt)
import { sendEmail } from '@/lib/resend';

export async function processOne(userId: string) {
  const user = await getUser(userId);
  const items = await getDigestItems(userId, { since: '24h' });
  if (items.length === 0) {
    await markDigestSent(userId, 'empty');
    return;
  }
  await sendEmail({
    to: user.email,
    subject: `Your ${items.length} updates today`,
    react: DigestEmail({ user, items })
  });
  await markDigestSent(userId, 'sent');
}
```
For the email side of this, the Resend tutorial covers React Email templates and the SDK setup.
When a subscription invoice fails, Stripe retries on its own schedule, but you usually want to add your own logic — a custom email sequence, a grace period before downgrade, a final cancel-on-day-14 step. The cron sweep runs every 15 minutes, finds invoices that have just transitioned to past_due, and triggers the right step in your dunning state machine.
```ts
// app/api/cron/dunning-sweep/route.ts (excerpt)
const overdue = await sql`
  SELECT i.id, i.user_id, i.attempt_count, i.last_attempted_at
  FROM stripe_invoices i
  WHERE i.status = 'past_due'
    AND (i.last_attempted_at IS NULL
      OR i.last_attempted_at < NOW() - INTERVAL '24 hours')
  LIMIT 100
`;

for (const inv of overdue) {
  await runDunningStep(inv); // sends email, may downgrade, may cancel
}
```
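What runDunningStep does is your dunning state machine. A minimal sketch; the thresholds and helper functions are assumptions, not anything from the Stripe docs:

```ts
// Hypothetical step logic: email first, remind twice, cancel after that.
async function runDunningStep(inv: {
  id: string;
  user_id: string;
  attempt_count: number;
}) {
  if (inv.attempt_count === 0) {
    await sendDunningEmail(inv.user_id, 'payment-failed'); // hypothetical
  } else if (inv.attempt_count < 3) {
    await sendDunningEmail(inv.user_id, 'payment-reminder'); // hypothetical
  } else {
    await cancelSubscriptionForInvoice(inv.id); // hypothetical
  }
  // Stamp the attempt so the sweep's 24-hour filter skips this invoice.
  await sql`
    UPDATE stripe_invoices
    SET attempt_count = attempt_count + 1, last_attempted_at = NOW()
    WHERE id = ${inv.id}
  `;
}
```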
The Stripe subscriptions with Supabase tutorial covers the upstream side — webhooks, the invoice table, and how past_due gets written.
Every SaaS accumulates rows that should be deleted on a schedule: expired sessions, soft-deleted user data past the retention window, abandoned signup attempts older than 30 days, audit logs past the legal retention floor. Run weekly at a low-traffic hour:
```ts
// app/api/cron/weekly-cleanup/route.ts (excerpt)
const deleted = await sql`
  WITH expired_sessions AS (
    DELETE FROM sessions
    WHERE expires_at < NOW() - INTERVAL '7 days'
    RETURNING 1
  ),
  abandoned_signups AS (
    DELETE FROM signup_attempts
    WHERE created_at < NOW() - INTERVAL '30 days'
      AND completed = false
    RETURNING 1
  )
  SELECT
    (SELECT COUNT(*) FROM expired_sessions) AS sessions,
    (SELECT COUNT(*) FROM abandoned_signups) AS signups
`;
```
Programmatic-SEO sites and content-heavy SaaS apps want their sitemap.xml rebuilt regularly so search engines see new pages. Run every six hours, regenerate, and write to a CDN-backed bucket or an in-app cache key:
```ts
// app/api/cron/sitemap-rebuild/route.ts (excerpt)
import { put } from '@vercel/blob';

const urls = await getAllPublicUrls();
const xml = renderSitemap(urls);
await put('sitemap.xml', xml, {
  access: 'public',
  contentType: 'application/xml'
});
```
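renderSitemap is yours to write. A minimal sketch, assuming each record carries a loc and an optional lastmod (a real implementation should also XML-escape the URLs):

```ts
type SitemapUrl = { loc: string; lastmod?: string };

function renderSitemap(urls: SitemapUrl[]): string {
  const entries = urls
    .map(
      (u) =>
        `<url><loc>${u.loc}</loc>` +
        (u.lastmod ? `<lastmod>${u.lastmod}</lastmod>` : '') +
        '</url>'
    )
    .join('');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' +
    entries +
    '</urlset>'
  );
}
```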
The single most common cron-job failure mode is silence. The schedule is set, the handler exists, the code looks right, but the job never runs — or runs and errors and nobody notices for two weeks. Three layers of observability prevent this.
First, log every invocation with a structured payload. The Vercel Functions log dashboard shows these per deployment; for longer retention, configure a Vercel Log Drain that streams logs to a hosted log platform (Axiom, Datadog, Logtail).
Second, capture errors with Sentry. Wrap each handler in a thin error boundary so failures land in your Sentry inbox with the cron name attached:
```ts
// lib/cron-wrapper.ts
import * as Sentry from '@sentry/nextjs';
import { NextResponse } from 'next/server';
import { isAuthorizedCron } from '@/app/api/cron/_lib/auth';

type CronHandler = () => Promise<Record<string, unknown>>;

export function withCron(name: string, handler: CronHandler) {
  return async (req: Request) => {
    if (!isAuthorizedCron(req)) {
      return NextResponse.json({ error: 'unauthorized' }, { status: 401 });
    }
    const startedAt = Date.now();
    try {
      const result = await handler();
      console.log(`[cron:${name}] ok`, {
        durationMs: Date.now() - startedAt,
        ...result
      });
      return NextResponse.json({ ok: true, ...result });
    } catch (err) {
      Sentry.captureException(err, { tags: { cron: name } });
      console.error(`[cron:${name}] failed`, err);
      return NextResponse.json(
        { ok: false, error: String(err) },
        { status: 500 }
      );
    }
  };
}
```
Use it like:
```ts
// app/api/cron/daily-digest/route.ts
import { withCron } from '@/lib/cron-wrapper';
import { sendDigests } from '@/lib/digest';

export const dynamic = 'force-dynamic';
export const maxDuration = 60;

export const GET = withCron('daily-digest', async () => {
  return await sendDigests();
});
```
Third, add a Slack failure alert for the jobs you actually depend on. A simple webhook posted to a #cron-alerts channel from inside the catch block, or from a Sentry alert rule, is enough. The bar is “something pings me when the dunning sweep silently fails” — not perfect dashboards.
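The Slack side is a single fetch to an incoming-webhook URL. A sketch, assuming a hypothetical SLACK_CRON_WEBHOOK_URL environment variable:

```ts
// Call from the catch block in withCron. Alerting is best-effort:
// swallow its own failures so it never masks the original error.
async function alertSlack(cronName: string, err: unknown) {
  const url = process.env.SLACK_CRON_WEBHOOK_URL;
  if (!url) return;
  await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      text: `cron ${cronName} failed: ${String(err)}`
    })
  }).catch(() => {});
}
```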
Numbers from vercel.com/docs/cron-jobs and the Functions runtime docs as of May 2026:
| Plan | Cron jobs | Granularity | Function duration |
|---|---|---|---|
| Hobby | 2 | Daily-only | 10s |
| Pro | 40 | Minute-level | 60s (with maxDuration) |
| Enterprise | 100 | Minute-level | Negotiable, longer |
The Hobby cap is the one most solo founders bump into first. Two scheduled jobs with daily granularity is enough for a side project but not for a real SaaS — the moment you want a 15-minute dunning sweep, you’re on Pro. The Vercel pricing breakdown covers the rest of the Hobby-vs-Pro math, and the Vercel free tier limits guide enumerates the other ceilings worth knowing.
An unsecured cron route is publicly callable. Always check the Authorization: Bearer ${CRON_SECRET} header before doing any work. The wrapper in Step 8 enforces this by default.
Vercel may retry on timeout or platform glitch. The same tick can fire twice. Use the database lock or a per-item dedup key on every mutation that has external side effects (emails, charges, webhooks).
A 75-second job on Pro is killed at 60. The visible symptom is a 504 in the Vercel logs and partial work in your database. Use the queue-table pattern: enqueue all work first, then drain a bounded batch within the deadline.
Vercel Cron is great for short, simple scheduled work. It is not a job queue, not a workflow engine, not a fan-out platform. If your job has 10 steps with retries between each, run it on Inngest or Trigger.dev and let Vercel Cron trigger the workflow rather than execute it.
Cron expressions are UTC. 0 9 * * * is 9am UTC, which is 4am ET in winter and 5am ET in summer. If your job’s timing is user-facing, either anchor it to UTC and accept the drift, or compute the offset in code and re-derive a local-time send time per user.
If a cron silently stops running, you may not notice for weeks. Wire Sentry or a Slack alert on failure, log every invocation, and set a heartbeat check on the jobs that matter most. The cost is one afternoon; the alternative is a customer asking why their digest hasn’t arrived in a month.
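The heartbeat is the cheapest layer. A sketch, assuming a check URL from a service like Healthchecks.io in a hypothetical HEARTBEAT_URL variable, pinged at the end of each successful run:

```ts
// The monitoring service alerts when pings stop arriving,
// which is exactly the "cron went silent" failure mode.
if (process.env.HEARTBEAT_URL) {
  await fetch(process.env.HEARTBEAT_URL).catch(() => {});
}
```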
Vercel Cron Jobs is the lowest-friction scheduler for a Next.js SaaS that already lives on Vercel. The eight steps above are the canonical pattern: declare the schedule, build the route, secure with CRON_SECRET, make the work idempotent, stay inside the duration ceiling, and wire observability before you ship. For workloads that grow past 60 seconds or need step-level retries, hand off to Inngest or Trigger.dev — Vercel Cron is still the right way to kick off those workflows.