An event-driven durable execution platform that turns “one signup fans out into eight side effects” into a few lines of TypeScript — no Redis, no worker pool, no orchestration framework. Pricing and capabilities verified against inngest.com/pricing and the public docs.
Most solo SaaS founders hit the same wall around the time their product gets its first real users. The signup flow needs to do more than just create a row: send a welcome email, push the user into the CRM, kick off an onboarding sequence, fire an analytics event, and maybe seed a sample workspace. Doing all of that synchronously inside the signup request is a recipe for slow pages and dropped side effects. Doing it asynchronously means you’re suddenly in the queue-and-worker business. Inngest is one of the platforms that argues you don’t need to be in that business at all — you just need to send an event and let the runtime fan it out.
Methodology. This is a research-based overview. We have not personally built production apps with Inngest; this article synthesizes the company’s documentation at inngest.com/docs, public pricing at inngest.com/pricing, public user reports, and third-party benchmarks. Last reviewed: May 8, 2026.
Inngest is an event-driven durable execution platform for serverless background jobs. Founded in 2021, the company built its product around a specific mental model: events trigger functions, functions are made of steps, and steps are individually durable. That last word is the load-bearing one. A “step” in Inngest is an atomic unit of work whose result is checkpointed by the runtime, so if anything fails or the underlying compute goes away mid-flight, the function resumes from the last successful step rather than starting over.
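The checkpointing idea can be sketched in a few lines of plain TypeScript. This is not Inngest's implementation, just a toy model of the contract it describes: step results are memoized by a stable name, so a resumed invocation skips finished steps and re-runs only unfinished work. All names here (`runStep`, `demo`) are invented for illustration.

```typescript
// Toy model of durable steps: completed step results are recorded by
// name, so a resumed invocation returns the recorded result instead of
// re-executing the side effect.
type Checkpoints = Map<string, unknown>;

async function runStep<T>(
  checkpoints: Checkpoints,
  name: string,
  fn: () => Promise<T>
): Promise<T> {
  if (checkpoints.has(name)) {
    // Step already succeeded in a previous invocation: skip it.
    return checkpoints.get(name) as T;
  }
  const result = await fn();
  checkpoints.set(name, result); // durably persisted in the real system
  return result;
}

// A function that crashes after step "a" resumes without re-running it.
async function demo() {
  const checkpoints: Checkpoints = new Map();
  let sideEffects = 0;

  const invoke = async (failAfterA: boolean) => {
    await runStep(checkpoints, "a", async () => {
      sideEffects++; // counts how many times the side effect actually ran
      return "done-a";
    });
    if (failAfterA) throw new Error("simulated crash");
    return runStep(checkpoints, "b", async () => "done-b");
  };

  await invoke(true).catch(() => {}); // first invocation dies after "a"
  const b = await invoke(false); // resume: "a" is skipped, "b" runs
  return { sideEffects, b };
}
```

Running `demo()` shows the side effect in step "a" executes exactly once even though the function was invoked twice, which is the property the real runtime provides across process boundaries.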
The platform fits naturally into modern serverless stacks — Next.js on Vercel, Hono on Cloudflare, Express on AWS Lambda — because Inngest doesn’t require you to run a long-lived worker process. You expose a single API route in your app; Inngest’s control plane invokes that route to execute steps; your functions run on the same compute you’re already paying for. The control plane handles scheduling, retries, fan-out, observability, and the durable state machine.
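Concretely, in a Next.js App Router project that single route might look like the following. The file path and function list are assumptions for illustration; each framework has its own serve adapter, per the docs.

```typescript
// app/api/inngest/route.ts (assumed path): the one endpoint the
// Inngest control plane calls to execute your functions' steps.
import { serve } from "inngest/next";
import { inngest } from "@/inngest/client";
import { onboardNewUser } from "@/inngest/functions";

// Every function you want Inngest to run must be registered here.
export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [onboardNewUser],
});
```

This is wiring, not logic: the route stays the same as you add functions, and your app never needs a long-lived worker process.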
Five primitives do the heavy lifting:
- Durable steps. step.run() records its return value; if the function is interrupted and resumed, completed steps are skipped and only unfinished work runs again. Automatic retries with exponential backoff are configurable per function.
- Zero-cost sleeps. step.sleep() pauses a function for minutes or months without consuming compute; the runtime schedules the next step when the timer expires.
- Event fan-out. A single event such as user.signedup fires off a welcome email function, a CRM-sync function, an analytics function, and a Day-1 onboarding function — in parallel, with independent retries and observability.
- At-most-once side effects. Wrap a side effect in step.run() and the runtime guarantees the block runs at most once per logical invocation, even across retries and replays.
- Concurrency control. Per-function concurrency limits keep a burst of events from overwhelming a downstream API, with no worker pool to tune.

Each of these is a primitive you would otherwise build yourself out of Redis, Postgres, BullMQ, and a long-lived worker process. The integration is the value, not any single feature in isolation.
Three sentences capture the way Inngest wants you to think about your background work. Events trigger functions. Functions are made of steps. Steps are individually durable. Once that model clicks, almost every async-work shape in a SaaS product maps onto it cleanly.
A signup is an event named user.signedup. Multiple functions can subscribe to that event — one to send the welcome email, one to push into the CRM, one to start an embedding job for the user’s first uploaded document. Each function is a sequence of steps: validate the payload, look up the user, call the third-party API, write back to the database. If any step fails, only that step retries; the rest of the function’s state is preserved. If the function is paused with step.sleep("3 days") for a Day-3 follow-up email, no compute is consumed during the pause — the runtime simply schedules the next step to run three days later.
An Inngest function in TypeScript looks like this, per the docs:
import { inngest } from "./client";

// `db`, `resend`, and the render*Template helpers are app-specific;
// substitute your own database client, email provider, and templates.
export const onboardNewUser = inngest.createFunction(
  { id: "onboard-new-user" },
  { event: "user.signedup" },
  async ({ event, step }) => {
    // Each step.run() block is checkpointed by its name, so completed
    // steps are skipped if the function is retried or resumed.
    const user = await step.run("load-user", async () => {
      return await db.users.findById(event.data.userId);
    });

    await step.run("send-welcome-email", async () => {
      return await resend.emails.send({
        to: user.email,
        from: "hi@yoursaas.com",
        subject: "Welcome",
        html: renderWelcomeTemplate(user),
      });
    });

    // No compute runs during the pause; the runtime schedules the
    // next step to execute three days from now.
    await step.sleep("wait-3-days", "3d");

    await step.run("send-day-3-followup", async () => {
      return await resend.emails.send({
        to: user.email,
        from: "hi@yoursaas.com",
        subject: "How’s it going?",
        html: renderDay3Template(user),
      });
    });
  }
);
Three things are worth noticing. First, every side effect is wrapped in step.run() with a stable name — that name is the key the runtime uses to checkpoint the result. Second, the three-day pause uses step.sleep(), not setTimeout, which means no compute runs during the wait. Third, the function reads as a normal sequential async function, even though under the hood it might execute across multiple separate invocations of the API route over the course of three days.
Events are sent from anywhere in your app — a server action, a webhook handler, a cron route, another function:
await inngest.send({
  name: "user.signedup",
  data: { userId: user.id },
});
That call doesn’t execute any function in your app process. It tells Inngest’s control plane that the event happened, and the control plane fans the event out to every function subscribed to it. Your server action returns immediately; the user gets a fast response; the work happens asynchronously across however many handlers are subscribed.
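The fan-out-with-independent-retries contract can be sketched with a toy in-memory bus. This is plain TypeScript with no SDK; the class, handler behavior, and retry counts are invented for illustration of the guarantee, not how Inngest implements it.

```typescript
// Toy event bus: one event, many subscribed handlers, each retried
// independently. One handler's failure never blocks or re-runs siblings.
type Handler = (data: unknown) => Promise<void>;

class Bus {
  private subs = new Map<string, Handler[]>();

  on(event: string, h: Handler) {
    this.subs.set(event, [...(this.subs.get(event) ?? []), h]);
  }

  // Each handler gets its own retry loop, mirroring per-function
  // retries in a real fan-out runtime.
  async send(event: string, data: unknown, maxAttempts = 3) {
    const results = await Promise.allSettled(
      (this.subs.get(event) ?? []).map(async (h) => {
        for (let attempt = 1; ; attempt++) {
          try {
            return await h(data);
          } catch (err) {
            if (attempt >= maxAttempts) throw err;
          }
        }
      })
    );
    return results.map((r) => r.status);
  }
}

// Usage: a stable handler and a flaky one that fails twice before
// succeeding. The stable handler runs exactly once; the flaky one
// retries on its own until it succeeds.
const bus = new Bus();
let stableRuns = 0;
let flakyRuns = 0;

bus.on("user.signedup", async () => {
  stableRuns++;
});
bus.on("user.signedup", async () => {
  flakyRuns++;
  if (flakyRuns < 3) throw new Error("transient failure");
});
```

Sending one `user.signedup` through this bus leaves the stable handler at one run and the flaky handler at three attempts, with both ultimately succeeding, which is the isolation property the prose above describes.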
Most homegrown queue setups die not because the queue fails but because nobody can tell what is happening when it does. Inngest’s dashboard treats observability as a first-class concern: events, function runs, and individual steps are all visible, with per-run history and replay from the UI.
The polish gap between the Inngest dashboard and a self-hosted Bull Board on top of BullMQ is large. The polish gap against “tail Vercel logs and hope” is even larger. Observability is the thing founders most often skip when rolling their own queues, and Inngest makes it the default.
Inngest publishes four plans on inngest.com/pricing. Always reconcile against the live page; numbers shift periodically.
Inngest is not officially open source in the way Trigger.dev is. Self-hosting is not a supported deployment path; the company runs the platform as a managed service. For founders who require that no payload data leaves their infrastructure — healthcare, finance, EU residency rules — this is a real constraint, and the answer is to look at Trigger.dev or to roll your own with BullMQ. For the much larger group of solo founders without that constraint, the closed-source posture is closer to a non-issue: you’re already paying third parties for auth, email, payments, and AI inference, and adding one more managed dependency is not categorically different.
That said, if optionality matters to you on principle — if you want the ability to move workloads off the platform without rewriting them — the self-host story is the dimension where Trigger.dev wins outright.
Three product shapes where Inngest is genuinely the right pick:

- SaaS products where one user action (a signup, an upload, a purchase) fans out into several side effects, each needing independent retries and visibility.
- Lifecycle and drip sequences that mix immediate sends with multi-day pauses, where step.sleep() replaces a scheduler and a state table.
- AI-shaped pipelines, such as embedding a user’s first uploaded document, where each model or API call is a step with its own checkpoint and retry.
The common thread: workloads where the “one event, many handlers” shape recurs. The deeper your async workflows go, the more value the platform extracts.
Three scenarios where Inngest is overkill or wrong:

- A single daily or weekly cron job with no retries or fan-out: Vercel Cron is enough.
- Compliance or data-residency requirements that forbid payload data leaving your infrastructure: there is no supported self-host path.
- Workloads dominated by one long-running task rather than event fan-out: Trigger.dev’s task-first model is often the closer fit.
Trigger.dev is the closest comparison. Both are TypeScript-native durable-execution platforms with hosted offerings, generous free tiers, and dashboards. Inngest leans event-driven: events fan out to many functions, and the mental model centers on events plus steps. Trigger.dev leans task-driven: you trigger a task by name, and the mental model centers on a single durable run with retries and observability. For event fan-out and high-throughput workloads, Inngest is often the more natural fit. For long-running single-task jobs (a 90-second image generation, a multi-step LLM agent) and for self-host requirements, Trigger.dev is the closer match. Both are good; the right pick is shaped by your workload, not by raw feature count.
Vercel Cron is the lightweight option. It calls a route in your Next.js app on a cron schedule. There’s no SDK to learn, no dashboard, no concurrency control, no retries beyond what your route does itself. For a daily digest or a weekly cleanup it’s genuinely enough — we walk through the setup in our Vercel Cron guide. The moment your job needs retries, fan-out, or any orchestration, Vercel Cron stops scaling and Inngest becomes the right pick. Hosting platform tradeoffs are covered in Vercel vs Railway.
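For reference, the same daily digest expressed as an Inngest function uses a cron trigger instead of an event. The id, schedule, and helper functions here are placeholders; check the scheduled-functions docs for the exact trigger shape.

```typescript
import { inngest } from "./client";

// Runs on a daily cron schedule; retries and per-run history come with
// the platform, which is exactly what a bare Vercel Cron route lacks.
export const dailyDigest = inngest.createFunction(
  { id: "daily-digest" },
  { cron: "0 9 * * *" },
  async ({ step }) => {
    const users = await step.run("load-active-users", async () => {
      return await db.users.findActive(); // app-specific helper (assumed)
    });

    await step.run("send-digest", async () => {
      return await sendDigestEmails(users); // assumed helper
    });
  }
);
```

The practical difference from a cron route is that each step is retried and observable on its own, so a flaky email provider does not force the user query to re-run.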
BullMQ is the “build it yourself” baseline. You provision Redis, write a worker process, deploy that worker somewhere, and add Bull Board for the dashboard. BullMQ is mature, free, and fully under your control. It’s also a project of its own — the operational tax of running stateful Redis with replication, plus the engineering tax of building observability that approaches what Inngest ships out of the box, are both real. For solo founders optimizing for shipped features per week, the trade often goes against rolling your own. BullMQ wins when self-host is mandatory and the operational cost is worth it.
Inngest is the right pick when your background work is event-driven and fan-out heavy — when one user action naturally triggers four to ten side effects, when those side effects need independent retries and visibility, and when you don’t want to run Redis. The free tier is generous enough to validate fit at zero cost (50K runs per month covers a real beta), and the dev server is one of the better local DX stories in the category. The pricing ladder steps up sharply between Basic and Pro — $30 to $300 is a real jump — so it’s worth modeling against your projected run volume before you commit.
For founders with single-task durable workloads (a long AI generation job, a video render) and especially for those who need self-host, Trigger.dev often wins on shape and licensing. For one-route-a-day cron jobs, Vercel Cron is fine and you don’t need either platform. Inngest finds its sweet spot in the middle: real fan-out, real concurrency, real observability needs, and a managed service tradeoff you’re willing to make. We cover the rest of the modern dev-tool stack in our best SaaS tools for developers roundup, and the AI-shaped half of these workloads in our how to build a SaaS with Claude guide.