Most solo SaaS founders hit the same wall around the time their product gets its first real users. The signup flow needs to do more than just create a row: send a welcome email, push the user into the CRM, kick off an onboarding sequence, fire an analytics event, and maybe seed a sample workspace. Doing all of that synchronously inside the signup request is a recipe for slow pages and dropped side effects. Doing it asynchronously means you’re suddenly in the queue-and-worker business. Inngest is one of the platforms that argues you don’t need to be in that business at all — you just need to send an event and let the runtime fan it out.

Methodology. This is a research-based overview. We have not personally built production apps with Inngest; this article synthesizes the company’s documentation at inngest.com/docs, public pricing at inngest.com/pricing, public user reports, and third-party benchmarks. Last reviewed: May 8, 2026.

What Inngest actually is

Inngest is an event-driven durable execution platform for serverless background jobs. Founded in 2021, the company built its product around a specific mental model: events trigger functions, functions are made of steps, and steps are individually durable. That last word is the load-bearing one. A “step” in Inngest is an atomic unit of work whose result is checkpointed by the runtime, so if anything fails or the underlying compute goes away mid-flight, the function resumes from the last successful step rather than starting over.

The platform fits naturally into modern serverless stacks — Next.js on Vercel, Hono on Cloudflare, Express on AWS Lambda — because Inngest doesn’t require you to run a long-lived worker process. You expose a single API route in your app; Inngest’s control plane invokes that route to execute steps; your functions run on the same compute you’re already paying for. The control plane handles scheduling, retries, fan-out, observability, and the durable state machine.
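That single API route is mostly wiring. A minimal sketch for Next.js App Router using the docs' inngest/next adapter; the import paths and function name here are illustrative, not prescribed:

```typescript
// app/api/inngest/route.ts: the one endpoint Inngest's control plane invokes.
// Import paths below are illustrative; adjust to your project layout.
import { serve } from "inngest/next";
import { inngest } from "@/inngest/client";
import { onboardNewUser } from "@/inngest/functions";

// serve() returns route handlers. Inngest calls this route to execute steps
// on your own compute, so no long-lived worker process is needed.
export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [onboardNewUser],
});
```

Every function you want the platform to run gets registered in that `functions` array; everything else (scheduling, retries, state) lives on the control plane.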

Core capabilities

Five primitives do the heavy lifting:

  • Durable functions. Functions are replay-safe. Each call to step.run() records its return value; if the function is interrupted and resumed, completed steps are skipped and only unfinished work runs again. Automatic retries with exponential backoff are configurable per function.
  • Fan-out. One event can trigger many handlers. The classic example: user.signedup fires off a welcome email function, a CRM-sync function, an analytics function, and a Day-1 onboarding function — in parallel, with independent retries and observability.
  • Schedules. Cron-syntax schedules and delays declared inline with the function. Support for both recurring schedules and one-off scheduled triggers.
  • step.run() for atomic units. Wrap any side-effecting block in step.run() and the runtime memoizes its return value once it succeeds, so a completed step never re-executes across retries and replays; a failing step is retried on its own until it succeeds or exhausts its retry budget.
  • step.sleep() and step.sleepUntil() for delays without compute. Pause a function for hours, days, or weeks without holding open a serverless invocation. The runtime wakes the function back up at the right moment.
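The schedule primitive is declared in the same config object as the function itself. A hedged sketch based on the docs' createFunction config; the function id and the digest step are placeholders:

```typescript
import { inngest } from "./client";

// A cron trigger declared inline with the function, per the docs'
// createFunction config. "daily-digest" and the step body are placeholders;
// the retries option shows per-function retry configuration.
export const dailyDigest = inngest.createFunction(
  { id: "daily-digest", retries: 5 }, // retries with backoff, set per function
  { cron: "0 8 * * *" },              // every day at 08:00 (UTC by default)
  async ({ step }) => {
    await step.run("build-and-send-digest", async () => {
      // ...assemble the digest and hand it to your email provider...
    });
  }
);
```

No separate scheduler config, no crontab on a box somewhere: the schedule deploys with the code that uses it.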

Each of these is a primitive you would otherwise build yourself out of Redis, Postgres, BullMQ, and a long-lived worker process. The integration is the value, not any single feature in isolation.

The mental model

Three sentences capture the way Inngest wants you to think about your background work. Events trigger functions. Functions are made of steps. Steps are individually durable. Once that model clicks, almost every async-work shape in a SaaS product maps onto it cleanly.

A signup is an event named user.signedup. Multiple functions can subscribe to that event — one to send the welcome email, one to push into the CRM, one to start an embedding job for the user’s first uploaded document. Each function is a sequence of steps: validate the payload, look up the user, call the third-party API, write back to the database. If any step fails, only that step retries; the rest of the function’s state is preserved. If the function is paused with step.sleep("3 days") for a Day-3 follow-up email, no compute is consumed during the pause — the runtime simply schedules the next step to run three days later.
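The "steps are individually durable" sentence can be made concrete with a toy replay loop. This is not Inngest's implementation, just a self-contained sketch of the memoization idea: results are checkpointed by step name, so a replayed invocation skips completed steps.

```typescript
// Toy illustration (not Inngest's real runtime) of durable steps:
// results are checkpointed by name, so a replay skips completed work.
type Checkpoints = Map<string, unknown>;

async function runStep<T>(
  memo: Checkpoints,
  name: string,
  fn: () => Promise<T>
): Promise<T> {
  if (memo.has(name)) return memo.get(name) as T; // replay: skip finished step
  const result = await fn();
  memo.set(name, result); // checkpoint the result before moving on
  return result;
}

// First invocation: "load-user" succeeds, then the process dies mid-function.
// A later invocation replays against the same checkpoints and skips it.
async function demo(): Promise<number> {
  const memo: Checkpoints = new Map();
  let loads = 0;

  await runStep(memo, "load-user", async () => { loads++; return { id: 1 }; });
  // ...pretend the compute died here; the replay below reuses the checkpoints...
  await runStep(memo, "load-user", async () => { loads++; return { id: 1 }; });

  return loads; // the step body ran only once despite two invocations
}
```

The real runtime persists these checkpoints server-side, which is what lets a function resume days later on a fresh serverless invocation.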

Defining a function

An Inngest function in TypeScript looks like this, per the docs:

import { inngest } from "./client";

export const onboardNewUser = inngest.createFunction(
  { id: "onboard-new-user" },
  { event: "user.signedup" },
  async ({ event, step }) => {
    const user = await step.run("load-user", async () => {
      return await db.users.findById(event.data.userId);
    });

    await step.run("send-welcome-email", async () => {
      return await resend.emails.send({
        to: user.email,
        from: "hi@yoursaas.com",
        subject: "Welcome",
        html: renderWelcomeTemplate(user),
      });
    });

    await step.sleep("wait-3-days", "3d");

  await step.run("send-day-3-followup", async () => {
    return await resend.emails.send({
      to: user.email,
      from: "hi@yoursaas.com",
      subject: "How’s it going?",
      html: renderDay3Template(user),
    });
  });
  }
);

Three things are worth noticing. First, every side effect is wrapped in step.run() with a stable name — that name is the key the runtime uses to checkpoint the result. Second, the three-day pause uses step.sleep(), not setTimeout, which means no compute runs during the wait. Third, the function reads as a normal sequential async function, even though under the hood it might execute across multiple separate invocations of the API route over the course of three days.

Triggering events

Events are sent from anywhere in your app — a server action, a webhook handler, a cron route, another function:

await inngest.send({
  name: "user.signedup",
  data: { userId: user.id },
});

That call doesn’t execute any function in your app process. It tells Inngest’s control plane that the event happened, and the control plane fans the event out to every function subscribed to it. Your server action returns immediately; the user gets a fast response; the work happens asynchronously across however many handlers are subscribed.
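The fan-out itself can be sketched in a few lines. Again a toy model, not Inngest's code: one event name maps to many subscribed handlers, each of which succeeds or fails independently.

```typescript
// Toy sketch of event fan-out (not Inngest's implementation): one event name
// maps to many subscribed handlers, each invoked independently.
type Handler = (data: unknown) => Promise<string>;
const subscriptions = new Map<string, Handler[]>();

function subscribe(eventName: string, handler: Handler): void {
  const list = subscriptions.get(eventName) ?? [];
  list.push(handler);
  subscriptions.set(eventName, list);
}

async function send(eventName: string, data: unknown): Promise<string[]> {
  const handlers = subscriptions.get(eventName) ?? [];
  // allSettled: one handler failing does not stop the others.
  const results = await Promise.allSettled(handlers.map((h) => h(data)));
  return results.map((r) => (r.status === "fulfilled" ? r.value : "failed"));
}

// Two independent handlers subscribed to the same signup event.
subscribe("user.signedup", async () => "welcome-email-sent");
subscribe("user.signedup", async () => "crm-synced");
```

In the real platform each handler is a full durable function with its own retries and its own run timeline in the dashboard.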

Observability

Most homegrown queue setups die not because the queue fails but because nobody can tell what is happening when it does. Inngest’s dashboard treats this as a first-class concern:

  • Run timelines. Every function invocation gets a page showing the triggering event, every step in order, the timing of each step, the return value, and any errors.
  • Retry visibility. When a step fails and retries, each attempt is its own row with its error message and timestamp. No log-grepping required.
  • Real-time logs. Logs from inside a step stream into the dashboard live during execution.
  • Dev server. A local dev server runs the entire Inngest control plane on your laptop so you can develop and test functions offline. This is one of the strongest local-DX stories in the background-job category.

The polish gap between the Inngest dashboard and a self-hosted Bull Board on top of BullMQ is large. The polish gap against “tail Vercel logs and hope” is even larger. Founders most often skip observability when rolling their own queues, and Inngest makes it the default.

Pricing tiers

Inngest publishes four plans on inngest.com/pricing. Always reconcile against the live page; numbers shift periodically.

Free
$0/month
  • 50,000 runs per month included
  • 1,000 step executions per run
  • Full feature set: durable functions, fan-out, schedules, retries, dashboard
  • Local dev server is free, with no caps
  • Community support
  • Generous enough for an indie launch and a real beta
Basic
$30/month base + usage
  • The first paid step up from Free; run and concurrency caps sit between the Free and Pro limits (see the live pricing page for current numbers)
Pro
$300/month base + usage
  • Substantially higher concurrency and run caps
  • Per-unit overage rates published on the pricing page
  • Multiple environments (staging + production)
  • Priority support
  • Designed for growing SaaS with serious async-workflow volume
Enterprise
Custom
  • Everything in Pro
  • SOC 2, SLA, dedicated support
  • Custom data residency and contractual terms
  • Negotiated commit pricing

The self-host question

Inngest is not officially open source in the way Trigger.dev is. Self-hosting is not a supported deployment path; the company runs the platform as a managed service. For founders who require that no payload data leaves their infrastructure — healthcare, finance, EU residency rules — this is a real constraint, and the answer is to look at Trigger.dev or to roll your own with BullMQ. For the much larger group of solo founders without that constraint, the closed-source posture is closer to a non-issue: you’re already paying third parties for auth, email, payments, and AI inference, and adding one more managed dependency is not categorically different.

That said, if optionality matters to you on principle — if you want the ability to move workloads off the platform without rewriting them — the self-host story is the dimension where Trigger.dev wins outright.

When Inngest fits

Three product shapes where Inngest is genuinely the right pick:

  • Event-driven architectures. If your app naturally produces events — signups, payments, content uploads, plan changes — and many side effects need to happen in response, the event-and-fan-out model maps cleanly onto Inngest.
  • Fan-out workflows. One signup that triggers a welcome email, an analytics event, a CRM sync, and a Day-3 onboarding sequence. Each is a separate function subscribed to the same event; each retries independently; each shows up on the dashboard separately.
  • High-volume async work. Inngest’s runtime is designed for concurrency. If you’re processing tens of thousands of inbound webhooks a day, queueing thousands of LLM calls, or running large fan-out batch jobs, the platform’s scaling story is one of its strengths.

The common thread: workloads where the “one event, many handlers” shape recurs. The deeper your async workflows go, the more value you get out of the platform.

When Inngest doesn’t fit

Three scenarios where Inngest is overkill or wrong:

  • Simple cron-only needs. If your only background work is “hit this URL once a day at 8am,” Vercel Cron is fine. It’s already in your platform, costs nothing extra, and doesn’t require a second SDK.
  • You need self-host. If your customers or compliance posture require that no payload data leaves your infrastructure, use Trigger.dev (open source, supported self-host) or BullMQ (DIY). Inngest is a managed service.
  • Very low volume. If you’re sending ten background jobs a week, the free tier is more than you need and you’re adding an SDK and a dashboard for work a single Vercel Cron route would handle. The right answer is the simpler tool.

Inngest vs Trigger.dev vs Vercel Cron vs BullMQ

Inngest vs Trigger.dev

The closest comparison. Both are TypeScript-native durable-execution platforms with hosted offerings, generous free tiers, and dashboards. Inngest leans event-driven: events fan out to many functions, and the mental model centers on events plus steps. Trigger.dev leans task-driven: you trigger a task by name, and the mental model centers on a single durable run with retries and observability. For event-fan-out and high-throughput workloads, Inngest is often the more natural fit. For long-running single-task jobs (a 90-second image generation, a multi-step LLM agent) and for self-host requirements, Trigger.dev is the closer match. Both are good; the right pick is determined by the shape of your workload, not by raw feature count.

Inngest vs Vercel Cron

Vercel Cron is the lightweight option. It calls a route in your Next.js app on a cron schedule. There’s no SDK to learn, no dashboard, no concurrency control, no retries beyond what your route does itself. For a daily digest or a weekly cleanup it’s genuinely enough — we walk through the setup in our Vercel Cron guide. The moment your job needs retries, fan-out, or any orchestration, Vercel Cron stops scaling and Inngest becomes the right pick. Hosting platform tradeoffs are covered in Vercel vs Railway.

Inngest vs BullMQ

BullMQ is the “build it yourself” baseline. You provision Redis, write a worker process, deploy that worker somewhere, and add Bull Board for the dashboard. BullMQ is mature, free, and fully under your control. It’s also a project of its own: the operational tax of running stateful Redis with replication and the engineering tax of building observability that approaches what Inngest ships out of the box are both real. For solo founders optimizing for shipped features per week, the trade often goes against rolling your own. BullMQ wins when self-host is mandatory and the operational cost is worth it.

The honest verdict

Inngest is the right pick when your background work is event-driven and fan-out heavy — when one user action naturally triggers four to ten side effects, when those side effects need independent retries and visibility, and when you don’t want to run Redis. The free tier is generous enough to validate fit at zero cost (50K runs per month covers a real beta), and the dev server is one of the better local DX stories in the category. The pricing ladder steps up sharply between Basic and Pro — $30 to $300 is a real jump — so it’s worth modeling against your projected run volume before you commit.

For founders with single-task durable workloads (a long AI generation job, a video render) and especially for those who need self-host, Trigger.dev often wins on shape and licensing. For one-route-a-day cron jobs, Vercel Cron is fine and you don’t need either platform. Inngest finds its sweet spot in the middle: real fan-out, real concurrency, real observability needs, and a managed service tradeoff you’re willing to make. We cover the rest of the modern dev-tool stack in our best SaaS tools for developers roundup, and the AI-shaped half of these workloads in our how to build a SaaS with Claude guide.
