Research-based overview. This article synthesizes public documentation, pricing pages, and user reports.

An edge function is a serverless function that is deployed to a globally distributed network of compute nodes (often hundreds of points of presence) and executed in a lightweight runtime — usually a V8 isolate — physically close to the requesting user.

If you already understand serverless functions and CDNs, the cleanest analogy is this: an edge function is to a serverless function as a CDN is to an origin server. A CDN caches static content close to the user; an edge function executes dynamic logic close to the user. Same geographic distribution principle, applied to compute instead of cache.
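Concretely, an edge function is usually just a fetch handler. This is a hypothetical minimal example in the Workers/Vercel Edge style; there is no server or port binding, because the platform invokes the handler once per request:

```typescript
// Minimal edge function sketch: a fetch handler that takes a Web-standard
// Request and returns a Web-standard Response. Names and paths are illustrative.
const handler = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from the edge: ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};

export default handler; // Cloudflare Workers and Vercel Edge expect a default export
```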

How edge functions work

The lifecycle of a single request to an edge function:

  1. Deploy globally. When you push your function, the platform replicates it to every point of presence in its network. Cloudflare's network spans more than 300 cities; Vercel's is smaller but still global.
  2. Route by user proximity. The user's request hits the nearest PoP via anycast routing or DNS. Latency from user to compute is typically < 50 ms anywhere on Earth, compared to 100–300 ms for a single-region origin.
  3. Execute in a lightweight runtime. The function spins up in a V8 isolate (Cloudflare Workers, Vercel Edge, Deno Deploy) or, in AWS's case, a restricted JavaScript runtime (CloudFront Functions) or a full Lambda run at edge locations (Lambda@Edge). Isolate cold starts are typically < 5 ms because there's no container or OS boot.
  4. Return the response. Either directly, or after fetching from the origin / a cache / a database. Most edge function workflows involve at least one origin fetch.
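Steps 2–4 can be sketched as a proxy handler. The origin URL and response header below are illustrative, and the fetcher is injected so the origin can be stubbed without a network call:

```typescript
// Edge-to-origin forwarding sketch. "ORIGIN" is a hypothetical single-region
// origin; in production this would be your real backend host.
type Fetcher = (req: Request) => Promise<Response>;

const ORIGIN = "https://origin.example.com";

async function proxyToOrigin(
  request: Request,
  fetcher: Fetcher = fetch,
): Promise<Response> {
  const incoming = new URL(request.url);
  // Rewrite the host, keep path and query: the edge forwards to the origin.
  const originUrl = new URL(incoming.pathname + incoming.search, ORIGIN);
  const upstream = await fetcher(new Request(originUrl.toString(), request));
  // Copy the response so its headers are mutable, then mark the edge hop.
  const response = new Response(upstream.body, {
    status: upstream.status,
    headers: new Headers(upstream.headers),
  });
  response.headers.set("x-served-via", "edge");
  return response;
}
```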

The runtime constraints are the interesting part. V8 isolates can't run arbitrary Node.js code — they implement a subset of the Web platform APIs (fetch, Response, crypto, etc.) and disallow most of the node: built-in modules. This is why edge functions are advertised in terms of "Web standard" APIs rather than the full Node.js API surface. Some platforms (Vercel, Netlify) now offer a Node.js compatibility mode at the edge, but the original V8-isolate model is still the cleanest mental picture.
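A sketch of what "Web standard only" means in practice: the code below uses nothing but the global Headers, Request, and Response classes, so it runs unchanged in a V8 isolate. The cookie name is made up for illustration:

```typescript
// No node: imports anywhere -- only Web-platform globals.
function getCookie(headers: Headers, name: string): string | null {
  const raw = headers.get("cookie") ?? "";
  for (const part of raw.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key === name) return rest.join("=");
  }
  return null;
}

async function handle(request: Request): Promise<Response> {
  // Hypothetical personalization: read a theme cookie, fall back to "light".
  const theme = getCookie(request.headers, "theme") ?? "light";
  return new Response(JSON.stringify({ theme }), {
    headers: { "content-type": "application/json" },
  });
}
```

The moment this handler tries `import fs from "node:fs"`, it stops being edge-portable — that's the constraint in one line.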

Edge function providers

| Provider | Runtime | Locations | Notable |
| --- | --- | --- | --- |
| Cloudflare Workers | V8 isolate (custom) | ~330 cities | The category-defining product. Tightly integrated with KV, R2, D1, Durable Objects. |
| Vercel Edge Functions | Edge Runtime (V8) + Node compat | 18+ regions | Tightly integrated with Next.js. Best for Next-heavy stacks. |
| Deno Deploy | Deno (V8) | 35+ regions | First-class TypeScript without a build step. Loved by Deno users. |
| Netlify Edge Functions | Deno-based | Global via Deno Deploy | Solid for Netlify-hosted sites. Less compelling on its own. |
| AWS Lambda@Edge / CloudFront Functions | Node.js / JS | ~250 PoPs | Powerful but operationally complex. AWS-native shops only. |

For a solo founder, the practical choice is usually between Cloudflare Workers (if you want to go all-in on the Cloudflare stack) and Vercel Edge Functions (if you're building with Next.js). The other three are good but niche. We compare two of the underlying platforms in Vercel vs Railway and Fly.io vs Railway.

What edge functions are good for

The shared characteristic of every good edge use case: short, stateless work that benefits from being close to the user.
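A canonical example of that shape is geo-based routing. Cloudflare sets a CF-IPCountry header on incoming requests; the locale map below is made up for illustration:

```typescript
// Short, stateless, latency-sensitive: redirect root-path visitors to a
// locale path based on the platform-provided country header.
const LOCALES: Record<string, string> = { DE: "de", FR: "fr" };

function localeRedirect(request: Request): Response | null {
  const url = new URL(request.url);
  if (url.pathname !== "/") return null; // only touch the root path
  const country = request.headers.get("cf-ipcountry") ?? "";
  const locale = LOCALES[country];
  if (!locale) return null; // unknown country: fall through to the origin
  return Response.redirect(new URL(`/${locale}/`, url).toString(), 302);
}
```

The whole thing is a few string lookups, which is why it comfortably fits inside a 10–50 ms CPU budget.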

What edge functions are bad for

The inverse: long-running or CPU-heavy work, anything that needs the full Node.js API surface, and anything that makes repeated round trips to a single-region database, since each round trip gives back the latency the edge was supposed to save.

Edge functions vs serverless functions vs traditional servers

| Property | Traditional server | Serverless function | Edge function |
| --- | --- | --- | --- |
| Where it runs | One region (you pick) | One region (you pick) | ~all regions, automatically |
| Cold start | None (always running) | 100–1000+ ms (Lambda) | < 5 ms (V8 isolate) |
| Runtime | Full OS, full Node/Python/etc. | Full Node/Python/etc. | Web-standard subset |
| CPU/time limits | None | Up to 15 min (Lambda) | 10–50 ms CPU typical |
| Ideal for | Long-running, stateful, predictable load | Stateless API endpoints, infrequent jobs | Latency-sensitive routing, auth, light personalization |

The decision rule. Default to a regional serverless function. Move to the edge only when you can prove the user-perceived latency matters — or when the work is genuinely just a request-router with no heavy compute. Don't put your whole app at the edge because a blog post said you should. The full picture of where this fits sits in our solo founder tech stack guide.
