An eight-step tutorial covering buckets, policies on storage.objects, signed URLs, image transforms, and the resumable-upload pattern for files larger than 6 MB.
Methodology. This tutorial synthesizes the Supabase Storage documentation and the Next.js App Router file-upload patterns as of May 2026. Pricing tiers, bucket size limits, and bandwidth allowances change — verify against supabase.com/docs/guides/storage and supabase.com/pricing for the current numbers.
Almost every SaaS hits the file-upload problem within the first month of building. A profile avatar, a CSV import, an attachment on a support ticket, a logo on a team settings page. The shape is always the same: pick a file from the user’s machine, upload it somewhere, store a reference in your database, render it later via a URL the right people can read and the wrong people can’t.
The classic answer was “set up an S3 bucket and a signed-URL flow.” That still works. But for a SaaS already running on Supabase — using the Postgres database, the auth, and the row-level security model — Supabase Storage is the lower-friction path. It’s an S3-compatible object store with an extra layer that ties bucket access to the same authenticated user identity that your database tables already use, plus a CDN-backed render endpoint that does on-the-fly image transforms. You stay inside one vendor, one auth model, one set of credentials.
This guide walks the canonical eight-step setup, from bucket creation through the resumable-upload pattern for files large enough that a single HTTP request will time out. Every code block is the pattern Supabase’s public docs recommend, adapted for the Next.js App Router and Server Actions.
Before any code, the architecture decision: where do uploaded files live?
For a solo founder building a SaaS on the standard Next.js + Supabase + Vercel stack, Supabase Storage is the right default — same auth model as your database, RLS-style access controls, native image transforms, and no extra vendor to manage. Move to Cloudflare R2 once egress costs start dominating; move to a hosted widget like Uploadcare only if your product’s upload UX is genuinely complex (multi-file, drag-and-drop, AI-assisted cropping). For everything else, stay on Supabase.
Buckets are top-level containers in Supabase Storage. Each bucket has a name, a public/private flag, optional file-size and MIME-type restrictions, and a set of policies that govern who can read and write its objects. Create one in the Supabase dashboard at Storage → Create bucket.
The most important decision at creation time is the public/private flag. A public bucket means any object inside it is readable by anyone with the URL — no auth check, no policy evaluation, no signed URLs needed. A private bucket requires either an authenticated session that satisfies a read policy, or a time-limited signed URL.
The honest default: private, almost always. The narrow exceptions are assets genuinely meant to be world-readable — marketing images, public blog assets, anything you would happily serve from a CDN with no auth check at all.
For everything else — user uploads, attachments, profile photos, file imports, exports awaiting download — the bucket should be private and access should flow through policies on storage.objects or through signed URLs.
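The dashboard flow is point-and-click, but creation can also be scripted. A minimal sketch using storage-js's createBucket, assuming a server-side script built with the service-role key (bucket management is not open to the anon key by default); the restriction options are covered next:

// scripts/create-bucket.ts — server-side only; the service-role key must never reach the browser.
import { createClient } from '@supabase/supabase-js';

const admin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

const { error } = await admin.storage.createBucket('user-uploads', {
  public: false, // private: reads require a policy match or a signed URL
  fileSizeLimit: '25MB', // per-object ceiling, enforced by Storage itself
  allowedMimeTypes: ['image/png', 'image/jpeg', 'image/webp']
});
if (error) throw error;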
Other settings worth configuring at create time: a per-file size limit and an allowed-MIME-type list. For an avatar bucket, allow image/png, image/jpeg, image/webp; for a CSV import bucket, allow text/csv and application/vnd.ms-excel. These are enforced by Storage itself and make a useful first line of defense, though Step 5 still validates in your app code.

Supabase Storage policies are SQL row-level-security rules on the storage.objects table. The same RLS model that protects your application tables protects your buckets — you write a policy that says “a user can read or write a row if X,” and Supabase enforces it on every storage API call.
The canonical pattern for user-scoped uploads is a path prefix that includes the user’s ID. Files for user abc-123 live at avatars/abc-123/foo.png; the policy says “allow if the path starts with the authenticated user’s ID.”
-- Bucket: 'user-uploads'
-- Layout: {user_id}/{filename}
-- Allow authenticated users to upload to their own folder.
create policy "users can upload to own folder"
on storage.objects for insert
to authenticated
with check (
bucket_id = 'user-uploads'
and (storage.foldername(name))[1] = auth.uid()::text
);
-- Allow authenticated users to read their own files.
create policy "users can read own files"
on storage.objects for select
to authenticated
using (
bucket_id = 'user-uploads'
and (storage.foldername(name))[1] = auth.uid()::text
);
-- Allow authenticated users to delete their own files.
create policy "users can delete own files"
on storage.objects for delete
to authenticated
using (
bucket_id = 'user-uploads'
and (storage.foldername(name))[1] = auth.uid()::text
);
-- Allow authenticated users to update (replace) their own files.
create policy "users can update own files"
on storage.objects for update
to authenticated
using (
bucket_id = 'user-uploads'
and (storage.foldername(name))[1] = auth.uid()::text
);
The storage.foldername(name) function returns an array of the folder segments in the object name; the filename itself is excluded. [1] is the first segment — the user-ID prefix. auth.uid() is the authenticated user’s ID from the JWT. The policy passes if and only if those two match.
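A quick sanity check in the SQL editor shows exactly what the policy compares (results shown as comments):

-- The filename itself is dropped; only folder segments are returned.
select storage.foldername('abc-123/profile.png');
-- => {abc-123}
select (storage.foldername('abc-123/avatars/profile.png'))[1];
-- => abc-123, the value matched against auth.uid()::text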
For multi-tenant SaaS where files belong to a workspace rather than a single user, the same pattern applies with a workspace-membership check:
-- Bucket: 'workspace-files'
-- Layout: {workspace_id}/{filename}
create policy "members can read workspace files"
on storage.objects for select
to authenticated
using (
bucket_id = 'workspace-files'
and exists (
select 1 from workspace_members
where workspace_id = ((storage.foldername(name))[1])::uuid
and user_id = auth.uid()
)
);
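The write side is the same membership check moved into a with check clause; a sketch mirroring the read policy above:

create policy "members can upload workspace files"
on storage.objects for insert
to authenticated
with check (
  bucket_id = 'workspace-files'
  and exists (
    select 1 from workspace_members
    where workspace_id = ((storage.foldername(name))[1])::uuid
    and user_id = auth.uid()
  )
);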
The full RLS pattern for multi-tenant SaaS — including how to model workspace memberships and how to test policies — is in the Supabase RLS for multi-tenant SaaS tutorial.
The client-side React component handles three things: picking the file, sending it to the server, and showing progress while the upload runs. Modern browsers expose upload progress via the XMLHttpRequest API; fetch does not surface upload progress, so the canonical pattern uses XHR for the upload step.
// components/upload-button.tsx
'use client';
import { useState } from 'react';
type UploadResult = {
path: string;
url: string;
};
export function UploadButton({
onComplete
}: {
onComplete: (result: UploadResult) => void;
}) {
const [progress, setProgress] = useState(0);
const [error, setError] = useState<string | null>(null);
const [uploading, setUploading] = useState(false);
const handleFile = async (file: File) => {
setError(null);
setUploading(true);
setProgress(0);
// Client-side guardrail; server-side validates again in Step 5.
if (file.size > 25 * 1024 * 1024) {
setError('File too large (max 25MB)');
setUploading(false);
return;
}
const form = new FormData();
form.append('file', file);
const xhr = new XMLHttpRequest();
xhr.open('POST', '/api/upload', true);
xhr.upload.onprogress = (e) => {
if (e.lengthComputable) {
setProgress(Math.round((e.loaded / e.total) * 100));
}
};
xhr.onload = () => {
setUploading(false);
if (xhr.status >= 200 && xhr.status < 300) {
const result = JSON.parse(xhr.responseText) as UploadResult;
onComplete(result);
} else {
setError(`Upload failed: ${xhr.statusText}`);
}
};
xhr.onerror = () => {
setUploading(false);
setError('Network error during upload');
};
xhr.send(form);
};
return (
<div>
<input
type="file"
accept="image/png,image/jpeg,image/webp"
onChange={(e) => {
const f = e.target.files?.[0];
if (f) handleFile(f);
}}
disabled={uploading}
/>
{uploading && (
<div role="progressbar" aria-valuenow={progress}>
{progress}%
</div>
)}
{error && <div role="alert">{error}</div>}
</div>
);
}
The accept attribute is a UX hint, not a security boundary — the browser uses it to filter the file picker, but a determined user can still upload anything. Server-side validation in Step 5 is the actual gate.
For the “upload directly from the browser to Supabase” pattern that skips your server entirely, Supabase exposes a signed-URL upload flow (storage.from(bucket).createSignedUploadUrl) that lets the browser PUT directly to the storage CDN with a short-lived signed token. That pattern saves you bandwidth on Vercel but loses the server-side validation step. For most solo founders, route through the server — the per-upload bandwidth cost is small and the validation surface is too important to skip.
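For reference, if you do choose the direct flow, it is two calls: the server mints a one-time token for a path it controls, and the browser uploads straight to storage with it. A sketch — the surrounding route and auth wiring are the Step 5 setup, and the server route implied here is hypothetical:

// Server side (hypothetical Route Handler): `supabase` and `user` come from
// the createServerClient + getUser wiring shown in Step 5.
const objectName = `${user.id}/${randomUUID()}.bin`;
const { data, error } = await supabase.storage
  .from('user-uploads')
  .createSignedUploadUrl(objectName);
// Return { path: data.path, token: data.token } to the browser.

// Browser side: the bytes bypass your server entirely. `path` and `token`
// are the values the route returned; `file` is the File from the picker.
import { createClient } from '@supabase/supabase-js';
const browser = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
const { error: uploadError } = await browser.storage
  .from('user-uploads')
  .uploadToSignedUrl(path, token, file);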
The server-side handler accepts the FormData, validates everything, generates a UUID-based filename, writes to Supabase Storage, and returns a path the client can use. Use a Next.js Route Handler for the upload itself: Server Actions push the file through the serialized action body, which defaults to a 1 MB limit, while a Route Handler reads the multipart request directly.
// app/api/upload/route.ts
import { NextResponse } from 'next/server';
import { createServerClient } from '@supabase/ssr';
import { cookies } from 'next/headers';
import { randomUUID } from 'crypto';
const ALLOWED_MIME = ['image/png', 'image/jpeg', 'image/webp'];
const MAX_BYTES = 25 * 1024 * 1024; // 25MB
export async function POST(req: Request) {
const cookieStore = await cookies(); // cookies() is async as of Next.js 15
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{
cookies: {
get: (name) => cookieStore.get(name)?.value
}
}
);
const { data: { user } } = await supabase.auth.getUser();
if (!user) {
return NextResponse.json({ error: 'unauthorized' }, { status: 401 });
}
const form = await req.formData();
const file = form.get('file') as File | null;
if (!file) {
return NextResponse.json({ error: 'no file' }, { status: 400 });
}
// Server-side validation. Never trust the client.
if (file.size > MAX_BYTES) {
return NextResponse.json({ error: 'file too large' }, { status: 413 });
}
if (!ALLOWED_MIME.includes(file.type)) {
return NextResponse.json({ error: 'unsupported type' }, { status: 415 });
}
// UUID-based filename. Never trust the user-provided one.
// split('.').pop() returns the whole name when there is no dot, so guard first.
const ext = file.name.includes('.') ? file.name.split('.').pop()!.toLowerCase() : 'bin';
const objectName = `${user.id}/${randomUUID()}.${ext}`;
const { error: uploadError } = await supabase.storage
.from('user-uploads')
.upload(objectName, file, {
contentType: file.type,
upsert: false,
cacheControl: '3600'
});
if (uploadError) {
return NextResponse.json(
{ error: uploadError.message },
{ status: 500 }
);
}
// Sign a URL the client can render immediately.
const { data: signed } = await supabase.storage
.from('user-uploads')
.createSignedUrl(objectName, 60 * 60); // 1 hour
return NextResponse.json({
path: objectName,
url: signed?.signedUrl
});
}
Three parts deserve a callout. First, UUID filenames: never use the filename the user typed. ../../etc/passwd.png is not a creative test case — path traversal is a real attack class, and even when storage handles it correctly, you still get filename collisions when two users both upload image.png. Always generate the storage path yourself.
Second, server-side MIME validation: the file.type field comes from the browser’s sniffing and can be lied about. For higher-security applications, additionally inspect the file’s magic bytes (the first 8–16 bytes that identify true file type) using a library like file-type. For most SaaS use cases, the bucket-level MIME restriction (Step 2) plus the explicit allowlist here is enough.
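If you do add the magic-byte check, the file-type package (a third-party dependency, not part of the Supabase SDK) makes it a few lines. A sketch of what would slot in right after the ALLOWED_MIME check in the route above:

// npm install file-type
import { fileTypeFromBuffer } from 'file-type';

const bytes = Buffer.from(await file.arrayBuffer());
const detected = await fileTypeFromBuffer(bytes);
// `detected` is undefined when the bytes match no known signature,
// otherwise { ext, mime } derived from the actual content.
if (!detected || !ALLOWED_MIME.includes(detected.mime)) {
  return NextResponse.json({ error: 'content does not match an allowed type' }, { status: 415 });
}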
Third, upsert: false: never overwrite an existing object on upload. With UUID filenames the collision probability is effectively zero, but the explicit flag makes the intent obvious and prevents a bug elsewhere from silently destroying data.
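The route returns the path rather than storing it; persisting the reference is one query against your own schema. A sketch, assuming a hypothetical profiles table with an avatar_path column (and note it stores the path, not the signed URL, because signed URLs expire):

// Still inside the route handler, after a successful upload:
const { error: dbError } = await supabase
  .from('profiles') // hypothetical table
  .update({ avatar_path: objectName }) // store the path; sign URLs at render time
  .eq('id', user.id);
if (dbError) {
  // The object exists but the reference write failed; remove it to avoid orphans.
  await supabase.storage.from('user-uploads').remove([objectName]);
  return NextResponse.json({ error: dbError.message }, { status: 500 });
}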
Once a file is in a private bucket, you need a way to render it. The Supabase pattern is signed URLs — time-limited tokens that grant read access to a single object. Generate one server-side when you need to display the file, and let the URL expire when you don’t.
// lib/storage.ts
import { createServerClient } from '@supabase/ssr';
import { cookies } from 'next/headers';
export async function getSignedUrl(
bucket: string,
path: string,
expiresInSeconds: number = 3600
): Promise<string | null> {
const cookieStore = await cookies(); // cookies() is async as of Next.js 15
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{
cookies: {
get: (name) => cookieStore.get(name)?.value
}
}
);
const { data, error } = await supabase.storage
.from(bucket)
.createSignedUrl(path, expiresInSeconds);
if (error) return null;
return data?.signedUrl ?? null;
}
The expiration is a tradeoff. Short windows (5 minutes) are safer if a URL leaks but force more frequent regeneration; long windows (24 hours) are user-friendly but risk being captured and replayed. The sweet spot for most app UI is 1 hour — long enough that a single page render or a short-lived email link works, short enough that a leaked URL becomes invalid quickly.
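One way to keep that tradeoff visible in code is a named table of TTLs instead of magic numbers at each call site. A suggested convention, not a Supabase API:

// lib/storage-ttl.ts
export const SIGNED_URL_TTL = {
  appUi: 60 * 60, // 1 hour: page renders and dashboards
  emailLink: 60 * 60 * 24, // 24 hours: links that must survive an inbox delay
  oneShot: 60 * 5 // 5 minutes: immediate export downloads
} as const;

// Usage: getSignedUrl('user-uploads', path, SIGNED_URL_TTL.appUi)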
For batch generation, use createSignedUrls (plural), which takes an array of paths and returns an array of signed URLs in one round trip:
// In a Server Component or Route Handler:
const { data } = await supabase.storage
.from('user-uploads')
.createSignedUrls(
['userA/file1.png', 'userA/file2.png', 'userA/file3.png'],
3600
);
// data is an array of { path, signedUrl, error } records.
For a list view rendering 100 thumbnails, this is the difference between 100 sequential round trips and 1 batch call.
Supabase Storage exposes an image-render endpoint that takes query parameters for resize, format, and quality — transforms happen at the edge CDN, the result is cached, and you avoid shipping a 4 MB original to a 200×200 avatar slot.
The transforms are passed via a transform option to createSignedUrl:
// lib/image-url.ts
import { createServerClient } from '@supabase/ssr';
import { cookies } from 'next/headers';
type ImageTransform = {
width?: number;
height?: number;
quality?: number; // 1-100
format?: 'origin' | 'webp' | 'avif';
resize?: 'cover' | 'contain' | 'fill';
};
export async function getImageUrl(
path: string,
transform: ImageTransform = {}
): Promise<string | null> {
const cookieStore = await cookies(); // cookies() is async as of Next.js 15
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{
cookies: { get: (name) => cookieStore.get(name)?.value }
}
);
const { data } = await supabase.storage
.from('user-uploads')
.createSignedUrl(path, 3600, {
transform: {
width: transform.width,
height: transform.height,
quality: transform.quality ?? 80,
format: transform.format ?? 'origin',
resize: transform.resize ?? 'cover'
}
});
return data?.signedUrl ?? null;
}
// Usage:
// const avatarUrl = await getImageUrl(profile.avatar_path, {
// width: 96, height: 96, format: 'webp'
// });
The resize values follow CSS object-fit conventions: cover fills the box and crops, contain fits inside the box and letterboxes, fill stretches. format: 'webp' serves WebP to browsers that support it (almost all of them in 2026); avif is the more aggressive option and is supported widely in modern browsers.
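Since each distinct transform is just a different signed URL, responsive images fall out naturally. A sketch that builds a srcset with the getImageUrl helper above (avatarPath and the width list are placeholders):

// In a Server Component: one cached CDN rendition per width.
const widths = [96, 192, 384];
const entries = await Promise.all(
  widths.map(async (w) => ({
    w,
    url: await getImageUrl(avatarPath, { width: w, height: w, format: 'webp' })
  }))
);
const srcSet = entries
  .filter((e) => e.url)
  .map((e) => `${e.url} ${e.w}w`)
  .join(', ');
const src = entries[0]?.url ?? '';
// <img src={src} srcSet={srcSet} sizes="96px" alt="" />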
Image transforms are an add-on on Supabase Pro — verify the current pricing on supabase.com/pricing. For workloads beyond what Supabase’s built-in transforms cover (face detection, smart cropping, AI-assisted image optimization), Cloudflare Images and imgproxy are the next steps up. For most SaaS apps — resize an avatar, generate an OG image thumbnail, ship a smaller version of an uploaded photo — the built-in transforms are enough.
Standard upload calls send the whole file in a single HTTP request. That works fine up to a few megabytes. Past 6 MB — the practical ceiling for a single request when network conditions are imperfect — you start seeing timeouts, retries that re-upload the entire file, and a frustrating user experience on flaky connections.
Supabase Storage speaks the tus.io resumable-upload protocol through its /storage/v1/upload/resumable endpoint; any tus client works, and the pattern below uses tus-js-client, the one Supabase’s docs demonstrate. The protocol breaks the file into chunks (6 MB on Supabase), uploads each chunk as a separate request, and tracks progress server-side so an interrupted upload resumes from the last completed chunk instead of starting over.
// components/resumable-upload.tsx
'use client';
import { useState } from 'react';
import { createClient } from '@supabase/supabase-js';
import * as tus from 'tus-js-client';
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL!;
const supabaseAnon = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!;
export function ResumableUpload({
bucket,
pathPrefix
}: {
bucket: string;
pathPrefix: string;
}) {
const [progress, setProgress] = useState(0);
const [error, setError] = useState<string | null>(null);
const uploadFile = async (file: File) => {
setError(null);
setProgress(0);
const supabase = createClient(supabaseUrl, supabaseAnon);
const { data: { session } } = await supabase.auth.getSession();
if (!session) {
setError('not authenticated');
return;
}
// UUID-based object name, per Step 5 — never reuse the user's filename in the path.
const ext = file.name.includes('.') ? file.name.split('.').pop()!.toLowerCase() : 'bin';
const objectName = `${pathPrefix}/${crypto.randomUUID()}.${ext}`;
const upload = new tus.Upload(file, {
endpoint: `${supabaseUrl}/storage/v1/upload/resumable`,
retryDelays: [0, 3000, 5000, 10000, 20000],
headers: {
authorization: `Bearer ${session.access_token}`,
'x-upsert': 'false'
},
uploadDataDuringCreation: true,
removeFingerprintOnSuccess: true,
metadata: {
bucketName: bucket,
objectName,
contentType: file.type,
cacheControl: '3600'
},
chunkSize: 6 * 1024 * 1024, // 6MB — required by Supabase
onError: (err) => setError(String(err)),
onProgress: (bytesUploaded, bytesTotal) => {
setProgress(Math.round((bytesUploaded / bytesTotal) * 100));
},
onSuccess: () => setProgress(100)
});
// Resume from a previous attempt if one is recorded for this file.
const previous = await upload.findPreviousUploads();
if (previous.length > 0) {
upload.resumeFromPreviousUpload(previous[0]);
}
upload.start();
};
return (
<div>
<input
type="file"
onChange={(e) => {
const f = e.target.files?.[0];
if (f) uploadFile(f);
}}
/>
<div role="progressbar" aria-valuenow={progress}>{progress}%</div>
{error && <div role="alert">{error}</div>}
</div>
);
}
Three constraints worth remembering. First, the chunk size must be exactly 6 MB for all chunks except the final one — Supabase’s tus implementation rejects other sizes. Second, the findPreviousUploads + resumeFromPreviousUpload pair is what makes the “resume after browser refresh” UX work; without it, a closed tab restarts the upload from zero. Third, the protocol uses the user’s JWT directly — the same RLS policies from Step 3 apply, no separate server-side action required.
Use the resumable pattern when the expected file size exceeds 6 MB or when network conditions are likely to be imperfect (mobile uploads, large attachments). Stick with the simple supabase.storage.from(bucket).upload() for everything else — it’s less code and faster for small files.
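If callers shouldn’t have to remember the cutoff, a size-based dispatcher keeps the rule in one place. A sketch; the two callbacks stand in for the simple flow from Step 5 and the tus flow above:

// lib/upload-dispatch.ts
const RESUMABLE_THRESHOLD = 6 * 1024 * 1024; // mirrors Supabase's 6 MB tus chunk size

export async function uploadDispatch(
  file: File,
  simple: (f: File) => Promise<void>, // single-request path (Step 5)
  resumable: (f: File) => Promise<void> // tus path (this step)
): Promise<void> {
  return file.size > RESUMABLE_THRESHOLD ? resumable(file) : simple(file);
}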
Free and Pro both default to a 50 MB per-file ceiling. Pro lets you raise this up to 5 GB by configuring the file size limit on each bucket. Free does not. If your product needs to accept files larger than 50 MB and you’re on Free, you’re moving to Pro — the full pricing breakdown is in the Supabase pricing explained guide.
Storage is cheap. Bandwidth (egress — serving files to users) is what eats the bill. Free includes 5 GB of egress per month; Pro includes 250 GB; past that, it’s $0.09/GB. A SaaS that serves a 2 MB image to 1,000 users a day moves roughly 60 GB a month — twelve times Free’s allowance and about a quarter of Pro’s. The fix at scale is either Cloudflare R2 (zero egress fees) or aggressive image-transform usage to ship smaller files. The Supabase vs Firebase comparison covers how the two providers price storage and bandwidth differently.
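The arithmetic, using the numbers quoted above:

2 MB/image × 1,000 views/day ≈ 2 GB/day ≈ 60 GB/month
Free: 60 GB ÷ 5 GB/month = 12× the allowance (gone ~2.5 days into each month)
Pro: 60 GB ÷ 250 GB/month ≈ 24% of the allowance
Past the Pro allowance, overage bills at $0.09/GB, so an extra 60 GB ≈ $5.40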
The most common security failure with Supabase Storage is creating a public bucket for convenience and leaving it that way. Once a bucket is public, every object inside it is readable by anyone with the URL — and URLs leak in browser history, in shared screenshots, in error logs, in indexable cached pages. Always default to private. The performance penalty for using signed URLs is negligible; the security penalty for leaving sensitive uploads in a public bucket is unbounded.
Never use the user-provided filename as the storage path. Use UUIDs (Step 5) or another opaque identifier. The filename collision case (two users both uploading resume.pdf) is the easy reason; the path-traversal case (a malicious filename containing ../ sequences) is the actually dangerous one. Both are solved the same way.
The browser’s reported MIME type is a hint, not a guarantee. A user can upload a file the browser labels image/png that’s actually a PHP script renamed; the bucket’s MIME restriction will block it from being served as PHP, but only if the bucket is configured correctly and only if your downstream code doesn’t re-serve the bytes through a route that ignores the type. Always validate server-side, ideally by inspecting the file’s magic bytes for high-security cases.
Supabase’s built-in transforms are CDN-fast and cheap, but they’re not Cloudflare Images. They don’t do face detection, smart cropping, or AI-assisted optimization. They handle the 90% case — resize, format conversion, quality adjustment — well; if you need more, layer Cloudflare Images on top, or run imgproxy in front of your Supabase bucket. For a typical SaaS that needs avatars, OG images, and thumbnails, the built-in transforms are enough and you don’t need a second image platform.
The Supabase client must be initialized with the user’s session for RLS-style storage policies to work. In a Server Component or Route Handler, this means using createServerClient from @supabase/ssr with the cookies adapter (Step 5). Using the bare @supabase/supabase-js client with just the anon key on the server skips the session entirely and your policies will deny everything. The full Supabase auth/database wiring pattern is in the magic link auth tutorial.
Supabase Storage is the lowest-friction object store for a Next.js SaaS already on Supabase. The eight steps above are the canonical pattern from the Supabase docs: create a private bucket, scope access with policies on storage.objects, route uploads through a server-side Route Handler that validates and rewrites the filename, render via signed URLs with image transforms, and use the resumable protocol once files cross 6 MB. The pieces compose cleanly with the rest of the Supabase stack and let you ship the file-upload feature without leaving your auth model.