
# @mux/ai + Vercel Workflows Starter
A Next.js starter template demonstrating how to build durable video AI pipelines with @mux/ai and the Vercel Workflow DevKit.
🚀 Deploy to Vercel
## Three Integration Layers
| Layer | Pattern | Example |
|---|---|---|
| 1. Primitives | Call functions directly | `getSummaryAndTags()` → instant results |
| 2. Workflows | Run durably via Vercel Workflows | `translateCaptions`, `translateAudio` → retries, progress tracking |
| 3. Connectors | Compose with external tools | Clip creation with Remotion → multi-step pipelines |
## Resumable workflows (try it)
This project showcases resumable, durable workflows out of the box:
- Start a workflow (captions, dubbing, or summary).
- Refresh the page, or navigate away and back.
- You should see the workflow still running asynchronously, with status rehydrated from browser localStorage.
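A minimal sketch of that rehydration pattern, assuming a simple status object persisted under a single key. The `WorkflowStatus` type, `KVStore` interface, and `STORAGE_KEY` name below are illustrative, not this repo's actual code:

```typescript
// Hypothetical shape of a persisted workflow run; the real app's
// status object may differ.
type WorkflowStatus = {
  runId: string;
  kind: "captions" | "dubbing" | "summary";
  state: "running" | "complete" | "failed";
};

// Storage is injected so the same helpers work with window.localStorage
// in the browser and with an in-memory stub elsewhere.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "workflow-status";

function saveStatus(store: KVStore, status: WorkflowStatus): void {
  store.setItem(STORAGE_KEY, JSON.stringify(status));
}

function rehydrateStatus(store: KVStore): WorkflowStatus | null {
  const raw = store.getItem(STORAGE_KEY);
  if (raw === null) return null;
  try {
    return JSON.parse(raw) as WorkflowStatus;
  } catch {
    return null; // corrupted entry: treat as no saved run
  }
}
```

In the browser you would call `saveStatus(window.localStorage, ...)` when a workflow starts and `rehydrateStatus(window.localStorage)` on mount, then resume polling the run's status.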
## Quick Start

```bash
npm install
npm run dev
```

Inspect workflow runs locally:

```bash
npx workflow web
```
## Rate Limiting
This demo includes IP-based rate limiting to protect against excessive API costs. Limits are automatically bypassed in development mode.
| Endpoint | Limit | Window |
|---|---|---|
| translate-audio | 3 | 24h |
| translate-captions | 10 | 24h |
| render | 6 | 24h |
| summary | 10 | 24h |
| search | 50 | 1h |
See DOCS/RATE-LIMITS.md for implementation details and maintenance.
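Conceptually, the limits in the table behave like a fixed-window counter keyed by IP. A minimal in-memory sketch of that idea (the actual implementation is described in DOCS/RATE-LIMITS.md and may differ, e.g. by using a shared store rather than process memory):

```typescript
// Sketch: fixed-window rate limiting keyed by IP address.
// Each IP gets up to `limit` requests per `windowMs` window.
class FixedWindowLimiter {
  private windows = new Map<string, { count: number; resetAt: number }>();
  private limit: number;
  private windowMs: number;

  constructor(limit: number, windowMs: number) {
    this.limit = limit;
    this.windowMs = windowMs;
  }

  // Returns true if the request is allowed, false if over the limit.
  check(ip: string, now: number = Date.now()): boolean {
    const w = this.windows.get(ip);
    if (!w || now >= w.resetAt) {
      // First request from this IP, or the previous window expired:
      // start a fresh window.
      this.windows.set(ip, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count >= this.limit) return false;
    w.count += 1;
    return true;
  }
}
```

A production deployment would back this with a shared store (e.g. Redis), since serverless instances don't share process memory.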
## Remotion support
Remotion is used within this example app for composing @mux/ai with video rendering.
### Local Development

```bash
# Open the Remotion Studio for live preview and iteration
npm run remotion:studio

# Render a video locally (for testing)
# Pass the composition name as an argument
npm run remotion:render:local default-composition

# Optionally specify an output path
npm run remotion:render:local default-composition out/foo.mp4
```
### Production Deployment

```bash
# Deploy Remotion site to AWS Lambda for serverless rendering
npm run remotion:deploy
```
Note: `remotion:deploy` bundles and deploys your Remotion site to AWS Lambda for production video rendering. It is not for development; use `remotion:studio` and `remotion:render:local` for local dev and testing.
### Automated Deployment
Remotion is automatically deployed to AWS Lambda when changes to remotion/ are merged into main. See DOCS/AUTOMATED-REMOTION-DEPLOYMENTS.md for details.
## Environment Variables
See AGENTS.md for the full list. At minimum you'll need:
```bash
# Mux credentials
MUX_TOKEN_ID=
MUX_TOKEN_SECRET=

# OpenAI (required for embeddings)
OPENAI_API_KEY=

# Database (PostgreSQL with pgvector): required to store/search the Mux catalog metadata
DATABASE_URL=
```
## Database setup + importing your Mux catalog
This project stores your Mux catalog metadata in Postgres and generates pgvector embeddings for semantic search.
The database schema and migrations are managed with Drizzle (see db/schema.ts and db/migrations/), and the db:* scripts use Drizzle Kit.
### 1) Configure your database connection
Create a .env.local file (this is what both Drizzle and the import script load):
```bash
# Database (PostgreSQL + pgvector)
DATABASE_URL="postgresql://USER:PASSWORD@HOST:5432/DB_NAME"

# Mux (used by the import script)
MUX_TOKEN_ID="..."
MUX_TOKEN_SECRET="..."

# Embeddings (used by the import script)
OPENAI_API_KEY="..."
```
Your Postgres must support pgvector. The first migration will run `CREATE EXTENSION IF NOT EXISTS vector;`.
### 2) Run database migrations

Apply the migrations in db/migrations/ (creates tables + indexes and enables pgvector):

```bash
npm run db:migrate
```
### 3) Import Mux assets (and generate embeddings)

This fetches all ready Mux assets with playback IDs, upserts rows into `videos`, and writes embedding rows into `video_chunks`.

```bash
npm run import-mux-assets
```
To embed subtitles from a specific captions track language, pass `--language` (defaults to `en`). This should match the language of an existing captions track on the source Mux asset; it does not translate captions:

```bash
npm run import-mux-assets -- --language en
```
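For context on what the stored embeddings enable: semantic search ranks `video_chunks` rows by vector distance between the query embedding and each chunk's embedding (pgvector exposes cosine distance as the `<=>` operator). The same measure in plain TypeScript, for illustration:

```typescript
// Sketch: cosine distance, the measure pgvector's `<=>` operator computes.
// 0 means the vectors point the same direction; smaller is more similar.
function cosineDistance(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  // Cosine distance = 1 - cosine similarity.
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In the database this comparison happens inside Postgres, so only the query embedding travels over the wire.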
### 4) Understand the database scripts

- `npm run db:generate`: Generates new migration files from `db/schema.ts` (use this after changing the schema).
- `npm run db:migrate`: Applies migrations to the database defined by `DATABASE_URL`.
- `npm run db:studio`: Opens Drizzle Studio to inspect tables/rows locally (also uses `DATABASE_URL`).
## Media Detail Page Structure
The media detail page (/media/[slug]) is organized into co-located feature folders:
```text
app/media/[slug]/
├── media-content.tsx
├── page.tsx
├── localization/
│   ├── actions.ts      (captions & audio translation)
│   ├── constants.ts
│   └── ui.tsx
├── player/
│   ├── context.ts
│   ├── provider.tsx
│   ├── ui.tsx
│   └── use-player.ts
├── social-clips/
│   ├── actions.ts      (clip creation & Remotion Lambda rendering)
│   ├── constants.ts
│   ├── preview.tsx     (client-side Remotion Player preview)
│   └── ui.tsx
├── summarize-and-tag/
│   ├── actions.ts      (start/poll summary generation workflow)
│   └── ui.tsx
├── transcript/
│   ├── actions.ts      (semantic search within video transcript)
│   ├── helpers.ts
│   └── ui.tsx
└── workflows-panel/
    ├── helpers.ts
    └── ui.tsx          (includes StatusBadge, StepProgress, etc.)
```
## Learn More

- `context/application-explained.md`: what the app does and why
- `context/design-explained.md`: visual design and UX patterns
- `context/implementation-explained.md`: routes, data model, and code patterns
- `AGENTS.md`: guidance for AI coding assistants
- `DOCS/RATE-LIMITS.md`: rate limiting configuration and maintenance
## See Also

- pgvector: vector embeddings and similarity search for Postgres