You can now open secure, interactive shell sessions to running Sandboxes with the Vercel Sandbox CLI.
pnpm i -g sandbox
sandbox login
sandbox create # If you don't have a running Sandbox to SSH into
sandbox ssh <sandbox-id>
Note: While you're connected, the Sandbox timeout is automatically extended in 5-minute increments, up to a maximum of 5 hours, to help avoid unexpected disconnections.
You can now trigger a Vercel Agent code review on demand.
When Vercel posts comments on your GitHub pull request, you can now click the Review with Vercel Agent button in the deployment table to trigger a code review.
Vercel Sandbox for Node.js now uses Node.js 24 by default. This keeps the Node.js runtime aligned with the latest Node.js features and performance improvements.
If you don’t explicitly configure a runtime, Sandbox will use Node.js 24 (as shown below).
main.ts
import { Sandbox } from "@vercel/sandbox";

async function main() {
  const sandbox = await Sandbox.create();
  const version = await sandbox.runCommand("node", ["-v"]);
  console.log(await version.stdout()); // v24.x by default
}

main().catch(console.error);
You can now give any model the ability to search the web using Perplexity through Vercel's AI Gateway.
AI Gateway supports Perplexity Search as a universal web search tool that works with every model, regardless of provider. Unlike native search tools that are exclusive to specific providers, Perplexity Search can be attached to any model routed through the gateway.
To use Perplexity Search with the AI SDK, import gateway.tools.perplexitySearch() from @ai-sdk/gateway and pass it in the tools parameter as perplexity_search to any model.
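For example (the model and prompt below are illustrative; any model available through the gateway works):

import { streamText } from 'ai';
import { gateway } from '@ai-sdk/gateway';

const result = streamText({
  model: 'anthropic/claude-sonnet-4.5', // any gateway model; the tool is provider-agnostic
  prompt: 'What changed in the latest Node.js LTS release?',
  tools: {
    perplexity_search: gateway.tools.perplexitySearch(),
  },
});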
You can now access GPT 5.2 Codex through Vercel's AI Gateway, with no other provider accounts required. GPT 5.2 Codex combines GPT 5.2's strength in professional knowledge work with GPT 5.1 Codex Max's agentic coding capabilities.
GPT 5.2 Codex is better at long-running coding tasks than its predecessors and handles complex work like large refactors and migrations more reliably. The model has stronger vision performance for more accurate processing of screenshots and charts shared while coding. GPT 5.2 Codex also surpasses GPT 5.1 Codex Max in cyber capabilities, outperforming the previous model in OpenAI's Professional Capture-the-Flag (CTF) cybersecurity eval.
To use GPT 5.2 Codex with the AI SDK, set the model to openai/gpt-5.2-codex:
import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-5.2-codex',
  prompt:
    `Take the attached prototypes, diagram, and reference screenshots
     to build a production app for customer analytics dashboards.`,
});
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
Today we're releasing a brand-new set of components for AI Elements designed to work with the Transcription and Speech functions of the AI SDK, helping you build the next generation of voice agents, transcription services, and apps powered by natural language.
The Persona component displays an animated AI visual that responds to different conversational states. Built with Rive WebGL2, it provides smooth, high-performance animations for various AI interaction states including idle, listening, thinking, speaking, and asleep. The component supports multiple visual variants to match different design aesthetics.
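A minimal usage sketch, assuming a state prop that mirrors the interaction states listed above (the prop name is hypothetical; check the component docs for the actual API):

import { Persona } from '@/components/ai-elements/persona';

// Hypothetical `state` prop mirroring the states listed above.
export function AssistantAvatar({ isSpeaking }: { isSpeaking: boolean }) {
  return <Persona state={isSpeaking ? 'speaking' : 'idle'} />;
}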
The SpeechInput component provides an easy-to-use interface for capturing voice input in your application. It uses the Web Speech API for real-time transcription in supported browsers (Chrome, Edge), and falls back to MediaRecorder with an external transcription service for browsers that don't support Web Speech API (Firefox, Safari).
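The browser-detection flow described above looks roughly like this sketch, using only standard Web APIs (this is not the component's source):

async function startListening() {
  // Prefer the Web Speech API where available (Chrome, Edge).
  const SpeechRecognitionImpl =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

  if (SpeechRecognitionImpl) {
    const recognition = new SpeechRecognitionImpl();
    recognition.interimResults = true; // stream partial transcripts while speaking
    recognition.onresult = (event: any) => {
      const latest = event.results[event.results.length - 1];
      console.log(latest[0].transcript);
    };
    recognition.start();
  } else {
    // Fallback (Firefox, Safari): record audio with MediaRecorder and send it
    // to an external transcription service.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const recorder = new MediaRecorder(stream);
    recorder.ondataavailable = ({ data }) => {
      // POST `data` (an audio Blob) to your transcription endpoint.
    };
    recorder.start();
  }
}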
The Transcription component provides a flexible render props interface for displaying audio transcripts with synchronized playback. It automatically highlights the current segment based on playback time and supports click-to-seek functionality for interactive navigation.
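As a sketch of the render-props pattern it describes (every prop and callback name here is a hypothetical illustration, not the documented API):

import { Transcription } from '@/components/ai-elements/transcription';

type Segment = { text: string; start: number; end: number };

// Hypothetical props: timestamped segments plus the player's current time.
export function Transcript({ segments, currentTime }: { segments: Segment[]; currentTime: number }) {
  return (
    <Transcription segments={segments} currentTime={currentTime}>
      {({ segment, isActive }: { segment: Segment; isActive: boolean }) => (
        <span className={isActive ? 'font-medium' : 'text-muted-foreground'}>{segment.text}</span>
      )}
    </Transcription>
  );
}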
The AudioPlayer component provides a flexible and customizable audio playback interface built on top of media-chrome. It features a composable architecture that allows you to build audio experiences with custom controls, metadata display, and seamless integration with AI-generated audio content.
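For context, a bare media-chrome composition looks like the sketch below, using media-chrome's React wrappers directly; the AudioPlayer layers its own subcomponents and AI-audio integration over this kind of structure:

import {
  MediaController,
  MediaControlBar,
  MediaPlayButton,
  MediaTimeRange,
  MediaTimeDisplay,
} from 'media-chrome/react';

// A minimal composable audio UI built on media-chrome directly.
export function SimpleAudioPlayer({ src }: { src: string }) {
  return (
    <MediaController audio>
      <audio slot="media" src={src} />
      <MediaControlBar>
        <MediaPlayButton />
        <MediaTimeRange />
        <MediaTimeDisplay />
      </MediaControlBar>
    </MediaController>
  );
}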
The MicSelector component provides a flexible and composable interface for selecting microphone input devices. Built on shadcn/ui's Command and Popover components, it features automatic device detection, permission handling, dynamic device list updates, and intelligent device name parsing.
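The device handling it describes builds on the standard Media Capture API; roughly (a sketch, not the component's source):

// Device labels are empty until the user grants microphone permission,
// so request access first, then enumerate audio inputs.
async function listMicrophones(): Promise<MediaDeviceInfo[]> {
  await navigator.mediaDevices.getUserMedia({ audio: true }); // triggers the permission prompt
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.filter((device) => device.kind === 'audioinput');
}

// Keep the list current as microphones are plugged in or removed.
navigator.mediaDevices.addEventListener('devicechange', async () => {
  const mics = await listMicrophones();
  console.log(mics.map((mic) => mic.label));
});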
The VoiceSelector component provides a flexible and composable interface for selecting AI voices. Built on shadcn/ui's Dialog and Command components, it features a searchable voice list with support for metadata display (gender, accent, age), grouping, and customizable layouts. The component includes a context provider for accessing voice selection state from any nested component.
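A usage sketch (the props and voice-metadata shape below are hypothetical; the actual API may differ):

import { VoiceSelector } from '@/components/ai-elements/voice-selector';

// Hypothetical voice metadata matching the fields described above.
const voices = [
  { id: 'aria', name: 'Aria', gender: 'female', accent: 'British', age: 'adult' },
  { id: 'kai', name: 'Kai', gender: 'male', accent: 'American', age: 'young adult' },
];

export function VoicePicker({ onSelect }: { onSelect: (voiceId: string) => void }) {
  return <VoiceSelector voices={voices} onValueChange={onSelect} />;
}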