React2Shell Security Bulletin
CVE-2025-55182 is a critical vulnerability in React that requires immediate action. Next.js and other frameworks that use React are affected. Read the bulletin and act now. ...
Building agents should feel like shaping an idea rather than fighting a maze of code or infrastructure. And we've seen this story before. A decade ago, the web moved from hand‑rolled routing and homegrown build scripts to opinionated frameworks and a platform that understood what developers were trying to do. Velocity went up, quality followed, and a generation of products appeared as if overnight. AI is following the same arc, but the stakes and surface area are larger because what you build is no longer a set of pages. It is a system that intelligently reasons, plans, and acts. Built on the foundations of Framework-defined Infrastructure, Vercel AI Cloud provides the tooling, infrastructure primitives, developer experience, and platform to bypass the complexity. You focus entirely on what you're building, with confidence in what's powering it under the hood.
The same ease of use you expect from Vercel, now extended to your backends. Since we introduced the AI Cloud at Vercel Ship, teams have been building AI applications that go beyond simple prompt-to-response patterns. These apps orchestrate multi-step workflows, spawn sub-agents, and run processes that take hours or days. They need backends that process data, run inference, and respond to real-time events. You can now deploy the most popular Python and TypeScript backend frameworks with zero configuration. Vercel detects your framework and automatically provisions the infrastructure to run it.
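As a rough sketch of what zero configuration means in practice, a deployable backend can be as small as a single Express file (assuming Express is among the auto-detected frameworks):

```ts
// index.ts: a minimal Express backend. With framework detection,
// deploying is just `vercel deploy`; no config file is needed.
import express from "express";

const app = express();
app.use(express.json());

app.get("/api/health", (_req, res) => {
  res.json({ status: "ok", time: new Date().toISOString() });
});

// Exporting the app lets the platform wire it to incoming requests
// (assumption: the exact export convention may vary by framework).
export default app;
```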
AWS databases are now available in the Vercel Marketplace and v0.
Models are powerful, but they're limited to their training data and knowledge cutoff date. When users ask about today's news, current prices, or the latest API changes, models can offer outdated information or admit they don't know. Provider-agnostic web search on AI Gateway changes this. With a single line of code, you can give any model the ability to search the web in real-time. It works with OpenAI, Anthropic, Google, and every other provider available through AI Gateway.
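As an illustration with the AI SDK, enabling search could look like the snippet below. The `webSearch` provider option name is our assumption here; the post only promises a one-line opt-in, so check the AI Gateway docs for the exact flag:

```ts
import { generateText } from "ai";

// Web search works the same way for any Gateway-routed model;
// switching providers means changing only the model string.
const { text } = await generateText({
  model: "anthropic/claude-sonnet-4.5", // or "openai/gpt-5", "google/gemini-2.5-pro", ...
  prompt: "What changed in the most recent Next.js release?",
  providerOptions: {
    // Assumption: illustrative option name, not the confirmed API.
    gateway: { webSearch: true },
  },
});

console.log(text);
```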
We've encapsulated 10+ years of React and Next.js optimization knowledge into react-best-practices, a structured repository optimized for AI agents and LLMs.
It's a thrilling time to work in Sales at Vercel. The web is transitioning from pages to agents, and Vercel is building the self-driving infrastructure to power it. We've assembled a Sales organization that understands both the continually shifting technical landscape and the pressing business need to stay flexible, move fast, and be secure in the AI era. We're rethinking how Sales operates, and we're building the most AI-forward go-to-market organization in the industry. To lead this charge, we're welcoming Nick Bogaty as our Chief Revenue Officer.
We invited Dylan Jhaveri from Mux to share how they shipped durable workflows with their @mux/ai SDK. AI workflows have a frustrating habit of failing halfway through. Your content moderation check passes, you're generating video chapters, and then you hit a network timeout, a rate limit, or a random 500 from a provider having a bad day. Now you're stuck. Do you restart from scratch and pay for that moderation check again? Or do you write a bunch of state management code to remember where you left off? This is where durable execution changes everything. When we set out to build @mux/ai, an open-source SDK to help our customers build AI features on top of Mux's video infrastructure, we faced a fundamental question: how do we ship durable workflows in a way that's easy for developers to adopt, without forcing them into complex infrastructure decisions? The answer was Vercel's Workflow DevKit.
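Here's a minimal sketch in the directive style the Workflow DevKit uses; the Mux-flavored steps are illustrative stand-ins, not actual @mux/ai APIs:

```ts
// Hypothetical durable video pipeline. Each "use step" function is
// checkpointed: a completed step is never re-run (or re-billed).

async function moderateVideo(assetId: string) {
  "use step"; // the moderation check you paid for is saved
  // ...call the moderation provider...
  return { flagged: false };
}

async function generateChapters(assetId: string) {
  "use step"; // a timeout or provider 500 here resumes from this step only
  // ...call an LLM to produce chapters...
  return [{ start: 0, title: "Intro" }];
}

export async function processVideo(assetId: string) {
  "use workflow"; // the whole function becomes a durable workflow
  const moderation = await moderateVideo(assetId);
  if (moderation.flagged) return { published: false };
  const chapters = await generateChapters(assetId);
  return { published: true, chapters };
}
```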
Many of us have built complex tooling to feed our agents the right information. It's brittle because we're guessing what the model needs instead of letting it find what it needs. We've found a simpler approach. We replaced most of the custom tooling in our internal agents with a filesystem tool and a bash tool. Our sales call summarization agent went from ~$1.00 to ~$0.25 per call on Claude Opus 4.5, and the output quality improved. We used the same approach for d0, our text-to-SQL agent. The idea behind this is that LLMs have been trained on massive amounts of code. They've spent countless hours navigating directories, grepping through files, and managing state across complex codebases. If agents excel at filesystem operations for code, they'll excel at filesystem operations for anything. Agents already understand filesystems. Customer support tickets, sales call transcripts, CRM data, conversation history. Structure it as files, give the agent bash, and the model brings the same capabilities it uses for code navigation.
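A minimal sketch of this pattern with the AI SDK, assuming a hypothetical transcript directory (sandboxing and command allow-listing are left out for brevity):

```ts
import { generateText, tool, stepCountIs } from "ai";
import { z } from "zod";
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// One bash tool instead of bespoke retrieval tooling. The model
// decides what to grep/cat/ls, just as it would in a codebase.
const bash = tool({
  description: "Run a shell command over the transcript files",
  inputSchema: z.object({ command: z.string() }),
  execute: async ({ command }) => {
    const { stdout } = await run("bash", ["-c", command], {
      cwd: "/data/sales-calls", // hypothetical data directory
    });
    return stdout.slice(0, 10_000); // cap output to keep context lean
  },
});

const { text } = await generateText({
  model: "anthropic/claude-opus-4.5",
  tools: { bash },
  stopWhen: stepCountIs(20), // allow multi-step exploration
  prompt: "Summarize the call in call-0142.txt: outcome, objections, next steps.",
});
```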
Last year we introduced the v0 Composite Model Family and described how the v0 models operate inside a multi-step agentic pipeline. Three parts of that pipeline have had the greatest impact on reliability: the dynamic system prompt, a streaming manipulation layer that we call “LLM Suspense”, and a set of deterministic and model-driven autofixers that run after (or while!) the model finishes streaming its response.

What we optimize for

The primary metric we optimize for is the percentage of successful generations. A successful generation is one that produces a working website in v0’s preview instead of an error or blank screen. LLMs running in isolation encounter various issues when generating code at scale; in our experience, code generated by LLMs can have errors as often as 10% of the time. Our composite pipeline detects and fixes many of these errors in real time as the LLM streams its output, which can lead to a double-digit increase in success rates.
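As a hypothetical sketch (not v0's actual fixers), an autofixer stage boils down to cheap deterministic passes followed by a model-driven fallback:

```ts
// Illustrative autofixer stage: run deterministic fixes first, then
// pay for a model-driven repair only if validation still fails.
type Fixer = (source: string) => string;

// Example rules only; v0's real fixers are not public.
const deterministicFixers: Fixer[] = [
  (src) => src.replace(/^```[a-z]*\s*$/gm, ""), // strip stray markdown fences
  (src) => src.replace(/\u00a0/g, " "),         // normalize non-breaking spaces
];

async function autofix(
  source: string,
  validate: (src: string) => boolean,                // e.g. parse or typecheck
  repairWithModel: (src: string) => Promise<string>, // LLM-based fallback
): Promise<string> {
  let fixed = deterministicFixers.reduce((src, fix) => fix(src), source);
  if (!validate(fixed)) {
    fixed = await repairWithModel(fixed); // model-driven repair
  }
  return fixed;
}
```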
Companies pour millions of dollars and countless hours into internal tools. These range from lightweight automations and dashboards to fully custom systems with dedicated engineering teams. Most businesses can’t justify focusing developers on bespoke operational tools, so non-technical teams resort to brittle and insecure workarounds: custom Salesforce formulas and fields, complex workflow automations, spreadsheets, and spiderwebs of integrations across platforms. They are trying to build software without actually building software, and most of the tools end up collecting dust. v0’s AI agent changes this equation. Business users can build and publish real code and apps on the same platform that their developers use, safely integrate with internal and external systems, and secure everything behind existing SSO authentication.
At our recent Next.js Conf and Ship AI events, we introduced an activation that blended technical experimentation with playful nostalgia. The idea started long before anyone stepped into the venue. As part of the online registration experience for both events, attendees could prompt and generate their own trading cards, giving them an early taste of the format and creating the foundation for what we wanted to bring into the real world.
It got better. We spent months building a sophisticated internal text-to-SQL agent, d0, with specialized tools, heavy prompt engineering, and careful context management. It worked… kind of. But it was fragile, slow, and required constant maintenance. So we tried something different. We deleted most of it and stripped the agent down to a single tool: execute arbitrary bash commands. We call this a file system agent. Claude gets direct access to your files and figures things out using grep, cat, and ls. The agent got simpler and better at the same time. 100% success rate instead of 80%. Fewer steps, fewer tokens, faster responses. All by doing less.
With over 20 million monthly downloads and adoption by teams ranging from startups to Fortune 500 companies, the AI SDK is the leading TypeScript toolkit for building AI applications. It provides a unified API for working with any AI provider and integrates seamlessly with Next.js, React, Svelte, Vue, and Node.js. The AI SDK enables you to build everything from chatbots to complex background agents.
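A minimal example of that unified API, assuming models are addressed by their provider/model string and routed through the AI Gateway:

```ts
import { streamText } from "ai";

// One API across providers: swap the model string to switch
// providers, the rest of the code stays identical.
const result = streamText({
  model: "openai/gpt-5", // "anthropic/claude-sonnet-4.5" etc. work unchanged
  prompt: "Explain React Server Components in two sentences.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```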
In the weeks following React2Shell's disclosure, our firewall blocked over 6 million exploit attempts targeting deployments running vulnerable versions of Next.js, with 2.3 million in a single 24-hour period at peak. This was possible thanks to Seawall, the deep request inspection layer of the Vercel Web Application Firewall (WAF). We worked with 116 security researchers to find every WAF bypass they could, paying out over $1 million and shipping 20 unique updates to our WAF within 48 hours as new techniques were reported. The bypass techniques they discovered are now permanent additions to our firewall, protecting every deployment on the platform. But WAF rules are only the first line of defense. We are now disclosing for the first time an additional defense-in-depth mechanism against RCE on the Vercel platform, one that operates directly on the compute layer. Data from this mechanism allows us to state with high confidence that the WAF was extraordinarily effective against exploitation of React2Shell. This post is about what we built to protect our customers and what it means for security on Vercel going forward.