• Vercel CLI for Marketplace integrations optimized for agents

    AI agents can now autonomously discover, install, and retrieve setup instructions for Vercel Marketplace integrations using the Vercel CLI. This lets agents configure databases, auth, logging, and other services end-to-end in one workflow.

    These capabilities are powered by the new discover and guide commands in the Vercel CLI.

    The --format=json flag makes the discover command emit non-interactive JSON output, which benefits developers as well: it makes it easier to automate infrastructure, write custom scripts, and manage CI/CD pipelines.

    When building an application, agents begin by exploring available integrations using the discover command.

    vercel integration discover --format=json

    After exploring the options, the agent can add an integration and then fetch getting started guides and code snippets for a specific integration using the guide command.

    vercel integration add neon --format=json
    vercel integration guide neon

    The Vercel CLI returns this setup documentation in an agent-friendly markdown format. This allows the agent to easily parse the instructions, write the necessary integration code, and configure the project autonomously.

    For integrations with required metadata fields, agents can use the help command to determine the required inputs and pass them as options to the add command.

    vercel integration add upstash/upstash-redis --help
    vercel integration add upstash/upstash-redis -m primaryRegion=iad1 --format=json
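    Because the JSON output is machine-readable, an agent or script can select an integration programmatically. A minimal sketch, assuming discover returns an array of integration objects with slug and category fields (the exact output shape is not documented here, so treat these field names as hypothetical):

    ```typescript
    // Hypothetical shape for `vercel integration discover --format=json` output.
    // Real field names may differ; this only illustrates the scripting pattern.
    interface IntegrationListing {
      slug: string;
      category: string;
    }

    // Pick the first integration matching a desired category, e.g. a database.
    function pickIntegration(
      listings: IntegrationListing[],
      category: string
    ): string | undefined {
      return listings.find((l) => l.category === category)?.slug;
    }

    // In a real script you would parse stdout from the CLI:
    //   const listings = JSON.parse(stdout) as IntegrationListing[];
    const sample: IntegrationListing[] = [
      { slug: 'neon', category: 'database' },
      { slug: 'upstash/upstash-redis', category: 'cache' },
    ];

    console.log(pickIntegration(sample, 'database')); // 'neon'
    ```

    The selected slug can then be fed to the add and guide commands shown above.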

    The CLI also makes it easy to pause this process for human decisions, like terms of service acceptance. Agents can prompt developers for confirmation, enabling hybrid workflows that require human oversight of certain integration decisions.

    These commands are continuously tested against agent evaluations to ensure reliable autonomous behavior.

    pnpm i -g vercel@latest

    Update to the latest version of the Vercel CLI to try it out, or read the documentation.

  • Vercel Queues now in public beta

    Vercel Queues is a durable event streaming system built on Fluid compute, now available in public beta for all teams. Vercel Queues also powers Workflow: use Queues for direct message publishing and consumption, and Workflow for ergonomic multi-step orchestration.

    Functions need a reliable way to defer expensive work and guarantee that tasks complete even when functions crash or new deployments roll out. Queues makes it simple to process messages asynchronously with automatic retries and delivery guarantees, providing at-least-once delivery semantics.

    How it works:

    • Messages are sent to a durable topic.

    • The queue fans messages out to subscribed consumer groups.

    • Each consumer group processes messages independently.

    • The queue redelivers messages to consumer groups until successfully processed or expired.
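    The fan-out and redelivery behavior above can be sketched with a toy in-memory model (illustrative only, not how Queues is actually implemented):

    ```typescript
    // Toy model: one topic, multiple consumer groups, redelivery until acked.
    type Consumer = (msg: string) => boolean; // true when processing succeeds

    function deliver(
      messages: string[],
      groups: Record<string, Consumer>,
      maxAttempts = 3
    ): Record<string, string[]> {
      const processed: Record<string, string[]> = {};
      for (const [group, consume] of Object.entries(groups)) {
        processed[group] = [];
        for (const msg of messages) {
          // Each group gets its own copy and its own retry loop (fan-out).
          for (let attempt = 1; attempt <= maxAttempts; attempt++) {
            if (consume(msg)) {
              processed[group].push(msg);
              break; // acked: no further redelivery
            }
            // Otherwise the queue redelivers until success or expiry
            // (modeled here as maxAttempts).
          }
        }
      }
      return processed;
    }

    // A flaky consumer that fails on its first attempt, simulating a crash.
    let attemptsSeen = 0;
    const flaky: Consumer = () => ++attemptsSeen > 1;

    const result = deliver(['order-1'], { fulfill: flaky, audit: () => true });
    console.log(result); // { fulfill: ['order-1'], audit: ['order-1'] }
    ```

    Note that the flaky consumer still ends up processing the message: redelivery is what turns a transient failure into an eventual success.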

    Publish messages from any route handler:

    app/api/orders/route.ts
    import { send } from '@vercel/queue';

    export async function POST(request: Request) {
      const order = await request.json();
      const { messageId } = await send('orders', order);
      return Response.json({ messageId });
    }

    Create a consumer:

    app/api/queues/fulfill-order/route.ts
    import { handleCallback } from '@vercel/queue';

    export const POST = handleCallback(async (order, metadata) => {
      console.log('Fulfilling order', metadata.messageId, order);
      // await doAnythingAsync(order);
    });

    Configure the consumer group:

    vercel.json
    {
      "functions": {
        "app/api/queues/fulfill-order/route.ts": {
          "experimentalTriggers": [{ "type": "queue/v2beta", "topic": "orders" }]
        }
      }
    }

    Adding a trigger makes the route private: it has no public URL and only Vercel's queue infrastructure can invoke it.
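    Because delivery is at-least-once, a message can reach a consumer more than once, so non-idempotent work should be deduplicated. A minimal sketch of deduplicating by the message ID exposed in the handleCallback metadata above (the Set here is a hypothetical stand-in for a durable store such as a database table):

    ```typescript
    // At-least-once delivery means a message may arrive more than once.
    // Deduplicate by message ID before doing non-idempotent work.
    const seen = new Set<string>();

    function processOnce(messageId: string, work: () => void): boolean {
      if (seen.has(messageId)) return false; // duplicate delivery: skip
      seen.add(messageId);
      work();
      return true;
    }

    let fulfillments = 0;
    processOnce('msg-1', () => fulfillments++);
    processOnce('msg-1', () => fulfillments++); // redelivered duplicate, ignored
    console.log(fulfillments); // 1
    ```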

    Vercel Queues is billed per API operation, starting at $0.60 per 1M operations, and includes:

    • Multi-AZ synchronous replication

    • At-least-once delivery

    • Customizable visibility timeout

    • Delayed delivery

    • Idempotency keys

    • Concurrency control

    • Per-deployment topic partitioning

    Functions invoked by Queues in push mode are charged at existing Fluid compute rates.
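    At that rate, a workload's queue cost is easy to estimate (queue operations only; Fluid compute for the invoked functions is billed separately, as noted above):

    ```typescript
    // Estimate Queues cost at $0.60 per 1M API operations.
    function queueCostUSD(operations: number, ratePerMillion = 0.6): number {
      return (operations / 1_000_000) * ratePerMillion;
    }

    console.log(queueCostUSD(50_000_000)); // 50M operations ≈ $30
    ```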

    Get started with the Queues documentation.

  • Chat SDK adds Telegram adapter support

    Chat SDK now supports Telegram through a new adapter, extending its single-codebase approach beyond Slack, Discord, GitHub, and Teams.

    Teams can build bots that support mentions, message reactions, direct messages, and typing indicators.

    The adapter handles single file uploads and renders basic text cards, with buttons and link buttons that display as inline keyboard elements, allowing developers to create interactive workflows directly within Telegram chats.

    Get started with the Telegram adapter setup:

    import { Chat } from "chat";
    import { createTelegramAdapter } from "@chat-adapter/telegram";

    const bot = new Chat({
      userName: "mybot",
      adapters: {
        telegram: createTelegramAdapter(),
      },
    });

    bot.onNewMention(async (thread, message) => {
      await thread.post(`You said: ${message.text}`);
    });

    Telegram does not expose full historical message APIs to bots, so message history relies on adapter-level caching. Additionally, callback data is limited to 64 bytes, and the platform does not currently support modals or ephemeral messages.
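    The 64-byte callback-data limit matters if buttons need to carry state. A common pattern (a sketch, not part of the adapter's API) is to store the payload server-side and put only a short key in the callback data:

    ```typescript
    // Telegram caps callback data at 64 bytes, so large payloads must live
    // server-side, keyed by a short ID. The Map stands in for a real store
    // (Redis, a database, etc.).
    const payloads = new Map<string, unknown>();
    let counter = 0;

    function toCallbackData(payload: unknown): string {
      const key = `cb:${(counter++).toString(36)}`;
      payloads.set(key, payload);
      return key; // well under the 64-byte limit
    }

    function fromCallbackData(key: string): unknown {
      return payloads.get(key);
    }

    const key = toCallbackData({ action: 'approve', orderId: 'ord_12345' });
    console.log(new TextEncoder().encode(key).length <= 64); // true
    ```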

    Read the documentation to get started.

  • Developer role now available for Pro teams

    Pro teams can now assign the Developer role to their members. Previously only available for Enterprise teams, the Developer role gives Pro teams more granular access control.

    Developers can safely deploy to projects on a team, with more limited team-wide configuration control and environment variable visibility.

    Owners can assign the Developer role to any existing seat or invite new members from the team members settings.

    Learn more about team level roles.

  • New dashboard redesign is now the default

    The new dashboard navigation is now the default experience for all Vercel users.

    Following a successful opt-in beta release in January, it rolled out fully on February 26, 2026, with several improvements based on feedback.

    The redesigned navigation includes:

    • New sidebar: horizontal tabs have moved to a resizable sidebar that can be hidden when not needed

    • Consistent tabs for unified navigation across both team and project levels

    • Improved order, with navigation items prioritizing the most common developer workflows

    • Projects as filters so you can switch between team and project versions of the same page in one click

    • Optimized for mobile, with a floating bottom bar designed for one-handed use

    No action is required. The new navigation is available to all users automatically.

    Open your dashboard to see the updated experience.

  • Nano Banana 2 is live on AI Gateway

    Gemini 3.1 Flash Image Preview (Nano Banana 2) is now available on AI Gateway.

    This release improves visual quality while maintaining the generation speed and cost of flash-tier models.

    Nano Banana 2 can use Google Image Search to ground outputs in real-world imagery. This helps with rendering lesser-known landmarks and objects by retrieving live visual data. The model also introduces configurable thinking levels (Minimal and High) that let it reason through complex prompts before rendering. New resolutions and aspect ratios (512p, 1:4, and 1:8) are available alongside the existing options, expanding support for more types of creative assets.

    To use this model, set model to google/gemini-3.1-flash-image-preview in the AI SDK. Nano Banana 2 is a multimodal model. Use `streamText` or `generateText` to generate images alongside text responses. This example shows how the model can use web search to find live data.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'google/gemini-3.1-flash-image-preview',
      providerOptions: {
        google: { responseModalities: ['TEXT', 'IMAGE'] },
      },
      prompt: 'Generate an image of the 2026 Super Bowl at golden hour',
    });

    You can also change the thinking level: in this example, thinking is set to high for a more thorough response.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'google/gemini-3.1-flash-image-preview',
      providerOptions: {
        google: {
          responseModalities: ['TEXT', 'IMAGE'],
          thinkingConfig: {
            includeThoughts: true,
            thinkingLevel: 'high',
          },
        },
      },
      prompt: `An exploded view diagram of a modern GPU, showing the die, HBM stacks, interposer,
    and cooling solution as separate floating layers with labeled callouts.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.