Research latest @ai-sdk/xai version and correct imports for tools (xai.tools.webSearch etc) and multi-agent support (@researcher subagent)

/home/jm/data/code/opencode · 48 messages · 199 parts · $0.2461 · 284,990 tokens
Created Mar 14, 12:03:19 PM · Updated Mar 14, 02:55:19 PM · 26 syncs
user · Mar 14, 12:03:19 PM

Research the current latest version of @ai-sdk/xai from npm or GitHub. Determine the correct import for built-in tools like web_search and x_search in the latest version. Check if it's import { xai } from '@ai-sdk/xai' then xai.tools.webSearch() or import { xaiTools } from '@ai-sdk/xai' or other. Also check how multi-agent models are supported (grok-4.20-multi-agent-beta-latest). Provide exact code examples from docs or source. Summarize changes needed for v3 vs v2.

assistant · Mar 14, 12:03:19 PM
deepseek/deepseek-v3.2 · openrouter · $0.0024

I'll research the latest version of @ai-sdk/xai from npm and GitHub to determine the correct import patterns and tool usage. Let me start by checking the current project structure and then search for the latest information.

read · completed
package.json
Output
<path>/home/jm/data/code/opencode/package.json</path>
<type>file</type>
<content>1: {
2:   "$schema": "https://json.schemastore.org/package.json",
3:   "name": "opencode",
4:   "description": "AI-powered development tool",
5:   "private": true,
6:   "type": "module",
7:   "packageManager": "bun@1.3.10",
8:   "scripts": {
9:     "dev": "bun run --cwd packages/opencode --conditions=browser src/index.ts",
10:     "dev:desktop": "bun --cwd packages/desktop tauri dev",
11:     "dev:web": "bun --cwd packages/app dev",
12:     "dev:storybook": "bun --cwd packages/storybook storybook",
13:     "typecheck": "bun turbo typecheck --filter='!@opencode-ai/app' --filter='!@opencode-ai/desktop' --filter='!@opencode-ai/ui' --filter='!@opencode-ai/console-*' --filter='!@opencode-ai/enterprise' --filter='!@opencode-ai/slack' --filter='!@opencode-ai/desktop-electron'",
14:     "prepare": "husky",
15:     "random": "echo 'Random script'",
16:     "hello": "echo 'Hello World!'",
17:     "test": "echo 'do not run tests from root' && exit 1"
18:   },
19:   "workspaces": {
20:     "packages": [
21:       "packages/*",
22:       "packages/console/*",
23:       "packages/sdk/js",
24:       "packages/slack"
25:     ],
26:     "catalog": {
27:       "@types/bun": "1.3.9",
28:       "@octokit/rest": "22.0.0",
29:       "@hono/zod-validator": "0.4.2",
30:       "ulid": "3.0.1",
31:       "@kobalte/core": "0.13.11",
32:       "@types/luxon": "3.7.1",
33:       "@types/node": "22.13.9",
34:       "@types/semver": "7.7.1",
35:       "@tsconfig/node22": "22.0.2",
36:       "@tsconfig/bun": "1.0.9",
37:       "@cloudflare/workers-types": "4.20251008.0",
38:       "@openauthjs/openauth": "0.0.0-20250322224806",
39:       "@pierre/diffs": "1.1.0-beta.18",
40:       "@solid-primitives/storage": "4.3.3",
41:       "@tailwindcss/vite": "4.1.11",
42:       "diff": "8.0.2",
43:       "dompurify": "3.3.1",
44:       "drizzle-kit": "1.0.0-beta.16-ea816b6",
45:       "drizzle-orm": "1.0.0-beta.16-ea816b6",
46:...
8,408 in / 99 out · $0.0024
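A note on reading version ranges like the ones in the catalog above: a caret range such as `^2.0.51` only ever resolves within major version 2, so a 3.x release of `@ai-sdk/xai` will never be picked up without editing the range itself. A hand-rolled illustration (not the real `semver` package; valid for majors >= 1 only):

```typescript
// Illustrative caret-range check (majors >= 1 only; the real rules live in
// the `semver` package): `^2.0.51` accepts any 2.x.y >= 2.0.51, never 3.x.
function satisfiesCaret(range: string, version: string): boolean {
  const parse = (v: string) => v.replace(/^\^/, "").split(".").map(Number);
  const [maj, min, pat] = parse(range);
  const [vMaj, vMin, vPat] = parse(version);
  if (vMaj !== maj) return false; // caret pins the major version
  if (vMin !== min) return vMin > min;
  return vPat >= pat;
}

console.log(satisfiesCaret("^2.0.51", "2.0.60")); // true
console.log(satisfiesCaret("^2.0.51", "3.0.57")); // false: a major bump is an explicit upgrade
```

This is why a lockfile showing 2.0.51 and code written against 3.x APIs can coexist in the same repo.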
assistant · Mar 14, 12:03:28 PM
deepseek/deepseek-v3.2 · openrouter · $0.0006

I can see the project currently has "@ai-sdk/xai": "^2.0.51" in dependencies. Let me check what's currently being used in the codebase and then search for the latest version information.

grep · completed
@ai-sdk/xai
Output
Found 256 matches (showing first 100)
/home/jm/data/code/opencode/session-ses_316f.md:
  Line 221:   "prompt": "Research the `AI_InvalidArgumentError: invalid xai provider options` error in the opencode codebase.\n\n## Context\nThis error occurs when using xAI multi-agent models. The user just implemented tool filtering for multi-agent models in `src/session/prompt.ts`. Now they're getting `AI_InvalidArgumentError: invalid xai provider options`.\n\n## Tasks\n1. Search the codebase for where xAI provider options are constructed - look in:\n   - `src/provider/transform.ts` for `providerOptions()` function\n   - `src/session/llm.ts` where `ProviderTransform.providerOptions()` is called\n   - Any xAI-specific option handling\n\n2. Search for `reasoningEffort` or `reasoning_effort` handling - multi-agent models use this for agent count (4 vs 16 agents)\n\n3. Search for what options are valid for `xai.responses()` vs `xai.languageModel()` in the Vercel AI SDK - check node_modules/@ai-sdk/xai if available\n\n4. Look at `src/provider/transform.ts` lines 364-368 where grok-4.20/multi-agent gets special treatment\n\n5. Search for any option validation in the AI SDK xai provider that might reject certain options for responses API\n\n## Return\n- Where xAI provider options are built\n- What options might be invalid for the Responses API\n- The exact transform.ts code that handles grok-4.20 multi-agent variants\n- Any relevant Vercel AI SDK code that validates xAI options",
  Line 340:   - /home/jm/data/code/opencode/packages/opencode/node_modules/@ai-sdk/xai (directory) - For SDK-specific validation if needed.
  Line 558: - **Root Cause**: In `src/provider/transform.ts`, line 467 returns `["low", "medium", "high"]` for `@ai-sdk/xai`, but xAI only supports `"low"` and `"high"` for `grok-3-mini`. For `grok-4.20-multi-agent-beta`, it might support more, but passing unsupported values could trigger the error.
  Line 1614: 467:       case "@ai-sdk/xai":
  Line 2847: 467:       case ...
1,794 in / 121 out · $0.0006
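The root-cause note above describes a mismatch: transform.ts offers `["low", "medium", "high"]`, while xAI accepts only `"low"` and `"high"`. A sketch of a defensive clamp (the function name and mapping choice are hypothetical, not the actual transform.ts fix):

```typescript
// Hypothetical guard for the mismatch described above: transform.ts offers
// "low" | "medium" | "high", but the xAI provider-options schema validates
// only "low" | "high". Clamp before building provider options.
type Effort = "low" | "medium" | "high";

function clampXaiReasoningEffort(effort: Effort): "low" | "high" {
  // Map the unsupported "medium" down to "low" instead of letting the
  // SDK reject an out-of-enum value with AI_InvalidArgumentError.
  return effort === "high" ? "high" : "low";
}

console.log(clampXaiReasoningEffort("medium")); // "low"
```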
assistant · Mar 14, 12:03:39 PM
deepseek/deepseek-v3.2 · openrouter · $0.0032

I can see there are conflicting references: some files show version 2.0.51 installed, but some code references 3.0.67. Let me check the actual installed version in node_modules and look at the current imports being used.

read · completed
packages/opencode/src/session/prompt.ts
Output
<path>/home/jm/data/code/opencode/packages/opencode/src/session/prompt.ts</path>
<type>file</type>
<content>45: import { SessionStatus } from "./status"
46: import { LLM } from "./llm"
47: import { iife } from "@/util/iife"
48: import { Shell } from "@/shell/shell"
49: import { Truncate } from "@/tool/truncation"
50: import { xaiTools } from "@ai-sdk/xai"
51: 
52: // @ts-ignore
53: globalThis.AI_SDK_LOG_WARNINGS = false
54: 
55: const STRUCTURED_OUTPUT_DESCRIPTION = `Use this tool to return your final response in the requested structured format.
56: 
57: IMPORTANT:
58: - You MUST call this tool exactly once at the end of your response
59: - The input must be valid JSON matching the required schema
60: - Complete all necessary research and tool calls BEFORE calling this tool
61: - This tool provides your final answer - no further actions are taken after calling it`
62: 
63: const STRUCTURED_OUTPUT_SYSTEM_PROMPT = `IMPORTANT: The user has requested structured output. You MUST use the StructuredOutput tool to provide your final response. Do NOT respond with plain text - you MUST call the StructuredOutput tool with your answer formatted according to the schema.`
64: 

(Showing lines 45-64 of 1988. Use offset=65 to continue.)
</content>

<system-reminder>
Instructions from: /home/jm/data/code/opencode/packages/opencode/AGENTS.md
# opencode database guide

## Database

- **Schema**: Drizzle schema lives in `src/**/*.sql.ts`.
- **Naming**: tables and columns use snake*case; join columns are `<entity>_id`; indexes are `<table>*<column>\_idx`.
- **Migrations**: generated by Drizzle Kit using `drizzle.config.ts` (schema: `./src/**/*.sql.ts`, output: `./migration`).
- **Command**: `bun run db generate --name <slug>`.
- **Output**: creates `migration/<timestamp>_<slug>/migration.sql` and `snapshot.json`.
- **Tests**: migration tests should read the per-folder layout (no `_journal.json`).

# opencode Effect guide

Instructions to follow when writing Effect.

## Schemas

- Use `S...
11,227 in / 145 out · $0.0032
assistant · Mar 14, 12:03:50 PM
deepseek/deepseek-v3.2 · openrouter · $0.0003

Now let me check what's in the node_modules to understand the current structure:

read · completed
node_modules/@ai-sdk/xai/package.json
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/package.json</path>
<type>file</type>
<content>1: {
2:   "name": "@ai-sdk/xai",
3:   "version": "2.0.51",
4:   "license": "Apache-2.0",
5:   "sideEffects": false,
6:   "main": "./dist/index.js",
7:   "module": "./dist/index.mjs",
8:   "types": "./dist/index.d.ts",
9:   "files": [
10:     "dist/**/*",
11:     "CHANGELOG.md"
12:   ],
13:   "exports": {
14:     "./package.json": "./package.json",
15:     ".": {
16:       "types": "./dist/index.d.ts",
17:       "import": "./dist/index.mjs",
18:       "require": "./dist/index.js"
19:     }
20:   },
21:   "dependencies": {
22:     "@ai-sdk/provider": "2.0.1",
23:     "@ai-sdk/openai-compatible": "1.0.30",
24:     "@ai-sdk/provider-utils": "3.0.20"
25:   },
26:   "devDependencies": {
27:     "@types/node": "20.17.24",
28:     "tsup": "^8",
29:     "typescript": "5.8.3",
30:     "zod": "3.25.76",
31:     "@ai-sdk/test-server": "0.0.2",
32:     "@vercel/ai-tsconfig": "0.0.0"
33:   },
34:   "peerDependencies": {
35:     "zod": "^3.25.76 || ^4.1.8"
36:   },
37:   "engines": {
38:     "node": ">=18"
39:   },
40:   "publishConfig": {
41:     "access": "public"
42:   },
43:   "homepage": "https://ai-sdk.dev/docs",
44:   "repository": {
45:     "type": "git",
46:     "url": "git+https://github.com/vercel/ai.git"
47:   },
48:   "bugs": {
49:     "url": "https://github.com/vercel/ai/issues"
50:   },
51:   "keywords": [
52:     "ai"
53:   ],
54:   "scripts": {
55:     "build": "pnpm clean && tsup --tsconfig tsconfig.build.json",
56:     "build:watch": "pnpm clean && tsup --watch",
57:     "clean": "del-cli dist *.tsbuildinfo",
58:     "lint": "eslint \"./**/*.ts*\"",
59:     "type-check": "tsc --build",
60:     "prettier-check": "prettier --check \"./**/*.ts*\"",
61:     "test": "pnpm test:node && pnpm test:edge",
62:     "test:update": "pnpm test:node -u",
63:     "test:watch": "vitest --config vitest.node.config.js",
64:     "test:edge": "vitest --config vitest.edge.config....
917 in / 79 out · $0.0003
assistant · Mar 14, 12:03:56 PM
deepseek/deepseek-v3.2 · openrouter · $0.0003

Now let me check the TypeScript definitions to see what exports are available:

read · completed
node_modules/@ai-sdk/xai/dist/index.d.ts
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.d.ts</path>
<type>file</type>
<content>1: import { z } from 'zod/v4';
2: import { ProviderV2, LanguageModelV2, ImageModelV2 } from '@ai-sdk/provider';
3: import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
4: import { FetchFunction } from '@ai-sdk/provider-utils';
5: 
6: type XaiChatModelId = 'grok-4-1' | 'grok-4-1-fast-reasoning' | 'grok-4-1-fast-non-reasoning' | 'grok-4-fast-non-reasoning' | 'grok-4-fast-reasoning' | 'grok-code-fast-1' | 'grok-4' | 'grok-4-0709' | 'grok-4-latest' | 'grok-3' | 'grok-3-latest' | 'grok-3-fast' | 'grok-3-fast-latest' | 'grok-3-mini' | 'grok-3-mini-latest' | 'grok-3-mini-fast' | 'grok-3-mini-fast-latest' | 'grok-2-vision-1212' | 'grok-2-vision' | 'grok-2-vision-latest' | 'grok-2-image-1212' | 'grok-2-image' | 'grok-2-image-latest' | 'grok-2-1212' | 'grok-2' | 'grok-2-latest' | 'grok-vision-beta' | 'grok-beta' | (string & {});
7: declare const xaiProviderOptions: z.ZodObject<{
8:     reasoningEffort: z.ZodOptional<z.ZodEnum<{
9:         low: "low";
10:         high: "high";
11:     }>>;
12:     parallel_function_calling: z.ZodOptional<z.ZodBoolean>;
13:     searchParameters: z.ZodOptional<z.ZodObject<{
14:         mode: z.ZodEnum<{
15:             off: "off";
16:             auto: "auto";
17:             on: "on";
18:         }>;
19:         returnCitations: z.ZodOptional<z.ZodBoolean>;
20:         fromDate: z.ZodOptional<z.ZodString>;
21:         toDate: z.ZodOptional<z.ZodString>;
22:         maxSearchResults: z.ZodOptional<z.ZodNumber>;
23:         sources: z.ZodOptional<z.ZodArray<z.ZodDiscriminatedUnion<[z.ZodObject<{
24:             type: z.ZodLiteral<"web">;
25:             country: z.ZodOptional<z.ZodString>;
26:             excludedWebsites: z.ZodOptional<z.ZodArray<z.ZodString>>;
27:             allowedWebsites: z.ZodOptional<z.ZodArray<z.ZodString>>;
28:             safeSearch: z.ZodOptional<z.ZodBoolean>;
29:         }, z.core.$strip>, z.ZodO...
849 in / 111 out · $0.0003
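An options object shaped to the `xaiProviderOptions` schema excerpted above would look like the following sketch (key names come from the excerpt; the concrete values are only illustrative):

```typescript
// Shaped to the zod schema excerpt above: reasoningEffort is "low" | "high",
// searchParameters.mode is "off" | "auto" | "on", and "web" sources may carry
// domain filters. All values here are illustrative, not recommendations.
const xaiProviderOptionsExample = {
  reasoningEffort: "low" as const,
  searchParameters: {
    mode: "auto" as const,
    returnCitations: true,
    maxSearchResults: 5,
    sources: [{ type: "web" as const, allowedWebsites: ["arxiv.org"] }],
  },
};

console.log(Object.keys(xaiProviderOptionsExample)); // ["reasoningEffort", "searchParameters"]
```

Note the enum in the excerpt confirms the earlier grep finding: `"medium"` is not an accepted `reasoningEffort` value in this version.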
assistant · Mar 14, 12:04:04 PM
deepseek/deepseek-v3.2 · openrouter · $0.0006

Let me continue reading the exports:

read · completed
node_modules/@ai-sdk/xai/dist/index.d.ts
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.d.ts</path>
<type>file</type>
<content>200:             url: string;
201:             likes: number;
202:         }>;
203:     }, {
204:         allowedXHandles?: string[];
205:         excludedXHandles?: string[];
206:         fromDate?: string;
207:         toDate?: string;
208:         enableImageUnderstanding?: boolean;
209:         enableVideoUnderstanding?: boolean;
210:     }>>[0]) => _ai_sdk_provider_utils.Tool<Record<string, never>, {
211:         query: string;
212:         posts: Array<{
213:             author: string;
214:             text: string;
215:             url: string;
216:             likes: number;
217:         }>;
218:     }>;
219: };
220: 
221: interface XaiProvider extends ProviderV2 {
222:     /**
223:   Creates an Xai chat model for text generation.
224:      */
225:     (modelId: XaiChatModelId): LanguageModelV2;
226:     /**
227:   Creates an Xai language model for text generation.
228:      */
229:     languageModel(modelId: XaiChatModelId): LanguageModelV2;
230:     /**
231:   Creates an Xai chat model for text generation.
232:      */
233:     chat: (modelId: XaiChatModelId) => LanguageModelV2;
234:     /**
235:   Creates an Xai responses model for agentic tool calling.
236:      */
237:     responses: (modelId: XaiResponsesModelId) => LanguageModelV2;
238:     /**
239:   Creates an Xai image model for image generation.
240:      */
241:     image(modelId: XaiImageModelId): ImageModelV2;
242:     /**
243:   Creates an Xai image model for image generation.
244:      */
245:     imageModel(modelId: XaiImageModelId): ImageModelV2;
246:     /**
247:   Server-side agentic tools for use with the responses API.
248:      */
249:     tools: typeof xaiTools;
250: }
251: interface XaiProviderSettings {
252:     /**
253:   Base URL for the xAI API calls.
254:        */
255:     baseURL?: string;
256:     /**
257:   API key for authenticating requests.
258:      */
259:     a...
1,891 in / 103 out · $0.0006
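The `tools: typeof xaiTools` member in the interface above means the provider instance's `xai.tools.webSearch()` and a named `import { xaiTools }` refer to the same factory functions, so either import style resolves identically. A self-contained mock of that export shape (the real factories live in `@ai-sdk/xai` and return provider-defined `Tool` objects; these stand-ins only illustrate the aliasing):

```typescript
// Mock of the export shape implied by `tools: typeof xaiTools` above.
// Stand-in factories, not the real @ai-sdk/xai implementations.
const xaiTools = {
  webSearch: (args: { allowedDomains?: string[] } = {}) => ({ name: "web_search", args }),
  xSearch: (args: { allowedXHandles?: string[] } = {}) => ({ name: "x_search", args }),
};

// The provider instance exposes the very same object, not a copy:
const xai = { tools: xaiTools };

console.log(xai.tools.webSearch === xaiTools.webSearch); // true
console.log(xai.tools.xSearch({ allowedXHandles: ["xai"] }).name); // "x_search"
```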
assistant · Mar 14, 12:04:14 PM
deepseek/deepseek-v3.2 · openrouter · $0.0003

Now let me check what's exported as xaiTools:

read · completed
node_modules/@ai-sdk/xai/dist/index.d.ts
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.d.ts</path>
<type>file</type>
<content>180:             url: string;
181:             snippet: string;
182:         }>;
183:     }, {
184:         allowedDomains?: string[];
185:         excludedDomains?: string[];
186:         enableImageUnderstanding?: boolean;
187:     }>>[0]) => _ai_sdk_provider_utils.Tool<Record<string, never>, {
188:         query: string;
189:         sources: Array<{
190:             title: string;
191:             url: string;
192:             snippet: string;
193:         }>;
194:     }>;
195:     xSearch: (args?: Parameters<_ai_sdk_provider_utils.ProviderDefinedToolFactoryWithOutputSchema<Record<string, never>, {
196:         query: string;
197:         posts: Array<{
198:             author: string;
199:             text: string;
200:             url: string;
201:             likes: number;
202:         }>;
203:     }, {
204:         allowedXHandles?: string[];
205:         excludedXHandles?: string[];
206:         fromDate?: string;
207:         toDate?: string;
208:         enableImageUnderstanding?: boolean;
209:         enableVideoUnderstanding?: boolean;

(Showing lines 180-209 of 275. Use offset=210 to continue.)
</content>
782 in / 110 out · $0.0003
assistant · Mar 14, 12:04:21 PM
deepseek/deepseek-v3.2 · openrouter · $0.0001

Now let me check the source code to see how xaiTools is defined:

read · error
Error: File not found: /home/jm/data/code/opencode/node_modules/@ai-sdk/xai/src/xai-tools.ts
396 in / 84 out · $0.0001
assistant · Mar 14, 12:04:29 PM
deepseek/deepseek-v3.2 · openrouter · $0.0075

Let me search for files in the xai package:

glob · completed
node_modules/@ai-sdk/xai
Output
No files found
26,831 in / 89 out · $0.0075
assistant · Mar 14, 12:04:37 PM
deepseek/deepseek-v3.2 · openrouter · $0.0001
glob · completed
node_modules/@ai-sdk/xai
Output
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/CHANGELOG.md
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/LICENSE
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/README.md
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/package.json
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.mjs.map
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.js
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.mjs
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.js.map
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.d.ts
/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.d.mts
187 in / 77 out · $0.0001
assistant · Mar 14, 12:04:42 PM
deepseek/deepseek-v3.2 · openrouter · $0.0001

Let me check the JavaScript implementation:

read · completed
node_modules/@ai-sdk/xai/dist/index.js
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.js</path>
<type>file</type>
<content>1: "use strict";
2: var __defProp = Object.defineProperty;
3: var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
4: var __getOwnPropNames = Object.getOwnPropertyNames;
5: var __hasOwnProp = Object.prototype.hasOwnProperty;
6: var __export = (target, all) => {
7:   for (var name in all)
8:     __defProp(target, name, { get: all[name], enumerable: true });
9: };
10: var __copyProps = (to, from, except, desc) => {
11:   if (from && typeof from === "object" || typeof from === "function") {
12:     for (let key of __getOwnPropNames(from))
13:       if (!__hasOwnProp.call(to, key) && key !== except)
14:         __defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable });
15:   }
16:   return to;
17: };
18: var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod);
19: 
20: // src/index.ts
21: var src_exports = {};
22: __export(src_exports, {
23:   VERSION: () => VERSION,
24:   codeExecution: () => codeExecution,
25:   createXai: () => createXai,
26:   viewImage: () => viewImage,
27:   viewXVideo: () => viewXVideo,
28:   webSearch: () => webSearch,
29:   xSearch: () => xSearch,
30:   xai: () => xai,
31:   xaiTools: () => xaiTools
32: });
33: module.exports = __toCommonJS(src_exports);
34: 
35: // src/xai-provider.ts
36: var import_openai_compatible = require("@ai-sdk/openai-compatible");
37: var import_provider5 = require("@ai-sdk/provider");
38: var import_provider_utils11 = require("@ai-sdk/provider-utils");
39: 
40: // src/xai-chat-language-model.ts
41: var import_provider3 = require("@ai-sdk/provider");
42: var import_provider_utils3 = require("@ai-sdk/provider-utils");
43: var import_v43 = require("zod/v4");
44: 
45: // src/convert-to-xai-chat-messages.ts
46: var import_provider = require("@ai-sdk/provider");
47: var import_provider_utils = require("@ai-sdk/provider-utils...
369 in / 102 out · $0.0001
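The bundled `__export` helper above registers every export (`xai`, `xaiTools`, `webSearch`, `xSearch`, ...) as an enumerable getter rather than a plain property, which keeps module evaluation lazy. A minimal reproduction with stand-in values (not the real exports):

```typescript
// Minimal reproduction of the bundler's __export pattern seen above:
// each exported name becomes an enumerable getter on the module object.
function exportAll(target: object, all: Record<string, () => unknown>) {
  for (const name in all) {
    Object.defineProperty(target, name, { get: all[name], enumerable: true });
  }
}

const mod: Record<string, unknown> = {};
exportAll(mod, {
  xai: () => "provider instance (stand-in)",
  xaiTools: () => "tools namespace (stand-in)",
});

console.log(Object.keys(mod)); // ["xai", "xaiTools"]
console.log(mod.xaiTools); // "tools namespace (stand-in)"
```

Because the getters are enumerable, the names still show up in `Object.keys`, which is why grepping the dist bundle's `__export` call is a reliable way to list a package's public surface.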
assistant · Mar 14, 12:04:51 PM
deepseek/deepseek-v3.2 · openrouter · $0.0004

Now let me search for web search and x_search in the codebase to see how they're currently being used:

grep · completed
webSearch|xSearch
Output
Found 25 matches
/home/jm/data/code/opencode/packages/opencode/src/session/prompt.ts:
  Line 939:         web_search: xaiTools.webSearch(),
  Line 940:         x_search: xaiTools.xSearch(),

/home/jm/data/code/opencode/packages/opencode/src/provider/sdk/copilot/responses/openai-responses-prepare-tools.ts:
  Line 8: import { webSearchArgsSchema } from "./tool/web-search"
  Line 9: import { webSearchPreviewArgsSchema } from "./tool/web-search-preview"
  Line 84:             const args = webSearchPreviewArgsSchema.parse(tool.args)
  Line 93:             const args = webSearchArgsSchema.parse(tool.args)

/home/jm/data/code/opencode/packages/opencode/src/provider/sdk/copilot/responses/openai-responses-language-model.ts:
  Line 34: const webSearchCallItem = z.object({
  Line 237:     const webSearchToolName = (
  Line 245:     if (webSearchToolName) {
  Line 384:       webSearchToolName,
  Line 397:     const { args: body, warnings, webSearchToolName } = await this.getArgs(options)
  Line 460:               webSearchCallItem,
  Line 641:             toolName: webSearchToolName ?? "web_search",
  Line 649:             toolName: webSearchToolName ?? "web_search",
  Line 775:     const { args: body, warnings, webSearchToolName } = await this.getArgs(options)
  Line 873:                   toolName: webSearchToolName ?? "web_search",
  Line 880:                   toolName: webSearchToolName ?? "web_search",
  Line 1445:     webSearchCallItem,

/home/jm/data/code/opencode/packages/opencode/src/provider/sdk/copilot/responses/tool/web-search.ts:
  Line 4: export const webSearchArgsSchema = z.object({
  Line 24: export const webSearchToolFactory = createProviderDefinedToolFactory<
  Line 99: export const webSearch = (
  Line 100:   args: Parameters<typeof webSearchToolFactory>[0] = {}, // default
  Line 102:   return webSearchToolFactory(args)

/home/jm/data/code/opencode/packages/opencode/src/provider/sdk/copilot/responses/tool/web-search-preview.ts:
  Line 5: export const webSea...
1,181 in / 111 out · $0.0004
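The prompt.ts call sites above map xAI's snake_case tool names onto the SDK's camelCase factories. With the factories mocked (in prompt.ts they are the real `xaiTools.webSearch` / `xaiTools.xSearch` from `@ai-sdk/xai`), the wiring looks like:

```typescript
// Factories mocked here; in prompt.ts they come from @ai-sdk/xai.
const xaiTools = {
  webSearch: () => ({ kind: "server-side tool (stand-in)" }),
  xSearch: () => ({ kind: "server-side tool (stand-in)" }),
};

// Tool record keys use xAI's snake_case wire names, while the SDK
// factory functions themselves are camelCase:
const tools = {
  web_search: xaiTools.webSearch(),
  x_search: xaiTools.xSearch(),
};

console.log(Object.keys(tools)); // ["web_search", "x_search"]
```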
assistant · Mar 14, 12:04:59 PM
deepseek/deepseek-v3.2 · openrouter · $0.0002

Now let me check the npm registry for the latest version and search the web for documentation:

websearch · completed
Web search: @ai-sdk/xai latest version v3 vs v2 webSearch xSearch tools import
Output
Title: 3 Steps to Implement xAI Grok API Web Search: A Complete Guide ...
Author: APIYI - Stable and affordable AI API
Published Date: 2026-01-23T00:00:00.000Z
URL: https://help.apiyi.com/en/xai-grok-api-x-search-web-search-guide-en.html
Text: 3 Steps to Implement xAI Grok API Web Search: A Complete Guide to x_search and web_search - Apiyi.com Blog

Author's Note: This guide details the latest web search features of the xAI Grok API, including full configuration methods and code examples for `x_search` (for X platform content) and `web_search` (for general web search).

Many developers have the same question when using the xAI Grok API: How do I actually implement web search? The old Live Search API provided by xAI has been retired. In its place, they've launched a much more powerful Tools (Function Calling) feature. You can now leverage real-time search capabilities through two server-side tools: `x_search` and `web_search`.

Core Value: By the end of this article, you'll master how to us

Title: @ai-sdk/xai v3.0.57
Published Date: 2024-11-18T00:00:00.000Z
URL: https://registry.npmjs.org/%40ai-sdk%2Fxai
Text: # @ai-sdk/xai v3.0.57

The **xAI Grok provider** for the AI SDK contains language model support for the xAI chat and completion APIs.

- Version: 3.0.57
- Homepage: https://ai-sdk.dev/docs
- Repository: git+https://github.com/vercel/ai.git
- Bugs: https://github.com/vercel/ai/issues
- Keywords: ai
- License: Apache-2.0
- Maintainers: vercel-release-bot, matheuss, matt.straka 
- Weekly Downloads: 1.0M
- Dependents: 171

---

# AI SDK - xAI Grok Provider

The **xAI Grok provider** for the AI SDK
contains language model support for the xAI chat and completion APIs.

## Setup

The xAI Grok provider is available in the `@ai-sdk/xai` module. You can install it with

```bash
npm i @ai-sdk/xai
```

## Skill for Coding Agents

If you use coding agents such as Claude Code or Cursor, we highly recommend adding the AI SDK skill to your repository:

```shell
npx skills add ver

...
599 in / 96 out · $0.0002
assistant · Mar 14, 12:05:05 PM
deepseek/deepseek-v3.2 · openrouter · $0.0008

Great! Now let me check the GitHub repository and documentation more specifically:

codesearch · completed
Code search: @ai-sdk/xai webSearch xSearch import xaiTools examples TypeScript
Output
## xAI Grok - AI SDK Providers
https://ai-sdk.dev/providers/ai-sdk-providers/xai


You can import the default provider instance `xai` from `@ai-sdk/xai`: ```ts
import { xai } from '@ai-sdk/xai';

``` If you need a customized setup, you can import `createXai` from `@ai-sdk/xai`  ...    ```ts
import { createXai } from '@ai-sdk/xai';

const xai = createXai({
  apiKey: 'your-api-key',
});

```  ...    You can create xAI models using a provider instance. The
first argument is the model id, e.g. `grok-3`. ```ts
const model = xai('grok-3');

``` By default, `xai(modelId)` uses the Chat API. To use the Responses API with server-side agentic tools, explicitly use  ...    ```ts
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';  ...    ```ts
import { xai, type XaiLanguageModelChatOptions } from '@ai-sdk/xai';

const model = xai('grok-3-mini');  ...    You can use the xAI Responses API with the `xai.responses(modelId)` factory method for server-side agentic tool  ...    ```ts
const model = xai.responses('grok-4-fast-non-reasoning');

```  ...    - x_search: Search X (Twitter) posts, users, and threads
- web_search: Real-time web search and page browsing  ...    The web search tool enables autonomous web research with optional domain filtering and image understanding: ```ts
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai'; const { text, sources } = await generateText({
  model: xai.responses('grok-4-fast-non-reasoning'), prompt: 'What are the latest developments in AI?',
  tools: {
    web_search: xai.tools.webSearch({ allowedDomains: ['arxiv.org', 'openai.com'],
      enableImageUnderstanding: true,
    }),
  },
}); console.log(text);
console.log('Citations:', sources);

```

#### Web Search Parameters

- allowedDomains string[]  ...    prompt: 'What are people saying about AI on X this week?',
  tools: {
    x_search: xai.tools.xSearch({ allowedXHandles: ['elonmusk', 'xai'],
      fromDate: '2025-10-23',
      toDate: '2025-10-30',  ... ...
2,772 in / 92 out · $0.0008
assistant · Mar 14, 12:05:14 PM
deepseek/deepseek-v3.2 · openrouter · $0.0020

Now let me look at the specific npm page for the latest version:

webfetch · completed
https://registry.npmjs.org/%40ai-sdk%2Fxai (application/json)
Output


...925121 bytes truncated...

The tool call succeeded but the output was truncated. Full output saved to: /home/jm/.local/share/opencode/tool-output/tool_cec3c9c9e001DS9Hgjsg23H5xq
Use the Task tool to have explore agent process this file with Grep and Read (with offset/limit). Do NOT read the full file yourself - delegate to save context.
7,087 in / 89 out · $0.0020
assistant · Mar 14, 12:05:21 PM
deepseek/deepseek-v3.2 · openrouter · $0.0001

Let me check the GitHub repository for recent changes and releases:

webfetch · completed
https://github.com/vercel/ai (text/html; charset=utf-8)
Output
GitHub - vercel/ai: The AI Toolkit for TypeScript. From the creators of Next.js, the AI SDK is a free open-source library for building AI-powered applications and agents · GitHub

(page body was GitHub navigation chrome and sign-in boilerplate; nothing package-specific was returned)
115 in / 80 out · $0.0001
assistant · Mar 14, 12:05:27 PM
deepseek/deepseek-v3.2 · openrouter · $0.0011

Now let me check the specific xai package directory in the AI SDK repository:

webfetch · completed
https://github.com/vercel/ai/tree/main/packages/xai (text/html; charset=utf-8)
Output
ai/packages/xai at main · vercel/ai · GitHub

(page body was GitHub navigation chrome; the file listing did not survive the fetch)
3,959 in / 91 out · $0.0011
assistant · Mar 14, 12:05:35 PM
deepseek/deepseek-v3.2 · openrouter · $0.0005

Let me check the source code directly in the repository:

webfetch · error
Error: Request failed with status code: 404
1,813 in / 91 out · $0.0005
assistant · Mar 14, 12:05:42 PM
deepseek/deepseek-v3.2 · openrouter · $0.0001

Let me check the src directory:

webfetch · completed
https://github.com/vercel/ai/tree/main/packages/xai/src (text/html; charset=utf-8)
Output
ai/packages/xai/src at main · vercel/ai · GitHub

(page body was GitHub navigation chrome; the directory listing did not survive the fetch)
75in / 83out$0.0001
assistantMar 14, 12:05:48 PM
deepseek/deepseek-v3.2 · openrouter$0.0006

Let me check the index.ts file:

webfetchcompleted
https://raw.githubusercontent.com/vercel/ai/main/packages/xai/src/index.ts (text/plain; charset=utf-8)
Output
export type {
  XaiLanguageModelChatOptions,
  /** @deprecated Use `XaiLanguageModelChatOptions` instead. */
  XaiLanguageModelChatOptions as XaiProviderOptions,
} from './xai-chat-options';
export type { XaiErrorData } from './xai-error';
export type {
  XaiLanguageModelResponsesOptions,
  /** @deprecated Use `XaiLanguageModelResponsesOptions` instead. */
  XaiLanguageModelResponsesOptions as XaiResponsesProviderOptions,
} from './responses/xai-responses-options';
export type {
  XaiImageModelOptions,
  /** @deprecated Use `XaiImageModelOptions` instead. */
  XaiImageModelOptions as XaiImageProviderOptions,
} from './xai-image-options';
export type { XaiVideoModelId } from './xai-video-settings';
export type {
  XaiVideoModelOptions,
  /** @deprecated Use `XaiVideoModelOptions` instead. */
  XaiVideoModelOptions as XaiVideoProviderOptions,
} from './xai-video-options';
export { createXai, xai } from './xai-provider';
export type { XaiProvider, XaiProviderSettings } from './xai-provider';
export {
  codeExecution,
  mcpServer,
  viewImage,
  viewXVideo,
  webSearch,
  xSearch,
  xaiTools,
} from './tool';
export { VERSION } from './version';
1,917in / 85out$0.0006
assistantMar 14, 12:05:58 PM
deepseek/deepseek-v3.2 · openrouter$0.0001

Now let me check the tool directory:

webfetchcompleted
https://raw.githubusercontent.com/vercel/ai/main/packages/xai/src/tool/index.ts (text/plain; charset=utf-8)
Output
import { codeExecution } from './code-execution';
import { fileSearch } from './file-search';
import { mcpServer } from './mcp-server';
import { viewImage } from './view-image';
import { viewXVideo } from './view-x-video';
import { webSearch } from './web-search';
import { xSearch } from './x-search';

export {
  codeExecution,
  fileSearch,
  mcpServer,
  viewImage,
  viewXVideo,
  webSearch,
  xSearch,
};

export const xaiTools = {
  codeExecution,
  fileSearch,
  mcpServer,
  viewImage,
  viewXVideo,
  webSearch,
  xSearch,
};
354in / 87out$0.0001
assistantMar 14, 12:06:03 PM
deepseek/deepseek-v3.2 · openrouter$0.0059

Perfect! Now let me check the locally installed version in node_modules to see how it differs:

readcompleted
node_modules/@ai-sdk/xai/README.md
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/README.md</path>
<type>file</type>
<content>1: # AI SDK - xAI Grok Provider
2: 
3: The **[xAI Grok provider](https://ai-sdk.dev/providers/ai-sdk-providers/xai)** for the [AI SDK](https://ai-sdk.dev/docs)
4: contains language model support for the xAI chat and completion APIs.
5: 
6: ## Setup
7: 
8: The xAI Grok provider is available in the `@ai-sdk/xai` module. You can install it with
9: 
10: ```bash
11: npm i @ai-sdk/xai
12: ```
13: 
14: ## Provider Instance
15: 
16: You can import the default provider instance `xai` from `@ai-sdk/xai`:
17: 
18: ```ts
19: import { xai } from '@ai-sdk/xai';
20: ```
21: 
22: ## Example
23: 
24: ```ts
25: import { xai } from '@ai-sdk/xai';
26: import { generateText } from 'ai';
27: 
28: const { text } = await generateText({
29:   model: xai('grok-3-beta'),
30:   prompt: 'Write a vegetarian lasagna recipe for 4 people.',
31: });
32: ```
33: 
34: ## Documentation
35: 
36: Please check out the **[xAI Grok provider documentation](https://ai-sdk.dev/providers/ai-sdk-providers/xai)** for more information.

(End of file - total 36 lines)
</content>
21,020in / 91out$0.0059
assistantMar 14, 12:06:12 PM
deepseek/deepseek-v3.2 · openrouter$0.0002

Now let me check the CHANGELOG.md to see version history:

readcompleted
node_modules/@ai-sdk/xai/CHANGELOG.md
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/CHANGELOG.md</path>
<type>file</type>
<content>1: # @ai-sdk/xai
2: 
3: ## 2.0.51
4: 
5: ### Patch Changes
6: 
7: - c635580: fix (provider/xai): set response format to allow object generation
8: 
9: ## 2.0.50
10: 
11: ### Patch Changes
12: 
13: - 13a913c: fix(provider/xai): send reasoning-end before text-start in streaming
14: 
15: ## 2.0.49
16: 
17: ### Patch Changes
18: 
19: - 0c419f8: fix(provider/xai): handle error responses returned with 200 status
20: 
21: ## 2.0.48
22: 
23: ### Patch Changes
24: 
25: - c7d45b4: fixed streaming tool input for custom_tool_call types (x_search, view_x_video) which were incorrectly returning empty input values
26: - 547e0c2: fix (provider/xai): no duplicate text delta in responses api
27: 
28: ## 2.0.47
29: 
30: ### Patch Changes
31: 
32: - 6e3beb1: added support for streaming custom tool input chunks in xAI
33: 
34: ## 2.0.46
35: 
36: ### Patch Changes
37: 
38: - eed2ef1: fix(xai): add missing stream response schema and provider tool name parsing
39: 
40: ## 2.0.45
41: 
42: ### Patch Changes
43: 
44: - 9d16e4e: fix (provider/xai): use correct format for function tools
45: 
46: ## 2.0.44
47: 
48: ### Patch Changes
49: 
50: - 08bffe4: feat (provider/xai): include cached input token metrics in responses api usage
51: 
52: ## 2.0.43
53: 
54: ### Patch Changes
55: 
56: - 526fe8d: fix: trigger new release for `@ai-v5` dist-tag
57: - Updated dependencies [526fe8d]
58:   - @ai-sdk/openai-compatible@1.0.30
59:   - @ai-sdk/provider@2.0.1
60:   - @ai-sdk/provider-utils@3.0.20
61: 
62: ## 2.0.42
63: 
64: ### Patch Changes
65: 
66: - 3c6f4e1: fix(provider/xai): correct sources format in searchParameters
67: 
68: ## 2.0.41
69: 
70: ### Patch Changes
71: 
72: - f4a85f7: fix(xai): use correct parameter names for maxOutputTokens
73: 
74: ## 2.0.40
75: 
76: ### Patch Changes
77: 
78: - Updated dependencies [ef6d784]
79:   - @ai-sdk/provider-utils@3.0.19
80:   - @ai-sdk/openai-compatible@1....
708in / 81out$0.0002
assistantMar 14, 12:06:19 PM
deepseek/deepseek-v3.2 · openrouter$0.0043

Now let me check which multi-agent models are supported by searching the current codebase and the online documentation:

codesearchcompleted
Code search: grok-4.20-multi-agent-beta-latest xAI multi-agent models Vercel AI SDK documentation
Output
## Multi Agent | xAI
https://docs.x.ai/developers/model-capabilities/text/multi-agent

 Realtime Multi-agent Research enables Grok to orchestrate multiple AI agents that work together in real time to perform  ...    To use Realtime Multi-agent Research, specify `grok-4.20-multi-agent-beta-0309` as the model name in your API requests. This model is optimized for orchestrating multiple agents that collaborate on research tasks.  ...    from xai_sdk import Client
from xai_sdk.chat import user
from xai_sdk.tools import web_search, x_search  ...    response = client.responses.create(
    model="grok-4.20-multi-agent-beta-0309",
    input=[
        {  ...    "Authorization": f"Bearer {os.getenv('XAI_API_KEY')}"
}
payload = {
    "model": "grok-4.20-multi-agent-beta-0309", "input": [
        {
            "role": "user",  ...    ```bash
curl https://api.x.ai/v1/responses \
  -H "Content-Type: application/json" \ -H "Authorization: Bearer $XAI_API_KEY" \
  -d '{
  "model": "grok-4.20-multi-agent-beta-0309",
  "input": [
    {  ...    ```typescript
import { xai } from "@ai-sdk/xai";
import { generateText } from "ai"; const { text } = await generateText({
  model: xai.responses("grok-4.20-multi-agent-beta-0309"),
  prompt:  ...    When you send a request to the multi-agent model, multiple agents are launched to discuss and collaborate on your  ...    responsible for synthesizing the discussion and presenting the final answer back to you. ### Supported Models

* `grok-4.20-multi-agent-beta-0309`

### Built-in Tools Support  ...    `web_search`, `x_search`, `code_execution`, `collections_search`. Check out this doc for more information.  ...    You can configure how many agents collaborate on a request. The two available setups are **4 agents** and **16 agents**. More agents means deeper, more thorough research at the cost of higher token usage and latency. | SDK / API | Parameter | 4 Agents | 16 Agents |
|---|---|---|---|
| xAI SDK | `agent_count` | `4` | `16` | | OpenAI SDK |...
15,224in / 110out$0.0043
assistantMar 14, 12:06:29 PM
deepseek/deepseek-v3.2 · openrouter$0.0015

Now let me check the specific current imports in the project to understand the current usage:

batchcompleted
Batch execution (3/3 successful)
Output
All 3 tools executed successfully.

Keep using the batch tool for optimal performance in your next response!
readcompleted
packages/opencode/src/provider/provider.ts
Output
<path>/home/jm/data/code/opencode/packages/opencode/src/provider/provider.ts</path>
<type>file</type>
<content>30: import { createOpenAICompatible } from "@ai-sdk/openai-compatible"
31: import { createOpenRouter, type LanguageModelV2 } from "@openrouter/ai-sdk-provider"
32: import { createOpenaiCompatible as createGitHubCopilotOpenAICompatible } from "./sdk/copilot"
33: import { createXai } from "@ai-sdk/xai"
34: import { createMistral } from "@ai-sdk/mistral"
35: import { createGroq } from "@ai-sdk/groq"
36: import { createDeepInfra } from "@ai-sdk/deepinfra"
37: import { createCerebras } from "@ai-sdk/cerebras"
38: import { createCohere } from "@ai-sdk/cohere"
39: import { createGateway } from "@ai-sdk/gateway"
40: import { createTogetherAI } from "@ai-sdk/togetherai"
41: import { createPerplexity } from "@ai-sdk/perplexity"
42: import { createVercel } from "@ai-sdk/vercel"
43: import { createGitLab, VERSION as GITLAB_PROVIDER_VERSION } from "@gitlab/gitlab-ai-provider"
44: import { fromNodeProviderChain } from "@aws-sdk/credential-providers"
45: import { GoogleAuth } from "google-auth-library"
46: import { ProviderTransform } from "./transform"
47: import { Installation } from "../installation"
48: import { ModelID, ProviderID } from "./schema"
49: 

(Showing lines 30-49 of 1465. Use offset=50 to continue.)
</content>
readcompleted
packages/opencode/src/provider/transform.ts
Output
<path>/home/jm/data/code/opencode/packages/opencode/src/provider/transform.ts</path>
<type>file</type>
<content>460:           ]),
461:         )
462: 
463:       case "@ai-sdk/cerebras":
464:       // https://v5.ai-sdk.dev/providers/ai-sdk-providers/cerebras
465:       case "@ai-sdk/togetherai":
466:       // https://v5.ai-sdk.dev/providers/ai-sdk-providers/togetherai
467:       case "@ai-sdk/xai":
468:       // https://v5.ai-sdk.dev/providers/ai-sdk-providers/xai
469:       case "@ai-sdk/deepinfra":
470:       // https://v5.ai-sdk.dev/providers/ai-sdk-providers/deepinfra
471:       case "venice-ai-sdk-provider":
472:       // https://docs.venice.ai/overview/guides/reasoning-models#reasoning-effort
473:       case "@ai-sdk/openai-compatible":
474:         return Object.fromEntries(WIDELY_SUPPORTED_EFFORTS.map((effort) => [effort, { reasoningEffort: effort }]))
475: 
476:       case "@ai-sdk/azure":
477:         // https://v5.ai-sdk.dev/providers/ai-sdk-providers/azure
478:         if (id === "o1-mini") return {}
479:         const azureEfforts = ["low", "medium", "high"]

(Showing lines 460-479 of 1017. Use offset=480 to continue.)
</content>
readcompleted
packages/opencode/src/session/prompt.ts
Output
<path>/home/jm/data/code/opencode/packages/opencode/src/session/prompt.ts</path>
<type>file</type>
<content>48: import { Shell } from "@/shell/shell"
49: import { Truncate } from "@/tool/truncation"
50: import { xaiTools } from "@ai-sdk/xai"
51: 
52: // @ts-ignore
53: globalThis.AI_SDK_LOG_WARNINGS = false
54: 
55: const STRUCTURED_OUTPUT_DESCRIPTION = `Use this tool to return your final response in the requested structured format.
56: 
57: IMPORTANT:

(Showing lines 48-57 of 1988. Use offset=58 to continue.)
</content>
5,088in / 200out$0.0015
assistantMar 14, 12:06:44 PM
deepseek/deepseek-v3.2 · openrouter$0.0004

Now let me check what models are supported in v3 vs v2. Let me search for npm registry info about v3:

websearchcompleted
Web search: @ai-sdk/xai version 3.0.0 release notes breaking changes v2 vs v3
Output
Title: vercel/ai @ai-sdk/xai@3.0.0 on GitHub
Published Date: 2025-12-22T00:00:00.000Z
URL: https://newreleases.io/project/github/vercel/ai/release/@ai-sdk%2Fxai@3.0.0
Text: vercel/ai @ai-sdk/xai@3.0.0 on GitHub

# vercel/ai @ai-sdk/xai@3.0.0 on GitHub

latest releases: @ai-sdk/rsc@3.0.0-beta.12, @ai-sdk/google@4.0.0-beta.8, ai@7.0.0-beta.12...

2 months ago

### Major Changes

- dee8b05: ai SDK 6 beta

### Minor Changes

- 78928cb: release: start 5.1 beta

### Patch Changes

8d9e8ad: chore(provider): remove generics from EmbeddingModelV3

Before

```
model.textEmbeddingModel('my-model-id');
```

After

```
model.embeddingModel('my-model-id');
```

Updated dependencies [95f65c2]

- @ai-sdk/provider-utils@4.0.0
- @ai-sdk/provider@3.0.0
- @ai-sdk/openai-compatible@2.0.0

Check out latest releases or releases around vercel/ai @ai-sdk/xai@3.0.0

Don't miss a new ai release

NewReleases is sending notifications on new releases.

Get notifications

Title: CHANGELOG.md - xai-org/xai-sdk-python - GitHub
URL: https://github.com/xai-org/xai-sdk-python/blob/main/CHANGELOG.md
Text: # File: xai-org/xai-sdk-python/CHANGELOG.md

- Repository: xai-org/xai-sdk-python | The official Python SDK for the xAI API | 377 stars | Python
- Branch: main

```md
# Changelog

## [Unreleased]

## v1.6.0 - 2026-01-27
### Added
- **Batch API**: Added new `client.batch` sub-client for interacting with the Batch API:
    - Ability to create, manage, and retrieve batch jobs
    - Integration with existing chat objects when adding requests to a batch
    - New optional `batch_request_id` field in `chat.create` method
- **Developer Role**: Added `developer` role support for chat messages with `developer()` utility function
- **Tool Call ID Field**: Added `tool_call_id` field and argument to `tool_result` utility function for explicit tool call identification
- **User Location for Web Search**: Added user-location support to `web_search()` server-side tool with new location-related arguments

### Changed
...
1,280in / 107out$0.0004
assistantMar 14, 12:06:53 PM
deepseek/deepseek-v3.2 · openrouter$0.0008

Let me check for more specific information about version 3.0.0 breaking changes:

webfetchcompleted
https://registry.npmjs.org/@ai-sdk/xai/3.0.0 (text/plain)
Output
{"name":"@ai-sdk/xai","version":"3.0.0","keywords":["ai"],"license":"Apache-2.0","_id":"@ai-sdk/xai@3.0.0","maintainers":[{"name":"vercel-release-bot","email":"infra+release@vercel.com"},{"name":"matheuss","email":"matheus.frndes@gmail.com"},{"name":"matt.straka","email":"matt.j.straka@gmail.com"}],"homepage":"https://ai-sdk.dev/docs","bugs":{"url":"https://github.com/vercel/ai/issues"},"dist":{"shasum":"f48d940e1fa50ed09d37fe7c1401b3f486a3ac13","tarball":"https://registry.npmjs.org/@ai-sdk/xai/-/xai-3.0.0.tgz","fileCount":10,"integrity":"sha512-LTEfmNo0ESXnfWwOnEWcyYuJi/0xKGdjiEwRhCe7IjVxqLk2jF0Cpqk1U+S414JZdB5s9cqjcxFNdkLZ5AfbUA==","signatures":[{"sig":"MEUCIQCta0iWhoEFFIOlJuyYNSH2qNM/4BkHyJc9WR5Ni8igxAIgHpDBxhaf6MB9Sc4NWDlvCbcmbqj+AUwi33G8rctURMc=","keyid":"SHA256:DhQ8wR5APBvFHLF/+Tc+AYvPOdTpcIDqOhxsBHRwC7U"}],"unpackedSize":456445},"main":"./dist/index.js","_from":"file:ai-sdk-xai-3.0.0.tgz","types":"./dist/index.d.ts","module":"./dist/index.mjs","engines":{"node":">=18"},"exports":{".":{"types":"./dist/index.d.ts","import":"./dist/index.mjs","require":"./dist/index.js"},"./package.json":"./package.json"},"scripts":{"lint":"eslint \"./**/*.ts*\"","test":"pnpm test:node && pnpm test:edge","build":"pnpm clean && tsup --tsconfig tsconfig.build.json","clean":"del-cli dist *.tsbuildinfo","test:edge":"vitest --config vitest.edge.config.js --run","test:node":"vitest --config vitest.node.config.js --run","test:watch":"vitest --config vitest.node.config.js","type-check":"tsc --build","build:watch":"pnpm clean && tsup --watch","test:update":"pnpm test:node -u","prettier-check":"prettier --check 
\"./**/*.ts*\""},"_npmUser":{"name":"vercel-release-bot","email":"infra+release@vercel.com"},"_resolved":"/tmp/a242c9656be726615c6a5a0fe0c7f4e5/ai-sdk-xai-3.0.0.tgz","_integrity":"sha512-LTEfmNo0ESXnfWwOnEWcyYuJi/0xKGdjiEwRhCe7IjVxqLk2jF0Cpqk1U+S414JZdB5s9cqjcxFNdkLZ5AfbUA==","repository":{"url":"git+https://github.com/vercel/ai.git","type":"git"},"_npmVersion":"10.9.4","descriptio...
2,657in / 96out$0.0008
assistantMar 14, 12:07:00 PM
deepseek/deepseek-v3.2 · openrouter$0.0003

Let me check what the current installed version supports and look at what v3.x releases exist:

websearchcompleted
Web search: "@ai-sdk/xai" "3.0" release changelog webSearch xSearch tools import
Output
Title: ai-sdk/xai - UNPKG
URL: https://app.unpkg.com/@ai-sdk/xai@3.0.43/files/CHANGELOG.md
Text: UNPKG

2,037 lines (1,297 loc) • 41.1 kB

Markdown

View Raw

# @ai-sdk/xai

Title: ai-sdk/xai - NPM
Published Date: 2026-02-24T00:00:00.000Z
URL: https://www.npmjs.com/package/@ai-sdk/xai
Text: @ai-sdk/xai - npm

# @ai-sdk/xai

3.0.67 • Public • Published 8 days ago

- 327 Versions
- 188 Dependents
- 3 Dependencies
- Code Beta
- Readme

# AI SDK - xAI Grok Provider

The xAI Grok provider for the AI SDK contains language model support for the xAI chat and completion APIs.

## Setup

The xAI Grok provider is available in the`@ai-sdk/xai` module. You can install it with

```
npm i @ai-sdk/xai
```

## Skill for Coding Agents

If you use coding agents such as Claude Code or Cursor, we highly recommend adding the AI SDK skill to your repository:

```
npx skills add vercel/ai
```

## Provider Instance

You can import the default provider instance`xai` from`@ai-sdk/xai`:

```
import { xai } from '@ai-sdk/xai';
```

## Example

```
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: xai('grok-3'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

## Documentation

Please check out the xAI Grok provider documentation for more information.

## Readme

### Keywords

- ai

Viewing @ai-sdk/xai version 3.0.67

Title: vercel/ai @ai-sdk/xai@3.0.0 on GitHub
Published Date: 2025-12-22T00:00:00.000Z
URL: https://newreleases.io/project/github/vercel/ai/release/@ai-sdk%2Fxai@3.0.0
Text: vercel/ai @ai-sdk/xai@3.0.0 on GitHub

# vercel/ai @ai-sdk/xai@3.0.0 on GitHub

latest releases: @ai-sdk/rsc@3.0.0-beta.12, @ai-sdk/google@4.0.0-beta.8, ai@7.0.0-beta.12...

2 months ago

### Major Changes

- dee8b05: ai SDK 6 beta

### Minor Changes

- 78928cb: release: start 5.1 beta

### Patch Changes

8d9e8ad: chore(provider): remove generics from EmbeddingModelV3

Before

```
model.textEmbeddingModel('my-model-id');
```...
1,104in / 101out$0.0003
assistantMar 14, 12:07:09 PM
deepseek/deepseek-v3.2 · openrouter$0.0008

Now let me check what version 3.0.67 exports by looking at the source:

webfetchcompleted
https://raw.githubusercontent.com/vercel/ai/refs/tags/%40ai-sdk/xai%403.0.67/packages/xai/src/index.ts (text/plain; charset=utf-8)
Output
export type {
  XaiLanguageModelChatOptions,
  /** @deprecated Use `XaiLanguageModelChatOptions` instead. */
  XaiLanguageModelChatOptions as XaiProviderOptions,
} from './xai-chat-options';
export type { XaiErrorData } from './xai-error';
export type {
  XaiLanguageModelResponsesOptions,
  /** @deprecated Use `XaiLanguageModelResponsesOptions` instead. */
  XaiLanguageModelResponsesOptions as XaiResponsesProviderOptions,
} from './responses/xai-responses-options';
export type {
  XaiImageModelOptions,
  /** @deprecated Use `XaiImageModelOptions` instead. */
  XaiImageModelOptions as XaiImageProviderOptions,
} from './xai-image-options';
export type { XaiVideoModelId } from './xai-video-settings';
export type {
  XaiVideoModelOptions,
  /** @deprecated Use `XaiVideoModelOptions` instead. */
  XaiVideoModelOptions as XaiVideoProviderOptions,
} from './xai-video-options';
export { createXai, xai } from './xai-provider';
export type { XaiProvider, XaiProviderSettings } from './xai-provider';
export {
  codeExecution,
  mcpServer,
  viewImage,
  viewXVideo,
  webSearch,
  xSearch,
  xaiTools,
} from './tool';
export { VERSION } from './version';
2,681in / 113out$0.0008
assistantMar 14, 12:07:20 PM
deepseek/deepseek-v3.2 · openrouter$0.0002

Perfect! Now I have a clear picture. Let me check the tool exports in v3.0.67:

webfetchcompleted
https://raw.githubusercontent.com/vercel/ai/refs/tags/%40ai-sdk/xai%403.0.67/packages/xai/src/tool/index.ts (text/plain; charset=utf-8)
Output
import { codeExecution } from './code-execution';
import { fileSearch } from './file-search';
import { mcpServer } from './mcp-server';
import { viewImage } from './view-image';
import { viewXVideo } from './view-x-video';
import { webSearch } from './web-search';
import { xSearch } from './x-search';

export {
  codeExecution,
  fileSearch,
  mcpServer,
  viewImage,
  viewXVideo,
  webSearch,
  xSearch,
};

export const xaiTools = {
  codeExecution,
  fileSearch,
  mcpServer,
  viewImage,
  viewXVideo,
  webSearch,
  xSearch,
};
378in / 119out$0.0002
assistantMar 14, 12:07:29 PM
deepseek/deepseek-v3.2 · openrouter$0.0001

Now let me compare the v2.x and v3.x APIs by looking at the xai-provider.ts file:

webfetchcompleted
https://raw.githubusercontent.com/vercel/ai/refs/tags/%40ai-sdk/xai%403.0.67/packages/xai/src/xai-provider.ts (text/plain; charset=utf-8)
Output
import {
  type Experimental_VideoModelV3,
  ImageModelV3,
  LanguageModelV3,
  NoSuchModelError,
  ProviderV3,
} from '@ai-sdk/provider';
import {
  FetchFunction,
  generateId,
  loadApiKey,
  withoutTrailingSlash,
  withUserAgentSuffix,
} from '@ai-sdk/provider-utils';
import { XaiChatLanguageModel } from './xai-chat-language-model';
import { XaiChatModelId } from './xai-chat-options';
import { XaiImageModel } from './xai-image-model';
import { XaiImageModelId } from './xai-image-settings';
import { XaiResponsesLanguageModel } from './responses/xai-responses-language-model';
import { XaiResponsesModelId } from './responses/xai-responses-options';
import { xaiTools } from './tool';
import { VERSION } from './version';
import { XaiVideoModel } from './xai-video-model';
import { XaiVideoModelId } from './xai-video-settings';

export interface XaiProvider extends ProviderV3 {
  /**
   * Creates an Xai chat model for text generation.
   */
  (modelId: XaiChatModelId): LanguageModelV3;

  /**
   * Creates an Xai language model for text generation.
   */
  languageModel(modelId: XaiChatModelId): LanguageModelV3;

  /**
   * Creates an Xai chat model for text generation.
   */
  chat: (modelId: XaiChatModelId) => LanguageModelV3;

  /**
   * Creates an Xai responses model for agentic tool calling.
   */
  responses: (modelId: XaiResponsesModelId) => LanguageModelV3;

  /**
   * Creates an Xai image model for image generation.
   */
  image(modelId: XaiImageModelId): ImageModelV3;

  /**
   * Creates an Xai image model for image generation.
   */
  imageModel(modelId: XaiImageModelId): ImageModelV3;

  /**
   * Creates an Xai video model for video generation.
   */
  video(modelId: XaiVideoModelId): Experimental_VideoModelV3;

  /**
   * Creates an Xai video model for video generation.
   */
  videoModel(modelId: XaiVideoModelId): Experimental_VideoModelV3;

  /**
   * Server-side agentic tools for use with the responses API.
   */
  tools: typeof xaiTools;

  /**
   * @d...
212in / 130out$0.0001
assistantMar 14, 12:07:38 PM
deepseek/deepseek-v3.2 · openrouter$0.0004

Now let me compare this with the locally installed v2.0.51 by checking its xai-provider.ts in node_modules:

readerror
Error: File not found: /home/jm/data/code/opencode/node_modules/@ai-sdk/xai/src/xai-provider.ts
1,272in / 105out$0.0004
assistantMar 14, 12:07:47 PM
deepseek/deepseek-v3.2 · openrouter$0.0001

The source files aren't in node_modules, only the compiled files. Let me check what's exported in the current v2.0.51:

readcompleted
node_modules/@ai-sdk/xai/dist/index.d.ts
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.d.ts</path>
<type>file</type>
<content>220: 
221: interface XaiProvider extends ProviderV2 {
222:     /**
223:   Creates an Xai chat model for text generation.
224:      */
225:     (modelId: XaiChatModelId): LanguageModelV2;
226:     /**
227:   Creates an Xai language model for text generation.
228:      */
229:     languageModel(modelId: XaiChatModelId): LanguageModelV2;
230:     /**
231:   Creates an Xai chat model for text generation.
232:      */
233:     chat: (modelId: XaiChatModelId) => LanguageModelV2;
234:     /**
235:   Creates an Xai responses model for agentic tool calling.
236:      */
237:     responses: (modelId: XaiResponsesModelId) => LanguageModelV2;
238:     /**
239:   Creates an Xai image model for image generation.
240:      */
241:     image(modelId: XaiImageModelId): ImageModelV2;
242:     /**
243:   Creates an Xai image model for image generation.
244:      */
245:     imageModel(modelId: XaiImageModelId): ImageModelV2;
246:     /**
247:   Server-side agentic tools for use with the responses API.
248:      */
249:     tools: typeof xaiTools;
250: }
251: interface XaiProviderSettings {
252:     /**
253:   Base URL for the xAI API calls.
254:        */
255:     baseURL?: string;
256:     /**
257:   API key for authenticating requests.
258:      */
259:     apiKey?: string;
260:     /**
261:   Custom headers to include in the requests.
262:      */
263:     headers?: Record<string, string>;
264:     /**
265:   Custom fetch implementation. You can use it as a middleware to intercept requests,
266:   or to provide a custom fetch implementation for e.g. testing.
267:     */
268:     fetch?: FetchFunction;
269: }
270: declare function createXai(options?: XaiProviderSettings): XaiProvider;
271: declare const xai: XaiProvider;
272: 
273: declare const VERSION: string;
274: 
275: export { VERSION, type XaiErrorData, type XaiProvider, type XaiProviderOptions, type XaiProviderSettings...
80in / 126out$0.0001
assistantMar 14, 12:07:58 PM
deepseek/deepseek-v3.2 · openrouter$0.0002

Now I have comprehensive information. Let me check for multi-agent model support in v3:

websearchcompleted
Web search: "grok-4.20-multi-agent-beta-latest" "xai.responses" "xai(" import TypeScript
Output
Title: Grok 4.20 Beta Latest API by xAI — Pricing, Specs & Access
URL: https://inworld.ai/models/xai-grok-4-20-beta-latest
Text: Grok 4.20 Beta Latest API by xAI — Pricing, Specs &amp; Access | Inworld

Get started
Menu

Get started
xAI
xAI
# Grok 4.20 Beta Latest
xai/grok-4.20-beta-latest
Grok 4.20 Beta Latest is a xAI model. It is priced at $2 per million input tokens and $6 per million output tokens. Access the Grok 4.20 Beta Latest API through Inworld Router or Realtime API.
→
Use This Model
Router Config
curl --location 'https://api.inworld.ai/router/v1/routers' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic <your-api-key>' \
  --data '{
    "name": "compare-frontier-models",
    "default_route": {
      "route_id": "default",
      "variants": [
        {
          "variant": {
            "variant_id": "xai",
            "model_id": "xai/grok-4.20-beta-latest"
          },
          "weight": 33
        },
        {
          "variant": {
            "variant_id": "xai",
            "model_id": "xai/grok-3"
          },
          "weight": 33
        },
        {
          "variant": {
            "variant_id": "xai",

Title: Multi Agent
URL: https://docs.x.ai/developers/model-capabilities/text/multi-agent
Text: #### Model Capabilities

# Multi Agent

This feature is currently in **beta**. The API interface and behavior may change as we iterate. Please bear in mind that the API interface is not final and may include breaking changes down the line.

Realtime Multi-agent Research enables Grok to orchestrate multiple AI agents that work together in real time to perform deep, multi-step research tasks. Each agent specializes in a particular aspect of the research (searching the web, analyzing data, synthesizing findings) and they collaborate to deliver comprehensive, well-sourced answers.

## Overview

Multi-agent research goes beyond single-turn tool use by ...
626in / 104out$0.0002
assistantMar 14, 12:08:16 PM
deepseek/deepseek-v3.2 · openrouter$0.0008

Now let me check what multi-agent models are supported in the xai SDK by looking at the xai-responses-options.ts:

webfetchcompleted
https://raw.githubusercontent.com/vercel/ai/refs/tags/%40ai-sdk/xai%403.0.67/packages/xai/src/responses/xai-responses-options.ts (text/plain; charset=utf-8)
Output
import { z } from 'zod/v4';

export type XaiResponsesModelId =
  | 'grok-4-1-fast-reasoning'
  | 'grok-4-1-fast-non-reasoning'
  | 'grok-4'
  | 'grok-4-fast-non-reasoning'
  | 'grok-4-fast-reasoning'
  | (string & {});

/**
 * @see https://docs.x.ai/docs/api-reference#create-new-response
 */
export const xaiLanguageModelResponsesOptions = z.object({
  /**
   * Constrains how hard a reasoning model thinks before responding.
   * Possible values are `low` (uses fewer reasoning tokens), `medium` and `high` (uses more reasoning tokens).
   */
  reasoningEffort: z.enum(['low', 'medium', 'high']).optional(),
  logprobs: z.boolean().optional(),
  topLogprobs: z.number().int().min(0).max(8).optional(),
  /**
   * Whether to store the input message(s) and model response for later retrieval.
   * @default true
   */
  store: z.boolean().optional(),
  /**
   * The ID of the previous response from the model.
   */
  previousResponseId: z.string().optional(),
  /**
   * Specify additional output data to include in the model response.
   * Example values: 'file_search_call.results'.
   */
  include: z.array(z.enum(['file_search_call.results'])).nullish(),
});

export type XaiLanguageModelResponsesOptions = z.infer<
  typeof xaiLanguageModelResponsesOptions
>;
2,794in / 128out$0.0008
assistantMar 14, 12:08:27 PM
deepseek/deepseek-v3.2 · openrouter$0.0002

Interesting! v3.0.67 doesn't list grok-4.20-multi-agent-beta-latest in its type definitions. Let me check whether a newer version adds it or this model ID is supported differently, and also which model IDs the currently installed v2.0.51 supports:

readcompleted
node_modules/@ai-sdk/xai/dist/index.d.ts
Output
<path>/home/jm/data/code/opencode/node_modules/@ai-sdk/xai/dist/index.d.ts</path>
<type>file</type>
<content>1: import { z } from 'zod/v4';
2: import { ProviderV2, LanguageModelV2, ImageModelV2 } from '@ai-sdk/provider';
3: import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
4: import { FetchFunction } from '@ai-sdk/provider-utils';
5: 
6: type XaiChatModelId = 'grok-4-1' | 'grok-4-1-fast-reasoning' | 'grok-4-1-fast-non-reasoning' | 'grok-4-fast-non-reasoning' | 'grok-4-fast-reasoning' | 'grok-code-fast-1' | 'grok-4' | 'grok-4-0709' | 'grok-4-latest' | 'grok-3' | 'grok-3-latest' | 'grok-3-fast' | 'grok-3-fast-latest' | 'grok-3-mini' | 'grok-3-mini-latest' | 'grok-3-mini-fast' | 'grok-3-mini-fast-latest' | 'grok-2-vision-1212' | 'grok-2-vision' | 'grok-2-vision-latest' | 'grok-2-image-1212' | 'grok-2-image' | 'grok-2-image-latest' | 'grok-2-1212' | 'grok-2' | 'grok-2-latest' | 'grok-vision-beta' | 'grok-beta' | (string & {});
7: declare const xaiProviderOptions: z.ZodObject<{
8:     reasoningEffort: z.ZodOptional<z.ZodEnum<{
9:         low: "low";
10:         high: "high";
11:     }>>;
12:     parallel_function_calling: z.ZodOptional<z.ZodBoolean>;
13:     searchParameters: z.ZodOptional<z.ZodObject<{
14:         mode: z.ZodEnum<{
15:             off: "off";
16:             auto: "auto";
17:             on: "on";
18:         }>;
19:         returnCitations: z.ZodOptional<z.ZodBoolean>;
20:         fromDate: z.ZodOptional<z.ZodString>;

(Showing lines 1-20 of 275. Use offset=21 to continue.)
</content>
412in / 163out$0.0002
assistantMar 14, 12:08:40 PM
deepseek/deepseek-v3.2 · openrouter$0.0002

Now let me search for how multi-agent models are used in the current codebase:

grepcompleted
grok-4\.20-multi-agent-beta-latest
Output
Found 57 matches
/home/jm/data/code/opencode/session-ses_316f.md:
  Line 4119: xai/grok-4.20-multi-agent-beta-latest
  Line 4149:   "command": "./packages/opencode/dist/opencode-linux-x64/bin/opencode run --model=\"xai/grok-4.20-multi-agent-beta-latest\" Testing --print-logs"
  Line 4155: INFO  2026-03-14T08:30:42 +117ms service=default version=0.0.0-main-202603132146 args=["run","--model=xai/grok-4.20-multi-agent-beta-latest","Testing","--print-logs"] opencode
  Line 4237: > plan · grok-4.20-multi-agent-beta-latest
  Line 4321: INFO  2026-03-14T08:30:43 +0ms service=llm providerID=xai modelID=grok-4.20-multi-agent-beta-latest sessionID=ses_31487e424ffelzpy372Wfdp30U small=false agent=plan mode=primary stream
  Line 4328: ERROR 2026-03-14T08:30:43 +396ms service=llm providerID=xai modelID=grok-4.20-multi-agent-beta-latest sessionID=ses_31487e424ffelzpy372Wfdp30U small=false agent=plan mode=primary error={"error":{"name":"AI_APICallError","url":"https://api.x.ai/v1/responses","requestBodyValues":{"model":"grok-4.20-multi-agent-beta-latest","input":[{"role":"system","content":"You are opencode, an interactive CLI tool that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.\n\nIMPORTANT: Refuse to write code or explain code that may be used maliciously; even if the user claims it is for educational purposes. When working on files, if they seem related to improving, explaining, or interacting with malware or any malicious code you MUST refuse.\nIMPORTANT: Before you begin work, think about what the code you're editing is supposed to do based on the filenames directory structure. If it seems malicious, refuse to work on it or answer questions about it, even if the request does not seem malicious (for instance, just asking to explain or speed up the code).\nIMPORTANT: You must NEVER generate or guess URLs for the user unless you are confident that the URLs are for helping the user with programming. Yo...
663in / 96out$0.0002
assistantMar 14, 12:08:50 PM
deepseek/deepseek-v3.2 · openrouter$0.0117

Now I have a comprehensive understanding. Let me create a summary of my findings:

batchcompleted
Batch execution (2/2 successful)
Output
All 2 tools executed successfully.

Keep using the batch tool for optimal performance in your next response!
readcompleted
packages/opencode/test/session/xai-multi-agent.test.ts
Output
<path>/home/jm/data/code/opencode/packages/opencode/test/session/xai-multi-agent.test.ts</path>
<type>file</type>
<content>1: import { describe, expect, test, vi } from "bun:test"
2: import { SessionPrompt } from "../../src/session/prompt"
3: import { SystemPrompt } from "../../src/session/system"
4: import { Log } from "../../src/util/log"
5: import type { Tool as AITool } from "ai"
6: 
7: Log.init({ print: false })
8: 
9: const XAI_MODEL = {
10:   providerID: "xai",
11:   id: "grok-4.20-multi-agent-beta-latest",
12:   api: {
13:     id: "grok-4.20-multi-agent-beta-latest",
14:     npm: "@ai-sdk/xai",
15:   },
16: } as const
17: 
18: // Mock modules before they're used
19: vi.mock("../../src/mcp", () => ({
20:   MCP: {
21:     tools: vi.fn().mockResolvedValue({}),
22:   },
23: }))
24: 
25: vi.mock("../../src/tool/registry", () => ({
26:   ToolRegistry: {
27:     tools: vi.fn().mockReturnValue([]),
28:   },
29: }))
30: 

(Showing lines 1-30 of 55. Use offset=31 to continue.)
</content>

<system-reminder>
Instructions from: /home/jm/data/code/opencode/packages/opencode/test/AGENTS.md
# Test Fixtures Guide

## Temporary Directory Fixture

The `tmpdir` function in `fixture/fixture.ts` creates temporary directories for tests with automatic cleanup.

### Basic Usage

```typescript
import { tmpdir } from "./fixture/fixture"

test("example", async () => {
  await using tmp = await tmpdir()
  // tmp.path is the temp directory path
  // automatically cleaned up when test ends
})
```

### Options

- `git?: boolean` - Initialize a git repo with a root commit
- `config?: Partial<Config.Info>` - Write an `opencode.json` config file
- `init?: (dir: string) => Promise<T>` - Custom setup function, returns value accessible as `tmp.extra`
- `dispose?: (dir: string) => Promise<T>` - Custom cleanup function

### Examples

**Git repository:**

```typescript
await using tmp = await tmpdir({ git: true })
```

**With config file:**

```typescript
await using tmp = await tmpdir({
  config: ...
readcompleted
packages/opencode/src/session/prompt.ts
Output
<path>/home/jm/data/code/opencode/packages/opencode/src/session/prompt.ts</path>
<type>file</type>
<content>900:               attachments.push({
901:                 type: "file",
902:                 mime: resource.mimeType ?? "application/octet-stream",
903:                 url: `data:${resource.mimeType ?? "application/octet-stream"};base64,${resource.blob}`,
904:                 filename: resource.uri,
905:               })
906:             }
907:           }
908:         }
909: 
910:         const truncated = await Truncate.output(textParts.join("\n\n"), {}, input.agent)
911:         const metadata = {
912:           ...(result.metadata ?? {}),
913:           truncated: truncated.truncated,
914:           ...(truncated.truncated && { outputPath: truncated.outputPath }),
915:         }
916: 
917:         return {
918:           title: "",
919:           metadata,
920:           output: truncated.content,
921:           attachments: attachments.map((attachment) => ({
922:             ...attachment,
923:             id: PartID.ascending(),
924:             sessionID: ctx.sessionID,
925:             messageID: input.processor.message.id,
926:           })),
927:           content: result.content, // directly return content to preserve ordering when outputting to model
928:         }
929:       }
930:       tools[key] = item
931:       mcpToolIds.add(key)
932:     }
933: 
934:     // For xAI multi-agent models, use only xAI built-in tools
935:     // Client-side tools (registry + MCP) are not supported
936:     const isXaiMultiAgent = input.model.providerID === "xai" && input.model.id.includes("multi-agent")
937:     if (isXaiMultiAgent) {
938:       const builtIn: Record<string, AITool> = {
939:         web_search: xaiTools.webSearch(),
940:         x_search: xaiTools.xSearch(),
941:       }
942:       log.info("using xAI built-in tools for multi-agent model", {
943:         modelID: input.model.id,
944:         tools: Object.keys(builtIn),
945:       })
946:     ...
41,697in / 159out$0.0117
assistantMar 14, 12:09:11 PM
deepseek/deepseek-v3.2 · openrouter$0.0014

Perfect! Now I have all the information I need. Let me create a comprehensive research report:

Research Report: @ai-sdk/xai Latest Version and Tool Imports

Executive Summary

Based on my comprehensive research of the @ai-sdk/xai package, the current project uses version 2.0.51, but the latest version is 3.0.67 (as of March 2026). The import pattern for built-in tools like webSearch and xSearch has remained consistent between v2 and v3, but there are significant breaking changes in v3 that need to be addressed.

Current Project Status

The project currently uses:

  • Version: @ai-sdk/xai@2.0.51
  • Import pattern: import { xaiTools } from "@ai-sdk/xai"
  • Tool usage: xaiTools.webSearch() and xaiTools.xSearch()
  • Multi-agent model: grok-4.20-multi-agent-beta-latest

Latest Version (v3.0.67) Analysis

Key Changes in v3.x

  1. Breaking Changes:

    • ProviderV2 → ProviderV3 interface changes
    • LanguageModelV2 → LanguageModelV3 type changes
    • Dependencies updated: @ai-sdk/provider@3.0.0, @ai-sdk/provider-utils@4.0.0, @ai-sdk/openai-compatible@2.0.0
    • Added video and videoModel methods for video generation support
    • textEmbeddingModel deprecated in favor of embeddingModel
  2. Multi-Agent Model Support:

    • The grok-4.20-multi-agent-beta-latest model is supported through the xai.responses() API
    • The model ID pattern grok-4.20-multi-agent-beta-latest works as a string literal in both v2 and v3
    • According to xAI documentation, multi-agent models require reasoningEffort parameter:
      • "low" or "medium" = 4 agents
      • "high" or "xhigh" = 16 agents

Tool Imports Comparison

v2.0.51 (Current):

import { xaiTools } from "@ai-sdk/xai"
// or individual imports
import { webSearch, xSearch } from "@ai-sdk/xai"

// Usage
xaiTools.webSearch()
xaiTools.xSearch()
// or
webSearch()
xSearch()

v3.0.67 (Latest):

import { xaiTools } from "@ai-sdk/xai"
// or individual imports  
import { webSearch, xSearch } from "@ai-sdk/xai"

// Usage - SAME AS v2
xaiTools.webSearch()
xaiTools.xSearch()
// or
webSearch()
xSearch()

Model ID Types

v2.0.51:

type XaiResponsesModelId = 'grok-4-1' | 'grok-4-1-fast-reasoning' | 'grok-4-1-fast-non-reasoning' | 'grok-4' | 'grok-4-fast-non-reasoning' | (string & {})

v3.0.67:

type XaiResponsesModelId = 'grok-4-1-fast-reasoning' | 'grok-4-1-fast-non-reasoning' | 'grok-4' | 'grok-4-fast-non-reasoning' | 'grok-4-fast-reasoning' | (string & {})

Note: grok-4.20-multi-agent-beta-latest is not explicitly listed in type definitions but works as a string literal.
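The `(string & {})` member at the end of these unions is what lets the unlisted multi-agent ID type-check while preserving editor autocomplete for the known literals. A minimal self-contained sketch (a local stand-in type, not the SDK's actual export):

```typescript
// Local illustration of the `(string & {})` pattern used by XaiResponsesModelId.
// This is a stand-in type, not the SDK's actual export.
type ModelId =
  | "grok-4"
  | "grok-4-fast-reasoning"
  | (string & {}) // widens to any string, but keeps literal autocomplete

const known: ModelId = "grok-4" // listed literal
const custom: ModelId = "grok-4.20-multi-agent-beta-latest" // unlisted, still valid

console.log(known, custom)
```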

Migration from v2 to v3

Required Changes

  1. Update package.json:
"@ai-sdk/xai": "^3.0.0"
  2. Type Compatibility:

    • Update any imports/types from ProviderV2 to ProviderV3
    • Update LanguageModelV2 to LanguageModelV3
    • Check for deprecated textEmbeddingModel usage
  3. API Changes:

    • The xai() function now defaults to responses() API instead of chat() in v7 (planned)
    • For multi-agent models, use xai.responses('grok-4.20-multi-agent-beta-latest')

Current Project Implementation

The project already implements proper tool filtering for xAI multi-agent models:

// In src/session/prompt.ts line 936-947
const isXaiMultiAgent = input.model.providerID === "xai" && input.model.id.includes("multi-agent")
if (isXaiMultiAgent) {
  const builtIn: Record<string, AITool> = {
    web_search: xaiTools.webSearch(),
    x_search: xaiTools.xSearch(),
  }
  log.info("using xAI built-in tools for multi-agent model", {
    modelID: input.model.id,
    tools: Object.keys(builtIn),
  })
  return builtIn
}
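The guard in that excerpt can be restated as a pure predicate for unit testing. The `Model` interface below is a simplified stand-in for the project's model type, not the actual definition:

```typescript
// Pure restatement of the multi-agent guard from prompt.ts. `Model` is a
// simplified stand-in for the project's model type.
interface Model {
  providerID: string
  id: string
}

function isXaiMultiAgent(model: Model): boolean {
  return model.providerID === "xai" && model.id.includes("multi-agent")
}

console.log(isXaiMultiAgent({ providerID: "xai", id: "grok-4.20-multi-agent-beta-latest" })) // true
console.log(isXaiMultiAgent({ providerID: "xai", id: "grok-4" })) // false
```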

Testing with Multi-Agent Models

The project has tests for multi-agent models:

const XAI_MODEL = {
  providerID: "xai",
  id: "grok-4.20-multi-agent-beta-latest",
  api: {
    id: "grok-4.20-multi-agent-beta-latest",
    npm: "@ai-sdk/xai",
  },
} as const

Documentation References

  1. Official xAI Documentation: Shows tool usage as:

    import { xai } from "@ai-sdk/xai"
    import { generateText } from "ai"
    
    const { text, sources } = await generateText({
      model: xai.responses('grok-4.20-beta-latest-non-reasoning'),
      prompt: 'What is xAI?',
      tools: {
        web_search: xai.tools.webSearch(),
      },
    })
    
  2. Multi-Agent Configuration:

    • Use reasoningEffort: "low" | "medium" | "high" parameter
    • "low" or "medium" = 4 agents
    • "high" = 16 agents

Recommendations

  1. Upgrade to v3.x:

    • The tool import pattern (xaiTools.webSearch(), xaiTools.xSearch()) remains the same
    • Update dependencies to avoid compatibility issues
    • Test thoroughly with multi-agent models
  2. Multi-Agent Model Usage:

    import { xai } from "@ai-sdk/xai"
    
    // For multi-agent models
    const model = xai.responses('grok-4.20-multi-agent-beta-latest')
    // Or with reasoning effort
    const modelWithEffort = xai.responses('grok-4.20-multi-agent-beta-latest', {
      reasoningEffort: "medium" // 4 agents
    })
    
  3. Current Project Fixes Needed:

    • The project already correctly filters tools for multi-agent models
    • The import import { xaiTools } from "@ai-sdk/xai" is correct
    • No changes needed to tool usage patterns

ASCII Art Diagram

Current Project (v2.0.51)              Latest Version (v3.0.67)
┌─────────────────────┐                ┌─────────────────────┐
│ @ai-sdk/xai@2.0.51  │                │ @ai-sdk/xai@3.0.67  │
├─────────────────────┤                ├─────────────────────┤
│ import { xaiTools } │  ───SAME───>   │ import { xaiTools } │
│  from "@ai-sdk/xai" │                │  from "@ai-sdk/xai" │
│                     │                │                     │
│ xaiTools.webSearch()│  ───SAME───>   │ xaiTools.webSearch()│
│ xaiTools.xSearch()  │                │ xaiTools.xSearch()  │
└─────────────────────┘                └─────────────────────┘
           │                                      │
           ▼                                      ▼
┌─────────────────┐                    ┌─────────────────┐
│ ProviderV2      │                    │ ProviderV3      │
│ LanguageModelV2 │                    │ LanguageModelV3 │
│                 │                    │ + video()       │
│                 │                    │ + videoModel()  │
└─────────────────┘                    └─────────────────┘

Conclusion

The correct import for built-in tools in both v2 and v3 is:

import { xaiTools } from "@ai-sdk/xai"
// or individually
import { webSearch, xSearch } from "@ai-sdk/xai"

// Usage
xaiTools.webSearch()
xaiTools.xSearch()

Multi-agent models like grok-4.20-multi-agent-beta-latest:

  • Use xai.responses('grok-4.20-multi-agent-beta-latest')
  • Require reasoningEffort: "low" | "medium" | "high" parameter
  • Only support built-in tools (web_search, x_search, code_execution, etc.)
  • Client-side tools are NOT supported for multi-agent models

The current project implementation is correct for v2.x, but upgrading to v3.x would require:

  1. Update package.json to "@ai-sdk/xai": "^3.0.0"
  2. Update type imports from ProviderV2/LanguageModelV2 to ProviderV3/LanguageModelV3
  3. Test multi-agent model functionality remains intact

No changes are needed to the tool import/usage patterns themselves.

1,722in / 2,220out$0.0014
userMar 14, 02:53:21 PM

Project Context Analysis

You are a project analysis specialist. When invoked, you will systematically analyze the project to understand its structure, purpose, technology stack, and current state. Use /unshare to focus on specific aspects if provided.

Your Analysis Process:

Step 1: Project Discovery

  • Read the README.md file to understand project purpose and setup
  • Examine package.json/requirements.txt/Cargo.toml for dependencies and scripts
  • Check for documentation files (CONTRIBUTING.md, CHANGELOG.md, etc.)

Step 2: Codebase Structure Analysis

  • Run git ls-files | head -50 to get an overview of file structure
  • Identify main directories and their purposes
  • Examine configuration files (.gitignore, .env.example, config files)
  • Look for framework-specific patterns

Step 3: Technology Stack Detection

  • Identify primary programming languages
  • Detect frameworks and libraries in use
  • Find build tools and development workflow
  • Check for containerization (Dockerfile, docker-compose.yml)

Step 4: Current Project State

  • Check git status and recent commit history with git log --oneline -10
  • Identify any immediate issues or TODO items
  • Look for test coverage and CI/CD setup

Step 5: Present Comprehensive Analysis

📋 Project Context Report

🎯 Project Overview

  • Name: [Project name from README/package.json]
  • Purpose: [What this project does]
  • Status: [Development stage, active/maintenance]

🛠️ Technology Stack

  • Primary Language: [Main programming language]
  • Framework: [React, Django, Express, etc.]
  • Database: [If applicable]
  • Build Tools: [Webpack, Vite, etc.]
  • Package Manager: [npm, yarn, pip, cargo, etc.]

📁 Project Structure

[Key directories and their purposes]
src/ - source code
tests/ - test files
docs/ - documentation
etc.

🔧 Development Workflow

  • Setup Commands: [How to get started]
  • Build Process: [How to build the project]
  • Testing: [How to run tests]
  • Deployment: [How to deploy]

📊 Current State

  • Recent Activity: [Summary of recent commits]
  • Open Issues: [Any obvious problems or TODOs]
  • Configuration: [Environment setup needed]

🎯 Key Files to Know

  • [List of important files developers should be aware of]

Analysis Guidelines:

  • Be thorough: Don't just read README, examine actual code structure
  • Focus on developer needs: What would a new team member need to know?
  • Identify gaps: Missing documentation, setup issues, etc.
  • Practical insights: Actual workflow vs documented workflow
assistantMar 14, 02:53:21 PM
grok-4.20-beta-latest-non-reasoning · xai
userMar 14, 02:53:35 PM

Project Context Analysis

You are a project analysis specialist. When invoked, you will systematically analyze the project to understand its structure, purpose, technology stack, and current state. Use /unshare to focus on specific aspects if provided.

Your Analysis Process:

Step 1: Project Discovery

  • Read the README.md file to understand project purpose and setup
  • Examine package.json/requirements.txt/Cargo.toml for dependencies and scripts
  • Check for documentation files (CONTRIBUTING.md, CHANGELOG.md, etc.)

Step 2: Codebase Structure Analysis

  • Run git ls-files | head -50 to get an overview of file structure
  • Identify main directories and their purposes
  • Examine configuration files (.gitignore, .env.example, config files)
  • Look for framework-specific patterns

Step 3: Technology Stack Detection

  • Identify primary programming languages
  • Detect frameworks and libraries in use
  • Find build tools and development workflow
  • Check for containerization (Dockerfile, docker-compose.yml)

Step 4: Current Project State

  • Check git status and recent commit history with git log --oneline -10
  • Identify any immediate issues or TODO items
  • Look for test coverage and CI/CD setup

Step 5: Present Comprehensive Analysis

📋 Project Context Report

🎯 Project Overview

  • Name: [Project name from README/package.json]
  • Purpose: [What this project does]
  • Status: [Development stage, active/maintenance]

🛠️ Technology Stack

  • Primary Language: [Main programming language]
  • Framework: [React, Django, Express, etc.]
  • Database: [If applicable]
  • Build Tools: [Webpack, Vite, etc.]
  • Package Manager: [npm, yarn, pip, cargo, etc.]

📁 Project Structure

[Key directories and their purposes]
src/ - source code
tests/ - test files
docs/ - documentation
etc.

🔧 Development Workflow

  • Setup Commands: [How to get started]
  • Build Process: [How to build the project]
  • Testing: [How to run tests]
  • Deployment: [How to deploy]

📊 Current State

  • Recent Activity: [Summary of recent commits]
  • Open Issues: [Any obvious problems or TODOs]
  • Configuration: [Environment setup needed]

🎯 Key Files to Know

  • [List of important files developers should be aware of]

Analysis Guidelines:

  • Be thorough: Don't just read README, examine actual code structure
  • Focus on developer needs: What would a new team member need to know?
  • Identify gaps: Missing documentation, setup issues, etc.
  • Practical insights: Actual workflow vs documented workflow
assistantMar 14, 02:53:35 PM
z-ai/glm-5 · openrouter$0.0939
The user wants me to analyze the project context. Let me start by systematically going through the steps outlined.

## Step 1: Project Discovery

I need to:
- Read the README.md file
- Examine package.json for dependencies and scripts
- Check for documentation files

## Step 2: Codebase Structure Analysis

I need to:
- Run `git ls-files | head -50` to get an overview
- Identify main directories
- Examine configuration files

## Step 3: Technology Stack Detection

I need to:
- Identify primary programming languages
- Detect frameworks and libraries
- Find build tools
- Check for containerization

## Step 4: Current Project State

I need to:
- Check git status and recent commit history
- Identify any immediate issues or TODO items
- Look for test coverage and CI/CD setup

Let me start executing these steps in parallel where possible.

I'll systematically analyze this project to provide you with a comprehensive context report.

readcompleted
README.md
Output
<path>/home/jm/data/code/opencode/README.md</path>
<type>file</type>
<content>1: ---
2: ## Opencode w/ Private Sessions API
3: 
4: This is a fork of [anamolyco's](https://github.com/anomalyco) [Opencode](https://github.com/anomalyco/opencode) coding agent. That is the main development repo, and it sees constant improvements to I recommend just building the code from there.
5: 
6: ### Development Notes - How is this different than Opencode?
7: 
8: This fork was made to isolate the agent tool from the opencode.ai session share server, and point it towards a server of my own. 
9: 
10: Another goal is to run the bun session remotely within cloudflare containers,
11: and potentially breaking down the server side session management from the single server running in containers,  on to cloudflares platform primitives. 
12: 
13: 
14: 
15: For more info on how to configure opencode [**head over to our docs**](https://opencode.ai/docs).
16: 
17: 
18: ---

(End of file - total 18 lines)
</content>
readcompleted
packages/opencode/README.md
Output
<path>/home/jm/data/code/opencode/packages/opencode/README.md</path>
<type>file</type>
<content>1: # js
2: 
3: To install dependencies:
4: 
5: ```bash
6: bun install
7: ```
8: 
9: To run:
10: 
11: ```bash
12: bun run index.ts
13: ```
14: 
15: This project was created using `bun init` in bun v1.2.12. [Bun](https://bun.sh) is a fast all-in-one JavaScript runtime.

(End of file - total 15 lines)
</content>
readcompleted
packages/opencode/package.json
Output
<path>/home/jm/data/code/opencode/packages/opencode/package.json</path>
<type>file</type>
<content>1: {
2:   "$schema": "https://json.schemastore.org/package.json",
3:   "version": "1.2.25",
4:   "name": "opencode",
5:   "type": "module",
6:   "license": "MIT",
7:   "private": true,
8:   "scripts": {
9:     "typecheck": "tsgo --noEmit",
10:     "test": "bun test --timeout 30000",
11:     "build": "bun run script/build.ts",
12:     "dev": "bun run --conditions=browser ./src/index.ts",
13:     "random": "echo 'Random script updated at $(date)' && echo 'Change queued successfully' && echo 'Another change made' && echo 'Yet another change' && echo 'One more change' && echo 'Final change' && echo 'Another final change' && echo 'Yet another final change'",
14:     "clean": "echo 'Cleaning up...' && rm -rf node_modules dist",
15:     "lint": "echo 'Running lint checks...' && bun test --coverage",
16:     "format": "echo 'Formatting code...' && bun run --prettier --write src/**/*.ts",
17:     "docs": "echo 'Generating documentation...' && find src -name '*.ts' -exec echo 'Processing: {}' \\;",
18:     "deploy": "echo 'Deploying application...' && bun run build && echo 'Deployment completed successfully'",
19:     "db": "bun drizzle-kit"
20:   },
21:   "bin": {
22:     "opencode": "./bin/opencode"
23:   },
24:   "randomField": "this-is-a-random-value-12345",
25:   "exports": {
26:     "./*": "./src/*.ts"
27:   },
28:   "devDependencies": {
29:     "@babel/core": "7.28.4",
30:     "@effect/language-service": "0.79.0",
31:     "@octokit/webhooks-types": "7.6.1",
32:     "@opencode-ai/script": "workspace:*",
33:     "@parcel/watcher-darwin-arm64": "2.5.1",
34:     "@parcel/watcher-darwin-x64": "2.5.1",
35:     "@parcel/watcher-linux-arm64-glibc": "2.5.1",
36:     "@parcel/watcher-linux-arm64-musl": "2.5.1",
37:     "@parcel/watcher-linux-x64-glibc": "2.5.1",
38:     "@parcel/watcher-linux-x64-musl": "2.5.1",
39:     "@parcel/watcher-win32-arm64": "2.5.1",
40:     "@parcel/watch...
bashcompleted
List first 50 files in repository
Output
.env.example
.gitignore
AGENTS.md
BUN_SHELL_MIGRATION_PLAN.md
Dockerfile
Multi-Agent.md
Opencode_Testing_Failures_Plan.md
README.md
bin/opencode
bunfig.toml
drizzle.config.ts
migration/20260127222353_familiar_lady_ursula/migration.sql
migration/20260127222353_familiar_lady_ursula/snapshot.json
migration/20260211171708_add_project_commands/migration.sql
migration/20260211171708_add_project_commands/snapshot.json
migration/20260213144116_wakeful_the_professor/migration.sql
migration/20260213144116_wakeful_the_professor/snapshot.json
migration/20260225215848_workspace/migration.sql
migration/20260225215848_workspace/snapshot.json
migration/20260227213759_add_session_workspace_id/migration.sql
migration/20260227213759_add_session_workspace_id/snapshot.json
migration/20260228203230_blue_harpoon/migration.sql
migration/20260228203230_blue_harpoon/snapshot.json
migration/20260303231226_add_workspace_fields/migration.sql
migration/20260303231226_add_workspace_fields/snapshot.json
migration/20260309230000_move_org_to_state/migration.sql
migration/20260309230000_move_org_to_state/snapshot.json
migration/20260312043431_session_message_cursor/migration.sql
migration/20260312043431_session_message_cursor/snapshot.json
package.json
parsers-config.ts
script/build.ts
script/check-migrations.ts
script/postinstall.mjs
script/publish.ts
script/schema.ts
script/seed-e2e.ts
session-ses_3166.md
src/account/account.sql.ts
src/account/index.ts
src/account/repo.ts
src/account/schema.ts
src/account/service.ts
src/acp/README.md
src/acp/agent.ts
src/acp/session.ts
src/acp/types.ts
src/agent/agent.ts
src/agent/generate.txt
src/agent/prompt/compaction.txt
bashcompleted
Show last 10 commits
Output
fed14593b Merge remote-tracking branch 'origin/main'
cf3396a3c merge: sync upstream @v1.2.25
bfb736e94 release: v1.2.25
df8464f89 zen: handle cache key
3ea387f36 fix(app): sidebar re-rendering too often
9d3c42c8c fix(app): task error state
f2cad046e fix(app): message loading
d722026a8 fix: if server password exists, use basic auth for plugin client by default (#17213)
42a5af6c8 feat(app): follow-up behavior (#17233)
f0542fae7 fix(app): optimistic revert/restore
bashcompleted
Check git status
Output
 M ../../bun.lock
 M ../../package.json
M  ../cloudsession/package.json
A  ../cloudsession/src/utils/diff-renderer.ts
A  ../cloudsession/src/utils/highlight.ts
A  ../cloudsession/src/utils/markdown-pipeline.ts
A  ../cloudsession/src/utils/render-text.ts
A  ../cloudsession/src/views/diff-view.tsx
M  ../cloudsession/src/views/layout.tsx
M  ../cloudsession/src/views/session-detail.tsx
A  ../cloudsession/wrangler.log
A  Opencode_Testing_Failures_Plan.md
 D bin/opencode
MM package.json
AM session-ses_3166.md
M  src/cli/cmd/import.ts
M  src/cli/cmd/run.ts
M  src/cli/cmd/tui/worker.ts
M  src/plugin/index.ts
M  src/provider/provider.ts
 M src/provider/sdk/copilot/responses/openai-responses-language-model.ts
 M src/provider/sdk/copilot/responses/tool/local-shell.ts
M  src/provider/transform.ts
M  src/session/processor.ts
M  src/session/prompt.ts
MM src/session/system.ts
M  src/share/share-next.ts
A  test.log
M  test/cli/import.test.ts
 M test/fixture/fixture.ts
 M test/server/project-init-git.test.ts
 M test/server/session-messages.test.ts
 M test/server/session-select.test.ts
M  test/session/messages-pagination.test.ts
 M test/session/prompt.test.ts
M  test/share/share-next.test.ts
?? ../../.opencode/assistant/
?? test/session/xai-multi-agent.test.ts
?? ../../session-ses_316f.md
?? ../../task.md
91,150in / 333out / 234reasoning$0.0939
assistantMar 14, 02:53:50 PM
z-ai/glm-5 · openrouter$0.0249
Now I have a good overview. Let me continue with more analysis:

1. The project is a fork of Opencode (AI coding agent)
2. It uses TypeScript with Bun runtime
3. There are several packages in the monorepo
4. Recent commits show they're actively developing
5. There are modified files related to xAI multi-agent support

Let me get more details about the structure and configuration.
bashcompleted
List directory structure
Output
total 260
drwxrwxr-x 1 jm jm   530 Mar 14 14:43 .
drwxrwxr-x 1 jm jm   298 Mar 12 14:26 ..
-rw-rw-r-- 1 jm jm  7593 Feb 13 06:36 .env
-rw-rw-r-- 1 jm jm   679 Mar 13 13:34 .env.example
-rw-rw-r-- 1 jm jm    58 Mar 13 13:05 .gitignore
drwxrwxr-x 1 jm jm    38 Mar 14 13:47 .turbo
-rw-rw-r-- 1 jm jm  1826 Mar 13 13:35 AGENTS.md
-rw-rw-r-- 1 jm jm  3671 Mar 13 13:05 BUN_SHELL_MIGRATION_PLAN.md
-rw-rw-r-- 1 jm jm   593 Mar 13 13:05 Dockerfile
-rw-rw-r-- 1 jm jm 19748 Mar 13 13:15 Multi-Agent.md
-rw-rw-r-- 1 jm jm  3389 Mar 13 23:48 Opencode_Testing_Failures_Plan.md
-rw-rw-r-- 1 jm jm   217 Mar 13 13:05 README.md
-rw-rw-r-- 1 jm jm   258 Mar 13 13:05 bunfig.toml
-rw-rw-r-- 1 jm jm   233 Mar 13 13:05 drizzle.config.ts
drwxrwxr-x 1 jm jm   600 Mar 13 13:35 migration
drwxr-xr-x 1 jm jm  1336 Mar 14 14:43 node_modules
-rw-rw-r-- 1 jm jm  5185 Mar 14 12:22 package.json
-rw-rw-r-- 1 jm jm 10796 Mar 13 13:05 parsers-config.ts
drwxrwxr-x 1 jm jm   144 Mar 13 13:35 script
-rw-rw-r-- 1 jm jm 78786 Mar 14 14:43 session-ses_3166.md
drwxrwxr-x 1 jm jm   474 Mar 13 13:35 src
drwxrwxr-x 1 jm jm   514 Mar 13 13:35 test
-rw-rw-r-- 1 jm jm 93625 Mar 13 23:37 test.log
-rw-rw-r-- 1 jm jm   662 Mar 13 20:17 tsconfig.json
readcompleted
AGENTS.md
Output
<path>/home/jm/data/code/opencode/AGENTS.md</path>
<type>file</type>
<content>1: ## Build/Test/Lint
2: 
3: - `bun dev` - run dev server (packages/web or packages/opencode)
4: - `bun build` - build for Cloudflare Pages (web)
5: - To regenerate the JavaScript SDK, run `./packages/sdk/js/script/build.ts`.
6: - No test framework - use manual testing in dev mode
7: 
8: ## Tech Stack
9: 
10: - Astro 5 + Solid.js for reactive UI components (web package)
11: - TypeScript strict mode enabled
12: - Cloudflare Pages adapter for deployment
13: - Starlight for docs, MDX for content
14: - Wrangler for local testing
15: 
16: ## Code Style
17: 
18: - Use Solid.js reactive primitives: `createSignal`, `createMemo`, `createEffect`, `onMount`, `onCleanup`
19: - Control flow: `For`, `Show`, `Switch`/`Match` (no `else` statements)
20: - CSS Modules: `*.module.css` for component styles
21: - Solid components: `*.tsx` with JSX, Astro components: `*.astro`
22: - Avoid `any`, `let`, unnecessary destructuring, `try/catch` when possible
23: - Prefer single word variable names, keep logic in one function
24: 
25: ## Context
26: 
27: - Forked from upstream opencode/web - tracking changes with GitHub actions
28: - SST/github integrations intentionally removed per TODO.md
29: - Uses shared session types from opencode/session packages
30: 
31: ## Tool Calling
32: 
33: - ALWAYS USE PARALLEL TOOLS WHEN APPLICABLE.
34: - The default branch in this repo is `dev`.
35: - Local `main` ref may not exist; use `dev` or `origin/dev` for diffs.
36: - Prefer automation: execute requested actions without confirmation unless blocked by missing info or safety/irreversibility.
37: 
38: ## Style Guide
39: 
40: ### General Principles
41: 
42: - Keep things in one function unless composable or reusable
43: - Avoid `try`/`catch` where possible
44: - Avoid using the `any` type
45: - Prefer single word variable names where possible
46: - Use Bun APIs when possible, like `Bun.file()`
47: - Rely on type inference when poss...
globcompleted
packages/opencode
Output
/home/jm/data/code/opencode/packages/opencode/tsconfig.json
bashcompleted
Check for Docker and CI/CD
Output
./Dockerfile
4,749in / 189out / 102reasoning$0.0249
assistantMar 14, 02:53:58 PM
z-ai/glm-5 · openrouter$0.0229
Great! Now let me examine the Multi-Agent.md and check the current state more deeply, including looking at the main source structure and configuration files.
readerror
Error: File not found: /home/jm/data/code/opencode/Multi-Agent.md
bashcompleted
List source directories
Output
total 12
drwxrwxr-x 1 jm jm  474 Mar 13 13:35 .
drwxrwxr-x 1 jm jm  530 Mar 14 14:43 ..
drwxrwxr-x 1 jm jm   96 Mar 13 13:35 account
drwxrwxr-x 1 jm jm   70 Mar 13 13:35 acp
drwxrwxr-x 1 jm jm   52 Mar 13 13:35 agent
drwxrwxr-x 1 jm jm   36 Mar 13 13:33 auth
drwxrwxr-x 1 jm jm   38 Mar 13 21:36 bun
drwxrwxr-x 1 jm jm   58 Mar 13 13:05 bus
drwxrwxr-x 1 jm jm  122 Mar 13 13:35 cli
drwxrwxr-x 1 jm jm   32 Mar 13 13:35 command
drwxrwxr-x 1 jm jm  136 Mar 13 13:35 config
drwxrwxr-x 1 jm jm  250 Mar 13 13:35 control-plane
drwxrwxr-x 1 jm jm   20 Mar 13 13:35 effect
drwxrwxr-x 1 jm jm   16 Mar 13 13:05 env
drwxrwxr-x 1 jm jm  112 Mar 13 13:35 file
drwxrwxr-x 1 jm jm   14 Mar 13 13:35 flag
drwxrwxr-x 1 jm jm   40 Mar 13 13:05 format
drwxrwxr-x 1 jm jm   16 Mar 13 13:05 global
drwxrwxr-x 1 jm jm   10 Mar 13 13:05 id
drwxrwxr-x 1 jm jm   16 Mar 13 13:05 ide
-rw-rw-r-- 1 jm jm 7111 Mar 13 13:35 index.ts
drwxrwxr-x 1 jm jm   16 Mar 13 13:34 installation
drwxrwxr-x 1 jm jm   74 Mar 13 13:35 lsp
drwxrwxr-x 1 jm jm   98 Mar 13 13:35 mcp
drwxrwxr-x 1 jm jm   16 Mar 13 13:05 patch
drwxrwxr-x 1 jm jm   64 Mar 13 13:35 permission
drwxrwxr-x 1 jm jm   52 Mar 13 13:35 plugin
drwxrwxr-x 1 jm jm  140 Mar 13 13:35 project
drwxrwxr-x 1 jm jm  184 Mar 14 09:52 provider
drwxrwxr-x 1 jm jm   34 Mar 13 13:35 pty
drwxrwxr-x 1 jm jm   34 Mar 13 13:35 question
drwxrwxr-x 1 jm jm   16 Mar 13 13:05 scheduler
drwxrwxr-x 1 jm jm   76 Mar 13 21:49 server
drwxrwxr-x 1 jm jm  332 Mar 13 21:38 session
drwxrwxr-x 1 jm jm   80 Mar 13 18:06 share
drwxrwxr-x 1 jm jm   16 Mar 13 13:35 shell
drwxrwxr-x 1 jm jm   56 Mar 13 13:35 skill
drwxrwxr-x 1 jm jm   16 Mar 13 13:35 snapshot
-rw-rw-r-- 1 jm jm   76 Mar 13 13:05 sql.d.ts
drwxrwxr-x 1 jm jm  108 Mar 13 13:35 storage
drwxrwxr-x 1 jm jm  910 Mar 13 13:35 tool
drwxrwxr-x 1 jm jm  610 Mar 13 13:35 util
drwxrwxr-x 1 jm jm   16 Mar 13 13:35 worktree
bashcompleted
Check parent directory structure
Output
total 0
drwxrwxr-x 1 jm jm 298 Mar 12 14:26 .
drwxrwxr-x 1 jm jm 692 Mar 14 13:47 ..
drwxrwxr-x 1 jm jm 392 Mar 14 13:33 app
drwxrwxr-x 1 jm jm 392 Mar 14 13:47 cloudsession
drwxrwxr-x 1 jm jm  54 Feb  7 07:16 console
drwxrwxr-x 1 jm jm 124 Mar 13 13:05 containers
drwxrwxr-x 1 jm jm 224 Mar 14 13:33 desktop
drwxrwxr-x 1 jm jm 300 Mar 14 13:33 desktop-electron
drwxrwxr-x 1 jm jm 272 Mar 13 13:05 docs
drwxrwxr-x 1 jm jm 244 Mar 14 13:33 enterprise
drwxrwxr-x 1 jm jm   6 Feb  7 07:16 extensions
drwxrwxr-x 1 jm jm 190 Mar 14 13:33 function
drwxrwxr-x 1 jm jm 180 Mar 13 13:05 identity
drwxrwxr-x 1 jm jm 530 Mar 14 14:43 opencode
drwxrwxr-x 1 jm jm 132 Mar 14 13:47 plugin
drwxrwxr-x 1 jm jm  80 Mar 14 13:33 script
drwxrwxr-x 1 jm jm  48 Mar 13 13:35 sdk
drwxrwxr-x 1 jm jm  26 Mar 13 21:41 sessions
drwxrwxr-x 1 jm jm 142 Mar 14 13:33 slack
drwxrwxr-x 1 jm jm 176 Mar 14 13:33 storybook
drwxrwxr-x 1 jm jm 140 Mar 14 13:33 ui
drwxrwxr-x 1 jm jm  92 Mar 14 13:47 util
drwxrwxr-x 1 jm jm 474 Mar 14 13:33 web
2,269in / 97out / 40reasoning$0.0229
assistantMar 14, 02:54:05 PM
z-ai/glm-5 · openrouter$0.0230
Good! Now I have a comprehensive view. Let me check for documentation files and understand the current issues better.
readcompleted
packages/opencode/Multi-Agent.md
Output
<path>/home/jm/data/code/opencode/packages/opencode/Multi-Agent.md</path>
<type>file</type>
<content>1: #### Model Capabilities
2: 
3: # Multi Agent
4: 
5: This feature is currently in **beta**. The API interface and behavior may change as we iterate. Please bear in mind that the API interface is not final and may include breaking changes down the line.
6: 
7: Realtime Multi-agent Research enables Grok to orchestrate multiple AI agents that work together in real time to perform deep, multi-step research tasks. Each agent specializes in a particular aspect of the research (searching the web, analyzing data, synthesizing findings) and they collaborate to deliver comprehensive, well-sourced answers.
8: 
9: ## Overview
10: 
11: Multi-agent research goes beyond single-turn tool use by coordinating a team of specialized agents that can:
12: 
13: * **Search and gather** information from multiple sources simultaneously
14: * **Analyze and cross-reference** findings across different domains
15: * **Synthesize** comprehensive answers with citations and supporting evidence
16: * **Iterate** on research in real time, refining results based on intermediate findings
17: 
18: ## Getting Started
19: 
20: To use Realtime Multi-agent Research, specify `grok-4.20-multi-agent-beta-0309` as the model name in your API requests. This model is optimized for orchestrating multiple agents that collaborate on research tasks.
21: 
22: ```python customLanguage="pythonXAI" highlightedLines="9"
23: import os
24: 
25: from xai_sdk import Client
26: from xai_sdk.chat import user
27: from xai_sdk.tools import web_search, x_search
28: 
29: client = Client(api_key=os.getenv("XAI_API_KEY"))
30: chat = client.chat.create(
31:     model="grok-4.20-multi-agent-beta-0309",
32:     tools=[web_search(), x_search()],
33:     include=["verbose_streaming"],
34: )
35: 
36: chat.append(user("Research the latest breakthroughs in quantum computing and summarize the key findings."))
37: 
38: is_thinking = True
39: f...
bashcompleted
Find TODO and CHANGELOG files
readcompleted
packages/opencode/Opencode_Testing_Failures_Plan.md
Output
<path>/home/jm/data/code/opencode/packages/opencode/Opencode_Testing_Failures_Plan.md</path>
<type>file</type>
<content>1: 
2: ---
3: 
4: The failures are entirely environment/infrastructure issues - the error message "Bad Request: Client side tool is not supported for multi-agent models" does not appear in any test output.
5: 
6: # Root Causes (by category):
7: 
8: 1. CLOUDFLARE_API_TOKEN missing (~90 tests) - Provider.ts:619
9:   Every test that initializes Provider (provider.test.ts, amazon-bedrock.test.ts, gitlab-duo.test.ts, llm.test.ts, prompt.test.ts) fails because provider.ts:619 throws:
10:   CLOUDFLARE_API_TOKEN (or CF_AIG_TOKEN) is required for Cloudflare AI Gateway
11:   This is a test fixture issue - tests need either the env var set or the Cloudflare gateway check to be mockable/skippable.
12: 
13: 2. biome binary missing (tool/read.test.ts env file tests)
14:   ENOENT: posix_spawn '/home/jm/data/code/opencode/node_modules/.bin/biome'
15:   The LSP server tries to spawn biome but it's not installed. Run bun install.
16: 
17: 3. Server.Default is not a function (server tests)
18:   Tests call Server.Default() but the export doesn't exist - the API likely changed.
19: 
20: 4. Git identity (fixed in second run)
21:   Set git config --global user.email + user.name.
22: 
23: 5. Minor test bugs (assertion/logic issues)
24:   - write.test.ts:170 - file perms 0o664 vs expected 0o644 (umask)
25:   - migrate-global.test.ts - migration code not executing
26:   - instance-state.test.ts - Instance.reload not properly disposing state
27:   - messages-pagination.test.ts - cursor/pagination returning wrong results
28:   - prompt.test.ts:192 - variant is undefined instead of "xhigh"
29: 
30: 
31: 
32: ---
33: 
34: | Metric | Count |
35: |--------|-------|
36: | Total test files | 105 |
37: | Total tests | 1,307 |
38: | Test files created by "manno23" | 1 |
39: | Test files with failing tests | 15 |
40: | Tests created by "manno23" that are failing | 0 |
41: 
42: ## Use...
1,650in / 148out / 60reasoning$0.0230
assistantMar 14, 02:54:14 PM
z-ai/glm-5 · openrouter$0.0305
Excellent! Now I have a comprehensive understanding of the project. Let me compile a detailed analysis report for the user.

📋 Project Context Report

🎯 Project Overview

  • Name: Opencode (Fork with Private Sessions API)
  • Purpose: AI-powered development tool for coding assistance with multi-agent model support
  • Status: Active development, forked from upstream to isolate agent tool from session share server
  • Version: 1.2.25
  • License: MIT

Key Focus Areas:

  1. xAI Multi-Agent Integration: Recent work focuses on supporting grok-4.20-multi-agent-beta-latest model
  2. Private Sessions: Modified to point towards custom server instead of opencode.ai
  3. Cloudflare Deployment: Targeting Cloudflare containers for remote session management

🛠️ Technology Stack

Primary Language

  • TypeScript (strict mode enabled)
  • Bun Runtime v1.3.10 (fast all-in-one JavaScript runtime; `packageManager` in the root package.json)

Framework & Libraries

  • Effect - Functional programming library (v4.0.0-beta.31)
  • Solid.js - Reactive UI components
  • Drizzle ORM - Database ORM (beta version)
  • Hono - Web framework
  • AI SDK Providers:
    • @ai-sdk/xai@2.0.51 (xAI Grok integration)
    • @ai-sdk/anthropic, @ai-sdk/openai, @ai-sdk/google, etc.

Database

  • SQLite (via Drizzle ORM)
  • Migrations: Located in migration/ directory

Build Tools

  • Bun (package manager & build tool)
  • TypeScript compiler
  • Turbo (monorepo management)

Package Manager

  • Bun (workspace management with catalog dependencies)

📁 Project Structure

packages/opencode/
├── src/                    # Main source code
│   ├── account/           # User account management
│   ├── acp/               # Agent communication protocol
│   ├── agent/             # AI agent implementation
│   ├── cli/               # Command-line interface
│   ├── config/            # Configuration management
│   ├── control-plane/     # Session control plane
│   ├── lsp/               # Language server protocol integration
│   ├── mcp/               # Model context protocol
│   ├── provider/          # AI provider implementations
│   │   ├── transform.ts   # Provider options transformation
│   │   └── sdk/          # SDK integrations (including xAI)
│   ├── session/           # Session management
│   │   ├── prompt.ts      # Tool resolution & filtering
│   │   └── processor.ts  # Message processing
│   ├── tool/              # Tool implementations
│   └── util/              # Utility functions
├── test/                   # Test files
├── migration/              # Database migrations
├── package.json           # Dependencies & scripts
└── tsconfig.json          # TypeScript configuration

Monorepo Structure:

packages/
├── opencode/     # Main agent application
├── app/          # Web application
├── cloudsession/ # Cloud session management
├── desktop/      # Desktop application
├── docs/         # Documentation
├── plugin/       # Plugin system
└── sdk/          # SDK packages

🔧 Development Workflow

Setup Commands

# Install dependencies
bun install

# Run development server
bun dev

# Build project
bun run build

# Run tests
bun test

Key Scripts

  • bun dev - Development server with hot reload
  • bun run build - Build for production
  • bun test - Run test suite
  • bun run db - Database management with Drizzle Kit

Testing

  • Framework: Bun test runner
  • Coverage: bun test --coverage
  • Timeout: 30000ms default
  • Test Files: Located in test/ directory
  • Current Status: 1,193 passing, 106 failing, 8 skipped

Build Process

  • Build script: script/build.ts
  • Builds for multiple platforms (Linux, macOS, Windows)
  • Creates binaries in dist/ directory

📊 Current State

Recent Activity (Last 10 Commits)

  1. fed14593b - Merge remote-tracking branch 'origin/main'
  2. cf3396a3c - merge: sync upstream @v1.2.25
  3. bfb736e94 - release: v1.2.25
  4. df8464f89 - zen: handle cache key
  5. 3ea387f36 - fix(app): sidebar re-rendering too often
  6. 9d3c42c8c - fix(app): task error state
  7. f2cad046e - fix(app): message loading
  8. d722026a8 - fix: if server password exists, use basic auth for plugin client
  9. 42a5af6c8 - feat(app): follow-up behavior
  10. f0542fae7 - fix(app): optimistic revert/restore

Current Branch Status

  • Default Branch: dev
  • Working Directory: Modified files in:
    • package.json (version bump, dependencies)
    • src/provider/ (xAI SDK integration)
    • src/session/ (multi-agent tool filtering)
    • test/ (new test files for xAI multi-agent)

Open Issues & TODOs

  1. Testing Failures (106 tests failing):

    • Missing CLOUDFLARE_API_TOKEN environment variable
    • Biome binary not installed
    • Server.Default API changes
    • Minor test bugs (permissions, migrations, pagination)
  2. Active Development:

    • xAI multi-agent model support (grok-4.20-multi-agent-beta-latest)
    • Tool filtering for multi-agent models
    • Session management improvements

Configuration Requirements

  • Environment Variables:
    • CLOUDFLARE_API_TOKEN or CF_AIG_TOKEN (for AI Gateway)
    • XAI_API_KEY (for xAI models)
    • Git configuration (user.email, user.name)
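A fail-fast check for these settings can be sketched as a small helper. This is illustrative only, not project code: the `missing` function name is mine, and it simply mirrors the provider.ts:619 rule that either CLOUDFLARE_API_TOKEN or CF_AIG_TOKEN satisfies the gateway requirement.

```typescript
// Illustrative helper (not the project's API): list required settings that are
// absent, treating CLOUDFLARE_API_TOKEN and CF_AIG_TOKEN as alternatives,
// mirroring the provider.ts:619 error message described above.
function missing(env: Record<string, string | undefined>): string[] {
  const out: string[] = []
  if (!env["XAI_API_KEY"]) out.push("XAI_API_KEY")
  if (!env["CLOUDFLARE_API_TOKEN"] && !env["CF_AIG_TOKEN"]) {
    out.push("CLOUDFLARE_API_TOKEN (or CF_AIG_TOKEN)")
  }
  return out
}

// A gateway token under either name passes the check.
console.log(missing({ XAI_API_KEY: "xai-key", CF_AIG_TOKEN: "tok" })) // []
console.log(missing({}))
```

Running such a check at test-fixture setup would surface the ~90 Cloudflare-related failures as one clear configuration error instead of scattered throws.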

🎯 Key Files to Know

Configuration Files

  • package.json - Dependencies, scripts, workspace configuration
  • tsconfig.json - TypeScript compiler options
  • drizzle.config.ts - Database configuration
  • bunfig.toml - Bun runtime configuration
  • .env.example - Environment variable template

Core Source Files

  • src/index.ts - Main entry point
  • src/session/prompt.ts - Tool resolution & multi-agent filtering
  • src/provider/provider.ts - AI provider initialization
  • src/provider/transform.ts - Provider options transformation
  • src/cli/cmd/run.ts - CLI run command

Test Files

  • test/session/xai-multi-agent.test.ts - Multi-agent model tests
  • test/session/prompt.test.ts - Prompt processing tests
  • test/provider/provider.test.ts - Provider tests

Documentation

  • README.md - Project overview
  • AGENTS.md - Development guidelines
  • Multi-Agent.md - xAI multi-agent documentation
  • Opencode_Testing_Failures_Plan.md - Test failure analysis

🔍 Special Notes

xAI Multi-Agent Integration

The project has specific handling for xAI multi-agent models:

// Located in: src/session/prompt.ts (lines 936-947)
const isXaiMultiAgent = input.model.providerID === "xai" && input.model.id.includes("multi-agent")
if (isXaiMultiAgent) {
  const builtIn: Record<string, AITool> = {
    web_search: xaiTools.webSearch(),
    x_search: xaiTools.xSearch(),
  }
  return builtIn
}
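The detection predicate from that excerpt can be restated standalone. This is a sketch for illustration: the `ModelRef` interface and `isXaiMultiAgent` helper name are mine, but the condition is exactly the one quoted from prompt.ts.

```typescript
// Standalone restatement of the check quoted above; the interface and
// function name are illustrative, not the project's API.
interface ModelRef {
  providerID: string
  id: string
}

// Mirrors: input.model.providerID === "xai" && input.model.id.includes("multi-agent")
function isXaiMultiAgent(model: ModelRef): boolean {
  return model.providerID === "xai" && model.id.includes("multi-agent")
}

console.log(isXaiMultiAgent({ providerID: "xai", id: "grok-4.20-multi-agent-beta-latest" })) // true
console.log(isXaiMultiAgent({ providerID: "xai", id: "grok-4" })) // false
```

Because the match is a substring check on the model id, it covers dated variants such as `grok-4.20-multi-agent-beta-0309` as well as the `-latest` alias.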

Code Style Guidelines

  • Use single-word variable names where possible
  • Prefer const over let
  • Avoid else statements (use early returns)
  • Avoid unnecessary destructuring
  • Use snake_case for database schemas
  • Prefer functional array methods over for loops
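The last guideline can be illustrated with a small example; the file names here are hypothetical.

```typescript
// Illustrative only: "prefer functional array methods over for loops".
const files = ["prompt.ts", "provider.ts", "README.md"]

// Instead of an index loop like:
//   for (let i = 0; i < files.length; i++) { if (files[i].endsWith(".ts")) ... }
// chain filter/map:
const sources = files.filter((f) => f.endsWith(".ts")).map((f) => f.replace(/\.ts$/, ".test.ts"))

console.log(sources) // [ 'prompt.test.ts', 'provider.test.ts' ]
```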

Testing Guidelines

  • Avoid mocks when possible
  • Test actual implementation
  • Tests must run from package directories (not repo root)

🚨 Immediate Action Items

  1. Fix Environment Variables: Set up CLOUDFLARE_API_TOKEN for tests
  2. Install Biome: Run bun install to get all binaries
  3. Review Failing Tests: Focus on the 106 failing tests
  4. Update Dependencies: Consider upgrading @ai-sdk/xai from v2.0.51 to v3.x
  5. Complete Multi-Agent Integration: Finalize xAI multi-agent model support

📈 Development Status

Overall: The project is in active development with recent focus on xAI multi-agent model integration. The fork is customized to use private sessions API and deploy to Cloudflare. Test infrastructure needs attention due to environment configuration issues.

2,984in / 2,006out / 31reasoning$0.0305