feat: AG-UI core interop - spec-compliant event types #411
AlemTuzlak wants to merge 25 commits into main from
Conversation
Replace custom AG-UI event types with interfaces that extend @ag-ui/core types for spec compliance. This is the foundational type change for the AG-UI protocol alignment.

- Import all event types from @ag-ui/core with AGUI* aliases
- Replace BaseAGUIEvent to extend @ag-ui/core BaseEvent
- Replace each event interface to extend its @ag-ui/core equivalent
- Add TanStack-internal extension fields (model, deprecated aliases)
- Add new event types: ToolCallResultEvent, Reasoning* events
- Deprecate AGUIEventType in favor of the EventType enum
- Re-export the EventType enum from @ag-ui/core
- Add threadId/runId to the TextOptions interface
- Update the AGUIEvent union and StreamChunk type alias
…ream events

Creates a middleware that removes TanStack-internal extension fields (model, rawEvent, deprecated aliases) from StreamChunk events so the yielded stream is @ag-ui/core spec-compliant. Registered as the last middleware in the chat activity chain so devtools and user middleware still see the full extended events.
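The stripping step described above can be sketched as a small pure function. This is an illustrative sketch, not the package's real implementation: the field names (model, rawEvent, the deprecated toolName/stepId aliases) come from this commit message, but the exact strip set and chunk shape are assumptions.

```typescript
// Illustrative sketch only; the real middleware lives in @tanstack/ai.
// Assumed extension fields, per the commit message above.
type Chunk = { type: string } & Record<string, unknown>

const EXTENSION_FIELDS: ReadonlySet<string> = new Set([
  'model',
  'rawEvent',
  'toolName', // deprecated alias for toolCallName
  'stepId', // deprecated alias for stepName
])

function stripToSpec(chunk: Chunk): Chunk {
  const out: Chunk = { type: chunk.type }
  for (const [key, value] of Object.entries(chunk)) {
    if (key !== 'type' && !EXTENSION_FIELDS.has(key)) {
      out[key] = value
    }
  }
  return out
}
```

Registering this last in the chain is what lets devtools and user middleware still observe the full extended chunk before the strip happens.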
Add threadId/runId to TextActivityOptions interface and TextEngine class so they flow from user-facing chat() options through to adapter.chatStream(). ThreadId is auto-generated if not provided. Adapters will consume these in subsequent tasks to include them in RUN_STARTED/RUN_FINISHED events.
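A minimal sketch of that flow, assuming a genId helper and a simplified options shape (both hypothetical stand-ins for the real TextActivityOptions plumbing):

```typescript
// Sketch: threadId falls back to a generated id, per the commit message;
// runId is passed through so adapters can include both in
// RUN_STARTED / RUN_FINISHED events.
interface TextActivityOptionsLike {
  threadId?: string
  runId?: string
}

let seq = 0
const genId = (): string => `gen-${++seq}` // hypothetical id generator

function resolveRunIds(options: TextActivityOptionsLike): {
  threadId: string
  runId?: string
} {
  return {
    threadId: options.threadId ?? genId(),
    runId: options.runId,
  }
}
```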
Add threadId to RUN_STARTED/RUN_FINISHED events, toolCallName to TOOL_CALL_START/TOOL_CALL_END, stepName to STEP_STARTED/STEP_FINISHED, flatten RUN_ERROR with top-level message/code fields, and emit REASONING_START/MESSAGE_START/CONTENT/MESSAGE_END/END events alongside legacy STEP events for reasoning content.
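The dual tool-name fields above imply a transition-period read path: prefer the spec field, fall back to the deprecated alias. A sketch with stand-in types (not the real package interfaces):

```typescript
interface ToolCallStartLike {
  type: 'TOOL_CALL_START'
  toolCallId: string
  toolCallName?: string
  /** @deprecated use toolCallName */
  toolName?: string
}

// Prefer the AG-UI spec field, fall back to the legacy alias.
function resolveToolName(event: ToolCallStartLike): string | undefined {
  return event.toolCallName ?? event.toolName
}
```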
Update test utilities and tests to use AG-UI spec field names:

- Add threadId to RUN_STARTED/RUN_FINISHED events
- Add toolCallName alongside deprecated toolName on tool events
- Add stepName alongside deprecated stepId on step events
- Use the flat message field on RUN_ERROR (with the deprecated nested error form)

Fix critical bugs discovered during testing:

- StreamProcessor: prefer chunk.message over chunk.error?.message for RUN_ERROR
- TextEngine: process original chunks for internal state before middleware strips fields
- Remove the auto-applied stripToSpecMiddleware from chat() (it breaks internal state by stripping finishReason, delta, and content needed by TextEngine and StreamProcessor)
- Fix type compatibility issues between the @ag-ui/core EventType enum and string literals

Also fix type errors in:

- stream-generation-result.ts: use the EventType enum and add threadId
- generateVideo/index.ts: add StreamChunk casts and threadId
- tool-calls.ts: cast the TOOL_CALL_END yield to ToolCallEndEvent
- devtools-middleware.ts: handle the toolCallName fallback and the RUN_ERROR message field
- processor.ts: handle the developer role, the Messages snapshot type cast, and undefined finishReason
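The RUN_ERROR changes above (flat message preferred, nested error kept for compatibility) can be sketched like this; the shapes are simplified stand-ins, not the real event types:

```typescript
interface RunErrorChunkLike {
  type: 'RUN_ERROR'
  message?: string
  code?: string
  /** @deprecated use top-level message/code */
  error?: { message: string }
}

// Emit both shapes during the transition period.
function makeRunError(message: string, code?: string): RunErrorChunkLike {
  return { type: 'RUN_ERROR', message, code, error: { message } }
}

// Readers prefer the flat field, as described for StreamProcessor above.
function readErrorMessage(chunk: RunErrorChunkLike): string {
  return chunk.message ?? chunk.error?.message ?? 'Unknown error'
}
```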
…, fix test assertions
…e, fix type errors

- Fix EventType enum vs string literal type errors in test files by relaxing chunk helper type params and adding cast helpers
- Pipe tool-phase events (TOOL_CALL_END, TOOL_CALL_RESULT, CUSTOM) through the middleware pipeline so strip-to-spec and devtools middleware observe all events, not just model-stream events
- Add toolCallName to the TOOL_CALL_END strip set in the strip-to-spec middleware, since the AG-UI spec ToolCallEndEvent only has toolCallId
- Update test assertions to use TOOL_CALL_RESULT (a spec event) instead of checking stripped fields on TOOL_CALL_END
… and strip compliance
- Fix 5 ESLint errors in @tanstack/ai (array-type, no-unnecessary-condition, no-unnecessary-type-assertion, sort-imports)
- Fix an ESLint error in @tanstack/ai-event-client (no-unnecessary-condition)
- Fix string literal vs EventType enum type errors across all 7 adapter packages by adding an asChunk helper that casts event objects to StreamChunk
- Fix @tanstack/ai-client source type errors (chunk.error possibly undefined, runId access on RUN_ERROR events, connection-adapters push calls)
- Fix @tanstack/ai-client and @tanstack/ai-openrouter test type errors
- Fix tool-call-manager tests to use toolCallName instead of the deprecated toolName
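Per the description above, the asChunk helper is a centralized typed cast. A minimal sketch of why it resolves the enum-vs-literal friction, with a stand-in for the real StreamChunk union:

```typescript
// Stand-in for the real StreamChunk union, whose `type` field uses the
// @ag-ui/core EventType enum rather than plain string literals.
type StreamChunkLike = { type: string } & Record<string, unknown>

// Centralized cast so adapters can yield object literals without repeating
// `as unknown as StreamChunk` at every yield site.
function asChunk<T extends { type: string }>(event: T): StreamChunkLike {
  return event as StreamChunkLike
}
```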
Remove assertions for fields (content, finishReason, usage) that the stripToSpec middleware now strips from emitted events. Fix unnecessary nullish coalescing in ai-openai and add type casts in ai-vue tests.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adapters, clients, core activities, and tests were updated to align stream events with the AG‑UI spec: added thread/run IDs, renamed toolName→toolCallName, introduced the REASONING_* lifecycle, added strip-to-spec middleware, strengthened RUN_ERROR payloads, and added explicit StreamChunk casts in many generators/tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Adapter as Adapter (provider)
    participant Middleware as stripToSpecMiddleware
    participant Processor as Stream Processor
    participant UI as UI / Thinking Part

    Adapter->>Middleware: yield REASONING_MESSAGE_CONTENT (yield asChunk(...))
    Middleware->>Processor: stripToSpec(chunk) → spec-compliant chunk
    Processor->>Processor: handleReasoningMessageContentEvent (accumulate delta)
    Processor->>UI: updateThinkingPart / onThinkingUpdate
    Adapter->>Middleware: yield TEXT_MESSAGE_START (transition to text)
    Middleware->>Processor: stripped TEXT_MESSAGE_START
    Processor->>UI: create text message part / finalize thinking
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
🚀 Changeset Version Preview: No changeset entries found. Merging this PR will not cause a version bump for any packages.
View your CI Pipeline Execution ↗ for commit 6d27aca
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
packages/typescript/ai-client/src/connection-adapters.ts (1)
198-210: ⚠️ Potential issue | 🟡 Minor

Include top-level `message` and `code` fields in the synthetic `RUN_ERROR` event.

The synthetic `RUN_ERROR` at lines 200-209 uses only the deprecated nested `error: { message }` format. The `RunErrorEvent` type definition marks this field deprecated in favor of the top-level `message` and `code` fields from `AGUIRunErrorEvent`, and all other RUN_ERROR creations throughout the codebase (stream-generation-result, adapters for OpenAI, Anthropic, Gemini, etc.) include both formats for spec compliance and backward compatibility.

Suggested fix
```diff
 push({
   type: 'RUN_ERROR',
   timestamp: Date.now(),
+  message:
+    err instanceof Error
+      ? err.message
+      : 'Unknown error in connect()',
+  error: {
+    message:
+      err instanceof Error
+        ? err.message
+        : 'Unknown error in connect()',
+  },
 } as unknown as StreamChunk)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-client/src/connection-adapters.ts` around lines 198 - 210, The synthetic RUN_ERROR pushed in the catch block of connect() (the push({...} as unknown as StreamChunk) call) uses only the deprecated nested error field; update that object to also include top-level message and code fields for AGUIRunErrorEvent compatibility: set message to err instanceof Error ? err.message : 'Unknown error in connect()', and set code to (err as any)?.code ?? (err instanceof Error ? err.name : 'UNKNOWN'), while keeping the existing error: { message: ... } nested field for backward compatibility; modify the object built in the catch branch inside connect() in connection-adapters.ts accordingly.

packages/typescript/ai/src/activities/chat/stream/processor.ts (1)
868-889: ⚠️ Potential issue | 🟠 Major

Preserve the deprecated `toolName` alias while the transition is in flight.

This handler now reads only `chunk.toolCallName`. Older chunks or replay fixtures that still emit `toolName` will create tool calls with an undefined name, which then leaks into the UI and the next model-message round-trip.

♻️ Proposed fix
```diff
-      const toolName = chunk.toolCallName
+      const toolName =
+        chunk.toolCallName ??
+        ('toolName' in chunk
+          ? (chunk as { toolName?: string }).toolName
+          : undefined)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/stream/processor.ts` around lines 868 - 889, The code currently uses only chunk.toolCallName which can be undefined for older chunks; update the handler to preserve the deprecated alias by falling back to chunk.toolName when chunk.toolCallName is falsy (e.g., const toolName = chunk.toolCallName ?? chunk.toolName) so that the name is populated everywhere it's used (newToolCall.name, updateToolCallPart payload, and any mapping like this.toolCallToMessage.set and state.toolCalls.set) to avoid leaking undefined names into the UI and subsequent model round-trips.

packages/typescript/ai-gemini/src/adapters/text.ts (2)
526-584: ⚠️ Potential issue | 🟠 Major

Don’t emit `RUN_FINISHED` after the `MAX_TOKENS` error path.

The `MAX_TOKENS` branch yields `RUN_ERROR`, but control still falls through to the shared `RUN_FINISHED` emission below. Direct adapter consumers will see contradictory terminal events for a single response.

💡 Suggested fix
```diff
-      yield asChunk({
-        type: 'RUN_FINISHED',
-        runId,
-        threadId,
-        model,
-        timestamp,
-        finishReason: toolCallMap.size > 0 ? 'tool_calls' : 'stop',
-        usage: chunk.usageMetadata
-          ? {
-              promptTokens: chunk.usageMetadata.promptTokenCount ?? 0,
-              completionTokens: chunk.usageMetadata.candidatesTokenCount ?? 0,
-              totalTokens: chunk.usageMetadata.totalTokenCount ?? 0,
-            }
-          : undefined,
-      })
+      if (finishReason !== FinishReason.MAX_TOKENS) {
+        yield asChunk({
+          type: 'RUN_FINISHED',
+          runId,
+          threadId,
+          model,
+          timestamp,
+          finishReason: toolCallMap.size > 0 ? 'tool_calls' : 'stop',
+          usage: chunk.usageMetadata
+            ? {
+                promptTokens: chunk.usageMetadata.promptTokenCount ?? 0,
+                completionTokens: chunk.usageMetadata.candidatesTokenCount ?? 0,
+                totalTokens: chunk.usageMetadata.totalTokenCount ?? 0,
+              }
+            : undefined,
+        })
+      }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text.ts` around lines 526 - 584, When finishReason === FinishReason.MAX_TOKENS the code yields a RUN_ERROR via asChunk but then continues and also emits a RUN_FINISHED, causing conflicting terminal events; change the control flow so that after yielding the RUN_ERROR for finishReason === FinishReason.MAX_TOKENS you exit the generator branch (e.g., return or otherwise skip the remaining emission logic) so no RUN_FINISHED is yielded for that runId; ensure this change is applied around the finishReason check that yields RUN_ERROR so subsequent logic that emits TEXT_MESSAGE_END, REASONING_END, and the RUN_FINISHED event is not executed for the max-tokens error case.
443-519: ⚠️ Potential issue | 🟠 Major

Avoid emitting `TOOL_CALL_END` twice for `UNEXPECTED_TOOL_CALL`.

This branch emits `TOOL_CALL_END` immediately, then the unconditional `toolCallMap` loop emits another `TOOL_CALL_END` for the same IDs. Middleware and clients will observe duplicate completion events for one call.

💡 Suggested fix
```diff
+      const alreadyClosedToolCalls = new Set<string>()
       if (finishReason === FinishReason.UNEXPECTED_TOOL_CALL) {
         if (chunk.candidates[0].content?.parts) {
           for (const part of chunk.candidates[0].content.parts) {
             const functionCall = part.functionCall
             if (functionCall) {
@@
               yield asChunk({
                 type: 'TOOL_CALL_END',
                 toolCallId,
                 toolCallName: functionCall.name || '',
                 toolName: functionCall.name || '',
                 model,
                 timestamp,
                 input: parsedInput,
               })
+              alreadyClosedToolCalls.add(toolCallId)
             }
           }
         }
       }

       // Emit TOOL_CALL_END for all tracked tool calls
       for (const [toolCallId, toolCallData] of toolCallMap.entries()) {
+        if (alreadyClosedToolCalls.has(toolCallId)) {
+          continue
+        }
         let parsedInput: unknown = {}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text.ts` around lines 443 - 519, The UNEXPECTED_TOOL_CALL branch currently emits a TOOL_CALL_END immediately and then the unconditional for-loop over toolCallMap emits TOOL_CALL_END again, causing duplicate completion events; fix this by removing the immediate TOOL_CALL_END emission inside the FinishReason.UNEXPECTED_TOOL_CALL handling (keep the TOOL_CALL_START emission and ensure toolCallMap.set(...) stores name/args/index/started) so the final for-loop over toolCallMap (that yields TOOL_CALL_END) is the single place that emits tool completion events; update references in this block (finishReason, FinishReason.UNEXPECTED_TOOL_CALL, chunk.candidates[0].content.parts, functionCall, toolCallMap, nextToolIndex, asChunk) accordingly.packages/typescript/ai-openai/src/adapters/text.ts (1)
291-322: ⚠️ Potential issue | 🟠 Major

Add `REASONING_MESSAGE_CONTENT` emission for `reasoning_text` content parts in `handleContentPart()`.

The `reasoning_text` branch currently only emits `STEP_FINISHED`, missing the spec-compliant `REASONING_MESSAGE_CONTENT` event. When OpenAI delivers reasoning via `response.content_part.added` or `response.content_part.done` (lines 630, 651), consumers never receive the content deltas, breaking the reasoning stream.

Convert `handleContentPart()` to a generator function that yields both `REASONING_MESSAGE_CONTENT` and `STEP_FINISHED` for reasoning content, and update the call sites to use `yield*` delegation.
Suggested fix

```diff
-    const handleContentPart = (
+    const handleContentPart = function* (
       contentPart:
         | OpenAI_SDK.Responses.ResponseOutputText
         | OpenAI_SDK.Responses.ResponseOutputRefusal
         | OpenAI_SDK.Responses.ResponseContentPartAddedEvent.ReasoningText,
-    ): StreamChunk => {
+    ): Generator<StreamChunk> {
       if (contentPart.type === 'output_text') {
         accumulatedContent += contentPart.text
-        return asChunk({
+        yield asChunk({
           type: 'TEXT_MESSAGE_CONTENT',
           messageId,
           model: model || options.model,
           timestamp,
           delta: contentPart.text,
           content: accumulatedContent,
         })
+        return
       }
       if (contentPart.type === 'reasoning_text') {
         accumulatedReasoning += contentPart.text
         const currentStepId = stepId || genId()
+        yield asChunk({
+          type: 'REASONING_MESSAGE_CONTENT',
+          messageId: reasoningMessageId!,
+          delta: contentPart.text,
+          model: model || options.model,
+          timestamp,
+        })
-        return asChunk({
+        yield asChunk({
           type: 'STEP_FINISHED',
           stepName: currentStepId,
           stepId: currentStepId,
           model: model || options.model,
           timestamp,
           delta: contentPart.text,
           content: accumulatedReasoning,
         })
+        return
       }
-      return asChunk({
+      yield asChunk({
         type: 'RUN_ERROR',
         runId,
         message: contentPart.refusal,
         model: model || options.model,
         timestamp,
         error: {
           message: contentPart.refusal,
         },
       })
     }
@@
-          yield handleContentPart(contentPart)
+          yield* handleContentPart(contentPart)
@@
-          yield handleContentPart(contentPart)
+          yield* handleContentPart(contentPart)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 291 - 322, handleContentPart currently returns a single StreamChunk and for reasoning_text only emits STEP_FINISHED, so deltas aren't streamed; convert handleContentPart into a generator (function* handleContentPart(...)) that yields a REASONING_MESSAGE_CONTENT chunk (including messageId, model, timestamp, delta and accumulatedReasoning content) for each reasoning_text delta and then yields the STEP_FINISHED chunk (with stepId/stepName, model, timestamp, delta and accumulatedReasoning) when appropriate; update all call sites that invoked handleContentPart (where response.content_part.added/ done are processed) to use yield* handleContentPart(...) so both events are emitted to consumers.
🧹 Nitpick comments (7)
packages/typescript/ai-grok/src/adapters/text.ts (1)
124-128: Minor: Inconsistent error logging prefix.

The error logging here uses bare `console.error` calls without an adapter prefix, while line 395 uses the `[Grok Adapter]` prefix. Consider standardizing for easier log filtering.

🔧 Suggested fix
```diff
-    console.error('>>> chatStream: Fatal error during response creation <<<')
-    console.error('>>> Error message:', err.message)
-    console.error('>>> Error stack:', err.stack)
-    console.error('>>> Full error:', err)
+    console.error('[Grok Adapter] chatStream: Fatal error during response creation')
+    console.error('[Grok Adapter] Error message:', err.message)
+    console.error('[Grok Adapter] Error stack:', err.stack)
+    console.error('[Grok Adapter] Full error:', err)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/adapters/text.ts` around lines 124 - 128, The console.error calls in the chatStream error handler are missing the adapter prefix used elsewhere; update the error logging inside the chatStream (or the function/method handling "Fatal error during response creation") to use the same "[Grok Adapter]" prefix as other logs (e.g., the log at line ~395) and ensure all three error messages (message, stack, full error) include that prefix for consistent filtering and traceability.

packages/typescript/ai/src/types.ts (1)
885-897: Make `toolName` optional to avoid requiring both `toolName` and `toolCallName` during the transition.

`ToolCallStartEvent` extends `AGUIToolCallStartEvent`, which requires `toolCallName` as a required field. The current code also makes `toolName` required (marked as deprecated). This means adapters must provide both fields during the backward-compatibility transition period. Since `toolCallName` is already required from the parent interface, consider making `toolName` optional—users migrating to the new spec only need to provide `toolCallName`, while legacy code can still pass `toolName` if needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/types.ts` around lines 885 - 897, The ToolCallStartEvent interface currently declares the deprecated field toolName as required while it already extends AGUIToolCallStartEvent which mandates toolCallName; change toolName to an optional property (toolName?: string) in the ToolCallStartEvent declaration so adapters can supply only toolCallName during migration while still accepting legacy toolName when present; update the interface comment to reflect deprecation and optionality for clarity.

packages/typescript/ai-openrouter/src/adapters/summarize.ts (1)
88-91: Consider providing a fallback message when `chunk.error` is undefined.

The optional chaining `chunk.error?.message` is a good defensive change, but if `chunk.error` is undefined, the thrown error will read "Error during summarization: undefined". Given the PR's change to flatten `RUN_ERROR` to top-level `message`/`code`, consider also checking `chunk.message`:

Proposed improvement
```diff
   // AG-UI RUN_ERROR event
   if (chunk.type === 'RUN_ERROR') {
-    throw new Error(`Error during summarization: ${chunk.error?.message}`)
+    throw new Error(`Error during summarization: ${chunk.message || chunk.error?.message || 'Unknown error'}`)
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openrouter/src/adapters/summarize.ts` around lines 88 - 91, When handling the AG-UI RUN_ERROR branch (the if (chunk.type === 'RUN_ERROR') block that currently throws new Error(`Error during summarization: ${chunk.error?.message}`)), include a safe fallback so the thrown message isn't "undefined": construct the error message from chunk.error?.message first, then fallback to chunk.message, then a generic string like "Unknown error during summarization" so the throw uses a real message; update that throw site accordingly.

packages/typescript/ai-client/tests/generation-client.test.ts (1)
166-173: Cover the flattened `RUN_ERROR` shape in this fixture.

This still only exercises the legacy nested `error.message` path. If `GenerationClient` regresses on the new top-level `message`/`code` contract, this test will keep passing.

🧪 Suggested fixture update
```diff
 const connection = createMockConnection([
   asChunk({ type: 'RUN_STARTED', runId: 'run-1', timestamp: Date.now() }),
   asChunk({
     type: 'RUN_ERROR',
     runId: 'run-1',
+    message: 'Generation failed',
     error: { message: 'Generation failed' },
     timestamp: Date.now(),
   }),
 ])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-client/tests/generation-client.test.ts` around lines 166 - 173, The RUN_ERROR test fixture only covers the legacy nested error shape; update the asChunk call in the createMockConnection fixture used by generation-client.test.ts (the RUN_ERROR chunk created via asChunk) to include the flattened top-level fields (e.g., message and code) in addition to the existing nested error.message so the test exercises both shapes and will catch regressions in GenerationClient handling of the new top-level message/code contract.

packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts (1)
375-377: Assert the flattened `RUN_ERROR` field here, not just the legacy fallback.

The new optional-chained check still only validates `error?.message`. With the flattened event shape introduced in this PR, these assertions should prefer `chunk.message` (or at least fall back to it) so the adapter contract is actually covered.

🧪 Suggested assertion update
```diff
-      expect(errorChunk.error?.message).toBe('Invalid API key')
+      expect(errorChunk.message ?? errorChunk.error?.message).toBe(
+        'Invalid API key',
+      )
…
-      expect(runErrorChunk.error?.message).toBe('API key invalid')
+      expect(runErrorChunk.message ?? runErrorChunk.error?.message).toBe(
+        'API key invalid',
+      )
```

Also applies to: 668-670
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts` around lines 375 - 377, The test currently asserts only the legacy nested error field (errorChunk.error?.message) for RUN_ERROR; update the assertion to prefer the flattened event shape by asserting errorChunk.message first and falling back to errorChunk.error?.message if absent (i.e., expect((errorChunk.message ?? errorChunk.error?.message)).toBe('Invalid API key')), and apply the same change to the other occurrence around lines 668-670 so the adapter contract covers both flattened and legacy shapes.

packages/typescript/ai/tests/middleware.test.ts (1)
1424-1439: Assert phases for the actual post-tool chunks here.

This bucket check is loose enough that `TOOL_CALL_RESULT` could regress back to `modelStream` and the test would still pass as long as some other chunk lands in `afterTools`. Since this PR is about post-tool routing, pin the phase on `TOOL_CALL_END`/`TOOL_CALL_RESULT` explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/middleware.test.ts` around lines 1424 - 1439, The test currently only checks that some onChunk phases are 'modelStream' and some are 'afterTools', which lets TOOL_CALL_RESULT regress; instead, explicitly locate events with type TOOL_CALL_RESULT and/or TOOL_CALL_END from phaseLog (filter where e.type === 'TOOL_CALL_RESULT' || e.type === 'TOOL_CALL_END') and assert that every such event has e.phase === 'afterTools' and that at least one such event exists; keep the existing checks for modelStream chunks but add this targeted assertion against TOOL_CALL_RESULT/TOOL_CALL_END to pin post-tool routing.

packages/typescript/ai-client/tests/test-utils.ts (1)
325-347: `createThinkingChunks()` still skips the new `REASONING_*` path.

This helper only emits `STEP_FINISHED`, so the chat/client suites that use it never exercise the spec event family added in this PR. Keeping the legacy alias is fine, but this should generate reasoning fixtures too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-client/tests/test-utils.ts` around lines 325 - 347, createThinkingChunks currently only emits legacy STEP_FINISHED events so tests never exercise the new REASONING_* event family; update the function (inside the thinkingContent loop and where the final STEP_FINISHED is pushed) to also push corresponding REASONING_* fixtures (e.g. a per-character REASONING_STEP_DELTA-like event with delta and content and a final REASONING_STEP_FINISHED-like event) using the same runId, stepId, stepName, model, timestamp, delta and accumulatedThinking values so both legacy and new spec paths are covered in tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-anthropic/src/adapters/text.ts`:
- Around line 547-549: The adapter currently always generates a fresh runId with
const runId = genId(); change this to honor caller-provided options.runId by
using options.runId || genId() so externally passed run IDs are preserved; keep
the existing generation behavior for threadId and messageId (threadId uses
options.threadId || genId() and messageId stays genId()) and ensure any
downstream uses of runId (e.g., where runId is passed to events or logs)
continue to reference the updated runId variable.
In `@packages/typescript/ai-client/src/realtime-client.ts`:
- Around line 410-413: The handler for the 'tool_call' event currently only
reads toolCallName and looks up the tool via this.clientTools.get(toolCallName),
causing older emitters that send the deprecated toolName to be ignored; update
the destructuring in the 'tool_call' listener to accept both toolCallName and
the deprecated toolName (e.g., async ({ toolCallId, toolCallName, toolName,
input }) => ...) and resolve the lookup with a fallback (use toolCallName ||
toolName) when calling this.clientTools.get so older clients continue to work
during the transition.
In `@packages/typescript/ai-gemini/src/adapters/text.ts`:
- Around line 222-224: The adapter unconditionally generates runId with const
runId = generateId(this.name) instead of honoring caller-provided options.runId;
change the logic in the text adapter so runId uses options.runId when present
(e.g., const runId = options.runId || generateId(this.name)), ensuring
downstream events like RUN_STARTED/RUN_FINISHED use the provided id; update any
places referencing runId in this module (e.g., messageId/threadId generation
code nearby) to remain consistent and keep generateId(this.name) only as the
fallback.
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 265-267: The adapter always overwrites the caller-provided runId
by unconditionally setting const runId = genId(); change this to honor
options.runId by using options.runId when present (e.g., set runId =
options.runId || genId()) so the RUN_STARTED/RUN_FINISHED correlation override
from chat/index.ts is preserved; update the const runId declaration near genId()
usage in this file (keep threadId/messageId generation as-is).
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 308-324: processChoice() currently closes reasoning (yielding
REASONING_MESSAGE_END/REASONING_END) as soon as delta.content is seen, which
allows later reasoningDetails in the same chunk to emit
REASONING_MESSAGE_CONTENT out of order; update the handling in text processing
(around the aguiState checks) to first check delta.reasoningDetails and fully
process/flush any reasoning-related deltas for aguiState.reasoningMessageId
before handling delta.content, or alternatively mark a hard stop (e.g., set
aguiState.hasClosedReasoning) before emitting any further reasoning and ignore
subsequent reasoningDetails once content has started; ensure the change is
applied to the same branching logic referenced (aguiState, delta.content,
delta.reasoningDetails, REASONING_MESSAGE_END/REASONING_MESSAGE_CONTENT) and
mirrored for the other similar blocks (the regions noted at lines ~352-399 and
~413-458).
In `@packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts`:
- Around line 720-727: Replace the indexOf casts by using findIndex on the
chunks array: instead of computing
textStartIndex/textContentIndex/textEndIndex/runFinishedIndex via
eventTypes.indexOf(... as any), call chunks.findIndex(c => c.type ===
'TEXT_MESSAGE_START' | 'TEXT_MESSAGE_CONTENT' | 'TEXT_MESSAGE_END' |
'RUN_FINISHED') (one findIndex per event type) so the tests use the existing
StreamChunk union typing; ensure you keep the same expect assertions
(toBeGreaterThan checks) and reference the same variables textStartIndex,
textContentIndex, textEndIndex, runFinishedIndex.
In `@packages/typescript/ai-vue/tests/use-generation.test.ts`:
- Line 29: The test uses unsafe casts "as unknown as Array<StreamChunk>" and an
outdated RUN_ERROR payload shape; replace those double-casts with TypeScript's
structural check using "satisfies Array<StreamChunk>" for the arrays (locate the
arrays typed as StreamChunk in use-generation.test.ts) and update any RUN_ERROR
event payloads (the test constant/name RUN_ERROR) to use the spec-compliant
top-level "message" field instead of nested "error: { message }" so the payload
matches the current schema.
In `@packages/typescript/ai/src/activities/chat/stream/processor.ts`:
- Around line 498-507: The processor currently appends the same reasoning text
twice because adapters emit both legacy STEP_* deltas and new REASONING_*
deltas; modify the switch handling in the stream processor so that once any
REASONING_* event (e.g., in handleReasoningMessageContentEvent) is observed you
stop appending legacy STEP_* deltas to thinkingContent (and likewise prevent
STEP_FINISHED from mutating the same buffer); implement this by adding a boolean
flag (e.g., sawReasoningEvents) on the processor state, set it when handling any
REASONING_* case, and branch the STEP_* / STEP_FINISHED handlers to no-op if
that flag is true; apply the same guard in the other symmetric handler block
referenced (the second switch handling the same message flow) so
getResult().thinking only comes from one source of truth per message.
In `@packages/typescript/ai/src/activities/chat/tools/tool-calls.ts`:
- Around line 94-101: In addToolCallStartEvent (ToolCallManager) preserve the
deprecated fallback by using event.toolCallName ?? event.toolName when setting
the function.name so older callers aren't stored with an empty name; update the
creation of the map entry in addToolCallStartEvent (and any related uses that
read the stored name, e.g., getToolCalls) to prefer toolCallName but fall back
to toolName to ensure legacy tool calls are retained.
In `@packages/typescript/ai/src/activities/generateVideo/index.ts`:
- Around line 347-357: The RUN_ERROR yield in the catch block of generateVideo
(the yield returning type: 'RUN_ERROR') is missing runId and threadId, so add
the existing runId and threadId identifiers to the top-level object and also
include them inside the nested error object (mirroring
RUN_STARTED/RUN_FINISHED), using the same runId and threadId variables used
earlier in the function (e.g., the runId/threadId captured for the run) so
consumers can correlate the failed run/thread; ensure the produced object still
matches StreamChunk shape expected by callers.
In `@packages/typescript/ai/src/activities/stream-generation-result.ts`:
- Around line 56-66: The RUN_ERROR chunk is missing runId and threadId, so
failed streams can't be correlated with the corresponding run; update the
yielded error object (the StreamChunk produced for EventType.RUN_ERROR) to
include the same runId and threadId fields the success path emits (and if you
keep the deprecated nested error object, include them there too) so downstream
consumers can key off runId/threadId; locate the yield that returns
EventType.RUN_ERROR and add runId and threadId (sourced from the same variables
used in the success path).
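The corrected chunk could look like this; the `makeRunError` helper is illustrative of the contract (flat fields plus correlation ids, with the deprecated nested form mirrored), not the exact code:

```typescript
// Sketch of the corrected error chunk, assuming runId/threadId are in scope
// from the same variables the success path uses.
function makeRunError(runId: string, threadId: string, err: Error) {
  return {
    type: 'RUN_ERROR' as const,
    runId,
    threadId,
    message: err.message,
    // Deprecated nested form kept for backward compatibility, with the same ids.
    error: { message: err.message, runId, threadId },
  }
}

const chunk = makeRunError('run-1', 'thread-1', new Error('boom'))
```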
In `@packages/typescript/ai/tests/chat.test.ts`:
- Around line 1292-1319: The tests currently only assert that RUN_STARTED and
RUN_FINISHED events include a threadId, but must assert the propagated value;
update the assertions in the test for the chat() stream so that the threadId on
both runStarted and runFinished chunks equals the caller-supplied 'my-thread-id'
(replace the toBeDefined checks with exact equality checks against
'my-thread-id' for the chunks found via chunks.find where c.type ===
'RUN_STARTED' and c.type === 'RUN_FINISHED').
In `@packages/typescript/ai/tests/custom-events-integration.test.ts`:
- Around line 1-8: Move the external dependency import for "z" (zod) to appear
before any local imports; specifically, reorder the imports so "import { z }
from 'zod'" comes before importing local modules like toolDefinition and
StreamProcessor in the test file, ensuring import order places third-party
modules before project files.
In `@packages/typescript/ai/tests/stream-generation.test.ts`:
- Around line 97-100: The tests currently assert the deprecated nested error
shape (e.g., chunks[i]!.error!.message) for RUN_ERROR; update each assertion to
validate the new flattened top-level contract by checking chunk.message (and
chunk.code if present) instead of chunk.error.message (and chunk.error.code).
Locate the test assertions referencing RUN_ERROR (e.g., the occurrences around
chunks[1], and the other instances noted) and replace checks like
chunks[x]!.error!.message / .code with chunks[x]!.message / .code (or
expect(chunks[x]!.message).toBe(...)), preserving the same expectations but
targeting the top-level properties.
In `@packages/typescript/ai/tests/stream-to-response.test.ts`:
- Around line 227-230: The async generator errorStream currently aborts and
throws but never yields (violating ESLint require-yield) and uses a generic
return type; change its signature to AsyncGenerator<StreamChunk> and add a no-op
yield like `yield* []` before/after abortController.abort() (keeping the throw
for semantics) so the generator yields at least once while preserving the test
behavior that abort prevents the error from being emitted; reference the
errorStream function and StreamChunk type and the existing abortController
variable when making this change.
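Under those assumptions, the reworked fixture might look like this (`StreamChunk` is narrowed to a stub here):

```typescript
// Sketch of the reworked test fixture, assuming a narrowed StreamChunk stub.
type StreamChunk = { type: string }

const abortController = new AbortController()

async function* errorStream(): AsyncGenerator<StreamChunk> {
  yield* [] // no-op yield: satisfies ESLint's require-yield without emitting
  abortController.abort()
  throw new Error('stream failed') // kept for semantics
}

// Driving the generator: the abort fires before the error surfaces, and no
// chunks are ever observed by the consumer.
const consumed: Promise<string> = (async () => {
  try {
    for await (const _chunk of errorStream()) {
      return 'unexpected chunk'
    }
    return 'no error'
  } catch (e) {
    return (e as Error).message
  }
})()
```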
In `@packages/typescript/ai/tests/strip-to-spec-middleware.test.ts`:
- Line 1: The named imports from vitest are not alphabetized; reorder the named
import list in the top-level import so the identifiers are alphabetical
(describe, expect, it) — update the import that currently lists describe, it,
expect to use describe, expect, it to satisfy the import sorting rule in
strip-to-spec-middleware.test.ts.
In `@packages/typescript/ai/tests/tool-call-manager.test.ts`:
- Around line 7-14: Reorder the imported type specifiers in the import block so
they are alphabetized: RunFinishedEvent, Tool, ToolCall, ToolCallArgsEvent,
ToolCallEndEvent, ToolCallStartEvent; update the import in the file (the import
that lists RunFinishedEvent, Tool, ToolCall, ToolCallStartEvent,
ToolCallArgsEvent, ToolCallEndEvent) to that alphabetical order to satisfy the
sort-imports rule.
---
Outside diff comments:
In `@packages/typescript/ai-client/src/connection-adapters.ts`:
- Around line 198-210: The synthetic RUN_ERROR pushed in the catch block of
connect() (the push({...} as unknown as StreamChunk) call) uses only the
deprecated nested error field; update that object to also include top-level
message and code fields for AGUIRunErrorEvent compatibility: set message to err
instanceof Error ? err.message : 'Unknown error in connect()', and set code to
(err as any)?.code ?? (err instanceof Error ? err.name : 'UNKNOWN'), while
keeping the existing error: { message: ... } nested field for backward
compatibility; modify the object built in the catch branch inside connect() in
connection-adapters.ts accordingly.
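A sketch of the dual-shape object under the assumptions above (the surrounding `connect()` plumbing is elided):

```typescript
// Sketch of the synthetic RUN_ERROR built in connect()'s catch branch:
// flattened message/code for AGUIRunErrorEvent compatibility, plus the
// deprecated nested error field for older consumers.
function syntheticRunError(err: unknown) {
  const message =
    err instanceof Error ? err.message : 'Unknown error in connect()'
  return {
    type: 'RUN_ERROR' as const,
    message,
    code:
      (err as { code?: string })?.code ??
      (err instanceof Error ? err.name : 'UNKNOWN'),
    error: { message }, // kept for backward compatibility
  }
}

const fromError = syntheticRunError(new TypeError('bad url'))
const fromUnknown = syntheticRunError('oops')
```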
In `@packages/typescript/ai-gemini/src/adapters/text.ts`:
- Around line 526-584: When finishReason === FinishReason.MAX_TOKENS the code
yields a RUN_ERROR via asChunk but then continues and also emits a RUN_FINISHED,
causing conflicting terminal events; change the control flow so that after
yielding the RUN_ERROR for finishReason === FinishReason.MAX_TOKENS you exit the
generator branch (e.g., return or otherwise skip the remaining emission logic)
so no RUN_FINISHED is yielded for that runId; ensure this change is applied
around the finishReason check that yields RUN_ERROR so subsequent logic that
emits TEXT_MESSAGE_END, REASONING_END, and the RUN_FINISHED event is not
executed for the max-tokens error case.
- Around line 443-519: The UNEXPECTED_TOOL_CALL branch currently emits a
TOOL_CALL_END immediately and then the unconditional for-loop over toolCallMap
emits TOOL_CALL_END again, causing duplicate completion events; fix this by
removing the immediate TOOL_CALL_END emission inside the
FinishReason.UNEXPECTED_TOOL_CALL handling (keep the TOOL_CALL_START emission
and ensure toolCallMap.set(...) stores name/args/index/started) so the final
for-loop over toolCallMap (that yields TOOL_CALL_END) is the single place that
emits tool completion events; update references in this block (finishReason,
FinishReason.UNEXPECTED_TOOL_CALL, chunk.candidates[0].content.parts,
functionCall, toolCallMap, nextToolIndex, asChunk) accordingly.
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 291-322: handleContentPart currently returns a single StreamChunk
and for reasoning_text only emits STEP_FINISHED, so deltas aren't streamed;
convert handleContentPart into a generator (function* handleContentPart(...))
that yields a REASONING_MESSAGE_CONTENT chunk (including messageId, model,
timestamp, delta and accumulatedReasoning content) for each reasoning_text delta
and then yields the STEP_FINISHED chunk (with stepId/stepName, model, timestamp,
delta and accumulatedReasoning) when appropriate; update all call sites that
invoked handleContentPart (where response.content_part.added/ done are
processed) to use yield* handleContentPart(...) so both events are emitted to
consumers.
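The return-to-generator conversion might look like this, with the chunk fields trimmed for illustration:

```typescript
// Sketch of converting handleContentPart from a single-chunk return into a
// generator so reasoning deltas are streamed, not just the final STEP_FINISHED.
type Chunk = { type: string; delta?: string; content?: string }

function* handleContentPart(
  delta: string,
  accumulatedReasoning: string,
  isDone: boolean,
): Generator<Chunk> {
  // Stream every reasoning_text delta as it arrives.
  yield {
    type: 'REASONING_MESSAGE_CONTENT',
    delta,
    content: accumulatedReasoning,
  }
  if (isDone) {
    yield { type: 'STEP_FINISHED', content: accumulatedReasoning }
  }
}

// Call sites switch from `yield handleContentPart(...)` to `yield*`:
const events = [
  ...handleContentPart('Hel', 'Hel', false),
  ...handleContentPart('lo', 'Hello', true),
].map((c) => c.type)
```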
In `@packages/typescript/ai/src/activities/chat/stream/processor.ts`:
- Around line 868-889: The code currently uses only chunk.toolCallName which can
be undefined for older chunks; update the handler to preserve the deprecated
alias by falling back to chunk.toolName when chunk.toolCallName is falsy (e.g.,
const toolName = chunk.toolCallName ?? chunk.toolName) so that the name is
populated everywhere it's used (newToolCall.name, updateToolCallPart payload,
and any mapping like this.toolCallToMessage.set and state.toolCalls.set) to
avoid leaking undefined names into the UI and subsequent model round-trips.
---
Nitpick comments:
In `@packages/typescript/ai-client/tests/generation-client.test.ts`:
- Around line 166-173: The RUN_ERROR test fixture only covers the legacy nested
error shape; update the asChunk call in the createMockConnection fixture used by
generation-client.test.ts (the RUN_ERROR chunk created via asChunk) to include
the flattened top-level fields (e.g., message and code) in addition to the
existing nested error.message so the test exercises both shapes and will catch
regressions in GenerationClient handling of the new top-level message/code
contract.
In `@packages/typescript/ai-client/tests/test-utils.ts`:
- Around line 325-347: createThinkingChunks currently only emits legacy
STEP_FINISHED events so tests never exercise the new REASONING_* event family;
update the function (inside the thinkingContent loop and where the final
STEP_FINISHED is pushed) to also push corresponding REASONING_* fixtures (e.g. a
per-character REASONING_STEP_DELTA-like event with delta and content and a final
REASONING_STEP_FINISHED-like event) using the same runId, stepId, stepName,
model, timestamp, delta and accumulatedThinking values so both legacy and new
spec paths are covered in tests.
In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Around line 124-128: The console.error calls in the chatStream error handler
are missing the adapter prefix used elsewhere; update the error logging inside
the chatStream (or the function/method handling "Fatal error during response
creation") to use the same "[Grok Adapter]" prefix as other logs (e.g., the log
at line ~395) and ensure all three error messages (message, stack, full error)
include that prefix for consistent filtering and traceability.
In `@packages/typescript/ai-openrouter/src/adapters/summarize.ts`:
- Around line 88-91: When handling the AG-UI RUN_ERROR branch (the if
(chunk.type === 'RUN_ERROR') block that currently throws new Error(`Error during
summarization: ${chunk.error?.message}`)), include a safe fallback so the thrown
message isn't "undefined": construct the error message from chunk.error?.message
first, then fallback to chunk.message, then a generic string like "Unknown error
during summarization" so the throw uses a real message; update that throw site
accordingly.
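The fallback chain reduces to a small helper; the chunk shape is simplified:

```typescript
// Sketch of the safe error-message fallback for the RUN_ERROR branch:
// nested message first, then the flat field, then a generic string.
function summarizeErrorMessage(chunk: {
  message?: string
  error?: { message?: string }
}): string {
  return (
    chunk.error?.message ??
    chunk.message ??
    'Unknown error during summarization'
  )
}
```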
In `@packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts`:
- Around line 375-377: The test currently asserts only the legacy nested error
field (errorChunk.error?.message) for RUN_ERROR; update the assertion to prefer
the flattened event shape by asserting errorChunk.message first and falling back
to errorChunk.error?.message if absent (i.e., expect((errorChunk.message ??
errorChunk.error?.message)).toBe('Invalid API key')), and apply the same change
to the other occurrence around lines 668-670 so the adapter contract covers both
flattened and legacy shapes.
In `@packages/typescript/ai/src/types.ts`:
- Around line 885-897: The ToolCallStartEvent interface currently declares the
deprecated field toolName as required while it already extends
AGUIToolCallStartEvent which mandates toolCallName; change toolName to an
optional property (toolName?: string) in the ToolCallStartEvent declaration so
adapters can supply only toolCallName during migration while still accepting
legacy toolName when present; update the interface comment to reflect
deprecation and optionality for clarity.
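A sketch of the relaxed declaration; `AGUIToolCallStartEvent` is stubbed locally here rather than imported from `@ag-ui/core`:

```typescript
// Local stub standing in for the @ag-ui/core base event.
interface AGUIToolCallStartEvent {
  type: 'TOOL_CALL_START'
  toolCallId: string
  toolCallName: string
}

interface ToolCallStartEvent extends AGUIToolCallStartEvent {
  /** @deprecated Use toolCallName; kept optional for legacy emitters. */
  toolName?: string
}

// Spec-only payloads now type-check without the deprecated alias:
const specOnly: ToolCallStartEvent = {
  type: 'TOOL_CALL_START',
  toolCallId: 'tc1',
  toolCallName: 'search',
}
```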
In `@packages/typescript/ai/tests/middleware.test.ts`:
- Around line 1424-1439: The test currently only checks that some onChunk phases
are 'modelStream' and some are 'afterTools', which lets TOOL_CALL_RESULT
regress; instead, explicitly locate events with type TOOL_CALL_RESULT and/or
TOOL_CALL_END from phaseLog (filter where e.type === 'TOOL_CALL_RESULT' ||
e.type === 'TOOL_CALL_END') and assert that every such event has e.phase ===
'afterTools' and that at least one such event exists; keep the existing checks
for modelStream chunks but add this targeted assertion against
TOOL_CALL_RESULT/TOOL_CALL_END to pin post-tool routing.
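The targeted assertion can be factored as a predicate over the phase log; the names (`PhaseEntry`, `checkToolPhases`) are hypothetical:

```typescript
// Sketch of the targeted check: filter the phase log to tool-completion
// events and require both that at least one exists and that every one
// carries the afterTools phase.
type PhaseEntry = { type: string; phase: string }

function checkToolPhases(phaseLog: PhaseEntry[]): boolean {
  const toolEvents = phaseLog.filter(
    (e) => e.type === 'TOOL_CALL_RESULT' || e.type === 'TOOL_CALL_END',
  )
  return (
    toolEvents.length > 0 && toolEvents.every((e) => e.phase === 'afterTools')
  )
}

const passing = checkToolPhases([
  { type: 'TEXT_MESSAGE_CONTENT', phase: 'modelStream' },
  { type: 'TOOL_CALL_END', phase: 'afterTools' },
  { type: 'TOOL_CALL_RESULT', phase: 'afterTools' },
])
const failing = checkToolPhases([
  { type: 'TOOL_CALL_RESULT', phase: 'modelStream' },
])
```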
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5f97c503-dec7-474c-b4a4-a8ff09a60e9c
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (47)
- packages/typescript/ai-anthropic/src/adapters/summarize.ts
- packages/typescript/ai-anthropic/src/adapters/text.ts
- packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts
- packages/typescript/ai-client/src/chat-client.ts
- packages/typescript/ai-client/src/connection-adapters.ts
- packages/typescript/ai-client/src/generation-client.ts
- packages/typescript/ai-client/src/realtime-client.ts
- packages/typescript/ai-client/src/video-generation-client.ts
- packages/typescript/ai-client/tests/chat-client-abort.test.ts
- packages/typescript/ai-client/tests/chat-client.test.ts
- packages/typescript/ai-client/tests/connection-adapters.test.ts
- packages/typescript/ai-client/tests/generation-client.test.ts
- packages/typescript/ai-client/tests/test-utils.ts
- packages/typescript/ai-client/tests/video-generation-client.test.ts
- packages/typescript/ai-event-client/src/devtools-middleware.ts
- packages/typescript/ai-gemini/src/adapters/summarize.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-ollama/src/adapters/summarize.ts
- packages/typescript/ai-ollama/src/adapters/text.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/src/realtime/adapter.ts
- packages/typescript/ai-openrouter/src/adapters/summarize.ts
- packages/typescript/ai-openrouter/src/adapters/text.ts
- packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
- packages/typescript/ai-vue/tests/use-generation.test.ts
- packages/typescript/ai/package.json
- packages/typescript/ai/src/activities/chat/index.ts
- packages/typescript/ai/src/activities/chat/stream/processor.ts
- packages/typescript/ai/src/activities/chat/tools/tool-calls.ts
- packages/typescript/ai/src/activities/generateVideo/index.ts
- packages/typescript/ai/src/activities/stream-generation-result.ts
- packages/typescript/ai/src/realtime/types.ts
- packages/typescript/ai/src/strip-to-spec-middleware.ts
- packages/typescript/ai/src/types.ts
- packages/typescript/ai/tests/chat.test.ts
- packages/typescript/ai/tests/custom-events-integration.test.ts
- packages/typescript/ai/tests/extend-adapter.test.ts
- packages/typescript/ai/tests/middleware.test.ts
- packages/typescript/ai/tests/stream-generation.test.ts
- packages/typescript/ai/tests/stream-processor.test.ts
- packages/typescript/ai/tests/stream-to-response.test.ts
- packages/typescript/ai/tests/strip-to-spec-middleware.test.ts
- packages/typescript/ai/tests/test-utils.ts
- packages/typescript/ai/tests/tool-call-manager.test.ts
…ix reasoning ordering
- Honor caller-provided runId/threadId in all 7 adapters using ?? fallback
- Prevent duplicate thinking content from dual STEP_FINISHED/REASONING_MESSAGE_CONTENT events
- Assert exact threadId value in chat test instead of just toBeDefined
- Add runId/threadId to RUN_ERROR in generateVideo and stream-generation-result
- Move reasoning processing before content processing in OpenRouter adapter
🧹 Nitpick comments (4)
packages/typescript/ai-openai/src/adapters/text.ts (2)
350-358: `RUN_ERROR` events are missing `threadId` for consistency.

All four `RUN_ERROR` emission points omit `threadId` while `RUN_STARTED` and `RUN_FINISHED` include it. For consistent correlation across the run lifecycle, consider adding `threadId` to error events.

💡 Suggested fix

```diff
 yield asChunk({
   type: 'RUN_ERROR',
   runId,
+  threadId,
   message: chunk.response.error.message,
   code: chunk.response.error.code,
```

Apply similarly to all four `RUN_ERROR` blocks.

Also applies to: 363-372, 773-784, 796-807
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 350 - 358, The RUN_ERROR emission blocks (the asChunk calls that emit type: 'RUN_ERROR' in text.ts) are missing threadId causing inconsistency with RUN_STARTED and RUN_FINISHED; update each RUN_ERROR payload to include the threadId property (alongside runId, message, code, model, timestamp, error) so the emitted chunk can be correlated by thread — locate the four asChunk(... type: 'RUN_ERROR' ...) sites (around the shown block and the ones at the other ranges: 363-372, 773-784, 796-807) and add threadId to each payload.
486-492: Potential `stepId` inconsistency in STEP_FINISHED emission.

When `stepId` is null, `stepId || genId()` is evaluated twice (lines 486 and 487), which could generate two different IDs for the `stepName` and `stepId` fields within the same event. This is unlikely to cause issues in practice since `stepId` should always be set by this point, but defensively consider using a single assignment.

💡 Suggested fix

```diff
+ const currentStepId = stepId || genId()
  // Legacy STEP event
  yield asChunk({
    type: 'STEP_FINISHED',
-   stepName: stepId || genId(),
-   stepId: stepId || genId(),
+   stepName: currentStepId,
+   stepId: currentStepId,
    model: model || options.model,
```

This pattern is already used in `handleContentPart` (lines 311-312); apply it consistently here and in the similar block at lines 553-559.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 486 - 492, the STEP_FINISHED event object currently evaluates stepId || genId() twice, which can produce two different IDs; fix by computing a single local id (e.g., const resolvedStepId = stepId || genId()) and use resolvedStepId for both stepName and stepId when constructing the object in the block that emits STEP_FINISHED (mirror the approach used in handleContentPart), and apply the same single-assignment pattern to the similar block around lines 553-559 so both fields are consistent.

packages/typescript/ai-grok/src/adapters/text.ts (1)
111-122: `RUN_ERROR` events are missing `threadId` for consistency.

Same issue as the Groq adapter: both `RUN_ERROR` emission sites should include `threadId` for correlation consistency with `RUN_STARTED` and `RUN_FINISHED`.

💡 Suggested fix

```diff
 yield asChunk({
   type: 'RUN_ERROR',
   runId: aguiState.runId,
+  threadId: aguiState.threadId,
   model: options.model,
   timestamp,
   message: err.message || 'Unknown error',
```

Apply to both `RUN_ERROR` blocks.

Also applies to: 398-409

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/adapters/text.ts` around lines 111 - 122, the RUN_ERROR event payloads emitted via asChunk are missing threadId for correlation; update both RUN_ERROR emission sites (the one shown and the other similar block) to include threadId: aguiState.threadId at the top level of the emitted object so the event matches RUN_STARTED/RUN_FINISHED correlation (keep the existing fields like runId: aguiState.runId, model: options.model, timestamp, message, code, and error).

packages/typescript/ai-groq/src/adapters/text.ts (1)
111-122: `RUN_ERROR` events are missing `threadId` for consistency.

The `RUN_STARTED` and `RUN_FINISHED` events include `threadId`, but both `RUN_ERROR` emission sites omit it. For correlation and debugging purposes, consider adding `threadId` to these events as well.

💡 Suggested fix

```diff
 yield asChunk({
   type: 'RUN_ERROR',
   runId: aguiState.runId,
+  threadId: aguiState.threadId,
   model: options.model,
   timestamp,
   message: err.message || 'Unknown error',
```

Apply a similar change to the second `RUN_ERROR` at line 388.

Also applies to: 388-399

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/adapters/text.ts` around lines 111 - 122, the RUN_ERROR event emissions in the text adapter are missing threadId; update both RUN_ERROR yields (the one building the error chunk with aguiState.runId and the second RUN_ERROR emission later) to include threadId: aguiState.threadId (or a fallback null/undefined if needed) in the top-level event and inside the error payload if your schema expects it, ensuring consistency with RUN_STARTED and RUN_FINISHED; locate the two places that call asChunk with type: 'RUN_ERROR' and add threadId: aguiState.threadId.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 47704f0a-903b-4ccc-880c-9ec8d9098a89
📒 Files selected for processing (12)
- packages/typescript/ai-anthropic/src/adapters/text.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-ollama/src/adapters/text.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openrouter/src/adapters/text.ts
- packages/typescript/ai/src/activities/chat/stream/processor.ts
- packages/typescript/ai/src/activities/chat/stream/types.ts
- packages/typescript/ai/src/activities/generateVideo/index.ts
- packages/typescript/ai/src/activities/stream-generation-result.ts
- packages/typescript/ai/tests/chat.test.ts
✅ Files skipped from review due to trivial changes (3)
- packages/typescript/ai/src/activities/chat/stream/types.ts
- packages/typescript/ai-ollama/src/adapters/text.ts
- packages/typescript/ai-openrouter/src/adapters/text.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- packages/typescript/ai/src/activities/generateVideo/index.ts
- packages/typescript/ai/src/activities/stream-generation-result.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-anthropic/src/adapters/text.ts
- packages/typescript/ai/tests/chat.test.ts
TOOL_CALL_START now uses toolCallName (spec) instead of toolName (deprecated). TOOL_CALL_END fields (toolName, input, result) are stripped by spec middleware; harness now falls back to data captured during START/ARGS phases. Added TOOL_CALL_RESULT handler for spec-compliant tool result delivery. RUN_FINISHED finishReason/usage are optional extensions.
…pliance Ollama doesn't stream tool args incrementally — it delivers them all at once in TOOL_CALL_END.input. Since the strip middleware removes input from TOOL_CALL_END, consumers had no way to get the args. Now emits a TOOL_CALL_ARGS event with the full args as delta before TOOL_CALL_END.
…sult parts Root cause: The strip middleware removes 'result' from TOOL_CALL_END events. The StreamProcessor's TOOL_CALL_END handler only creates tool-result parts when chunk.result is present. With it stripped, no tool-result parts were created on the client side. TOOL_CALL_RESULT events (spec-compliant tool result delivery) were received but ignored (no-op). Without tool-result parts, areAllToolsComplete() behaved incorrectly, and the client could not detect server tool completion. Fix: Handle TOOL_CALL_RESULT by creating tool-result parts and updating tool-call output, mirroring TOOL_CALL_END's result handling logic.
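The described fix can be sketched as follows; the part and chunk shapes are simplified stand-ins for the StreamProcessor's real types:

```typescript
// Illustrative sketch: TOOL_CALL_RESULT now creates a tool-result part,
// mirroring what TOOL_CALL_END did when chunk.result was present (before the
// strip middleware removed it).
type Part = { type: string; toolCallId: string; output?: unknown }

const parts: Part[] = []

function handleChunk(chunk: {
  type: string
  toolCallId: string
  result?: unknown
  content?: unknown
}): void {
  if (chunk.type === 'TOOL_CALL_END' && chunk.result !== undefined) {
    parts.push({
      type: 'tool-result',
      toolCallId: chunk.toolCallId,
      output: chunk.result,
    })
  } else if (chunk.type === 'TOOL_CALL_RESULT') {
    // Previously a no-op; now mirrors TOOL_CALL_END's result handling.
    parts.push({
      type: 'tool-result',
      toolCallId: chunk.toolCallId,
      output: chunk.content,
    })
  }
}

// A spec-compliant stream (result stripped from TOOL_CALL_END) still yields
// a tool-result part via TOOL_CALL_RESULT:
handleChunk({ type: 'TOOL_CALL_END', toolCallId: 'tc1' })
handleChunk({ type: 'TOOL_CALL_RESULT', toolCallId: 'tc1', content: '42' })
```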
finishReason is essential for client-side continuation logic. Without it, the chat-client cannot distinguish 'stop' (no continuation needed) from 'tool_calls' (client tools need execution), causing infinite request loops when server-side tool results leave tool-call parts as the last message part.
@ag-ui/core BaseEventSchema uses .passthrough(), so extra fields are allowed and won't break spec validation. Only strip:
- Deprecated aliases: toolName, stepId, state (nudge toward spec names)
- Deprecated nested error object on RUN_ERROR
- rawEvent (debug payload, potentially large)
Keep everything else: model, content, args, usage, finishReason, input, result, index, providerMetadata, stepType, delta, etc.
@ag-ui/core BaseEventSchema uses .passthrough() so extra fields are allowed. Only strip the deprecated nested error object from RUN_ERROR (conflicts with spec's flat message/code). Everything else passes through: model, content, toolName, stepId, usage, finishReason, result, input, args, etc.
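Under that final policy, the middleware's transform reduces to roughly this (a sketch, not the actual `stripToSpecMiddleware` implementation):

```typescript
// Sketch of the narrowed strip policy: only the deprecated nested error
// object is removed from RUN_ERROR; everything else passes through because
// @ag-ui/core's BaseEventSchema uses .passthrough().
type LooseChunk = { type: string; [key: string]: unknown }

function stripToSpec(chunk: LooseChunk): LooseChunk {
  if (chunk.type === 'RUN_ERROR' && 'error' in chunk) {
    const { error, ...rest } = chunk // drop the nested shape, keep flat fields
    return rest
  }
  return chunk
}

const stripped = stripToSpec({
  type: 'RUN_ERROR',
  message: 'boom',
  code: 'E1',
  error: { message: 'boom' },
})
const passthrough = stripToSpec({
  type: 'RUN_FINISHED',
  finishReason: 'stop',
  usage: {},
})
```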
Summary
- `@ag-ui/core` event types instead of custom definitions, making `StreamChunk` structurally compatible with `@ag-ui/core`'s `BaseEvent`
- `stripToSpecMiddleware` (always last in middleware chain) that removes non-spec fields (`model`, `content`, `args`, `finishReason`, `usage`, `toolName`, `stepId`, etc.) before yielding events to consumers
- `threadId`/`runId` to `ChatOptions` — flows through to `RUN_STARTED`/`RUN_FINISHED` events
- `REASONING_*` events (START, MESSAGE_START, MESSAGE_CONTENT, MESSAGE_END, END) alongside legacy `STEP_*` events in all 5 adapters with reasoning support (OpenAI, Anthropic, Gemini, Ollama, OpenRouter)
- `TOOL_CALL_RESULT` events in the agent loop after tool execution
- `toolCallName`/`toolName`, `stepName`/`stepId`, `snapshot`/`state` — deprecated aliases marked with `@deprecated`
- `RunErrorEvent` from nested `error: {message, code}` to spec-compliant flat `message`, `code`
- `StateDeltaEvent.delta` to JSON Patch format (`any[]`) per spec
- `handleStreamChunk` sees extended fields while consumers get spec-compliant output

Test plan

- `pnpm test` passes (131 tasks across 34 projects)
- `@tanstack/ai` (including new REASONING, TOOL_CALL_RESULT, threadId, strip compliance tests)
- `@tanstack/ai-client`
- `chat()` don't contain `model`, `toolName`, `finishReason`, `usage` etc.
- `REASONING_MESSAGE_CONTENT` updates thinking parts in StreamProcessor

Summary by CodeRabbit