# UI SDK Adapters
UI SDK adapters connect Interaction Core to UI-facing streaming primitives while keeping UI frameworks out of the core package. They live in the adapters layer and map InteractionEvent values into UI SDK stream chunks or DOM-style events. Host code can then push updates into UI hooks, transports, or custom components.
These adapters use a *-ui suffix, for example ai-sdk-ui, to show that they target UI transport protocols rather than provider SDKs. They share the same adapter surface as the other integrations, so the API feels consistent across recipes and interactions.
## Quick start with AI SDK streams
```ts
import { createAiSdkInteractionEventStream, fromAiSdkModel } from "@geekist/llm-core/adapters";
import { createInteractionHandle } from "@geekist/llm-core/interaction";
import { openai } from "@ai-sdk/openai";
import { createUIMessageStream, type UIMessageStreamWriter } from "ai";

const model = fromAiSdkModel(openai("gpt-4o-mini"));
const handle = createInteractionHandle().configure({
  adapters: { model },
});

async function executeInteraction({ writer }: { writer: UIMessageStreamWriter }) {
  await handle.run(
    { message: { role: "user", content: "Hello!" } },
    { eventStream: createAiSdkInteractionEventStream({ writer }) },
  );
}

const stream = createUIMessageStream({ execute: executeInteraction });
```

## AI SDK ChatTransport helper
If you already use useChat, you can plug in a transport that runs Interaction Core directly. The helper keeps the usual AI SDK ergonomics while the interaction runtime handles the conversation logic.
```ts
import {
  createAiSdkChatTransport,
  fromAiSdkModel,
  type AiSdkChatTransportOptions,
} from "@geekist/llm-core/adapters";
import { createInteractionHandle } from "@geekist/llm-core/interaction";
import { openai } from "@ai-sdk/openai";

const handle = createInteractionHandle().defaults({
  adapters: { model: fromAiSdkModel(openai("gpt-4o-mini")) },
});

const transportOptions: AiSdkChatTransportOptions = { handle };
const transport = createAiSdkChatTransport(transportOptions);

// useChat({ transport })
void transport;
```

## AI SDK WebSocket Transport
For WebSocket-based streaming (ideal for custom backends like Bun or Node.js WebSocket servers), use createAiSdkWebSocketChatTransport. This creates a transport compatible with useChat but routed over a persistent WebSocket connection. A server-side sketch follows the client example below.
```ts
import { createAiSdkWebSocketChatTransport } from "@geekist/llm-core/adapters";
import { useChat } from "@ai-sdk/react";

const transport = createAiSdkWebSocketChatTransport({
  url: "ws://localhost:3001/chat",
});

const chat = useChat({
  transport,
});
```
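The server side of this transport is whatever WebSocket endpoint the url points at. Here is a minimal server sketch, assuming the transport exchanges JSON-encoded AI SDK UI message chunks over the socket (verify the framing against the adapter's documentation before relying on it):

```ts
import { createAiSdkInteractionEventStream, fromAiSdkModel } from "@geekist/llm-core/adapters";
import { createInteractionHandle } from "@geekist/llm-core/interaction";
import { createUIMessageStream } from "ai";
import { openai } from "@ai-sdk/openai";

const handle = createInteractionHandle().defaults({
  adapters: { model: fromAiSdkModel(openai("gpt-4o-mini")) },
});

Bun.serve({
  port: 3001,
  fetch(req, server) {
    // Upgrade incoming requests to WebSocket connections.
    if (server.upgrade(req)) return undefined;
    return new Response("WebSocket only", { status: 400 });
  },
  websocket: {
    async message(ws, raw) {
      // Assumed framing: the client sends { message } and expects
      // UIMessageChunk values back as JSON frames.
      const { message } = JSON.parse(String(raw));
      const stream = createUIMessageStream({
        execute: ({ writer }) =>
          handle.run({ message }, { eventStream: createAiSdkInteractionEventStream({ writer }) }),
      });
      for await (const chunk of stream) {
        ws.send(JSON.stringify(chunk));
      }
    },
  },
});
```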
## Assistant UI transport

assistant-ui provides an assistant-transport command protocol. The adapter maps interaction model events to add-message and add-tool-result commands that fire when the model run completes. This fits command-driven runtimes that prefer a clear, structured stream of UI actions.
```ts
import { createAssistantUiInteractionEventStream, fromAiSdkModel } from "@geekist/llm-core/adapters";
import { createInteractionHandle } from "@geekist/llm-core/interaction";
import { useAssistantTransportSendCommand } from "@assistant-ui/react";
import { openai } from "@ai-sdk/openai";

const model = fromAiSdkModel(openai("gpt-4o-mini"));
const handle = createInteractionHandle().configure({
  adapters: { model },
});

function useRunInteraction() {
  const sendCommand = useAssistantTransportSendCommand();
  /** @param {import("#adapters/types").Message} message */
  return async function runInteraction(message) {
    await handle.run(
      { message },
      { eventStream: createAssistantUiInteractionEventStream({ sendCommand }) },
    );
  };
}
```

## Server-side streaming with Assistant Transport
When you serve assistant-ui from a Node or Edge route (like a Next.js Route Handler), you often want to return a standard Response that speaks the assistant-stream protocol.
The createAssistantUiInteractionStream adapter handles this. It returns an eventStream you can pump interaction events into, plus a stream that AssistantStream.toResponse() formats correctly for the client.
```ts
import { createAssistantUiInteractionStream, createBuiltinModel } from "@geekist/llm-core/adapters";
import { AssistantStream, DataStreamEncoder } from "assistant-stream";
import { runInteractionRequest } from "@geekist/llm-core/interaction";

export async function POST() {
  // 1. Create the stream adapter
  const { stream, eventStream } = createAssistantUiInteractionStream();

  // 2. Run your interaction (passing the eventStream)
  await runInteractionRequest({
    recipeId: "agent",
    model: createBuiltinModel(),
    messages: [{ role: "user", content: "Hello from the assistant stream!" }],
    interactionId: "chat-123",
    eventStream,
  });

  // 3. Return the response
  return AssistantStream.toResponse(stream, new DataStreamEncoder());
}
```

This sits alongside the command-based adapter above. Use the command adapter when you are sending batch updates or working with a custom transport. Use this streaming adapter when you are building a standard HTTP endpoint that speaks the native assistant-stream protocol.
For streaming behaviour via the AI SDK protocol, the ai-sdk-ui adapter often works better. You can plug it into @assistant-ui/react-ai-sdk, which turns AI SDK streams into assistant-ui commands on the client. The assistant-stream adapter above skips that translation layer by speaking the native protocol directly.
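As a sketch, assuming useChatRuntime accepts an AI SDK ChatTransport implementation (recent @assistant-ui/react-ai-sdk releases take a transport option for AI SDK v5 transports), the wiring could look like this:

```ts
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { createAiSdkChatTransport, fromAiSdkModel } from "@geekist/llm-core/adapters";
import { createInteractionHandle } from "@geekist/llm-core/interaction";
import { openai } from "@ai-sdk/openai";

const handle = createInteractionHandle().defaults({
  adapters: { model: fromAiSdkModel(openai("gpt-4o-mini")) },
});

export function useInteractionRuntime() {
  // Assumption: useChatRuntime forwards this transport to useChat internally.
  return useChatRuntime({ transport: createAiSdkChatTransport({ handle }) });
}
```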
## OpenAI ChatKit events
ChatKit exposes a DOM event interface. The adapter converts interaction events into chatkit.* events so you can feed a headless Interaction Core run into the ChatKit Web Component event stream.
```ts
import { createChatKitInteractionEventStream, fromAiSdkModel } from "@geekist/llm-core/adapters";
import { createInteractionHandle } from "@geekist/llm-core/interaction";
import { openai } from "@ai-sdk/openai";

const model = fromAiSdkModel(openai("gpt-4o-mini"));
const handle = createInteractionHandle().configure({ adapters: { model } });

await handle.run(
  { message: { role: "user", content: "Hello!" } },
  // Options elided: wire the stream to your ChatKit element's event sink here.
  { eventStream: createChatKitInteractionEventStream({ /* ... */ }) },
);
```

A similar bridge works for any event-driven UI SDK. Use createInteractionEventEmitterStream with an event mapper that suits your own component model and event names.
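For illustration, such a bridge might look like the sketch below. The emit and mapEvent option names are assumptions about the createInteractionEventEmitterStream surface, so check the adapter's types before copying this:

```ts
import { createInteractionEventEmitterStream } from "@geekist/llm-core/adapters";

// Sketch only: `emit` and `mapEvent` are hypothetical option names.
const eventStream = createInteractionEventEmitterStream({
  // Dispatch each mapped event as a DOM CustomEvent.
  emit: (name, detail) => window.dispatchEvent(new CustomEvent(name, { detail })),
  // Map each InteractionEvent to one or more named UI events.
  mapEvent: (event) => [{ name: `myui.${event.kind}`, detail: event }],
});

void eventStream;
```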
## NLUX ChatAdapter
NLUX expects a ChatAdapter implementation that can either stream text or return a batch result. The adapter here wires Interaction Core into that contract so your NLUX chat components can consume the same interaction flows as the rest of the system.
```ts
import { createAiChat } from "@nlux/core";
import { createNluxChatAdapter, fromAiSdkModel, type NluxChatAdapterOptions } from "@geekist/llm-core/adapters";
import { createInteractionHandle } from "@geekist/llm-core/interaction";
import { openai } from "@ai-sdk/openai";

const handle = createInteractionHandle().defaults({
  adapters: { model: fromAiSdkModel(openai("gpt-4o-mini")) },
});

const adapterOptions: NluxChatAdapterOptions = { handle };
const adapter = createNluxChatAdapter(adapterOptions);
const chat = createAiChat().withAdapter(adapter);
void chat;
```

## Mapping behaviour
The AI SDK adapter focuses on how streams behave rather than on how messages are constructed.
When an interaction event has kind "model", the adapter turns it into UIMessageChunk parts. These parts cover text, reasoning segments, and tool activity so the UI can show intermediate work as it happens.
Events of type trace, diagnostic, query, and event-stream become data-* chunks. This keeps them visible for logging, debugging, and telemetry, while message text stays clean and user facing.
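On the client, those chunks surface as data-* parts on the message. A small sketch, assuming AI SDK 5 message shapes, shows how a UI can route them to debugging output instead of rendering them as text:

```ts
import type { UIMessage } from "ai";

// Collect telemetry-style parts (data-trace, data-diagnostic, ...) so the
// chat UI can log them without showing them in the transcript.
function collectDataParts(message: UIMessage) {
  return message.parts.filter((part) => part.type.startsWith("data-"));
}
```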
Message and part identifiers follow deterministic rules based on interaction metadata. Client code can resume a stream or merge multiple streams and still rely on stable identifiers.
When you need deterministic grouping across several events, use a shared mapper:
```ts
import { createAiSdkInteractionMapper } from "@geekist/llm-core/adapters";

const mapper = createAiSdkInteractionMapper({ messageId: "chat-1" });
// mapper.mapEvent(event) -> UIMessageChunk[]
```

## When to use these adapters
These adapters suit projects that want to stream Interaction Core output into useChat or other AI SDK transports and still keep the runtime independent from any specific UI library.
They also fit projects that use assistant-ui or similar UI packages. These tools already work with AI SDK streams and rely on a clear event or command protocol. The adapter turns Interaction Core events into that shape, so your UI code keeps using its usual hooks and components while the runtime focuses on reasoning, tools, and memory.
The general goal is to keep UI concerns inside adapter code while Interaction Core stays focused on orchestration, recipes, and interaction logic. That way you can evolve your UI stack, or support several UI stacks in parallel, while reusing the same interaction pipeline.