# End-to-End UI (Interaction + Host + Adapter)
This guide wires the full path from a single turn to UI commands: Interaction core → sessions → UI adapter.
> **Demo path (3/4)** — Next up:
## How the UI boundary works

```mermaid
flowchart LR
  U[User message] --> H[Host handler]
  H --> S[Session]
  S --> P[Interaction pipeline]
  P --> E[EventStream]
  E --> UI[UI adapter]
```
## Prerequisites
Install the adapter dependencies:
```bash
# pick the command for your package manager
npm install ai @assistant-ui/react
pnpm add ai @assistant-ui/react
yarn add ai @assistant-ui/react
bun add ai @assistant-ui/react
```

## Step 1: Set up storage and adapters
**JavaScript**

```js
import { createAssistantUiInteractionEventStream, createBuiltinModel } from "@geekist/llm-core/adapters";
import { createInteractionSession } from "@geekist/llm-core/interaction";

const sessionState = new Map();

/** @type {import("#interaction").SessionStore} */
const store = {
  load: loadSessionState,
  save: saveSessionState,
};
```

**TypeScript**

```ts
import { createAssistantUiInteractionEventStream, createBuiltinModel } from "@geekist/llm-core/adapters";
import { createInteractionSession } from "@geekist/llm-core/interaction";
import type { SessionId, SessionStore, InteractionState } from "@geekist/llm-core/interaction";
import type { AssistantTransportCommand } from "@assistant-ui/react";

const sessionState = new Map<string, InteractionState>();

const store: SessionStore = {
  load: loadSessionState,
  save: saveSessionState,
};
```

This keeps persistence and UI output in the host layer while the interaction core stays headless.
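For a quick local run without a database, the two host functions referenced above can be backed by the in-memory `Map`. The sketch below builds on the TypeScript variant and assumes `SessionStore.load` receives a session id and returns the stored state, while `save` receives the id plus the latest state; check the actual `SessionStore` type for the exact signatures.

```ts
// Minimal in-memory persistence for local experiments (assumed signatures).
// load: return the stored state for a session, or undefined on first contact.
// save: overwrite the stored state after each turn.
async function loadSessionState(sessionId: SessionId): Promise<InteractionState | undefined> {
  return sessionState.get(String(sessionId));
}

async function saveSessionState(sessionId: SessionId, state: InteractionState): Promise<void> {
  sessionState.set(String(sessionId), state);
}
```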
## Step 2: Handle a chat turn and emit UI commands
**JavaScript**

```js
/**
 * @typedef ChatTurnInput
 * @property {import("#interaction").SessionId} sessionId
 * @property {string} message
 * @property {(command: import("@assistant-ui/react").AssistantTransportCommand) => void} sendCommand
 */

/**
 * @param {ChatTurnInput} input
 */
export function handleChatTurn(input) {
  const eventStream = createAssistantUiInteractionEventStream({
    sendCommand: input.sendCommand,
  });

  const session = createInteractionSession({
    sessionId: input.sessionId,
    store,
    adapters: { model: createBuiltinModel() },
    eventStream,
  });

  return session.send({ role: "user", content: input.message });
}
```

**TypeScript**

```ts
export type ChatTurnInput = {
  sessionId: SessionId;
  message: string;
  sendCommand: (command: AssistantTransportCommand) => void;
};

export function handleChatTurn(input: ChatTurnInput) {
  const eventStream = createAssistantUiInteractionEventStream({
    sendCommand: input.sendCommand,
  });

  const session = createInteractionSession({
    sessionId: input.sessionId,
    store,
    adapters: { model: createBuiltinModel() },
    eventStream,
  });

  return session.send({ role: "user", content: input.message });
}
```

Your app calls `handleChatTurn(...)` with a `sessionId`, a `message`, and a UI command sink. The adapter turns interaction events into UI-specific commands.
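For illustration, a host route might buffer the UI commands produced by one turn before flushing them to its transport. `handleDemoTurn` and the buffer below are hypothetical glue, not part of the library, and they assume the value returned by `session.send` resolves when the turn completes:

```ts
// Hypothetical host-side glue: collect the UI commands emitted during one turn.
// In a real host you would forward each command to your transport
// (SSE response, WebSocket, assistant-ui runtime) instead of buffering.
async function handleDemoTurn(sessionId: SessionId, message: string) {
  const commands: AssistantTransportCommand[] = [];

  await handleChatTurn({
    sessionId,
    message,
    sendCommand: (command) => {
      commands.push(command);
    },
  });

  return commands;
}
```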
## Swap UI adapters without touching interaction logic
The adapter is the only thing that changes:
```diff
- createAssistantUiInteractionEventStream
+ createAiSdkInteractionEventStream
```

The rest of the handler stays the same.
## Run the demo locally
The `examples/interaction-node-sse` app shows the same end-to-end path (Interaction → Session → EventStream) in a tiny Node server.
```bash
# from the repo root
bun install
bun examples/interaction-node-sse/server.js
```

Then open:

```
http://localhost:3030
```

To hit the SSE endpoint directly:
```
http://localhost:3030/chat?sessionId=demo&message=Hello
```
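The same endpoint can be exercised from a small script. This is only a sketch that assumes the demo server from the step above is running on port 3030; the streaming loop is generic `fetch` reader code, not an API of this repo:

```ts
// Read the SSE stream with fetch and print chunks as they arrive (Node 18+ / Bun).
const response = await fetch("http://localhost:3030/chat?sessionId=demo&message=Hello");
const reader = response.body!.getReader();
const decoder = new TextDecoder();

for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value));
}
```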
## Agentic playground (WebSocket + agent loop)

The `examples/agentic` app is a fuller UI that wires the agent loop runtime to an assistant-ui chat and a configurable control panel (tools, skills, MCP, approvals, sub-agents), while still using the same event stream surface.
```bash
# from the repo root
bun install
bun --cwd examples/agentic/server dev
bun --cwd examples/agentic/client dev
```

Then open:

```
http://localhost:5173
```

## Next step
If you need multi-step orchestration (RAG, tools, HITL), move to full workflows: