Recipe: Simple Chat (Preset + Base Handle)
NOTE
Goal: A minimal entry point that wires model and system defaults into a reusable handle.
Simple Chat is a compact recipe for straight‑line chat. It wires your system prompt and model choice into a preset handle and keeps the flow itself very small. You reach for it when you want a clean baseline: quick assistants, prototypes, or a stable fallback before you bring in RAG or tools.
For a full agentic loop, see Agent. For RAG plus chat, see RAG.
flowchart LR
  Config["System + Model"] --> Run["run()"]
  Run --> Answer(["Outcome"])
1) Quick start (system and model defaults)
npm install ai @ai-sdk/openai
pnpm add ai @ai-sdk/openai
yarn add ai @ai-sdk/openai
bun add ai @ai-sdk/openai

import { recipes } from "@geekist/llm-core/recipes";
import { fromAiSdkModel } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";
const chat = recipes["chat.simple"]({
system: "You are a helpful coding assistant.",
model: "gpt-4o-mini",
}).defaults({
adapters: {
model: fromAiSdkModel(openai("gpt-4o-mini")),
},
});

The recipe returns a structured outcome object: { status, artefact, diagnostics, trace }. When the run succeeds, the artefact carries an answer.text field. Paused and error outcomes reuse the same trace and diagnostics so every run remains explainable.
If you see model: "gpt-4o-mini" in a config block, treat it as a label for the recipe surface. Actual inference happens through the adapter you pass in defaults.
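A minimal run against this handle is a single call. The sketch below assumes only the outcome fields described above; answer.text is present only on successful runs.
// Sketch: run the preset handle and inspect the structured outcome.
const outcome = await chat.run({ input: "Explain DSP." });

console.log(outcome.status);                 // success, paused, or error
console.log(outcome.artefact?.answer?.text); // the answer text on a successful run
console.log(outcome.diagnostics);            // same shape on every outcome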
Related: Recipes API, Runtime outcomes, and Adapters overview.
2) Configure and tune (common tweaks)
Simple Chat accepts a small recipe config: a system prompt, model defaults, and any response formatting you want to standardise. This config gives the application its voice and makes behaviour repeatable across runs.
For stricter guarantees, set runtime.diagnostics = "strict" in the runtime options so adapter issues or schema mismatches surface immediately.
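Putting those knobs together, a sketch of a tuned handle might look like this. The system prompt and input are illustrative; strict diagnostics are passed as runtime options exactly as in the diagnostics section below.
import { recipes } from "@geekist/llm-core/recipes";

// Recipe config: the voice and model defaults that stay stable across runs.
const chat = recipes["chat.simple"]({
  system: "You are a terse release-notes assistant.",
  model: "gpt-4o-mini",
});

// Runtime options: strict diagnostics surface adapter issues and schema
// mismatches immediately instead of letting them pass silently.
const outcome = await chat.run(
  { input: "Summarise this change in one sentence." },
  { runtime: { diagnostics: "strict" } }
);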
3) Streaming (the cleanest place to learn it)
Simple Chat is the easiest recipe for learning how streaming behaves. Streaming lives on the model adapter, and you can use the same adapter that you already wired into the recipe. That gives you streaming output while the outcome shape stays the same.
import { collectStep, fromAiSdkModel, isPromiseLike, maybeToStep } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";
const model = fromAiSdkModel(openai("gpt-4o-mini"));
if (!model.stream) {
throw new Error("Model does not support streaming");
}
// Normalise the stream into a step and collect its events; both helpers
// may return promises, so isPromiseLike guards each await.
const stepResult = maybeToStep(model.stream({ prompt: "Explain DSP in one paragraph." }));
const step = isPromiseLike(stepResult) ? await stepResult : stepResult;

const collected = collectStep(step);
const events = isPromiseLike(collected) ? await collected : collected;
for (const event of events) {
if (event.type === "delta") {
process.stdout.write(event.text || "");
}
}

See Models: streaming for the full adapter-side lifecycle.
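If you want the streamed answer as one string instead of writing deltas as they arrive, the collected events can simply be joined. This sketch assumes only the delta event shape used above.
// Join the text of the delta events into a single answer string.
const text = events
  .filter((event) => event.type === "delta")
  .map((event) => event.text ?? "")
  .join("");

console.log(text);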
4) Use it as a base (compose with other recipes)
Simple Chat works well as a base recipe. It sets shared defaults and delegates the rest of the work to other recipes.
Common patterns:
- Run Simple Chat for the first response, then hand off to Agent for multi‑step tool usage.
- Pair Simple Chat with RAG so you capture a clean conversational answer while a retrieval step prepares context in the background.
- Use it as a fallback when other recipes fail strict validation, since the handle stays very small and predictable (a sketch of this pattern follows below).
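For the fallback pattern, a minimal sketch might run a richer recipe first and fall back to the plain chat handle when the run does not succeed. The primary handle and the "error" status check below are illustrative placeholders; only the outcome shape described earlier is assumed.
import { recipes } from "@geekist/llm-core/recipes";

// "primary" stands in for any richer recipe handle (agent, RAG, and so on);
// it is a hypothetical placeholder, not part of this recipe.
declare const primary: {
  run: (req: { input: string }, opts?: unknown) => Promise<{ status: string }>;
};

const fallback = recipes["chat.simple"]({
  system: "You are a helpful coding assistant.",
  model: "gpt-4o-mini",
});

const request = { input: "Explain DSP." };
const primaryOutcome = await primary.run(request, { runtime: { diagnostics: "strict" } });

// The "error" status value is illustrative; only the outcome shape
// ({ status, artefact, diagnostics, trace }) is described by this recipe.
const result = primaryOutcome.status === "error" ? await fallback.run(request) : primaryOutcome;

The snippet below shows the first pattern: composing Simple Chat with the Agent recipe through .use(), so the chat handle sets the defaults and Agent drives the multi‑step work.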
import { recipes } from "@geekist/llm-core/recipes";
const chat = recipes["chat.simple"]({
system: "You are a helpful coding assistant.",
model: "gpt-4o-mini",
}).use(recipes.agent());
const outcome = await chat.run({ input: "Explain DSP." });

The key idea is that Simple Chat stabilises how the model sees system and user input. Other recipes then add retrieval, tools, memory, or evaluation around that stable core.
5) Diagnostics and trace
Even when you treat Simple Chat as a preset, you still get full diagnostics and trace from the composed recipe. The outcome structure does not change, which means higher level code can inspect results in a consistent way across different recipes.
import { recipes } from "@geekist/llm-core/recipes";
const chat = recipes["chat.simple"]({
system: "You are a helpful coding assistant.",
model: "gpt-4o-mini",
});
// Run with strict diagnostics so adapter issues and schema mismatches surface immediately.
const outcome = await chat.run({ input: "Explain DSP." }, { runtime: { diagnostics: "strict" } });
console.log(outcome.diagnostics);
console.log(outcome.trace);

Related: Runtime diagnostics and Runtime trace.
6) Plan view (explicit single step)
Simple Chat participates fully in the explain view. Even though it is the smallest recipe, it exposes its single step as simple-chat.respond, so the step list stays visible for every run.
flowchart LR
  Step["simple-chat.respond"] --> Outcome(["Outcome"])
This makes it easier to debug adapters and runtime behaviour because you can see exactly which step executed and which outcome it produced.
7) Why Simple Chat is special
Simple Chat is the canonical “hello world” recipe. It gives you a fast way to validate adapters, diagnostics, streaming, and runtime wiring with very little configuration. The outcome object matches the structure used by richer recipes, so once Simple Chat feels comfortable you can migrate to RAG or agentic flows with the same mental model.
In practice, many applications begin with Simple Chat in development and keep it around in production for health checks, smoke tests, or a safe fallback path.
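As a concrete illustration of that pattern, a health check might wrap the handle in a small probe. The handle shape and the success criterion (a non-empty answer.text) below are assumptions for illustration, not part of the recipe contract.
// Hypothetical smoke test around a Simple Chat handle wired as in the quick start.
async function smokeTest(chat: {
  run: (req: { input: string }) => Promise<{ artefact?: { answer?: { text?: string } } }>;
}): Promise<boolean> {
  try {
    const outcome = await chat.run({ input: "Reply with the single word: pong" });
    // Treat a non-empty answer.text as healthy.
    return Boolean(outcome.artefact?.answer?.text);
  } catch {
    return false;
  }
}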
Implementation
Source: src/recipes/simple-chat.ts