Recipe: Simple Chat (Preset + Base Handle)

> **Goal**: A minimal, low-friction entry point that wires model/system defaults.

Simple Chat is intentionally small: it does not introduce extra steps. Instead, it provides a recipe handle that sets model/system defaults and can be composed into richer flows. You use it when you want a clean baseline: quick assistants, prototypes, or a stable fallback before you add RAG or tools.

If you want a full agentic loop, see Agent. If you want RAG + chat, see RAG.

```mermaid
flowchart LR
  Config["System + Model"] --> Run["run()"]
  Run --> Answer(["Outcome"])
```

1) Quick start (system + model defaults)

```ts
import { recipes } from "@geekist/llm-core/recipes";
import type { SimpleChatConfig } from "@geekist/llm-core/recipes";
import { fromAiSdkModel } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";

// The adapter performs the actual inference; config.model is a label
// for the recipe surface (see the note below this block).
const model = fromAiSdkModel(openai("gpt-4o-mini"));

const config: SimpleChatConfig = { system: "You are a concise assistant.", model: "gpt-4o-mini" };
const chat = recipes["chat.simple"](config);
const outcome = await chat.run({ input: "Explain DSP in one paragraph." });
```

Outcomes are explicit: { status, artefact, diagnostics, trace }. A successful run carries answer.text in the artefact; paused and error outcomes keep the same trace and diagnostics so you can always explain what happened. If you see model: "gpt-4o-mini" in config, think of it as a label/selector for the recipe surface; the actual inference happens through the adapter you pass in defaults.
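
As a minimal sketch of handling an outcome (field names follow the description above; the exact status literals are an assumption):

```ts
// `outcome` comes from the quick start above.
// Status literals assumed here: "success" | "paused" | "error".
if (outcome.status === "success") {
  console.log(outcome.artefact.answer.text);
} else {
  // Paused and error outcomes keep the same trace and diagnostics.
  console.warn(outcome.diagnostics);
  console.debug(outcome.trace);
}
```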

Related: Recipes API, Runtime Outcomes, and Adapters overview.


2) Configure and tune (common tweaks)

Simple Chat accepts a recipe‑specific config that stays intentionally small: a system prompt, model defaults, and any response formatting you want to standardize. This is where you set the “voice” of your app. If you want strict enforcement, run with runtime.diagnostics = "strict" so missing adapters or schema mismatches fail early.
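
As a sketch, using only the fields described above (the "voice" lives in `system`; strict mode is passed per run):

```ts
import { recipes } from "@geekist/llm-core/recipes";
import type { SimpleChatConfig } from "@geekist/llm-core/recipes";

const config: SimpleChatConfig = {
  system: "You are a terse, friendly support assistant.",
  model: "gpt-4o-mini",
};

const outcome = await recipes["chat.simple"](config).run(
  { input: "How do I reset my password?" },
  { runtime: { diagnostics: "strict" } }, // fail early on missing adapters or schema mismatches
);

void outcome;
```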


3) Streaming (the cleanest place to learn it)

This is the simplest recipe to experiment with streaming. Streaming lives on the model adapter, and you can use the same adapter you already wired into the recipe. That gives you streaming output without changing the recipe’s outcome guarantees.

```ts
import { fromAiSdkModel } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";

const model = fromAiSdkModel(openai("gpt-4o-mini"));

if (!model.stream) {
  throw new Error("Model does not support streaming");
}

const stream = await model.stream({ prompt: "Explain DSP in one paragraph." });

for await (const event of stream) {
  if (event.type === "delta") {
    process.stdout.write(event.text ?? "");
  }
}
```

See: Models -> Streaming.


4) Use it as a base (compose with other recipes)

Because Simple Chat only wires defaults, you can pair it with another recipe's steps:

```ts
import { recipes } from "@geekist/llm-core/recipes";

const chat = recipes["chat.simple"]({
  model: "gpt-4o-mini",
}).use(recipes.agent());

const outcome = await chat.run({ input: "Explain DSP." });

void outcome;
```


5) Diagnostics + trace

Even when used as a preset, you still get full diagnostics and trace from the composed recipe.

```ts
import { recipes } from "@geekist/llm-core/recipes";

const chat = recipes["chat.simple"]({
  model: "gpt-4o-mini",
});

const outcome = await chat.run(
  { input: "Explain DSP." },
  { runtime: { diagnostics: "strict" } },
);

console.log(outcome.diagnostics);
console.log(outcome.trace);
```

Related: Runtime -> Diagnostics and Runtime -> Trace.


6) Plan (explicit, single step)

Even in its simplest form, the plan is visible. This keeps “no hidden steps” as a constant.

```mermaid
flowchart LR
  Step["simple-chat.respond"] --> Outcome(["Outcome"])
```
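
If you want to check this in code, something like the following works conceptually; the `plan` accessor and step shape here are hypothetical, not a confirmed part of the handle's API:

```ts
import { recipes } from "@geekist/llm-core/recipes";

const chat = recipes["chat.simple"]({ model: "gpt-4o-mini" });

// Hypothetical accessor: the exact name and step shape are assumptions.
// The point is that the single step is inspectable, not inferred.
console.log(chat.plan); // e.g. [{ id: "simple-chat.respond" }]
```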

7) Why Simple Chat is special

Simple Chat is the canonical “hello world” recipe. It is the quickest way to validate adapters, diagnostics, and streaming without any other moving parts, and it gives you a stable baseline before you move to RAG or agentic flows.


Implementation