# Observability Adapters (Tracing)
## What is Tracing?
Tracing allows you to observe the execution of your AI workflows. It captures latency, token usage, errors, and input/output payloads across every step of the pipeline.
In llm-core, we normalize tracing into Trace Sinks. A sink is simply a destination for `AdapterTraceEvent` objects.
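Concretely, a sink only needs a way to receive events. The shapes below are an illustrative approximation inferred from the examples on this page, not the package's exact type definitions; consult the exported types for the real contracts.

```ts
// Approximate shapes, for orientation only.
interface AdapterTraceEventSketch {
  name: string; // e.g. "run.start", "provider.response", "run.end"
  data?: Record<string, unknown>; // event payload (input, usage, status, ...)
}

interface TraceSinkSketch {
  emit?(event: AdapterTraceEventSketch): void | Promise<void>;
  emitMany?(events: AdapterTraceEventSketch[]): void | Promise<void>;
}
```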
## Native Tracing
### The Minimalist Approach
You don't need to install heavy observability SDKs just to see what your AI is doing. llm-core ships with a lightweight, in-memory trace sink you can emit to from packs or adapters.
### When to use this?
- Debugging: Quickly dump the trace to see why a model hallucinated.
- Testing: Assert that a specific tool was called with specific arguments.
- Local Dev: See the "thought process" of your agent in the console.
### Example: capturing a "Trace Dump"
```ts
// #region docs
import { Adapter, createBuiltinTrace } from "@geekist/llm-core/adapters";
import type { AdapterTraceEvent } from "@geekist/llm-core/adapters";
// 1. Create a sink (typed)
const builtin = createBuiltinTrace();
const tracePlugin = Adapter.trace("local.trace", builtin);
const input = "Why is the sky blue?";
// 2. Emit from a pack or adapter step
await builtin.emitMany?.([
  { name: "run.start", data: { input } },
  { name: "provider.response", data: { usage: { inputTokens: 12, outputTokens: 42 } } },
  { name: "run.end", data: { status: "ok" } },
]);
// 3. Inspect the timeline (builtin-only)
const events = (builtin as { events?: AdapterTraceEvent[] }).events ?? [];
console.log(JSON.stringify(events, null, 2));
// #endregion docs
void tracePlugin;
```
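The "Testing" use case above boils down to the same pattern: emit through the builtin sink, then assert on its in-memory timeline. A minimal sketch, assuming Vitest as the test runner (swap in your own):

```ts
import { expect, test } from "vitest";
import { createBuiltinTrace } from "@geekist/llm-core/adapters";
import type { AdapterTraceEvent } from "@geekist/llm-core/adapters";

test("records a provider.response event with usage", async () => {
  const sink = createBuiltinTrace();

  await sink.emitMany?.([
    { name: "provider.response", data: { usage: { inputTokens: 12, outputTokens: 42 } } },
  ]);

  // Read the in-memory timeline the same way as in the example above.
  const events = (sink as { events?: AdapterTraceEvent[] }).events ?? [];
  const response = events.find((event) => event.name === "provider.response");

  expect(response?.data).toMatchObject({ usage: { inputTokens: 12, outputTokens: 42 } });
});
```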
## LangChain Bridge

LangChain has a mature callback ecosystem for tracing (e.g., LangSmith, Lunary). We provide an adapter that maps llm-core lifecycle events into LangChain callbacks.
### Mapped Events
| llm-core Event | LangChain Method | Notes |
|---|---|---|
| `run.start` | `handleChainStart` | Maps input payloads. |
| `run.end` (ok) | `handleChainEnd` | Maps output payloads. |
| `run.end` (error) | `handleChainError` | Maps error stack. |
| `provider.response` | `handleLLMEnd` | Maps model usage. |
| Custom | `handleCustomEvent` | For all other telemetry. |
### Usage
Wrap any `BaseCallbackHandler` into a Trace Sink:
```ts
// #region docs
import { Adapter, fromLangChainCallbackHandler } from "@geekist/llm-core/adapters";
import { RunCollectorCallbackHandler } from "@langchain/core/tracers/run_collector";
// Any LangChain BaseCallbackHandler works; RunCollector keeps finished runs in memory.
const handler = new RunCollectorCallbackHandler();

// Bridge the handler into an llm-core trace sink, then register it as a plugin.
const sink = fromLangChainCallbackHandler(handler);
const tracePlugin = Adapter.trace("custom.trace", sink);
// #endregion docs
void tracePlugin;
```
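Once the plugin is registered, events emitted through the sink are forwarded to the wrapped handler. A rough sketch of exercising the bridge by hand, assuming the bridged sink exposes the same optional `emitMany` as the builtin sink (verify against your installed version):

```ts
// Illustrative only: push lifecycle events through the bridge directly.
await sink.emitMany?.([
  { name: "run.start", data: { input: "Why is the sky blue?" } },
  { name: "run.end", data: { status: "ok" } },
]);

// RunCollectorCallbackHandler keeps completed runs in memory on `tracedRuns`.
console.log(handler.tracedRuns.length);
```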
## Supported Integrations

| Ecosystem | Adapter Factory | Deep Link |
|---|---|---|
| LangChain | `fromLangChainCallbackHandler` | Docs |
| LlamaIndex | `fromLlamaIndexTraceSink` | Docs |
## LlamaIndex Bridge

LlamaIndex workflows can emit trace events via trace plugins. This adapter maps workflow handler events to `AdapterTraceEvent`.
### Usage
```ts
// #region docs
import { createBuiltinTrace, fromLlamaIndexTraceSink } from "@geekist/llm-core/adapters";
import { createWorkflow } from "@llamaindex/workflow-core";
import { withTraceEvents } from "@llamaindex/workflow-core/middleware/trace-events";
// Route workflow trace events into the builtin in-memory sink.
const builtin = createBuiltinTrace();
const tracePlugin = fromLlamaIndexTraceSink(builtin);

// Register the plugin via the trace-events middleware.
const workflow = withTraceEvents(createWorkflow(), { plugins: [tracePlugin] });
// #endregion docs
void workflow;
```
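From here, anything the workflow does is mirrored into `builtin`. A rough sketch of driving the traced workflow, assuming the `workflowEvent` / `handle` / `createContext` API of `@llamaindex/workflow-core` (the events and handler are hypothetical; check that package's docs for exact signatures):

```ts
import { workflowEvent } from "@llamaindex/workflow-core";

// Hypothetical events, purely for illustration.
const questionEvent = workflowEvent<string>();
const answerEvent = workflowEvent<string>();

workflow.handle([questionEvent], (event) => answerEvent.with(`echo: ${event.data}`));

// Start a run; the trace plugin forwards handler events into the builtin sink.
const { sendEvent } = workflow.createContext();
sendEvent(questionEvent.with("Why is the sky blue?"));

// Inspect the captured timeline afterwards.
console.log((builtin as { events?: unknown[] }).events ?? []);
```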