# Adapters Overview
Adapters normalize external ecosystem constructs into one consistent shape, then let workflows mix and match them. This is the high-level entry point; the detailed contracts live in Adapters API.
## Quick start (value-first helpers)
Register a retriever without touching registry types:
```ts
import { recipes } from "@geekist/llm-core/recipes";
import type { Retriever } from "@geekist/llm-core/adapters";

const retriever: Retriever = {
  retrieve: () => ({ documents: [] }),
};

const wf = recipes.rag().defaults({ adapters: { retriever } }).build();
void wf;
```

Custom constructs (e.g., `mcp`) go into `constructs`:
```ts
// #region docs
import { Adapter } from "@geekist/llm-core/adapters";
import type { AdapterPlugin } from "@geekist/llm-core/adapters";

const client = {}; // Mock client
const plugin = Adapter.register("custom.mcp", "mcp", { client });
plugin satisfies AdapterPlugin;
// #endregion docs
void plugin;
```

## Effect return semantics
Effectful adapter operations return `MaybePromise<boolean | null>`:

- `true`: the operation succeeded.
- `false`: the operation failed (validation errors, missing inputs, or upstream failure).
- `null`: not applicable (capability not supported or intentionally skipped).
This applies to cache/storage writes and deletes, memory writes, checkpoint/event-stream/trace emits, and vector store deletes.
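As a sketch of these semantics, a minimal in-memory cache-style adapter might look like the following. The shape below (and the `expire` method) is purely illustrative, not a library type:

```ts
// Illustrative only: a hypothetical cache-like adapter demonstrating the
// boolean | null effect-return convention described above.
type EffectResult = boolean | null;

const entries = new Map<string, string>();

const cacheLike = {
  // true: the write succeeded; false: validation failed
  set(key: string, value: string): EffectResult {
    if (!key) return false; // missing input -> false
    entries.set(key, value);
    return true;
  },
  // Map.delete returns whether the key existed, matching true/false semantics
  delete(key: string): EffectResult {
    return entries.delete(key);
  },
  // null: this backend has no expiry support, so the capability is skipped
  expire(_key: string): EffectResult {
    return null;
  },
};
```

The point is that callers can distinguish "failed" from "not supported" without a try/catch: a `null` result is not an error, just a no-op.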
## Write path (vector store)
Vector stores let you ingest or delete embeddings without reaching for raw SDKs:
```ts
import { Adapter } from "@geekist/llm-core/adapters";
import type { VectorStore, VectorStoreDeleteInput, VectorStoreUpsertInput } from "@geekist/llm-core/adapters";

const readUpsertIds = (input: VectorStoreUpsertInput) =>
  "documents" in input ? input.documents.map((doc) => doc.id ?? "new") : [];

const readDeleteIds = (input: VectorStoreDeleteInput) => ("ids" in input ? input.ids : []);

const store: VectorStore = {
  upsert: (input) => ({ ids: readUpsertIds(input) }),
  delete: (input) => {
    console.log(readDeleteIds(input));
    return true;
  },
};

const vectorStore = Adapter.vectorStore("custom.vectorStore", store);
```

## Indexing (Ingestion)
Indexers manage the synchronization between your source documents and your vector store to prevent duplication.
```ts
import { fromLangChainIndexing } from "@geekist/llm-core/adapters";
import type { Indexing } from "@geekist/llm-core/adapters";
import type { RecordManagerInterface } from "@langchain/core/indexing";
import type { VectorStore } from "@langchain/core/vectorstores";

const recordManager: RecordManagerInterface = {
  createSchema: async () => {},
  getTime: async () => Date.now(),
  update: async () => {},
  exists: async (keys) => keys.map(() => false),
  listKeys: async () => [],
  deleteKeys: async () => {},
};

const langChainVectorStore = {} as unknown as VectorStore;
// Note: Requires a raw LangChain vector store instance
const indexing: Indexing = fromLangChainIndexing(recordManager, langChainVectorStore);
```

## Query engines (LlamaIndex)
Query engines return final answers from a retriever + synthesizer pipeline:
```ts
import { fromLlamaIndexQueryEngine } from "@geekist/llm-core/adapters";
import type { QueryEngine } from "@geekist/llm-core/adapters";
import { BaseQueryEngine } from "@llamaindex/core/query-engine";
import { EngineResponse } from "@llamaindex/core/schema";

class DemoQueryEngine extends BaseQueryEngine {
  async _query(query: string) {
    return EngineResponse.fromResponse(`Answer for ${query}`, false, []);
  }
  protected _getPrompts() {
    return {};
  }
  protected _updatePrompts() {}
  protected _getPromptModules() {
    return {};
  }
}

const engine = new DemoQueryEngine();
const queryEngine: QueryEngine = fromLlamaIndexQueryEngine(engine);
```

## Response synthesizers (LlamaIndex)
Response synthesizers focus on combining retrieved nodes into an answer:
```ts
import { fromLlamaIndexResponseSynthesizer } from "@geekist/llm-core/adapters";
import type { ResponseSynthesizer } from "@geekist/llm-core/adapters";
import { BaseSynthesizer } from "@llamaindex/core/response-synthesizers";
import { EngineResponse } from "@llamaindex/core/schema";

class DemoSynthesizer extends BaseSynthesizer {
  constructor() {
    super({});
  }
  async getResponse(query: string, _nodes: unknown[], _stream: boolean) {
    return EngineResponse.fromResponse(`Answer for ${query}`, false, []);
  }
  protected _getPrompts() {
    return {};
  }
  protected _updatePrompts() {}
  protected _getPromptModules() {
    return {};
  }
}

const synthesizerInstance: BaseSynthesizer = new DemoSynthesizer();
const synthesizer: ResponseSynthesizer = fromLlamaIndexResponseSynthesizer(synthesizerInstance);
```

## Media models (AI SDK)
AI SDK exposes image, speech, and transcription models. Wrap them directly:
```ts
import { Adapter } from "@geekist/llm-core/adapters";
import type { Blob, SpeechCall, SpeechModel, SpeechResult } from "@geekist/llm-core/adapters";

const emptyAudio: Blob = { bytes: new Uint8Array(), contentType: "audio/wav" };

const generateSpeech = (_call: SpeechCall): SpeechResult => ({
  audio: emptyAudio,
});

const speechModel: SpeechModel = {
  generate: generateSpeech,
};

const speech = Adapter.speech("custom.speech", speechModel);
```

## Trace sinks (LangChain callbacks)
LangChain callbacks/tracers can act as trace sinks. We forward `run.start` into `handleChainStart` and `run.end` into `handleChainEnd` / `handleChainError` (based on status), map `provider.response` into `handleLLMEnd`, and emit all other events as custom events.
```ts
// #region docs
import { Adapter, fromLangChainCallbackHandler } from "@geekist/llm-core/adapters";
import { RunCollectorCallbackHandler } from "@langchain/core/tracers/run_collector";

const handler = new RunCollectorCallbackHandler();
const sink = fromLangChainCallbackHandler(handler);
const tracePlugin = Adapter.trace("custom.trace", sink);
// #endregion docs
void tracePlugin;
```

## Registry (advanced)
If you need explicit provider resolution, use the registry directly:
```ts
// #region docs
import { createRegistryFromDefaults } from "@geekist/llm-core/adapters";
import type { Model } from "@geekist/llm-core/adapters";

const myModelAdapter = {} as Model; // Mock
const registry = createRegistryFromDefaults();

registry.registerProvider({
  construct: "model",
  providerKey: "custom",
  id: "custom:model",
  priority: 10,
  factory: () => myModelAdapter,
});
// #endregion docs
```