# Storage & Memory (Persistence)

## Concepts: State Management
LLMs are stateless. They don't remember you. To build a conversation, you need to manage state in two ways:
- Memory (Chat History): Short-term context. "What did we just talk about?"
- Storage (Key-Value): Long-term persistence. User preferences, session data, or cached results.
## 1. Memory Adapters

*Conversation History*

Memory adapters provide a standard interface to `load()` past messages and `save()` new ones.
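The real types ship with `@geekist/llm-core/adapters`; the shape below is only an illustrative sketch of that `load()`/`save()` contract, not the library's actual definitions.

```ts
// Simplified sketch of the load()/save() contract described above.
// "MemoryAdapter" and "ChatMessage" are illustrative names only; the
// real types are exported from "@geekist/llm-core/adapters".
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface MemoryAdapter {
  /** Fetch the prior turns of the conversation. */
  load(): Promise<ChatMessage[]>;
  /** Append a new turn so the next load() sees it. */
  save(message: ChatMessage): Promise<void>;
}
```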
### When to use what?
- **LangChain** (`BaseListChatMessageHistory`): The industry standard. LangChain has adapters for Redis, Postgres, Mongo, DynamoDB, and dozens more. If you need to save chat logs to a real database, use a LangChain history adapter (see the sketch after this list).
  - Upstream docs: `BaseListChatMessageHistory`
- **In-Memory (Simple)**: For quick scripts or stateless serverless functions, you might just pass an array of messages directly to the Workflow `run()` method. You don't always need an adapter.
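As a rough sketch of the LangChain path (assuming `fromLangChainMemory` accepts a LangChain chat-history instance; the `@langchain/core/chat_history` import path follows current LangChain.js packaging):

```ts
import { fromLangChainMemory } from "@geekist/llm-core/adapters";
// In-memory history for the sketch; swap in a persistent LangChain
// history (Redis, Postgres, Mongo, ...) for real deployments.
import { InMemoryChatMessageHistory } from "@langchain/core/chat_history";

// Wrap the LangChain history so the workflow sees the standard
// load()/save() memory adapter interface.
const history = new InMemoryChatMessageHistory();
const memory = fromLangChainMemory(history);
```

From here the workflow can `load()` prior turns before each call and `save()` the new ones after the model replies.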
## 2. Key-Value Stores

*Arbitrary Data*
Sometimes you need to save more than just messages—like a user's uploaded PDF ID, a session token, or a serialized workflow state.
- **Interface**: `KVStore`
- **Why use it?**: It abstracts the backend. You can write your logic against `store.get(key)` and swap Redis for in-memory or S3 without changing your business logic.
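For instance, code written against the interface never needs to know which backend is configured. This is a sketch only; it assumes `KVStore` exposes a `set()` counterpart to the `get()` shown above.

```ts
import type { KVStore } from "@geekist/llm-core/adapters";

// Business logic depends only on the KVStore interface; whether the
// backing store is in-memory, Redis, or S3 is invisible here.
async function rememberUploadedPdf(store: KVStore, userId: string, pdfId: string) {
  await store.set(`pdf:${userId}`, pdfId); // assumed set() counterpart to get()
}

async function recallUploadedPdf(store: KVStore, userId: string) {
  return store.get(`pdf:${userId}`);
}
```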
### Example: Redis Storage
We recommend passing LangChain Stores into our adapter.
```ts
import type { KVStore } from "@geekist/llm-core/adapters";
```
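A fuller sketch of that pattern, using the `fromLangChainStore` factory from the integrations table below. The factory's exact signature is an assumption here, and an in-memory LangChain store stands in for Redis, since any `BaseStore`-compatible backend (including a Redis-backed one from `@langchain/community`) plugs in the same way:

```ts
import { fromLangChainStore } from "@geekist/llm-core/adapters";
import type { KVStore } from "@geekist/llm-core/adapters";
// Any LangChain BaseStore works; swap in a Redis-backed store from
// @langchain/community to get the persistence the heading describes.
import { InMemoryStore } from "@langchain/core/stores";

// Wrap the LangChain store so the rest of the app only sees KVStore.
const store: KVStore = fromLangChainStore(new InMemoryStore());

await store.set("session:123", "serialized-workflow-state"); // assumed set()
const saved = await store.get("session:123");
```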
## 3. Caching

*Performance & Cost Savings*
Caching allows llm-core to remember the result of an expensive operation (like a GPT-4 generation or a long embedding) and return it instantly next time.
### How it works

We use a simple `Cache` interface that can sit in front of any operation.
```ts
import { createMemoryCache } from "@geekist/llm-core/adapters";
import type { Cache } from "@geekist/llm-core/adapters";

// 1. Create a cache (TTL is provided at call time)
const cache: Cache = createMemoryCache();

// 2. Use it in a workflow: the workflow engine checks this cache
//    before calling the model.
```

We also support persistent caching by wrapping a `KVStore`:
```ts
import { createCacheFromKVStore } from "@geekist/llm-core/adapters";
import type { Blob, Cache, KVStore } from "@geekist/llm-core/adapters";
```
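A sketch of wiring that up, assuming `createCacheFromKVStore` takes any `KVStore` and returns a `Cache` (the factory's exact options are not shown on this page):

```ts
import {
  createCacheFromKVStore,
  fromLangChainStore,
} from "@geekist/llm-core/adapters";
import type { Cache, KVStore } from "@geekist/llm-core/adapters";
import { InMemoryStore } from "@langchain/core/stores";

// Back the cache with whatever KVStore you already run; point the store
// at Redis (or another persistent backend) and cached generations
// survive restarts.
const store: KVStore = fromLangChainStore(new InMemoryStore());
const cache: Cache = createCacheFromKVStore(store); // assumed signature

// As above, the workflow engine consults this cache before re-running
// expensive model calls.
```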
## Supported Integrations (Flex)

| Capability | Ecosystem | Adapter Factory | Upstream Interface | Deep Link |
|---|---|---|---|---|
| Memory | AI SDK | fromAiSdkMemory | MemoryProvider | (Experimental) |
| Memory | LangChain | fromLangChainMemory | BaseListChatMessageHistory | Docs |
| Memory | LlamaIndex | fromLlamaIndexMemory | BaseMemory | Docs |
| KV Store | LangChain | fromLangChainStore | BaseStore | Docs |
| KV Store | LlamaIndex | fromLlamaIndexDocumentStore | BaseDocumentStore | Docs |
| Cache | AI SDK | fromAiSdkCacheStore | (Custom) | - |
| Cache | LangChain | fromLangChainStoreCache | BaseStore | Docs |
| Cache | LlamaIndex | fromLlamaIndexKVStoreCache | BaseKVStore | Docs |