# OpenClaw Memory Toolkit
OpenClaw indexes your files. This toolkit extracts the knowledge that actually matters.
## File indexing isn't memory
OpenClaw's built-in memory indexes your markdown files. That's fine for recent context — but after 100 sessions, your daily logs are a haystack. You can't ask "what does this client prefer?" and get a precise answer. You get a wall of old conversation.
This toolkit adds fact-based memory. An LLM reads your session transcripts, extracts the knowledge that matters, classifies it, scrubs credentials, and stores structured facts in a vector database.
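To make the pipeline concrete, here is a minimal sketch of what one extracted fact might look like after the LLM pass. The field names (`text`, `category`, `client_id`, `source_session`) are illustrative assumptions, not the toolkit's documented schema:

```python
import json

def make_fact(text, category, client_id, source_session):
    """Build one structured fact record (hypothetical schema).

    In the real pipeline the LLM produces these from session
    transcripts, and each record is embedded and upserted into
    the vector store.
    """
    return {
        "text": text,                    # the discrete fact itself
        "category": category,            # e.g. "preference", "decision"
        "client_id": client_id,          # mandatory isolation key
        "source_session": source_session,
    }

fact = make_fact(
    "Client prefers weekly status emails over Slack updates",
    "preference",
    "acme-corp",
    "2024-05-02-session-17",
)
print(json.dumps(fact, indent=2))
```

Storing discrete records like this, rather than whole transcripts, is what lets a query like "what does this client prefer?" return a precise answer instead of a wall of old conversation.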
## Beyond file indexing
| Capability | OpenClaw Built-in | + This Toolkit |
|---|---|---|
| Session context | ✓ | — |
| Markdown file indexing | ✓ | — |
| Structured fact extraction | — | LLM extracts discrete facts from session noise |
| Long-term vector memory | — | Separate Qdrant store |
| Documentation knowledge base | — | Embed any docs, searchable |
| Client data isolation | — | Mandatory client_id on every memory |
| Credential scrubbing | — | API keys, tokens, emails redacted |
| GDPR-compliant deletion | — | Per-client erasure with audit log |
| Cross-agent bridge | — | Optional bridge to Multi-Agent Memory |
| Encrypted backups | — | GPG-encrypted Qdrant snapshots |
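Credential scrubbing in a pipeline like this is typically regex-based redaction before anything is embedded. The patterns below are an illustrative sketch, not the toolkit's actual rule set:

```python
import re

# Illustrative redaction patterns (assumed, not the toolkit's real ones):
# OpenAI-style API keys, email addresses, and bearer tokens.
PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]+"), "[REDACTED_TOKEN]"),
]

def scrub(text: str) -> str:
    """Redact credential-like substrings before a fact is stored."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("Contact jane@example.com, key sk-abcdefghijklmnopqrstuv"))
```

Running the scrubber before embedding means secrets never reach the vector store, so they cannot leak back out through a later search.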
## How it all connects
Session data flows through consolidation into searchable memory that your agents can query.
## Install and forget
```shell
git clone https://github.com/ZenSystemAI/openclaw-memory.git
cd openclaw-memory
cp .env.example .env
# Set OPENAI_API_KEY and QDRANT_API_KEY

# Install skills
cp -r skills/* ~/.openclaw/skills/

# Set up cron for automatic fact extraction
crontab -e
# 0 11,23 * * * bash ~/.openclaw/scripts/memory-consolidate.sh
```
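The cron entry runs a consolidation pass twice a day (11:00 and 23:00). The sketch below shows roughly what such a pass might do, with the LLM extraction stubbed out as a simple line filter and the vector upsert replaced by an in-memory list; the real script's internals are not documented here:

```python
from pathlib import Path

def consolidate(transcript_dir: str, store: list) -> int:
    """One consolidation pass over a directory of session transcripts.

    Stubbed sketch: a real pass would send each transcript to an LLM
    for fact extraction, scrub credentials, then embed and upsert the
    facts into Qdrant instead of appending to a Python list.
    """
    added = 0
    for path in sorted(Path(transcript_dir).glob("*.md")):
        for line in path.read_text().splitlines():
            # Stub extractor: treat lines flagged "FACT:" as facts.
            if line.startswith("FACT:"):
                store.append({"text": line[5:].strip(), "source": path.name})
                added += 1
    return added
```

Because the pass is idempotent over a directory scan, it is safe to fire it from cron and forget about it.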
## Works with Multi-Agent Memory
If you run multiple agents across machines, the consolidation engine can automatically bridge cross-agent-relevant facts to Multi-Agent Memory's shared brain. Add two env vars and it just works.
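The gating logic might look like the sketch below. `MAM_BRIDGE_URL` and `MAM_BRIDGE_KEY` are hypothetical variable names chosen for illustration; check `.env.example` for the toolkit's actual names:

```python
import os

def bridge_enabled() -> bool:
    """Bridge is active only when both (hypothetical) env vars are set."""
    return bool(os.environ.get("MAM_BRIDGE_URL")
                and os.environ.get("MAM_BRIDGE_KEY"))

def maybe_bridge(fact: dict, send) -> bool:
    """Forward a fact to the shared brain only if the bridge is
    configured AND the fact is flagged as cross-agent-relevant."""
    if bridge_enabled() and fact.get("cross_agent"):
        send(fact)
        return True
    return False
```

The double gate keeps single-machine setups unaffected: with the variables unset, no fact ever leaves the local store.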