docs: update READMEs with protocol positioning, align EN/ZH structure
- Add File Injection row (Claude Code Memory) to pattern comparison table
- Add protocol gap paragraph (MCP/ODBC/JDBC analogy, three primitives)
- Add intent-native protocol bullet to Features
- Add memory gateway paragraph to Vision
- Add Graph-LLM structural insight to References
- Align ZH with EN: add Go Report Card badge, Claude Max/Pro callout,
move "from source" under Install, update OpenClaw to --yes with link,
add NanoClaw skill location
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
README.md: 8 additions, 1 deletion
@@ -28,9 +28,12 @@ Most memory tools embed their own LLM inside the pipeline. Mnemon takes a differ
 | Pattern | LLM Role | Representative |
 |---|---|---|
 | **LLM-Embedded** | Executor inside the pipeline | Mem0, Letta |
+| **File Injection** | None — reads file at session start | Claude Code Memory |
 | **MCP Server** | Tool provider via MCP protocol | claude-mem |
 | **LLM-Supervised** | External supervisor of a standalone binary | **Mnemon** |
 
+Mnemon also addresses a gap in the protocol stack. MCP standardizes how LLMs discover and invoke tools. ODBC/JDBC standardize how applications access databases. But there is no protocol for how LLMs interact with databases using memory semantics. Mnemon's three primitives — `remember`, `link`, `recall` — form an intent-native protocol: command names map to the LLM's cognitive vocabulary (`remember`, not INSERT; `recall`, not SELECT), and output is structured JSON with signal transparency rather than raw database rows.
+
 <p align="center">
   <img src="docs/diagrams/llm-supervised-concept.jpg" width="720" alt="LLM-Supervised Architecture — three patterns compared, with detailed Mnemon implementation showing hooks, brain/organ split, and sub-agent delegation" />
   <br />
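A sketch of what that intent-native surface could look like from the agent side, in Go since that is the repo's language. Everything below is an assumption for illustration: the README promises only the three verbs and structured JSON with signal transparency, so the field names and JSON shape here are invented, not Mnemon's actual schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RecallResult is a guess at the shape of recall output. The README
// only promises structured JSON with signal transparency; every field
// name here is an assumption, not Mnemon's actual schema.
type RecallResult struct {
	Content    string  `json:"content"`
	Importance float64 `json:"importance"` // why this memory ranked where it did
	Source     string  `json:"source"`     // e.g. which graph surfaced it
}

func main() {
	// A hypothetical recall response; contrast with raw SELECT rows,
	// which carry no ranking signals at all.
	raw := `[{"content":"user prefers tabs","importance":0.92,"source":"semantic"}]`

	var results []RecallResult
	if err := json.Unmarshal([]byte(raw), &results); err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Printf("%.2f [%s] %s\n", r.Importance, r.Source, r.Content)
	}
}
```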
@@ -145,6 +148,7 @@ You don't run mnemon commands yourself. The agent does — driven by hooks and g
 - **LLM-supervised** — the host LLM decides what to remember, update, and forget; no embedded LLM, no API keys
 - **Hook-based integration** — four lifecycle hooks: Prime (load guide), Remind (recall & remember), Nudge (remember), and Compact (save before compression)
 - **Four-graph architecture** — temporal, entity, causal, and semantic edges, not just vector similarity
+- **Intent-native protocol** — three primitives (`remember`, `link`, `recall`) map to the LLM's cognitive vocabulary, not database syntax; structured JSON output with signal transparency
 - **Intent-aware recall** — graph traversal + optional vector search (RRF fusion), enabled by default for all queries
 - **Built-in deduplication** — `remember` auto-detects duplicates and conflicts; skips or auto-replaces
 - **Retention lifecycle** — importance decay, access-count boosting, and garbage collection
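The intent-aware recall bullet names RRF fusion without showing it. Below is a hedged sketch of standard Reciprocal Rank Fusion merging a graph-traversal ranking with a vector-search ranking; k = 60 is the constant from the original RRF paper, and Mnemon's actual constants and tie-breaking are assumptions, not documented here.

```go
package main

import (
	"fmt"
	"sort"
)

// rrf merges ranked result lists with Reciprocal Rank Fusion:
// score(d) = sum over lists of 1 / (k + rank(d)), with rank starting at 1.
// k = 60 follows the original RRF paper; Mnemon's real parameters are unknown.
func rrf(k float64, lists ...[]string) []string {
	scores := map[string]float64{}
	for _, list := range lists {
		for rank, id := range list {
			scores[id] += 1.0 / (k + float64(rank+1))
		}
	}
	ids := make([]string, 0, len(scores))
	for id := range scores {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(i, j int) bool { return scores[ids[i]] > scores[ids[j]] })
	return ids
}

func main() {
	graphHits := []string{"m42", "m7", "m13"}   // graph-traversal ranking
	vectorHits := []string{"m7", "m99", "m42"}  // vector-search ranking
	fmt.Println(rrf(60, graphHits, vectorHits)) // m7 and m42 appear in both, so they rise
}
```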
@@ -168,6 +172,8 @@ All your local agentic AIs — across sessions and frameworks — sharing one po
 
 The foundation is in place: a single `~/.mnemon` database that any agent can read and write. Claude Code's hook integration is the reference implementation; OpenClaw uses a plugin-based approach; NanoClaw integrates via container skills and volume mounts. The same pattern can be replicated for any LLM CLI that supports event hooks or system prompts.
 
+The longer-term direction is a **memory gateway**: protocol decoupled from storage engine. The current SQLite backend is the first adapter; the protocol surface (`remember / link / recall`) can sit on top of PostgreSQL, Neo4j, or any graph database. Agent-side optimization (when to recall, what to remember) and storage-side optimization (indexing, graph algorithms) evolve independently. See [Future Direction](docs/design/08-decisions.md#82-future-direction) for details.
+
 ## FAQ
 
 **Do different sessions share memory?**
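The gateway paragraph describes an interface-plus-adapters split. A minimal Go sketch of that shape: the `Store` interface, the `Fact` type, and all signatures are invented for illustration; only the three primitives and the SQLite-first, swappable-backend idea come from the README.

```go
// Package memory sketches the gateway idea: the intent-native protocol
// as a Go interface, with storage engines as interchangeable adapters.
// All names and signatures here are illustrative, not Mnemon's API.
package memory

import "context"

// Fact is a structured result rather than a raw database row.
type Fact struct {
	ID         string  `json:"id"`
	Content    string  `json:"content"`
	Importance float64 `json:"importance"`
}

// Store is the protocol surface: the three primitives, nothing else.
// Agent-side logic programs against this, never against SQL or Cypher.
type Store interface {
	Remember(ctx context.Context, content string) (id string, err error)
	Link(ctx context.Context, from, to, relation string) error
	Recall(ctx context.Context, query string) ([]Fact, error)
}

// sqliteStore would be the first adapter; a PostgreSQL or Neo4j adapter
// could replace it without touching anything agent-facing.
type sqliteStore struct{ dsn string }
```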
@@ -228,10 +234,11 @@ make help # show all targets
 
 ## References
 
-Mnemon combines the paradigm of one paper with the methodology of another. See [Theoretical Foundations](docs/DESIGN.md#24-theoretical-foundations) for details.
+Mnemon combines the paradigm of one paper with the methodology of another, grounded in the structural insight that graph memory is isomorphic to LLM attention. See [Theoretical Foundations](docs/DESIGN.md#25-theoretical-foundations) for details.
 
 - **RLM** — Zhang, Kraska & Khattab. [Recursive Language Models](https://arxiv.org/abs/2512.24601). 2025. Establishes the paradigm: LLMs are more effective as orchestrators of external environments than as direct data processors.
 - **MAGMA** — Zou et al. [A Multi-Graph based Agentic Memory Architecture](https://arxiv.org/abs/2601.03236). 2025. Provides the methodology: four-graph model (temporal, entity, causal, semantic) with intent-adaptive retrieval.
+- **Graph-LLM Structural Insight** — Joshi & Zhu. [Building Powerful GNNs from Transformers](https://arxiv.org/abs/2506.22084). 2025; and the Graph-based Agent Memory survey (Chang Yang et al., 2026). Confirms that LLM attention is computationally equivalent to GNN operations — graph memory is a structural match, not an engineering convenience.
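The structural claim in the new bullet can be stated compactly. A sketch in generic notation, not either paper's exact formulation:

```latex
% Self-attention over tokens 1..n, and attention-weighted message
% passing on a graph G = (V, E):
\begin{align*}
\text{self-attention:} \quad
  h_i' &= \sum_{j=1}^{n} \alpha_{ij}\, W_V h_j,
  \qquad \alpha_{ij} = \operatorname{softmax}_j\!\left(
    \frac{(W_Q h_i)^{\top} W_K h_j}{\sqrt{d}} \right) \\
\text{message passing:} \quad
  h_i' &= \sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, W_V h_j
\end{align*}
% When E is complete, \mathcal{N}(i) = V and the two updates coincide:
% a transformer layer is an attention-GNN layer on a fully connected
% token graph, and sparsifying the edge set recovers GAT-style layers.
```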