
Commit ce7ab5d

Grivn and claude committed
docs: update READMEs with protocol positioning, align EN/ZH structure
- Add File Injection row (Claude Code Memory) to pattern comparison table
- Add protocol gap paragraph (MCP/ODBC/JDBC analogy, three primitives)
- Add intent-native protocol bullet to Features
- Add memory gateway paragraph to Vision
- Add Graph-LLM structural insight to References
- Align ZH with EN: add Go Report Card badge, Claude Max/Pro callout, move "from source" under Install, update OpenClaw to --yes with link, add NanoClaw skill location

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent d4fec9e commit ce7ab5d

2 files changed

Lines changed: 30 additions & 14 deletions


README.md

Lines changed: 8 additions & 1 deletion
@@ -28,9 +28,12 @@ Most memory tools embed their own LLM inside the pipeline. Mnemon takes a differ
 | Pattern | LLM Role | Representative |
 |---|---|---|
 | **LLM-Embedded** | Executor inside the pipeline | Mem0, Letta |
+| **File Injection** | None — reads file at session start | Claude Code Memory |
 | **MCP Server** | Tool provider via MCP protocol | claude-mem |
 | **LLM-Supervised** | External supervisor of a standalone binary | **Mnemon** |
 
+Mnemon also addresses a gap in the protocol stack. MCP standardizes how LLMs discover and invoke tools; ODBC/JDBC standardize how applications access databases. But there is no protocol for how LLMs interact with databases using memory semantics. Mnemon's three primitives — `remember`, `link`, `recall` — form an intent-native protocol: command names map to the LLM's cognitive vocabulary (`remember`, not INSERT; `recall`, not SELECT), and output is structured JSON with signal transparency rather than raw database rows.
+
 <p align="center">
   <img src="docs/diagrams/llm-supervised-concept.jpg" width="720" alt="LLM-Supervised Architecture — three patterns compared, with detailed Mnemon implementation showing hooks, brain/organ split, and sub-agent delegation" />
   <br />
@@ -145,6 +148,7 @@ You don't run mnemon commands yourself. The agent does — driven by hooks and g
 - **LLM-supervised** — the host LLM decides what to remember, update, and forget; no embedded LLM, no API keys
 - **Hook-based integration** — four lifecycle hooks: Prime (load guide), Remind (recall & remember), Nudge (remember), and Compact (save before compression)
 - **Four-graph architecture** — temporal, entity, causal, and semantic edges, not just vector similarity
+- **Intent-native protocol** — three primitives (`remember`, `link`, `recall`) map to the LLM's cognitive vocabulary, not database syntax; structured JSON output with signal transparency
 - **Intent-aware recall** — graph traversal + optional vector search (RRF fusion), enabled by default for all queries
 - **Built-in deduplication** — `remember` auto-detects duplicates and conflicts; skips or auto-replaces
 - **Retention lifecycle** — importance decay, access-count boosting, and garbage collection
@@ -168,6 +172,8 @@ All your local agentic AIs — across sessions and frameworks — sharing one po
 
 The foundation is in place: a single `~/.mnemon` database that any agent can read and write. Claude Code's hook integration is the reference implementation; OpenClaw uses a plugin-based approach; NanoClaw integrates via container skills and volume mounts. The same pattern can be replicated for any LLM CLI that supports event hooks or system prompts.
 
+The longer-term direction is a **memory gateway**: protocol decoupled from storage engine. The current SQLite backend is the first adapter; the protocol surface (`remember / link / recall`) can sit on top of PostgreSQL, Neo4j, or any graph database. Agent-side optimization (when to recall, what to remember) and storage-side optimization (indexing, graph algorithms) evolve independently. See [Future Direction](docs/design/08-decisions.md#82-future-direction) for details.
+
 ## FAQ
 
 **Do different sessions share memory?**
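The memory-gateway paragraph in the Vision hunk describes a protocol surface decoupled from the storage engine. The adapter pattern it implies can be sketched in Go; the interface and method names here (`Store`, `Remember`, `Link`, `Recall`) are assumptions mirroring the three primitives, not Mnemon's actual internal API, and the substring-match `Recall` merely stands in for real graph traversal and vector search:

```go
package main

import (
	"fmt"
	"strings"
)

// Store is a hypothetical protocol surface: the three primitives as a
// Go interface, with storage engines as swappable adapters.
type Store interface {
	Remember(content string) (id int, err error)
	Link(from, to int, kind string) error
	Recall(query string) ([]string, error)
}

// memStore is an in-memory stand-in for the first adapter (SQLite in
// the README). A PostgreSQL or Neo4j adapter would satisfy the same
// interface, letting agent-side and storage-side code evolve independently.
type memStore struct {
	items []string
	edges map[[2]int]string // (from, to) -> edge kind, e.g. "causal"
}

func newMemStore() *memStore {
	return &memStore{edges: map[[2]int]string{}}
}

func (s *memStore) Remember(content string) (int, error) {
	s.items = append(s.items, content)
	return len(s.items) - 1, nil
}

func (s *memStore) Link(from, to int, kind string) error {
	if from < 0 || from >= len(s.items) || to < 0 || to >= len(s.items) {
		return fmt.Errorf("unknown memory id")
	}
	s.edges[[2]int{from, to}] = kind
	return nil
}

func (s *memStore) Recall(query string) ([]string, error) {
	// Naive substring match stands in for graph traversal + vector search.
	var hits []string
	for _, item := range s.items {
		if strings.Contains(item, query) {
			hits = append(hits, item)
		}
	}
	return hits, nil
}

func main() {
	var s Store = newMemStore()
	a, _ := s.Remember("auth uses opaque tokens")
	b, _ := s.Remember("JWT path deprecated")
	_ = s.Link(b, a, "causal")
	hits, _ := s.Recall("tokens")
	fmt.Println(hits) // [auth uses opaque tokens]
}
```

The design choice the paragraph argues for is visible here: callers program against the intent-level interface, so swapping the backing engine never touches agent-side logic.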
@@ -228,10 +234,11 @@ make help # show all targets
 
 ## References
 
-Mnemon combines the paradigm of one paper with the methodology of another. See [Theoretical Foundations](docs/DESIGN.md#24-theoretical-foundations) for details.
+Mnemon combines the paradigm of one paper with the methodology of another, grounded in the structural insight that graph memory is isomorphic to LLM attention. See [Theoretical Foundations](docs/DESIGN.md#25-theoretical-foundations) for details.
 
 - **RLM** — Zhang, Kraska & Khattab. [Recursive Language Models](https://arxiv.org/abs/2512.24601). 2025. Establishes the paradigm: LLMs are more effective as orchestrators of external environments than as direct data processors.
 - **MAGMA** — Zou et al. [A Multi-Graph based Agentic Memory Architecture](https://arxiv.org/abs/2601.03236). 2025. Provides the methodology: four-graph model (temporal, entity, causal, semantic) with intent-adaptive retrieval.
+- **Graph-LLM Structural Insight** — Joshi & Zhu. [Building Powerful GNNs from Transformers](https://arxiv.org/abs/2506.22084). 2025; and the Graph-based Agent Memory survey (Chang Yang et al., 2026). Confirms that LLM attention is computationally equivalent to GNN operations — graph memory is a structural match, not an engineering convenience.
 
 ## License
 
docs/zh/README.md

Lines changed: 22 additions & 13 deletions
@@ -10,6 +10,7 @@
 
 [![Go 1.24+](https://img.shields.io/badge/Go-1.24%2B-00ADD8?logo=go&logoColor=white)](https://go.dev/)
 [![CI](https://github.com/mnemon-dev/mnemon/actions/workflows/ci.yml/badge.svg)](https://github.com/mnemon-dev/mnemon/actions/workflows/ci.yml)
+[![Go Report Card](https://goreportcard.com/badge/github.com/mnemon-dev/mnemon)](https://goreportcard.com/report/github.com/mnemon-dev/mnemon)
 [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](../../LICENSE)
 
 ---
@@ -18,16 +19,21 @@ LLM agents forget everything between sessions. Context compaction loses key deci
 
 Mnemon gives your LLM persistent, cross-session memory — a four-graph knowledge store, intent-aware retrieval, importance decay, and automatic deduplication. A single binary, zero API keys, one command to deploy.
 
+> **On a Claude Max / Pro subscription?** Mnemon runs entirely on your existing subscription — no extra API keys needed. Your LLM subscription *is* the intelligence layer. Two commands and you're done.
+
 ### Why Mnemon?
 
 Most memory tools embed their own LLM inside the pipeline. Mnemon takes a different route: **your host LLM is the supervisor.** The binary handles deterministic computation (storage, graph indexing, search, decay); the LLM makes the judgment calls (what to remember, how to link, when to forget). No middleman, no extra inference overhead.
 
 | Pattern | LLM Role | Representative |
 |---|---|---|
 | **LLM-Embedded** | Executor inside the pipeline | Mem0, Letta |
+| **File Injection** | None — reads file at session start | Claude Code Memory |
 | **MCP Server** | Tool provider via MCP protocol | claude-mem |
 | **LLM-Supervised** | External supervisor of a standalone binary | **Mnemon** |
 
+Mnemon also fills a gap in the protocol stack. MCP standardizes how LLMs discover and invoke tools; ODBC/JDBC standardize how applications access databases. But there is no protocol for how LLMs interact with databases using memory semantics. Mnemon's three primitives — `remember`, `link`, `recall` — form an intent-native protocol: command names map to the LLM's cognitive vocabulary (`remember`, not INSERT; `recall`, not SELECT), and output is structured JSON with signal transparency rather than raw database rows.
+
 <p align="center">
   <img src="../diagrams/llm-supervised-concept.jpg" width="720" alt="LLM-Supervised Architecture — three patterns compared, with Mnemon implementation details: hooks, brain/organ split, sub-agent delegation" />
   <br />
@@ -60,6 +66,13 @@ brew install mnemon-dev/tap/mnemon
 go install github.com/mnemon-dev/mnemon@latest
 ```
 
+**Build from source**
+
+```bash
+git clone https://github.com/mnemon-dev/mnemon.git && cd mnemon
+make install
+```
+
 **Verify the install**
 
 ```bash
@@ -74,16 +87,13 @@ mnemon setup
 
 `mnemon setup` auto-detects Claude Code and interactively deploys the skill files, hooks, and behavioral guidance. Start a new session — memory just works.
 
-### OpenClaw
+### [OpenClaw](https://github.com/openclaw/openclaw)
 
 ```bash
-go install github.com/mnemon-dev/mnemon@latest
-mnemon setup --target openclaw
+mnemon setup --target openclaw --yes
 ```
 
-Deploys the skill files and behavioral guidance to `~/.mnemon/prompt/guide.md`. Since OpenClaw's hook integration is not yet automated, configure it manually:
-
-> Read `~/.mnemon/prompt/guide.md` and configure yourself according to its recall/remember workflow.
+One command deploys the skill files, hooks, plugin, and behavioral guidance to `~/.openclaw/`. Restart the OpenClaw gateway to activate.
 
 ### [NanoClaw](https://github.com/qwibitai/nanoclaw)
 
@@ -93,12 +103,7 @@ NanoClaw runs agents inside Linux containers. Integrate via the `/add-mnemon` sk
 2. Run `/add-mnemon` in your NanoClaw project — Claude Code will modify the Dockerfile, add container skills, and configure volume mounts
 3. Each WhatsApp group gets its own memory store, with optional global shared memory (read-only)
 
-### Build from source
-
-```bash
-git clone https://github.com/mnemon-dev/mnemon.git && cd mnemon
-make install && mnemon setup
-```
+The skill file lives in the NanoClaw repository under `.claude/skills/add-mnemon/`.
 
 ### Uninstall
 
@@ -143,6 +148,7 @@ mnemon setup --eject
 - **LLM-supervised** — the host LLM actively decides what to remember, update, and forget; no embedded LLM, no API keys
 - **Hook-based integration** — four lifecycle hooks: Prime (load guide), Remind (recall & remember), Nudge (remember), and Compact (save before compression)
 - **Four-graph architecture** — temporal, entity, causal, and semantic edges, not just vector similarity
+- **Intent-native protocol** — three primitives (`remember`, `link`, `recall`) map to the LLM's cognitive vocabulary rather than database syntax; structured JSON output with signal transparency
 - **Intent-aware recall** — graph traversal + optional vector search (RRF fusion), on by default for all queries
 - **Built-in deduplication** — `remember` auto-detects duplicates and conflicts; skips or auto-replaces
 - **Retention lifecycle** — importance decay, access-count boosting, immunity rules, and garbage collection
@@ -166,6 +172,8 @@ mnemon setup --eject
 
 The foundation is in place: one `~/.mnemon` database that any agent can read and write. Claude Code's hook integration is the reference implementation; OpenClaw integrates via a plugin; NanoClaw via container skills and volume mounts. The same pattern can be replicated for any LLM CLI that supports event hooks or system prompts.
 
+The longer-term direction is a **memory gateway**: the protocol decoupled from the storage engine. The current SQLite backend is the first adapter; the protocol surface (`remember / link / recall`) can run on top of PostgreSQL, Neo4j, or any graph database. Agent-side optimization (when to recall, what to remember) and storage-side optimization (indexing, graph algorithms) evolve independently. See [Future Direction](design/08-decisions.md#82-未来方向) for details.
+
 ## FAQ
 
 **Do different sessions share memory?**
@@ -225,10 +233,11 @@ make help # show all targets
 
 ## References
 
-Mnemon combines the paradigm of one paper with the methodology of another. See [Theoretical Foundations](DESIGN.md#24-理论基础) for details.
+Mnemon combines the paradigm of one paper with the methodology of another, grounded in the structural insight that graph memory is isomorphic to LLM attention. See [Theoretical Foundations](DESIGN.md#25-理论基础) for details.
 
 - **RLM** — Zhang, Kraska & Khattab. [Recursive Language Models](https://arxiv.org/abs/2512.24601). 2025. Establishes the paradigm: LLMs are more effective as orchestrators of external environments than as direct data processors.
 - **MAGMA** — Zou et al. [A Multi-Graph based Agentic Memory Architecture](https://arxiv.org/abs/2601.03236). 2025. Provides the methodology: a four-graph model (temporal, entity, causal, semantic) with intent-adaptive retrieval.
+- **Graph-LLM Structural Insight** — Joshi & Zhu. [Building Powerful GNNs from Transformers](https://arxiv.org/abs/2506.22084). 2025; and the Graph-based Agent Memory survey (Chang Yang et al., 2026). Confirms that LLM attention is computationally equivalent to GNN operations — graph memory is a structural match, not an engineering convenience.
 
 ## License
 