
Commit 73aa3f1

Grivn and claude committed
Emphasize LLM-supervised positioning in README
Lead with the technical differentiator (comparison table + architecture diagram) before the value proposition. Add feature summary to tagline.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent 01c40f7 commit 73aa3f1

2 files changed (+23 −27 lines)


README.md

Lines changed: 12 additions & 14 deletions
```diff
@@ -4,7 +4,7 @@
 
 # Mnemon
 
-**Persistent memory for LLM agents.**
+**LLM-supervised persistent memory for AI agents.**
 
 [![Go 1.24+](https://img.shields.io/badge/Go-1.24%2B-00ADD8?logo=go&logoColor=white)](https://go.dev/)
 [![CI](https://github.com/mnemon-dev/mnemon/actions/workflows/ci.yml/badge.svg)](https://github.com/mnemon-dev/mnemon/actions/workflows/ci.yml)
@@ -15,34 +15,32 @@
 
 LLM agents forget everything between sessions. Context compaction drops critical decisions, cross-session knowledge vanishes, and long conversations push early information out of the window.
 
-Mnemon gives your agent persistent, cross-session memory — with a single binary and one setup command.
-
-<p align="center">
-<img src="docs/diagrams/10-knowledge-graph.jpg" width="720" alt="Knowledge Graph — 87 insights connected by temporal, entity, semantic, and causal edges" />
-<br />
-<sub>A real knowledge graph built by Mnemon — 87 insights, 2150 edges across four graph types.</sub>
-</p>
+Mnemon gives your agent persistent, cross-session memory — a four-graph knowledge store with intent-aware recall, importance decay, and automatic deduplication. Single binary, zero API keys, one setup command.
 
 ### Why Mnemon?
 
-Memory has a **compound interest effect** — the longer it accumulates, the greater its value. LLM engines iterate constantly, skill files cost nearly nothing to write, but memory is a private asset that grows with the user. It is the only component in the agent ecosystem worth deep investment.
-
-Mnemon is built on one core belief: **the LLM itself is the best orchestrator.** Rather than embedding a small LLM inside the pipeline, Mnemon lets your host LLM — the one already holding full conversation context — act as supervisor. The binary is the organ (deterministic storage, graph indexing, search, decay); the LLM is the brain (decides what to remember, how to link, when to forget). The skill file is the textbook that teaches the protocol.
-
-This means: **memory management logic moves from prompt to code — deterministic, testable, portable.** The same binary + skill can run on Claude Code, Cursor, or any LLM CLI that reads markdown.
+Most memory tools embed their own LLM inside the pipeline. Mnemon takes a different approach: **your host LLM is the supervisor.** The binary handles deterministic computation (storage, graph indexing, search, decay); the LLM makes judgment calls (what to remember, how to link, when to forget). No middleman, no extra inference cost.
 
 | Pattern | LLM Role | Representative |
 |---|---|---|
 | **LLM-Embedded** | Executor inside the pipeline | Mem0, Letta |
 | **MCP Server** | Tool provider via MCP protocol | claude-mem |
-| **LLM-Supervised** | External supervisor of a standalone binary | Mnemon |
+| **LLM-Supervised** | External supervisor of a standalone binary | **Mnemon** |
 
 <p align="center">
 <img src="docs/diagrams/llm-supervised-concept.jpg" width="720" alt="LLM-Supervised Architecture — three patterns compared, with detailed Mnemon implementation showing hooks, brain/organ split, and sub-agent delegation" />
 <br />
 <sub>The LLM-Supervised pattern: hooks drive the lifecycle, the host LLM makes judgment calls, the binary handles deterministic computation.</sub>
 </p>
 
+Memory has a **compound interest effect** — the longer it accumulates, the greater its value. LLM engines iterate constantly, skill files cost nearly nothing to write, but memory is a private asset that grows with the user. It is the only component in the agent ecosystem worth deep investment.
+
+<p align="center">
+<img src="docs/diagrams/10-knowledge-graph.jpg" width="720" alt="Knowledge Graph — 87 insights connected by temporal, entity, semantic, and causal edges" />
+<br />
+<sub>A real knowledge graph built by Mnemon — 87 insights, 2150 edges across four graph types.</sub>
+</p>
+
 See [Design & Architecture](docs/DESIGN.md) for details.
 
 ## Quick Start
```

docs/zh/README.md

Lines changed: 11 additions & 13 deletions
```diff
@@ -14,34 +14,32 @@
 
 LLM agents forget everything between sessions. Context compaction drops critical decisions, cross-session knowledge vanishes, and long conversations push early information out of the window.
 
-Mnemon gives your LLM persistent, cross-session memory — with just one Go binary and one setup command.
-
-<p align="center">
-<img src="../diagrams/10-knowledge-graph.jpg" width="720" alt="Knowledge graph — 87 insights connected by temporal, entity, semantic, and causal edges" />
-<br />
-<sub>A real knowledge graph built by Mnemon — 87 insights, 2150 edges across four graph types.</sub>
-</p>
+Mnemon gives your LLM persistent, cross-session memory — a four-graph knowledge store with intent-aware recall, importance decay, and automatic deduplication. A single binary, zero API keys, one command to deploy.
 
 ### Why Mnemon?
 
-Memory has a **compound interest effect** — the longer it accumulates, the greater its value. LLM engines iterate constantly, skill files cost nearly nothing to write, but memory is a private asset that grows with the user. It is the only component in the agent ecosystem worth deep investment.
-
-Mnemon is built on one core belief: **the LLM itself is the best orchestrator.** Rather than embedding a small LLM inside the pipeline, let your host LLM — the one that already holds full conversation context — act as supervisor. The binary is the organ (deterministic storage, graph indexing, search, decay); the LLM is the brain (decides what to remember, how to link, when to forget). The skill file is the textbook that teaches the protocol.
-
-This means: **memory-management logic moves from prompt to code — deterministic, testable, portable.** The same binary + skill file can run on Claude Code, Cursor, or any LLM CLI that reads markdown.
+Most memory tools embed their own LLM inside the pipeline. Mnemon takes a different route: **your host LLM is the supervisor.** The binary handles deterministic computation (storage, graph indexing, search, decay); the LLM makes judgment calls (what to remember, how to link, when to forget). No middleman, no extra inference cost.
 
 | Pattern | LLM Role | Representative |
 |---|---|---|
 | **LLM-Embedded** | Executor inside the pipeline | Mem0, Letta |
 | **MCP Server** | Provides tools via the MCP protocol | claude-mem |
-| **LLM-Supervised** | External supervisor of a standalone binary | Mnemon |
+| **LLM-Supervised** | External supervisor of a standalone binary | **Mnemon** |
 
 <p align="center">
 <img src="../diagrams/llm-supervised-concept.jpg" width="720" alt="LLM-Supervised architecture — three patterns compared, plus Mnemon implementation details: hooks, brain/organ split, sub-agent delegation" />
 <br />
 <sub>The LLM-Supervised pattern: hooks drive the lifecycle, the host LLM makes judgment calls, the binary handles deterministic computation.</sub>
 </p>
 
+Memory has a **compound interest effect** — the longer it accumulates, the greater its value. LLM engines iterate constantly, skill files cost nearly nothing to write, but memory is a private asset that grows with the user. It is the only component in the agent ecosystem worth deep investment.
+
+<p align="center">
+<img src="../diagrams/10-knowledge-graph.jpg" width="720" alt="Knowledge graph — 87 insights connected by temporal, entity, semantic, and causal edges" />
+<br />
+<sub>A real knowledge graph built by Mnemon — 87 insights, 2150 edges across four graph types.</sub>
+</p>
+
 See [Design & Architecture](DESIGN.md) for details.
 
 ## Quick Start
```

(The content above is the docs/zh/README.md diff, translated from Chinese; it mirrors the English README change.)
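Both READMEs' graph captions name the same four graph types: temporal, entity, semantic, and causal. A minimal sketch of such a data model follows — the four type names come from the captions, but every Go shape and function below is an illustrative assumption, not Mnemon's actual schema:

```go
package main

import "fmt"

// EdgeType enumerates the four graph types named in the README captions.
type EdgeType int

const (
	Temporal EdgeType = iota // "happened before/after"
	Entity                   // shared entity mention
	Semantic                 // similar meaning
	Causal                   // "led to"
)

// Edge links two insights in one of the four graphs.
type Edge struct {
	From, To string
	Type     EdgeType
}

// countByType tallies edges per graph type — the kind of deterministic
// bookkeeping the binary (the "organ") would own in this architecture,
// while the host LLM decides which edges are worth creating.
func countByType(edges []Edge) map[EdgeType]int {
	counts := make(map[EdgeType]int)
	for _, e := range edges {
		counts[e.Type]++
	}
	return counts
}

func main() {
	edges := []Edge{
		{"insight-1", "insight-2", Temporal},
		{"insight-1", "insight-3", Entity},
		{"insight-2", "insight-3", Semantic},
		{"insight-2", "insight-3", Causal},
	}
	fmt.Println(countByType(edges)[Causal]) // 1
}
```

Keeping the four graphs as typed edges over one insight set, rather than four separate stores, would let a single traversal mix edge types — consistent with the caption's "2150 edges across four graph types" over one set of 87 insights.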
