Fix Magma import/call extraction, resolve file-path imports in linker
- Add field('path', ...) to Magma load_statement grammar rule so
parse_generic_imports() finds the import path via field lookup
instead of the broken text fallback (which extracted only one import per file)
- Fix passImports() to resolve file-path imports (e.g. "utils.mag",
"lib/helpers.h") via fqn.ModuleQN() when raw path doesn't match
any node QN — general fix benefiting any file-path-based import
- Add TestMagmaImport_Regression and TestMagmaCall_Regression
- Update language count 59 → 63 in README, docs/index.html, marketing
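
The passImports() fix can be illustrated with a minimal sketch. The helper names here (`moduleQN`, `resolveImport`, the `nodes` map) are hypothetical stand-ins for the real code; `moduleQN` only mimics what `fqn.ModuleQN()` is described as doing — turning a file-path import into a module-style qualified name when the raw path matches no node QN:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// moduleQN is a hypothetical stand-in for fqn.ModuleQN: it converts a
// file-path import like "lib/helpers.h" into a module-style qualified
// name ("lib.helpers") that can be matched against node QNs.
func moduleQN(path string) string {
	trimmed := strings.TrimSuffix(path, filepath.Ext(path))
	return strings.ReplaceAll(filepath.ToSlash(trimmed), "/", ".")
}

// resolveImport mirrors the passImports() fix: try the raw import path
// against known node QNs first, then fall back to the module-QN form.
func resolveImport(raw string, nodes map[string]bool) (string, bool) {
	if nodes[raw] {
		return raw, true
	}
	if qn := moduleQN(raw); nodes[qn] {
		return qn, true
	}
	return "", false
}

func main() {
	nodes := map[string]bool{"utils": true, "lib.helpers": true}
	fmt.Println(resolveImport("utils.mag", nodes))   // utils true
	fmt.Println(resolveImport("lib/helpers.h", nodes)) // lib.helpers true
}
```

Because the fallback keys off the path shape rather than anything Magma-specific, the same resolution benefits any language whose imports are file paths.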
- **Architecture overview**: `get_architecture` returns languages, packages, entry points, routes, hotspots, boundaries, layers, and clusters in a single call — instant codebase orientation
- **Architecture Decision Records**: `manage_adr` persists architectural decisions (PURPOSE, STACK, ARCHITECTURE, PATTERNS, TRADEOFFS, PHILOSOPHY) across sessions with section filtering and validation
- **Louvain community detection**: Discovers hidden functional modules across packages by clustering CALLS, HTTP_CALLS, and ASYNC_CALLS edges
@@ -57,7 +57,7 @@ Claude Code formats and explains the results.
**Why no built-in LLM?** Other code graph tools embed an LLM to translate natural language into graph queries. This means extra API keys, extra cost per query, and another model to configure. With MCP, the AI assistant you're already talking to *is* the query translator — no duplication needed.
-**Token efficiency**: Compared to having an AI agent grep through your codebase file by file, graph queries return precise results in a single tool call. In benchmarks across 59 real-world repos (78 to 49K nodes), five structural queries consumed ~3,400 tokens via codebase-memory-mcp versus ~412,000 tokens via file-by-file exploration — a **99.2% reduction**. All 59 supported languages use the same efficient graph backend.
+**Token efficiency**: Compared to having an AI agent grep through your codebase file by file, graph queries return precise results in a single tool call. In benchmarks across 63 real-world repos (78 to 49K nodes), five structural queries consumed ~3,400 tokens via codebase-memory-mcp versus ~412,000 tokens via file-by-file exploration — a **99.2% reduction**. All 63 supported languages use the same efficient graph backend.
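
The 99.2% figure follows directly from the two token counts quoted in the paragraph above; a quick arithmetic check (variable names are illustrative, not from the codebase):

```go
package main

import "fmt"

func main() {
	// Token counts from the README benchmark: five structural queries
	// via the MCP graph vs. file-by-file exploration.
	mcpTokens := 3400.0
	grepTokens := 412000.0
	reduction := (1 - mcpTokens/grepTokens) * 100
	fmt.Printf("%.1f%% reduction\n", reduction) // 99.2% reduction
}
```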
## Performance
@@ -663,7 +663,7 @@ make install # go install
## Language Benchmark
-59 languages supported. Benchmarked against 59 real open-source repositories (78 to 49K nodes). 12 standardized questions per language. Grading: HIGH (1.0) / MEDIUM (0.5) / LOW (0.1). Overall: **76%** average MCP score across all languages (97% for explorer-based agents).
+63 languages supported. Benchmarked against 63 real open-source repositories (78 to 49K nodes). 12 standardized questions per language. Grading: HIGH (1.0) / MEDIUM (0.5) / LOW (0.1). Overall: **76%** average MCP score across all languages (97% for explorer-based agents).
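
The grading scheme maps each of the 12 questions to a weight (HIGH 1.0 / MEDIUM 0.5 / LOW 0.1). A minimal sketch of how a per-language score could be aggregated, assuming a plain mean over question grades (the aggregation method and the sample grades below are assumptions, not the benchmark's actual data):

```go
package main

import "fmt"

// score maps benchmark grades to their weights: HIGH 1.0 / MEDIUM 0.5 / LOW 0.1.
var score = map[string]float64{"HIGH": 1.0, "MEDIUM": 0.5, "LOW": 0.1}

// average computes the mean weight across a language's graded answers.
// In the real benchmark each language has 12 questions; four illustrative
// grades are used here.
func average(grades []string) float64 {
	total := 0.0
	for _, g := range grades {
		total += score[g]
	}
	return total / float64(len(grades))
}

func main() {
	grades := []string{"HIGH", "HIGH", "MEDIUM", "LOW"}
	fmt.Printf("%.3f\n", average(grades)) // (1.0+1.0+0.5+0.1)/4 = 0.650
}
```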
| Tier | Score | Languages |
|------|-------|-----------|
@@ -682,8 +682,8 @@ See [`BENCHMARK.md`](BENCHMARK.md) for the full 35-language benchmark with per-q
cmd/codebase-memory-mcp/ Entry point (MCP stdio server + CLI mode + install/update commands)
<title>codebase-memory-mcp — Code Knowledge Graph for AI Assistants</title>
-<meta name="description" content="MCP server that indexes codebases into a persistent knowledge graph. 59 languages, 120x fewer tokens, single Go binary. Works with Claude Code, Codex CLI, Cursor, Windsurf, Gemini CLI, VS Code, Zed.">
+<meta name="description" content="MCP server that indexes codebases into a persistent knowledge graph. 63 languages, 120x fewer tokens, single Go binary. Works with Claude Code, Codex CLI, Cursor, Windsurf, Gemini CLI, VS Code, Zed.">