feat: add HippocampalEvolveEngine — associative memory evolution #2
Open
allthingssecurity wants to merge 1 commit into A-EVO-Lab:main from
Conversation
… memory evolution

Adds a new EvolutionEngine implementation based on hippocampal memory consolidation from the claude_hippocampus project. Instead of LLM-guided mutation, it uses:

- Spreading activation (CA3) for associative pattern recall
- Hebbian learning for co-occurrence graph construction
- Automatic skill crystallization from recurring success patterns
- Exponential temporal decay (23-day half-life) for adaptive relevance

No LLM calls during evolution cycles — uses pure graph algorithms.

Reference: https://github.com/allthingssecurity/claude_hippocampus
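The Hebbian co-occurrence and decay mechanics described above can be sketched in a few lines. This is a minimal illustration, not the PR's implementation: the class name, learning rate, and threshold are assumptions; only the 23-day half-life comes from the description.

```python
import math
from collections import defaultdict

HALF_LIFE_DAYS = 23.0  # half-life cited in the PR description


def decay_weight(weight: float, age_days: float) -> float:
    """Exponential temporal decay: weight halves every HALF_LIFE_DAYS."""
    return weight * 0.5 ** (age_days / HALF_LIFE_DAYS)


class HebbianGraph:
    """Toy co-occurrence graph: edges strengthen when nodes fire together."""

    def __init__(self, learning_rate: float = 0.1):
        self.lr = learning_rate
        self.edges = defaultdict(float)  # (node_a, node_b) -> weight

    def observe(self, active_nodes):
        """Hebbian update: strengthen every pair co-active in one episode."""
        nodes = sorted(set(active_nodes))
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                self.edges[(a, b)] += self.lr

    def spread(self, seed, threshold: float = 0.05):
        """One hop of spreading activation from a seed node."""
        recalled = set()
        for (a, b), w in self.edges.items():
            if w >= threshold:
                if a == seed:
                    recalled.add(b)
                elif b == seed:
                    recalled.add(a)
        return recalled
```

After two co-activations of the same pair, the edge crosses the recall threshold, so spreading activation from either node recalls the other; `decay_weight(w, 23)` returns exactly `w / 2`.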
Contributor

Thanks for submitting the PR; the team will review it and get back to you.
Contributor

Hi @allthingssecurity, could you please provide the benchmark results on the proposed 4 benchmarks to validate the evolving algorithm's effectiveness?
ventr1c pushed a commit that referenced this pull request on Apr 24, 2026
… hermeticity

Codex Round 12 review flagged two remaining constructor-time env reads that my R12 "all 4 paths hermetic" claim missed:

Finding #1 (High): McpAtlasBenchmark.__init__ reads EVAL_USE_LITELLM via os.getenv (mcp_atlas.py:75). A bare McpAtlasBenchmark() test is not hermetic unless that env var is controlled.

Finding #2 (High): Terminal2Benchmark falls back to DEFAULT_CHALLENGES_DIR, which is derived from os.environ.get("TB2_CHALLENGES_DIR", ...) at module import time (terminal2.py:23). A bare Terminal2Benchmark() test depends on ambient env plus the checkout-relative default path.

Fixes:

1. Added a _clear_mcp_atlas_env(monkeypatch) helper that calls monkeypatch.delenv("EVAL_USE_LITELLM", raising=False). Both test_mcp_atlas_capability_runtime and test_mcp_atlas_constructor_does_not_mutate_capability now call it before fresh-importing the adapter.
2. Added a _make_tb2_challenges_tree(root) helper (trivial: just mkdir). Both test_terminal_capability_runtime and test_terminal_constructor_variance_does_not_mutate_capability now:
   - take tmp_path + monkeypatch fixtures
   - call monkeypatch.delenv("TB2_CHALLENGES_DIR", raising=False) before import
   - build a temp challenges tree under tmp_path
   - pass challenges_dir= explicitly to Terminal2Benchmark(...) so the "or DEFAULT_CHALLENGES_DIR" fallback never fires
3. Updated docstrings to cite the Codex R12 findings explicitly.

All 4 AC-1 constructor paths now control every __init__-time external state read (imports via sys.modules stubs, env vars via monkeypatch.delenv, filesystem paths via tmp_path + explicit kwargs).

Validation: 116 passed, 0 skipped in 0.65s (unchanged test count; all fixes internal to 4 tests).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
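The hermeticity hazard this commit fixes, a constructor-time os.getenv read leaking ambient state into tests, can be reproduced in miniature. Here `FakeBenchmark` and `clear_env` are hypothetical stand-ins for the commit's McpAtlasBenchmark and monkeypatch.delenv helper; only the EVAL_USE_LITELLM variable name comes from the commit message.

```python
import os


def clear_env(name: str):
    """Remove an env var if present, returning its prior value.

    Mirrors pytest's monkeypatch.delenv(name, raising=False), minus the
    automatic restore that the monkeypatch fixture provides on teardown.
    """
    return os.environ.pop(name, None)


class FakeBenchmark:
    """Stand-in for a benchmark whose __init__ reads ambient env state."""

    def __init__(self):
        # Constructor-time env read: the hermeticity hazard. Unless the
        # test controls EVAL_USE_LITELLM, this flag depends on whatever
        # happens to be in the shell that launched pytest.
        self.use_litellm = os.getenv("EVAL_USE_LITELLM", "0") == "1"
```

Clearing the variable before construction pins the flag to its default regardless of the ambient environment, which is exactly what the real tests do via monkeypatch.delenv before fresh-importing the adapter.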
Summary
Adds a new EvolutionEngine implementation inspired by hippocampal memory consolidation, from the claude_hippocampus project.

- New engine in agent_evolve/algorithms/hippocampal_evolve/ — uses spreading activation (CA3 pattern completion), Hebbian co-occurrence learning, and automatic skill crystallization instead of LLM-guided mutation
- Serializable state (get_state()/load_state()) for persistence across runs
- Registered in algorithms/__init__.py alongside existing engines

Biological mapping
Usage
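A minimal sketch of driving an engine that exposes the interface named in the Summary (step(), get_state(), load_state()). The class body below is an illustrative stand-in, not the PR's implementation: the step() signature, the internal graph, and the observation names are all assumptions.

```python
from itertools import combinations


class HippocampalEvolveEngine:
    """Toy engine with the PR's stated interface; internals are illustrative."""

    def __init__(self):
        self.graph = {}   # (a, b) -> Hebbian co-occurrence weight
        self.skills = []  # crystallized skill patterns (unused in this sketch)

    def step(self, observations):
        """One evolution cycle: record pairwise co-occurrences, no LLM calls."""
        for a, b in combinations(sorted(set(observations)), 2):
            self.graph[(a, b)] = self.graph.get((a, b), 0.0) + 1.0

    def get_state(self):
        """Snapshot for persistence across runs."""
        return {"graph": dict(self.graph), "skills": list(self.skills)}

    def load_state(self, state):
        """Restore a previously saved snapshot."""
        self.graph = dict(state["graph"])
        self.skills = list(state["skills"])


engine = HippocampalEvolveEngine()
engine.step(["tool_a", "tool_b"])        # one mock evolution cycle
state = engine.get_state()               # persist...

restored = HippocampalEvolveEngine()
restored.load_state(state)               # ...and resume in a fresh engine
```

The round-trip at the end is the usage pattern the Summary's persistence bullet implies: save state at the end of one run, reload it at the start of the next.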
Measured impact (from claude_hippocampus deployment)
Test plan
- HippocampalEvolveEngine imports correctly from agent_evolve.algorithms
- step() with mock workspace and observations to confirm the 5-phase cycle
- get_state()/load_state() round-trip
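The round-trip item in the test plan can be reduced to a serialize-then-reload check. The state keys and values below are hypothetical stand-ins; only the get_state()/load_state() names come from the PR.

```python
import json


def persist_roundtrip(state: dict) -> dict:
    """Serialize state to JSON and reload it, as saving to disk and
    restoring in a later run would (assumes JSON-serializable state)."""
    return json.loads(json.dumps(state))


# Hypothetical engine state: edge weights plus crystallized skill names.
state = {"edges": {"parse|retry": 1.2}, "skills": ["retry_on_timeout"]}
```

A passing round-trip (`persist_roundtrip(state) == state`) shows the state survives persistence unchanged; note that JSON requires string keys, so tuple-keyed graphs would need encoding first.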