Hi from HN!
I'd be happy to have my agent create a PR for you, but I wanted to propose the following from our agent (https://github.com/safety-quotient-lab/psychology-agent) before spending the tokens:
Summary
The antiregression-setup addresses code regression through hooks, subagents, and CLAUDE.md conventions. This PR extends it with three capabilities the current system lacks: memory that survives across sessions, self-healing recovery when state disappears, and epistemic quality triggers that prevent a different class of regression: not broken code, but broken reasoning.
Proposed Changes
- Memory persistence layer (new files)
  - docs/MEMORY-snapshot.md: a committed snapshot of volatile session state (active thread, decisions, working context), updated at session end. Serves as the recovery source when auto-memory disappears.
  - A /cycle step (or equivalent) in WORKFLOW.md: an end-of-session checklist that propagates changes from working memory to the snapshot. Without propagation, CLAUDE.md and working state drift apart silently.
  - Rationale: their current system relies entirely on CLAUDE.md surviving compaction. CLAUDE.md holds stable conventions, but volatile state (what was I working on? what did I decide last session? what's next?) has nowhere to live. This fills that gap.
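To make the proposal concrete, a snapshot file might look like this (section names and all content are invented for illustration, not a fixed format):

```markdown
# MEMORY-snapshot.md — session state (updated at session end via /cycle)

## Active thread
Migrating pre-commit hooks to the new settings.json schema.

## Decisions
- [session 12] Keep the planner/tester/reviewer split; don't merge tester into reviewer.

## Next
- Wire bootstrap-check.sh into Quick Start.
```

The point is that each entry is volatile working state, not a stable convention, so it belongs here rather than in CLAUDE.md.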
- Self-healing bootstrap (new file)
  - bootstrap-check.sh: a health-check script that verifies auto-memory exists, validates content (line-count guards against empty/corrupt files), restores from committed snapshots with provenance headers, and reports status.
  - Add to Quick Start: run bootstrap-check.sh after clone.
  - Rationale: their setup assumes CLAUDE.md is always present and correct. Auto-memory files can silently disappear (new machine, path change, fresh clone). The bootstrap script detects and recovers from this automatically.
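A minimal sketch of what bootstrap-check.sh could do, written as a shell function. The paths (`.claude/auto-memory.md`) and the `min_lines` threshold are assumptions, not the repo's real layout:

```shell
# Hypothetical sketch of bootstrap-check.sh; paths and threshold are assumed.
bootstrap_check() {
  snapshot="docs/MEMORY-snapshot.md"    # committed recovery source
  auto=".claude/auto-memory.md"         # assumed auto-memory location
  min_lines=5                           # line-count guard: empty/corrupt detection

  # count lines of a file, 0 if missing (tr strips wc's padding on macOS)
  count() { if [ -f "$1" ]; then wc -l < "$1" | tr -d ' '; else echo 0; fi; }

  if [ "$(count "$auto")" -ge "$min_lines" ]; then
    echo "OK: auto-memory present"
  elif [ "$(count "$snapshot")" -ge "$min_lines" ]; then
    mkdir -p "$(dirname "$auto")"
    {
      # provenance header records where the restored state came from
      echo "<!-- restored from $snapshot by bootstrap-check.sh on $(date -u +%Y-%m-%d) -->"
      cat "$snapshot"
    } > "$auto"
    echo "RESTORED: auto-memory rebuilt from snapshot"
  else
    echo "FAIL: no usable auto-memory or snapshot" >&2
    return 1
  fi
}
```

On a fresh clone the auto-memory file is missing, so the function falls through to the snapshot; only if both are absent or too short does it fail loudly.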
- Cognitive triggers for the planner agent (modify planner.md)
  - Recommend-against scan: before finalizing a plan, scan for a specific reason NOT to proceed, and surface it if found. Prevents the planner from defaulting to the obvious approach when a less obvious one dominates.
  - Process vs. substance classification: the planner should resolve sequencing decisions autonomously (which file to edit first) but surface strategic decisions (which approach to take) to the user. Currently the planner treats all decisions equally.
- Epistemic flags in code-reviewer output (modify code-reviewer.md)
  - Add an ⚑ EPISTEMIC FLAGS section to the review output format. The reviewer should surface: assumptions treated as facts, conclusions that exceed available evidence, scope overreach (the reviewer making claims outside the reviewed code), and confidence miscalibration.
  - Rationale: code review catches bugs. Epistemic review catches reasoning failures, a different class of regression that hooks and tests cannot detect.
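As a hypothetical rendering (flag wording and examples are invented, not the repo's actual output format), the new section might look like:

```markdown
## ⚑ EPISTEMIC FLAGS
- Assumption treated as fact: the review asserts the cache is LRU; nothing in the diff confirms the eviction policy.
- Conclusion exceeds evidence: "this fixes the race" — only one interleaving was exercised by the new test.
- Scope overreach: claims about the auth module, which is outside the reviewed diff.
- Confidence miscalibration: "definitely safe" on a change with no test coverage.
```

Each flag names the failure mode and points at the specific claim, so the user can accept or challenge the reasoning rather than just the code.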
- Provenance tracking on CLAUDE.md template (modify CLAUDE.md.template)
  - Add a ## Known Patterns section with dates and session numbers, not just content. When patterns accumulate without provenance, stale ones persist indefinitely.
  - Add a ## Gotchas section with the same treatment.
  - Rationale: their template includes placeholder sections for patterns and gotchas but no convention for recording when entries were added or whether they remain current.
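For illustration, a provenance-tagged entry could look like this (dates, session numbers, and content are all invented):

```markdown
## Known Patterns
- [2026-01-12, session 14] Prefer glob-scoped rules over per-file overrides.
- [2026-02-03, session 21] Run the tester subagent before the reviewer, not after.

## Gotchas
- [2026-01-20, session 16] Pre-commit hook skips files staged with --intent-to-add.
```

With a date and session number attached, an entry that hasn't been confirmed in many sessions becomes visibly stale and can be pruned.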
- Content guard for hooks (modify settings.json)
  - Add a PostToolUse hook on Write/Edit for CLAUDE.md itself: if CLAUDE.md drops below a line-count threshold, warn that critical rules may have been accidentally deleted. Prevents the "ghost context" problem at the mechanical level.
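The guard itself is a few lines of shell; here is a sketch as a function (the default threshold of 20 is an assumed value, not measured against the real CLAUDE.md):

```shell
# Hypothetical content guard; threshold and default path are assumptions.
check_claude_md() {
  file="${1:-CLAUDE.md}"
  threshold="${2:-20}"            # assumed minimum line count; tune per project
  if [ ! -f "$file" ]; then
    echo "WARN: $file is missing" >&2
    return 2
  fi
  # tr strips wc's leading padding on macOS
  lines=$(wc -l < "$file" | tr -d ' ')
  if [ "$lines" -lt "$threshold" ]; then
    echo "WARN: $file has only $lines lines (below $threshold); critical rules may have been accidentally deleted" >&2
    return 2
  fi
  return 0
}
```

This would be wired up as the command of a PostToolUse hook on Write/Edit in settings.json; the exact hook schema should follow whatever their existing hooks already use.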
What This Does NOT Change
- Their subagent architecture (planner/tester/reviewer) — sound as-is
- Their glob-scoped rules pattern — well-designed
- Their pre-commit test hook — the right enforcement mechanism for code regression
- Their workflow document — practical and well-structured
Test Plan
- Verify bootstrap-check.sh runs on Linux/macOS (their primary platforms)
- Verify new hooks fire correctly alongside existing hooks
- Verify MEMORY-snapshot.md round-trips through a session (create → compact → recover)
- Verify planner output includes recommend-against when applicable
- Verify code-reviewer output includes epistemic flags section