Refactor state management: internalize recurrent state into model (v2.4.1)
Previously, callers were responsible for carrying `final_state` between
`train_batch` calls and passing it back as `initial_state`. This created
fragile boilerplate in experiment_llm.py and leaked an implementation
detail into the public API.
Changes:
- OdyssNet now always persists `self.state` after every forward pass,
not only when Hebbian learning is active
- `train_batch` drops `initial_state` / `return_state` in favour of a
single `keep_state` flag; callers no longer hold state tensors
- experiment_llm.py TBPTT loop updated to use `keep_state=(t_start > 0)`
- generate() uses `model.reset_state()` before warm-up instead of
threading a state variable through the function
- Tests updated to assert `model.state` directly and cover the new API
- Add `.claude/` to .gitignore
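The new contract can be sketched as follows (the toy model and its additive "recurrent" update are illustrative stand-ins, not OdyssNet's real forward pass):

```python
# Sketch of the keep_state contract: the model owns its recurrent state.

class TinyRecurrentModel:
    def __init__(self):
        self.state = None  # persisted after every forward pass

    def reset_state(self):
        self.state = None

    def forward(self, x):
        prev = self.state if self.state is not None else 0.0
        self.state = prev + x  # stand-in for a real recurrent update
        return self.state

def train_batch(model, chunk, keep_state=False):
    # Callers no longer thread state tensors through the API:
    # a single flag decides whether the previous state survives.
    if not keep_state:
        model.reset_state()
    return model.forward(chunk)

model = TinyRecurrentModel()
for t_start, chunk in enumerate([1.0, 2.0, 3.0]):
    # TBPTT loop: carry state across chunks after the first one
    train_batch(model, chunk, keep_state=(t_start > 0))
print(model.state)  # 6.0 — state accumulated across all three chunks
```

Tests can now assert on `model.state` directly instead of capturing a returned state tuple.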
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>