Self-improving starter skill and operator toolkit for running Codex or Claude Code as the orchestrator over Symphony workers with Linear-managed execution.
Updated Apr 28, 2026 · Python
Repository of reusable plugins and skills.
Skill Forge — turn an LLM agent's repeated failures into reviewed, signed, installable skills. Telegram-approved HMAC tokens, a replay regression gate, and a full Forge Console covering doctor → demo → forge → install → evolve.
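The HMAC-approved install tokens mentioned above can be sketched with Python's standard library. This is a minimal illustration, not Skill Forge's actual scheme: the secret, token layout, and expiry field are all hypothetical.

```python
import hashlib
import hmac
import time

SECRET = b"operator-shared-secret"  # hypothetical shared key; the real scheme may differ

def sign_approval(skill_name: str, ttl_s: int = 3600) -> str:
    """Issue a token approving installation of one skill, valid until expiry."""
    expires = int(time.time()) + ttl_s
    msg = f"{skill_name}:{expires}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{skill_name}:{expires}:{tag}"

def verify_approval(token: str) -> bool:
    """Reject tampered or expired tokens; compare digests in constant time."""
    skill_name, expires, tag = token.rsplit(":", 2)
    msg = f"{skill_name}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and int(expires) > time.time()
```

The point of the pattern: the installer only needs the shared secret and the token itself, so approval can travel over an untrusted channel (here, Telegram) without the approver signing each install interactively.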
Tirami — distributed LLM inference where compute is currency. 1 TRM = 10^9 FLOPs. 21B supply cap, yield halving, staking, collusion resistance. 100% Rust, OpenAI-compatible, no token, no ICO. "tira mi su" = pull me up.
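Tirami's stated unit of account (1 TRM = 10^9 FLOPs, 21B supply cap, yield halving) implies some simple arithmetic. A hedged sketch, assuming Bitcoin-style halving of a base yield; the function names and the base-yield parameter are illustrative, not Tirami's API:

```python
FLOPS_PER_TRM = 10**9            # from the stated unit: 1 TRM = 10^9 FLOPs
SUPPLY_CAP_TRM = 21_000_000_000  # 21B supply cap

def flops_to_trm(flops: int) -> float:
    """Convert verified compute into TRM at the fixed unit rate."""
    return flops / FLOPS_PER_TRM

def yield_after_halvings(base_yield_trm: float, halvings: int) -> float:
    """Hypothetical Bitcoin-style schedule: yield halves at each epoch."""
    return base_yield_trm / (2 ** halvings)
```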
Planner-Worker-Judge-Curator multi-agent system with self-improvement (Gene/Capsule store) over a local A2A bus
Give your AI agent memory. Convenience wrapper for agent-episodic-memory.
Local-first, model-agnostic workflow optimizer for long-running AI agents: observable JSONL ledgers, deterministic reducers, no-meta gates, and receipt-backed self-improvement without LLM judges or model-weight updates.
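The "observable JSONL ledgers, deterministic reducers" design above can be sketched in a few lines: every event is one JSON line, and current state is a pure fold over those lines, so replaying the ledger always reproduces the same state with no LLM judge in the loop. Event fields (`task`, `ok`) and the state shape here are illustrative assumptions, not the repo's actual schema.

```python
import json
from io import StringIO

def append_event(fp, event: dict) -> None:
    """Append one observable event as a single JSON line (sorted keys for stable output)."""
    fp.write(json.dumps(event, sort_keys=True) + "\n")

def reduce_ledger(fp) -> dict:
    """Deterministically fold the event log into state: a pure function of the lines."""
    state = {"attempts": 0, "successes": 0}
    for line in fp:
        ev = json.loads(line)
        state["attempts"] += 1
        if ev.get("ok"):
            state["successes"] += 1
    return state

# Usage: any file-like object works; StringIO stands in for the on-disk ledger.
buf = StringIO()
append_event(buf, {"task": "t1", "ok": True})
append_event(buf, {"task": "t2", "ok": False})
buf.seek(0)
print(reduce_ledger(buf))
```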
Adaptive Verified Iteration Loop (AVIL): A Self-Improving Software Development Lifecycle for Agentic AI Systems — research paper with formal model, algorithms, and simulated evaluation
Self-Improving Agents - Autonomous experiment loop for any measurable artifact
A dual-loop self-evolution system for AI agents: fast fixes for local repeated mistakes, slow promotion for durable capability.