CLI proxy that reduces LLM token consumption by 60-90% on common dev commands. Single Rust binary, zero dependencies
The Context Optimization Layer for LLM Applications
Hybrid Context Optimizer — Shell Hook + MCP Server. Reduces LLM token consumption by 89-99%. Single Rust binary, zero dependencies.
Working memory for Claude Code - persistent context and multi-instance coordination
An MCP server that executes Python code in isolated rootless containers, with optional MCP server proxying. An implementation of Anthropic's and Cloudflare's ideas for reducing the context bloat of MCP tool definitions.
Sharper context. Fewer tokens. Open-source middleware for Claude Code.
Production-ready modular Claude Code framework with 30+ commands, token optimization, and MCP server integration. Achieves 2-10x productivity gains through systematic command organization and hierarchical configuration.
Your agents are guessing at APIs. Give them the actual Agent-Native spec. 1,500+ APIs as ready-to-use skills; compiles any API spec into a lean, agent-native format, 10× smaller. Supports OpenAPI, GraphQL, AsyncAPI, Protobuf, and Postman.
Find the ghost tokens. Fix them. Survive compaction. Avoid context quality decay.
Generate a compact codebase index for AI assistants — saves 50K+ tokens per conversation
Stop Claude Code from burning through your quota in 20 minutes. Auto-rotates oversized sessions and preserves context.
Config-driven CLI tool that compresses command output before it reaches an LLM context
TOON encoding for Laravel. Encode data for AI/LLMs with ~50% fewer tokens than JSON.
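For context, TOON's savings over JSON come mostly from declaring field names once per array rather than once per object. A minimal sketch of the same two records in both formats (the records themselves are invented for illustration):

```
// JSON: keys repeat on every object
{"users":[{"id":1,"name":"Alice","role":"admin"},{"id":2,"name":"Bob","role":"user"}]}

// TOON: a header declares the fields once, rows carry only values
users[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user
```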
Independent research on Claude Code internals, Claude Agent SDK, and related tooling.
CLI proxy that reduces LLM token usage by 60-90%. Declarative YAML filters for Claude Code, Cursor, Copilot, and Gemini. An rtk alternative in Go.
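For a sense of what a declarative filter can look like in this kind of tool, here is a hypothetical rule set (every key and pattern below is invented for illustration, not this project's actual schema):

```yaml
# Hypothetical filter: strip noisy build output before it reaches the model
filters:
  - command: "cargo build"
    keep:
      - "^error"          # compiler errors always pass through
      - "^warning"
    drop:
      - "^   Compiling"   # progress lines cost tokens without adding signal
    max_lines: 50         # hard cap on output forwarded to the context
```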
Your AI (Claude Opus, Codex 5.4) sees 5% of your codebase and hallucinates the rest. Entroly fixes this: 95% fewer tokens, 100% code visibility. Works with Cursor, Claude Code, Copilot.
Automatic prompt caching for Claude Code. Cuts token costs by up to 90% on repeated file reads, bug-fix sessions, and long coding conversations, with zero config.
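The tool itself is zero-config, but for background, the mechanism it builds on is prompt caching in Anthropic's Messages API: mark a large, stable prefix with a cache_control breakpoint, and later requests sharing that prefix are billed at the cached rate. A minimal sketch (the file path and prompt are placeholders):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A large, frequently re-read piece of context, e.g. a source file.
large_file_contents = open("src/main.rs").read()  # placeholder path

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": large_file_contents,
            # Cache everything up to and including this block; later calls
            # with the same prefix reuse it instead of resending full-price tokens.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Where is the bug in this file?"}],
)
print(response.content[0].text)
```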
Stop forcing LLMs to answer in one pass. Give them a runtime. A Recursive Language Model that improves any LLM while reducing token usage by up to 4×.
TOON — Laravel AI package for a compact, human-readable, token-efficient data format, with JSON ⇄ TOON conversion for ChatGPT, OpenAI, and other LLM prompts.
Security hooks and monitoring for Claude Code — quiet overrides, SSRF protection, MCP compression, OTEL tracing