# Contributing

Thank you for your interest in contributing to this research repository. This project documents reverse-engineering findings about Claude Code internals and the Claude Agent SDK. Contributions that advance collective understanding of these systems are welcome.
## Contents

- What We Accept
- Research Quality Standards
- Report Structure Template
- How to Submit New Research
- How to Report Inaccuracies
- PR Process
- Writing Guidelines
## What We Accept

We accept:

- **New research reports** — original findings from reverse-engineering Claude Code or the Claude Agent SDK
- **Corrections** — evidence-backed fixes for inaccuracies in existing reports
- **Updates** — re-analysis of a finding after an SDK version upgrade
- **Supplements** — additional evidence, reproductions, or diagrams for existing reports
We do not accept:
- Speculative content without evidence
- Findings based on Claude's responses rather than source code or observable behavior
- Content that reproduces Anthropic's proprietary source code verbatim
## Research Quality Standards

Every submission must meet all three criteria:
### 1. Evidence

Claims must be backed by at least one of:
- Direct reference to decompiled/minified source in the npm package (file path + line range)
- A reproducible test showing observable behavior (token counts, injection content, timing)
- Network traces or API logs confirming the behavior
Statements like "Claude seems to..." or "probably because..." without supporting evidence will be rejected.
### 2. Reproducibility

Include enough information for another researcher to independently verify:
- Exact SDK/package version (e.g., `@anthropic-ai/claude-code@2.1.71`)
- Steps to reproduce the observation
- Expected output and actual output
- Environment notes if platform-specific (OS, Node.js version)
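A small helper can render these details consistently across reports. The function name, field labels, and example values below are illustrative suggestions, not a required format.

```shell
# Hypothetical helper that renders the environment details listed above.
# All names and values here are examples, not a mandated format.
env_block() {
  printf -- '- SDK version: %s\n' "$1"
  printf -- '- Node.js: %s\n' "$2"
  printf -- '- OS: %s\n' "$3"
}

# In practice, fill the fields from the live environment, e.g.:
#   env_block "@anthropic-ai/claude-code@2.1.71" "$(node --version)" "$(uname -sr)"
env_block '@anthropic-ai/claude-code@2.1.71' 'v20.11.0' 'Linux 6.1'
```

Pasting the output into the report's Evidence or Methodology section satisfies the environment-notes item above.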
### 3. Version pinning

Always state which SDK version your findings apply to. The current research baseline is `@anthropic-ai/claude-code` v2.1.71 (March 2026).
If your findings differ from published reports, note whether the difference is due to a version change.
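To re-verify a finding against the pinned baseline, it helps to fetch the exact published tarball with `npm pack`. The helper below only derives the tarball filename npm writes for a scoped package (npm drops the scope's `@` and replaces `/` with `-`); it is a convenience sketch, not part of any required workflow.

```shell
# Sketch: derive the filename that `npm pack <pkg>@<version>` writes for a
# scoped package. npm strips the leading "@" and turns "/" into "-".
tarball_name() {
  pkg=$(printf '%s' "$1" | sed -e 's|^@||' -e 's|/|-|')
  printf '%s-%s.tgz\n' "$pkg" "$2"
}

tarball_name '@anthropic-ai/claude-code' '2.1.71'

# Typical usage (requires network access):
#   npm pack @anthropic-ai/claude-code@2.1.71
#   tar -xzf "$(tarball_name '@anthropic-ai/claude-code' '2.1.71')"
```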
## Report Structure Template

Each new research report should be placed in `reports/<slug>/` and follow this structure:

    reports/your-report-slug/
    ├── README.md          # English version (required)
    ├── README.zh-TW.md    # Traditional Chinese version (encouraged)
    ├── diagrams/          # Optional: architecture diagrams, flow charts
    └── evidence/          # Optional: annotated screenshots, log excerpts
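The layout can be scaffolded in one step; the slug below is a placeholder you would replace with your report's actual slug.

```shell
# Sketch: scaffold the report layout described above.
# "my-report-slug" is a placeholder slug.
slug="my-report-slug"
mkdir -p "reports/$slug/diagrams" "reports/$slug/evidence"
touch "reports/$slug/README.md" "reports/$slug/README.zh-TW.md"
ls "reports/$slug"
```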
The README.md must follow this template:
# [Title]
> **SDK Version:** @anthropic-ai/claude-code vX.Y.Z
> **Date:** YYYY-MM-DD
> **Status:** Draft | Reviewed | Verified
## Summary
One paragraph describing the finding and why it matters.
## Methodology
How you investigated this. Tools used (e.g., `npm pack`, `node --inspect`, token counting),
decompilation approach, test harness design.
## Findings
Structured presentation of what you found. Use subheadings. Include code blocks where relevant.
## Evidence
Direct references to source locations, log excerpts, or test output that supports each finding.
Format source references as: `node_modules/@anthropic-ai/claude-code/dist/...` + line range.
## Impact
Token cost, security implications, behavioral effects, or API misuse potential.
## Mitigation / Workaround
Known workarounds, patches, or SDK-level fixes if applicable.
## References
Links to related GitHub issues, SDK changelogs, or prior research.

## How to Submit New Research

- Fork the repository
- Create a branch: `research/<your-topic>`
- Create your report directory under `reports/`
- Follow the Report Structure Template
- Update the report index table in `README.md`
- Open a Pull Request with title: `[Report] Your topic title`
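The branch-naming step can be sketched as follows. A throwaway repository stands in for your fork here so the example is self-contained, and the topic slug is a placeholder.

```shell
# Sketch of the branch-naming convention; the temp repo stands in for a
# real fork, and "context-injection-cost" is a placeholder topic slug.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b research/context-injection-cost
git branch --show-current
```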
Before opening the PR, confirm:
- SDK version is stated
- Evidence section has at least one verifiable reference
- Reproduction steps are included
- No Anthropic proprietary source code is reproduced verbatim
## How to Report Inaccuracies

If you find an error in an existing report:
- Open a GitHub Issue with title: `[Correction] Report #N — brief description`
- In the body, include:
  - Which specific claim is incorrect
  - What the correct finding is
  - Your evidence (SDK version, source reference, or test output)
If the correction is straightforward, you may also submit a PR directly that edits the relevant report and adds a note to the `## Changelog` section at the bottom.
## PR Process

- Open a PR against `main`
- Fill in the PR template (title, summary, evidence checklist)
- A maintainer will review within 7 days
- Feedback will focus on: evidence quality, reproducibility, and accuracy — not writing style
- Once approved, the report is merged and the README index is updated
For significant research, expect a discussion period before merge.
## Writing Guidelines

- **Language:** English is required. A parallel Traditional Chinese (zh-TW) version is strongly encouraged but optional — do not feel obligated to write both if doing so reduces quality.
- **Tone:** Neutral and technical. This is documentation, not advocacy.
- **Code blocks:** Use fenced code blocks with language tags. For minified JS excerpts, use `js`.
- **Diagrams:** Welcome and encouraged. Use Mermaid (renders on GitHub), PNG exports, or SVG. Place them under `reports/<slug>/diagrams/`.
- **Links:** Link to specific GitHub Issues when discussing known problems. Use permalink format for source file references when possible.
- **Length:** No minimum or maximum. Say what needs to be said. Prefer depth over breadth in a single report.
- **Speculation:** Clearly mark speculative conclusions with `> **Note:** This is inferred from...` — do not present them as facts.
## Questions

Open a GitHub Issue with the `question` label, or start a Discussion.