Summary
git-mcp loads content from GitHub repositories (READMEs, documentation, code files) directly into the LLM context. Since any public repository can contain attacker-crafted content, this creates a direct prompt injection vector where malicious repository content can hijack the AI assistant's behavior.
Attack Vector
- Attacker places prompt injection payloads in a public GitHub repository (README.md, documentation files, code comments, or even file names)
- Developer uses git-mcp to load the repository as context for their AI assistant
- Injected content enters the LLM context and hijacks the AI
- If the user has other MCP tools connected (filesystem, terminal, browser), the injection can trigger cross-tool exploitation — using git-mcp as the entry point to execute actions via other tools
Impact
- Cross-Tool Exploitation: The most critical risk. A prompt injection via repository content can instruct the AI to use OTHER connected MCP tools (write files, run terminal commands, navigate browser)
- Code Tampering: AI could be instructed to modify local code in ways that introduce backdoors
- Credential Exfiltration: If the AI has access to environment variables or config files via other tools, injected instructions could exfiltrate secrets
- Supply Chain Attack: Popular repositories could be weaponized to target developers who use git-mcp
OWASP Classification
- OWASP LLM Top 10: LLM01 (Prompt Injection)
- OWASP Agentic Top 10: AG01 (Prompt Injection via Tool Results), AG05 (Privilege Escalation via Cross-Tool Chaining)
Recommendation
- Add a Prompt Injection Warning to the README
- Consider content sanitization before passing repository text to LLM context
- Warn users about the cross-tool exploitation risk when using git-mcp alongside other MCP servers
- Consider implementing content length limits and stripping of suspicious patterns
References
Free compliance check: Run your own prompts through our EU AI Act compliance scanner — instant results, no account required: prompttools.co/report
Best,
Joerg Michno
ClawGuard — Open-Source AI Agent Security | 225 patterns, 15 languages
Summary
git-mcp loads content from GitHub repositories (READMEs, documentation, code files) directly into the LLM context. Since any public repository can contain attacker-crafted content, this creates a direct prompt injection vector where malicious repository content can hijack the AI assistant's behavior.
Attack Vector
Impact
OWASP Classification
Recommendation
References
Free compliance check: Run your own prompts through our EU AI Act compliance scanner — instant results, no account required: prompttools.co/report
Best,
Joerg Michno
ClawGuard — Open-Source AI Agent Security | 225 patterns, 15 languages