Skip to content

Security Advisory: Prompt Injection via Repository Content Loaded into LLM Context #227

@joergmichno

Description

@joergmichno

Summary

git-mcp loads content from GitHub repositories (READMEs, documentation, code files) directly into the LLM context. Since any public repository can contain attacker-crafted content, this creates a direct prompt injection vector where malicious repository content can hijack the AI assistant's behavior.

Attack Vector

  1. Attacker places prompt injection payloads in a public GitHub repository (README.md, documentation files, code comments, or even file names)
  2. Developer uses git-mcp to load the repository as context for their AI assistant
  3. Injected content enters the LLM context and hijacks the AI
  4. If the user has other MCP tools connected (filesystem, terminal, browser), the injection can trigger cross-tool exploitation — using git-mcp as the entry point to execute actions via other tools

Impact

  • Cross-Tool Exploitation: The most critical risk. A prompt injection via repository content can instruct the AI to use OTHER connected MCP tools (write files, run terminal commands, navigate browser)
  • Code Tampering: AI could be instructed to modify local code in ways that introduce backdoors
  • Credential Exfiltration: If the AI has access to environment variables or config files via other tools, injected instructions could exfiltrate secrets
  • Supply Chain Attack: Popular repositories could be weaponized to target developers who use git-mcp

OWASP Classification

  • OWASP LLM Top 10: LLM01 (Prompt Injection)
  • OWASP Agentic Top 10: AG01 (Prompt Injection via Tool Results), AG05 (Privilege Escalation via Cross-Tool Chaining)

Recommendation

  1. Add a Prompt Injection Warning to the README
  2. Consider content sanitization before passing repository text to LLM context
  3. Warn users about the cross-tool exploitation risk when using git-mcp alongside other MCP servers
  4. Consider implementing content length limits and stripping of suspicious patterns

References


Free compliance check: Run your own prompts through our EU AI Act compliance scanner — instant results, no account required: prompttools.co/report

Best,
Joerg Michno
ClawGuard — Open-Source AI Agent Security | 225 patterns, 15 languages

Metadata

Metadata

Assignees

No one assigned

    Labels

    No labels
    No labels

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions