Skip to content

Prompt injection#8

Merged
mbaluda merged 40 commits intomainfrom
prompt-injection
Jan 29, 2026
Merged

Prompt injection#8
mbaluda merged 40 commits intomainfrom
prompt-injection

Conversation

@mbaluda
Copy link
Copy Markdown
Owner

@mbaluda mbaluda commented Jan 29, 2026

No description provided.

knewbury01 and others added 30 commits December 12, 2025 17:41
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Add testcase and coverage for agents sdk runner run with input param

* Rename agent sdk module for clarity

* Add case for unnamed param use in runner run from agent sdk
…ptInjection/openai_test.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@github-actions github-actions bot added documentation Improvements or additions to documentation Python labels Jan 29, 2026
@github-actions
Copy link
Copy Markdown

QHelp previews:

python/ql/src/experimental/Security/CWE-1427/PromptInjection.qhelp

Prompt injection

Prompts can be constructed to bypass the original purposes of an agent and lead to sensitive data leak or operations that were not intended.

Recommendation

Sanitize user input and also avoid using user input in developer or system level prompts.

Example

In the following examples, the cases marked GOOD show secure prompt construction; whereas in the case marked BAD they may be susceptible to prompt injection.

from flask import Flask, request
from agents import Agent
from guardrails import GuardrailAgent

@app.route("/parameter-route")
def get_input():
    input = request.args.get("input")

    goodAgent = GuardrailAgent(  # GOOD: Agent created with guardrails automatically configured.
        config=Path("guardrails_config.json"),
        name="Assistant",
        instructions="This prompt is customized for " + input)

    badAgent = Agent(
        name="Assistant",
        instructions="This prompt is customized for " + input  # BAD: user input in agent instruction.
    )

References

@mbaluda mbaluda merged commit 7a115f3 into main Jan 29, 2026
26 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

documentation Improvements or additions to documentation Python

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants