This directory contains examples for using the `prompt_to_code.py` tool.
- Create a prompt file describing your workflow:

  ```bash
  cat > my_workflow.txt << 'EOF'
  Analyze all Python files in the src/ directory, identify any security
  vulnerabilities or code quality issues, create a detailed report in
  markdown format, and if there are any critical security issues found,
  generate fix suggestions for each one with specific code changes needed.
  EOF
  ```

- Convert the prompt to an SDK program:

  ```bash
  python ../prompt_to_code.py my_workflow.txt
  ```

- Review and run the generated program:

  ```bash
  # Review the generated code
  cat generated_sdk_program.py

  # Run it
  python generated_sdk_program.py
  ```

See example_prompt.txt for a simple security analysis workflow:
```bash
python ../prompt_to_code.py example_prompt.txt --output security_analyzer.py
```

Create a file test_generation.txt:
```
First, scan all TypeScript files in the components/ directory and identify
which ones are missing unit tests. For each file without tests, create a
comprehensive test file with at least 80% coverage. Then run all the new
tests and if any fail, fix the issues. Finally, generate a summary report
of test coverage improvements.
```
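The discovery stage of this prompt can be prototyped without the SDK at all. Here is a sketch that pairs component files with test files using the common `Name.test.tsx` naming convention (an assumed convention; adjust for your repo's layout):

```python
from pathlib import PurePosixPath

def components_missing_tests(component_files, test_files):
    """Return components with no test file named <stem>.test.* (assumed convention)."""
    # Collect the stems ("Button" from "Button.test.tsx") that already have tests
    tested_stems = {
        PurePosixPath(t).name.split(".test.")[0]
        for t in test_files
        if ".test." in PurePosixPath(t).name
    }
    return [
        c for c in component_files
        if PurePosixPath(c).stem not in tested_stems
    ]

print(components_missing_tests(
    ["components/Button.tsx", "components/Modal.tsx"],
    ["components/Button.test.tsx"],
))  # ['components/Modal.tsx']
```

The generated program would hand a list like this to the agent for the test-writing and test-running stages.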
Convert it:

```bash
python ../prompt_to_code.py test_generation.txt --output test_generator.py
```

Create a file doc_generator.txt:
```
Analyze all public functions and classes in the src/ directory. For each
one that's missing a docstring or has an incomplete docstring, generate
comprehensive documentation including description, parameters, return
values, and usage examples. Then validate that all docstrings follow
Google style guide format.
```
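The detection half of this workflow is mechanical: public functions and classes lacking docstrings can be found with the stdlib `ast` module. An illustrative helper, separate from whatever the tool generates:

```python
import ast

def missing_docstrings(source: str) -> list[str]:
    """Return names of public functions/classes that have no docstring."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # "Public" here means the name is not underscore-prefixed
            if not node.name.startswith("_") and ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

code = "def documented():\n    'Has a docstring.'\ndef bare():\n    pass\n"
print(missing_docstrings(code))  # ['bare']
```

A generated program would feed each flagged name back to the agent to write the documentation itself.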
Convert it:

```bash
python ../prompt_to_code.py doc_generator.txt --output doc_generator.py
```

Create a file refactor_pipeline.txt:
```
Find all functions in the codebase that are longer than 50 lines. For
each one, analyze if it can be broken down into smaller functions. If
yes, refactor it into multiple well-named functions with clear
responsibilities. After each refactoring, run the existing tests to
ensure nothing broke. Keep a log of all refactorings performed.
```
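The first stage here is also easy to prototype by hand. A sketch that finds functions over a line threshold with the stdlib `ast` module (illustrative only; the generated program makes its own choices):

```python
import ast

def long_functions(source: str, max_lines: int = 50):
    """Return (name, length) for functions spanning more than max_lines lines."""
    return [
        (node.name, node.end_lineno - node.lineno + 1)
        for node in ast.walk(ast.parse(source))
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        # Length counts from the def line through the last body line
        and node.end_lineno - node.lineno + 1 > max_lines
    ]

src = "def tiny():\n    pass\n\ndef big():\n" + "    x = 1\n" * 6
print(long_functions(src, max_lines=5))  # [('big', 7)]
```

The refactor/test/log stages that follow are where the agent does the real work.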
Convert it:

```bash
python ../prompt_to_code.py refactor_pipeline.txt --output refactor_pipeline.py
```

Good prompts clearly describe:
- Sequential stages: What happens first, second, third?
- Data flow: What information passes between steps?
- Conditions: When should different actions be taken?
- Iterations: What collections need processing?
- Outputs: What should be created?
Example:
```
First, list all Python files in src/. For each file, check if it has
type hints. If not, add type hints. Then run mypy to validate. If
there are errors, fix them. Finally, create a report of all changes.
```
This shows:
- Clear stages (list → check → add → validate → fix → report)
- Iteration (for each file)
- Conditions (if not, if errors)
- Output (report)
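Such a prompt maps directly onto ordinary control flow. A hand-written sketch of the same stages, with the file operations injected as plain callables rather than real SDK agent calls:

```python
def type_hint_workflow(files, has_hints, add_hints, run_mypy, fix_errors):
    """Mirror the example prompt's stages as plain control flow."""
    changed = []
    for f in files:              # iteration: for each file
        if not has_hints(f):     # condition: if no type hints
            add_hints(f)
            changed.append(f)
    errors = run_mypy()          # validation stage
    if errors:                   # condition: if there are errors
        fix_errors(errors)
    return {"files_changed": changed, "errors_fixed": len(errors)}  # report

# Exercise the skeleton with trivial stand-ins (no real files touched)
report = type_hint_workflow(
    ["a.py", "b.py"],
    has_hints=lambda f: f == "a.py",
    add_hints=lambda f: None,
    run_mypy=lambda: [],
    fix_errors=lambda errs: None,
)
print(report)  # {'files_changed': ['b.py'], 'errors_fixed': 0}
```

In the generated program, each of those callables becomes an agent call; the skeleton is what the converter has to infer from your prose.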
Avoid vague prompts like:

```
Make the code better
```
This doesn't provide enough structure for conversion.
Use a specific model for conversion:

```bash
python ../prompt_to_code.py my_prompt.txt --model claude-3-5-sonnet-latest
```

Specify a workspace directory:

```bash
python ../prompt_to_code.py my_prompt.txt --workspace /path/to/project
```

Save to a specific file:

```bash
python ../prompt_to_code.py my_prompt.txt --output ~/my_scripts/workflow.py
```

The tool generates a complete Python program with:
- Proper shebang and docstring
- All necessary imports (Agent, dataclasses, typing, etc.)
- Dataclass definitions for structured data
- Agent initialization with appropriate settings
- Workflow implementation using SDK patterns:
  - Sessions for context continuity
  - Typed results for decision-making
  - Loops for iteration
  - Error handling
- Main function that can be run directly
- Helpful comments explaining each stage
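Put together, a generated program has roughly the following shape. This is a hand-written illustration with the agent call stubbed out, not real tool output; actual programs use the SDK agent and will differ:

```python
#!/usr/bin/env python3
"""Illustrative skeleton of a generated SDK program (not real tool output)."""
from dataclasses import dataclass

@dataclass
class FileReport:
    """Structured data passed between workflow stages."""
    path: str
    issues: int

def agent(prompt: str, return_type=str):
    """Stub standing in for the real SDK agent call (illustration only)."""
    return return_type()

def main() -> None:
    # Stage 1: get the work items as a typed result
    files = agent("List Python files in src/", list)
    # Stage 2: iterate, building structured data for the final report
    reports = [FileReport(path=f, issues=0) for f in files]
    print(f"Analyzed {len(reports)} files")

if __name__ == "__main__":
    main()
```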
After generation, you can:
- Add error handling:

  ```python
  try:
      result = agent("Some operation", int)
  except AugmentCLIError as e:
      print(f"Error: {e}")
  ```

- Add logging:

  ```python
  import logging

  logging.basicConfig(level=logging.INFO)
  logging.info(f"Processing {len(files)} files")
  ```

- Add function calling:
  ```python
  def run_tests(file: str) -> dict:
      """Run tests for a file."""
      # Your implementation
      return {"passed": 10, "failed": 0}

  result = agent.run(
      "Run tests and analyze results",
      return_type=dict,
      functions=[run_tests]
  )
  ```

- Add event listeners:
  ```python
  from auggie_sdk import LoggingAgentListener

  listener = LoggingAgentListener(verbose=True)
  agent = Auggie(listener=listener)
  ```
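The listener pattern itself is plain Python, so custom observers are easy to prototype before wiring one into the SDK. A minimal sketch with hypothetical names (this is not the SDK's listener interface):

```python
class CollectingListener:
    """Minimal observer that records events for later inspection (illustrative)."""

    def __init__(self):
        self.events = []

    def on_event(self, name: str, payload=None):
        # Record every event as a (name, payload) pair
        self.events.append((name, payload))

listener = CollectingListener()
listener.on_event("agent_started")
listener.on_event("result", {"passed": 10})
print(listener.events)  # [('agent_started', None), ('result', {'passed': 10})]
```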