Commit 40167db (parent 01053b4)

Update AI workflow with exact user request and correct model info

1 file changed: 98 additions, 3 deletions

AI_WORKFLOW.md

# AI-Assisted Problem Generation Workflow

## Motivation
After successful generation, the user asked (verbatim):

> "Ok. In that case, could you please explain your workflow and the steps you took? What model? What prompt(s)? What input? Etc."
>
> "In the end, we want something reproducible and that we can script with one of the models' API. Also, each model needs documentation."

This document addresses that requirement: full reproducibility for API scripting.

## Model & Tools Used
- **Model used in this run:** GPT-5.3-Codex (via GitHub Copilot Chat in VS Code)
- **IDE:** VS Code with GitHub Copilot Chat extension
- **Code tools:** grep_search, file_search, read_file, replace_string_in_file, multi_replace_string_in_file, get_errors, fetch_webpage

## Scripting with Model APIs

### Claude (Anthropic API)
**Model:** `claude-3-5-haiku-20241022` (recommended for code tasks)

**Installation:**
```bash
pip install anthropic
```

**Environment:**
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

**Basic usage:**
```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-haiku-20241022",
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "Your prompt here"}
    ],
)

print(message.content[0].text)
```
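For scripted, reproducible runs it helps to keep the pinned model ID and decoding settings in a single place. A minimal sketch — `build_claude_request` is our own helper name, not part of the anthropic SDK, and `temperature=0.0` is an assumption about how deterministic you want the runs:

```python
# Sketch: centralize request parameters so every scripted run is identical.
# build_claude_request is a hypothetical helper, not an anthropic SDK function.

def build_claude_request(prompt: str, model: str = "claude-3-5-haiku-20241022") -> dict:
    """Return keyword arguments for client.messages.create()."""
    return {
        "model": model,       # dated model ID, pinned for reproducibility
        "max_tokens": 4096,
        "temperature": 0.0,   # low-variance decoding for repeatable output
        "messages": [{"role": "user", "content": prompt}],
    }

# With ANTHROPIC_API_KEY set, the pinned request is used as:
#   client = anthropic.Anthropic()
#   message = client.messages.create(**build_claude_request("Your prompt here"))
#   print(message.content[0].text)
```

Keeping the parameters in one function means the exact request can be logged alongside each generated problem.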
### OpenAI GPT API
**Model:** `gpt-4-turbo` or `gpt-3.5-turbo`

**Installation:**
```bash
pip install openai
```

**Environment:**
```bash
export OPENAI_API_KEY="sk-..."
```

**Basic usage:**
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "user", "content": "Your prompt here"}
    ],
)

print(response.choices[0].message.content)
```
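The Chat Completions API also accepts a `seed` parameter for best-effort determinism across runs. A sketch along the same lines — `build_openai_request` and the particular seed value are our choices, not SDK defaults:

```python
# Sketch: pin model, temperature, and seed in one place for best-effort
# reproducible completions. build_openai_request is a hypothetical helper,
# not part of the openai SDK.

def build_openai_request(prompt: str, model: str = "gpt-4-turbo") -> dict:
    """Return keyword arguments for client.chat.completions.create()."""
    return {
        "model": model,
        "temperature": 0.0,  # low-variance decoding
        "seed": 42,          # best-effort determinism; compare system_fingerprint
        "messages": [{"role": "user", "content": prompt}],
    }

# With OPENAI_API_KEY set:
#   client = OpenAI()
#   response = client.chat.completions.create(**build_openai_request("Your prompt here"))
#   print(response.choices[0].message.content)
```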
### Google Gemini API
**Model:** `gemini-2.0-flash`

**Installation:**
```bash
pip install google-generativeai
```

**Environment:**
```bash
export GOOGLE_API_KEY="AIzaSy..."
```

**Basic usage:**
```python
import os

import google.generativeai as genai

genai.configure(api_key=os.environ.get("GOOGLE_API_KEY"))
model = genai.GenerativeModel("gemini-2.0-flash")

response = model.generate_content("Your prompt here")
print(response.text)
```
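`generate_content` also accepts a `generation_config`, which can be a plain dict; pinning it keeps scripted Gemini runs consistent. A sketch — the helper name and the specific values are ours:

```python
# Sketch: pinned decoding settings for Gemini runs.
# gemini_generation_config is a hypothetical helper, not part of the SDK.

def gemini_generation_config() -> dict:
    """Plain-dict generation_config for model.generate_content()."""
    return {
        "temperature": 0.0,        # reduce run-to-run variance
        "max_output_tokens": 4096,
    }

# With GOOGLE_API_KEY set:
#   model = genai.GenerativeModel("gemini-2.0-flash")
#   response = model.generate_content("Your prompt here",
#                                     generation_config=gemini_generation_config())
#   print(response.text)
```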
---
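The three snippets above can be folded into one scriptable entry point, which is what the quoted request asks for. A sketch under the assumption that each provider call matches the examples in this document — `DEFAULT_MODELS` and `run_prompt` are our own names, not SDK functions:

```python
# Sketch of a single scriptable entry point over the three documented APIs.
# DEFAULT_MODELS and run_prompt are hypothetical names; the per-provider
# calls assume the snippets shown earlier in this document.

DEFAULT_MODELS = {
    "anthropic": "claude-3-5-haiku-20241022",
    "openai": "gpt-4-turbo",
    "gemini": "gemini-2.0-flash",
}


def run_prompt(provider: str, prompt: str) -> str:
    """Send prompt to the named provider and return the text of the reply."""
    model = DEFAULT_MODELS[provider]  # raises KeyError for unknown providers
    if provider == "anthropic":
        import anthropic
        client = anthropic.Anthropic()
        message = client.messages.create(
            model=model,
            max_tokens=4096,
            messages=[{"role": "user", "content": prompt}],
        )
        return message.content[0].text
    if provider == "openai":
        from openai import OpenAI
        response = OpenAI().chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    # gemini
    import os
    import google.generativeai as genai
    genai.configure(api_key=os.environ.get("GOOGLE_API_KEY"))
    return genai.GenerativeModel(model).generate_content(prompt).text

# With the relevant API key exported, a scripted run is one call:
#   print(run_prompt("anthropic", "Your prompt here"))
```

Recording the `DEFAULT_MODELS` entry used for each run gives every generated problem the per-model documentation the request asks for.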

## User Prompts (2 iterations)
