
Commit 1d68eb0

fix(workflows): use direct Claude Code action for AI review (#94)
## Summary

- Replace `actions/github-script` with a direct `anthropics/claude-code-action@v1` call.
- The previous approach posted `@claude` comments, which did not trigger `util-claude.yml` because of GitHub's token limitation: comments made with `GITHUB_TOKEN` do not trigger other workflows.

## Changes

- `bot-ai-review.yml`: replace the github-script step with claude-code-action.

## Test

- Trigger an AI review on one of the existing PRs to verify.
1 parent 58567b5 commit 1d68eb0
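
The limitation behind this fix: GitHub Actions deliberately suppresses workflow triggers for events created with the default `GITHUB_TOKEN`, so a comment posted by the old github-script step could never fire a workflow listening on `issue_comment`. A minimal sketch of the before/after shapes (step names and the literal issue number are illustrative, not the exact workflow contents):

```yaml
# Before: post an @claude comment with GITHUB_TOKEN.
# Events created by GITHUB_TOKEN do not trigger other workflows,
# so a workflow listening on issue_comment never runs.
- name: Trigger via comment (does not work)
  uses: actions/github-script@v8
  with:
    script: |
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: 123,  // illustrative PR number
        body: '@claude please review'
      });

# After: run the Claude Code action directly in the same workflow,
# so no cross-workflow trigger is needed at all.
- name: Run review directly
  uses: anthropics/claude-code-action@v1
  with:
    claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
    claude_args: "--model opus"
    prompt: |
      Review the PR against the spec and post a verdict.
```
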

1 file changed: `.github/workflows/bot-ai-review.yml` (26 additions, 37 deletions)
````diff
@@ -168,50 +168,40 @@ jobs:
           gh api repos/${{ github.repository }}/issues/${{ steps.metadata.outputs.pr_number }}/reactions \
             -f content=eyes
 
-      - name: Trigger Claude Quality Check
+      - name: Run Claude AI Quality Review
         if: steps.check.outputs.should_run == 'true' && steps.pr.outputs.skip != 'true' && steps.attempts.outputs.count != '3'
-        uses: actions/github-script@v8
+        timeout-minutes: 30
+        uses: anthropics/claude-code-action@v1
         with:
-          script: |
-            const specId = '${{ steps.pr.outputs.spec_id }}';
-            const library = '${{ steps.pr.outputs.library }}';
-            const attempt = parseInt('${{ steps.attempts.outputs.count }}') + 1;
-            const prNumber = ${{ steps.metadata.outputs.pr_number }};
-            const subIssueNumber = '${{ steps.pr.outputs.sub_issue }}';
-            const mainIssueNumber = '${{ steps.metadata.outputs.issue_number }}';
+          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
+          claude_args: "--model opus"
+          prompt: |
+            ## Task: AI Quality Review for **${{ steps.pr.outputs.library }}** (Attempt ${{ steps.attempts.outputs.count }}/3)
 
-            await github.rest.issues.createComment({
-              owner: context.repo.owner,
-              repo: context.repo.repo,
-              issue_number: prNumber,
-              body: `@claude
-
-            ## Task: AI Quality Review for **${library}** (Attempt ${attempt}/3)
-
-            Tests passed and preview images are ready. Evaluate if the **${library}** implementation matches the specification.
+            Tests passed and preview images are ready. Evaluate if the **${{ steps.pr.outputs.library }}** implementation matches the specification.
 
             ### Your Task
 
-            1. **Read the spec file**: \`specs/${specId}.md\`
+            1. **Read the spec file**: `specs/${{ steps.pr.outputs.spec_id }}.md`
                - Note all quality criteria listed
                - Understand the expected visual output
 
-            2. **Read the ${library} implementation**:
-               - \`plots/${library}/*/${specId}/default.py\`
+            2. **Read the ${{ steps.pr.outputs.library }} implementation**:
+               - `plots/${{ steps.pr.outputs.library }}/*/${{ steps.pr.outputs.spec_id }}/default.py`
 
             3. **Read library-specific rules**:
-               - \`prompts/library/${library}.md\`
+               - `prompts/library/${{ steps.pr.outputs.library }}.md`
 
-            4. **View the plot images** in \`plot_images/\` directory
+            4. **View the plot images** in `plot_images/` directory
                - Use your vision capabilities to analyze each image
                - Compare with the spec requirements
 
-            5. **Evaluate against quality criteria** from \`prompts/quality-criteria.md\`
+            5. **Evaluate against quality criteria** from `prompts/quality-criteria.md`
 
-            6. **Post your verdict to Sub-Issue #${subIssueNumber}** using this EXACT format:
+            6. **Post your verdict to Sub-Issue #${{ steps.pr.outputs.sub_issue }}** using this EXACT format:
 
-            \`\`\`markdown
-            ## AI Review - Attempt ${attempt}/3
+            ```markdown
+            ## AI Review - Attempt ${{ steps.attempts.outputs.count }}/3
 
             ### Quality Evaluation
             | Evaluator | Score | Verdict |
@@ -230,25 +220,24 @@ jobs:
             2. **CQ-002 PARTIAL**: Docstring missing return type
 
             ### AI Feedback for Next Attempt
-            > Move legend outside plot area with \\\`bbox_to_anchor=(1.05, 1)\\\`
+            > Move legend outside plot area with `bbox_to_anchor=(1.05, 1)`
             > Add return type to docstring
 
             ### Verdict: APPROVED / REJECTED
-            \`\`\`
+            ```
 
             7. **Take action based on result**:
                - **APPROVED** (score >= 85):
-                 - Run: \`gh pr edit ${prNumber} --add-label ai-approved\`
-                 - Run: \`gh issue edit ${subIssueNumber} --remove-label reviewing --add-label ai-approved\`
+                 - Run: `gh pr edit ${{ steps.metadata.outputs.pr_number }} --add-label ai-approved`
+                 - Run: `gh issue edit ${{ steps.pr.outputs.sub_issue }} --remove-label reviewing --add-label ai-approved`
                - **REJECTED** (score < 85):
-                 - Run: \`gh pr edit ${prNumber} --add-label ai-rejected\`
-                 - Run: \`gh issue edit ${subIssueNumber} --remove-label reviewing --add-label ai-rejected\`
+                 - Run: `gh pr edit ${{ steps.metadata.outputs.pr_number }} --add-label ai-rejected`
+                 - Run: `gh issue edit ${{ steps.pr.outputs.sub_issue }} --remove-label reviewing --add-label ai-rejected`
 
             **IMPORTANT:**
-            - This is a **${library}-only** review - focus only on this library
-            - Post feedback to **Sub-Issue #${subIssueNumber}**, NOT the main issue
-            - Include the generated code in your review comment for documentation`
-            });
+            - This is a **${{ steps.pr.outputs.library }}-only** review - focus only on this library
+            - Post feedback to **Sub-Issue #${{ steps.pr.outputs.sub_issue }}**, NOT the main issue
+            - Include the generated code in your review comment for documentation
 
       - name: Mark as failed after 3 attempts
         if: steps.check.outputs.should_run == 'true' && steps.pr.outputs.skip != 'true' && steps.attempts.outputs.count == '3'
````
