
Commit c268706

Merge branch 'main' into merge-cuda-hip
2 parents: 9c69888 + eb3064f

11 files changed: +807 −47 lines

.github/workflows/tests-nightly.yml

Lines changed: 4 additions & 4 deletions

@@ -20,7 +20,7 @@ jobs:
         platform: [linux-x64, linux-aarch64, macos, windows]
         # default runners don't have AVX-512 support, but icelake does
         cpu_type: ["", icelake]
-        torch_version: ["2.3.1", "2.8.0", "2.9.1"]
+        torch_version: ["2.3.1", "2.9.1", "2.10.0"]

         exclude:
           # aarch64 minimum torch version is 2.5.1
@@ -65,13 +65,13 @@ jobs:
             torch_version: "2.3.1"
             pypi_index: "https://download.pytorch.org/whl/cu118"
           - cuda_version: "12.6.3"
-            torch_version: "2.7.1"
+            torch_version: "2.8.0"
             pypi_index: "https://download.pytorch.org/whl/cu126"
           - cuda_version: "12.8.1"
-            torch_version: "2.8.0"
+            torch_version: "2.9.1"
             pypi_index: "https://download.pytorch.org/whl/cu128"
           - cuda_version: "13.0.2"
-            torch_version: "2.9.1"
+            torch_version: "2.10.0"
             pypi_index: "https://download.pytorch.org/whl/cu130"

         # Windows CUDA Tests - T4 GPU (CUDA 11.8 only, multiple torch versions)

.github/workflows/tests-pr.yml

Lines changed: 3 additions & 3 deletions

@@ -31,7 +31,7 @@ jobs:
         platform: [linux-x64, linux-aarch64, macos]
         # default runners don't have AVX-512 support, but icelake does
         cpu_type: ["", icelake]
-        torch_version: ["2.3.1", "2.9.1"]
+        torch_version: ["2.3.1", "2.10.0"]

         exclude:
           # aarch64 minimum torch version is 2.5.1
@@ -73,10 +73,10 @@ jobs:
             torch_version: "2.3.1"
             pypi_index: "https://download.pytorch.org/whl/cu118"
           - cuda_version: "12.8.1"
-            torch_version: "2.8.0"
+            torch_version: "2.9.1"
             pypi_index: "https://download.pytorch.org/whl/cu128"
           - cuda_version: "13.0.2"
-            torch_version: "2.9.1"
+            torch_version: "2.10.0"
             pypi_index: "https://download.pytorch.org/whl/cu130"

         # Windows CUDA test - single configuration
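After this commit, the PR workflow's CPU test matrix is a subset of the nightly one, and the newest CUDA toolchain is paired with the newest torch. A quick sanity sketch (version lists copied from the diffs above; the subset relationship is my reading of the two matrices, not something the workflows themselves enforce):

```python
# torch versions exercised by the CPU jobs after this commit (from the diffs above)
NIGHTLY_TORCH = ["2.3.1", "2.9.1", "2.10.0"]
PR_TORCH = ["2.3.1", "2.10.0"]

# updated CUDA -> torch pairings in the nightly workflow's include list
NIGHTLY_CUDA = {
    "12.6.3": "2.8.0",
    "12.8.1": "2.9.1",
    "13.0.2": "2.10.0",
}

# PR runs never test a torch version that the nightly run skips
assert set(PR_TORCH) <= set(NIGHTLY_TORCH)

# the newest CUDA toolchain (13.0.2) is paired with the newest torch;
# note "2.10.0" > "2.9.1" only under numeric tuple comparison, not string comparison
newest = max(NIGHTLY_TORCH, key=lambda v: tuple(map(int, v.split("."))))
assert NIGHTLY_CUDA["13.0.2"] == newest
```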

.gitignore

Lines changed: 1 addition & 0 deletions

@@ -158,3 +158,4 @@ cuda_build
 output/
 cuda-spec.md
 cuda-spec-additions.md
+agents/*_issues.json
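The new ignore pattern keeps the fetched issue dumps out of version control. Python's `fnmatch` is a close enough stand-in for gitignore glob semantics to sanity-check it against the default output name of `agents/fetch_issues.py` (a sketch only; git's matcher differs from `fnmatch` in edge cases such as `*` crossing directory separators):

```python
from fnmatch import fnmatch

# default output name produced by agents/fetch_issues.py for the bitsandbytes repo
assert fnmatch("agents/bitsandbytes_issues.json", "agents/*_issues.json")

# unrelated JSON in the same directory is not matched by the pattern
assert not fnmatch("agents/config.json", "agents/*_issues.json")
```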

CLAUDE.md

Lines changed: 1 addition & 1 deletion

@@ -42,7 +42,7 @@ The full suite will be run separately. Best practices and known issues: `agents/

 # Agent Dispatch (the "Dispatcher" role)

-To triage open GitHub issues, generate prompt files, and launch parallel worker agents, read `agents/dispatch_guide.md`. If told "you're the Dispatcher" or "please read the Dispatch Guide," that's what this refers to. The dispatch workflow uses the GitHub issue tools in `~/git/lab_tools/github/` — see `agents/github_tools_guide.md` for the bitsandbytes-specific reference.
+To triage open GitHub issues, generate prompt files, and launch parallel worker agents, read `agents/dispatch_guide.md`. If told "you're the Dispatcher" or "please read the Dispatch Guide," that's what this refers to. The dispatch workflow uses the GitHub issue tools in `agents/` — see `agents/github_tools_guide.md` for the bitsandbytes-specific reference.

 # Issue maintenance and triage

agents/dispatch_guide.md

Lines changed: 10 additions & 10 deletions

@@ -7,7 +7,7 @@ You are the Dispatcher. Your job is to analyze open GitHub issues for bitsandbyt
 Before starting, refresh the issue data:

 ```bash
-python3 ~/git/lab_tools/github/fetch_issues.py
+python3 agents/fetch_issues.py
 ```

 Read `agents/github_tools_guide.md` for the full reference on how to use the query tools.
@@ -17,8 +17,8 @@ Read `agents/github_tools_guide.md` for the full reference on how to use the que
 Start by getting the landscape of open issues:

 ```bash
-python3 ~/git/lab_tools/github/query_issues.py list
-python3 ~/git/lab_tools/github/query_issues.py list --sort reactions
+python3 agents/query_issues.py list
+python3 agents/query_issues.py list --sort reactions
 ```

 Look for issues that are actionable — see the "Identifying Actionable Issues" section of `agents/github_tools_guide.md`. Good candidates have:
@@ -32,13 +32,13 @@ Also check for low-hanging fruit:

 ```bash
 # Issues with open PRs that may just need review/testing/completion
-python3 ~/git/lab_tools/github/query_issues.py search "PR" --state open
+python3 agents/query_issues.py search "PR" --state open

 # Issues already labeled for external contribution
-python3 ~/git/lab_tools/github/query_issues.py list --label "Contributions Welcome"
+python3 agents/query_issues.py list --label "Contributions Welcome"

 # Issues proposed for closing (may just need verification)
-python3 ~/git/lab_tools/github/query_issues.py list --label "Proposing to Close"
+python3 agents/query_issues.py list --label "Proposing to Close"
 ```

 ## Step 2: Deep-Dive Each Candidate
@@ -47,20 +47,20 @@ For each candidate issue, gather full context. This step is critical — the qua

 ```bash
 # Full issue with all comments
-python3 ~/git/lab_tools/github/query_issues.py show <NUMBER>
+python3 agents/query_issues.py show <NUMBER>

 # Check for existing open PRs that already address this issue
 gh pr list --search "<NUMBER>" --state open
 gh pr list --search "keyword from issue" --state open

 # Find related/duplicate issues (with body previews and last comments)
-python3 ~/git/lab_tools/github/query_issues.py related <NUMBER> -v
+python3 agents/query_issues.py related <NUMBER> -v

 # Check if it was already resolved
-python3 ~/git/lab_tools/github/query_issues.py related <NUMBER> --state closed -v
+python3 agents/query_issues.py related <NUMBER> --state closed -v

 # Targeted searches for specific error messages or terms from the issue
-python3 ~/git/lab_tools/github/query_issues.py search "specific error text"
+python3 agents/query_issues.py search "specific error text"
 ```

 For each promising related issue that shows up, run `show` on it to get the full context. Don't stop at the `related` output — read the full body and comments of related issues, especially closed ones where the resolution may be documented.

agents/fetch_issues.py

Lines changed: 250 additions & 0 deletions

@@ -0,0 +1,250 @@ (new file)

#!/usr/bin/env python3
"""Fetch all issues (open and closed) from a GitHub repository via GraphQL and store as structured JSON."""

import argparse
import json
import subprocess
import sys
import time
from datetime import datetime, timezone
from pathlib import Path

GRAPHQL_QUERY = """
query($owner: String!, $repo: String!, $cursor: String, $states: [IssueState!]) {
  repository(owner: $owner, name: $repo) {
    issues(states: $states, first: 100, after: $cursor, orderBy: {field: CREATED_AT, direction: ASC}) {
      totalCount
      pageInfo {
        hasNextPage
        endCursor
      }
      nodes {
        number
        title
        body
        state
        createdAt
        updatedAt
        closedAt
        author { login }
        assignees(first: 10) { nodes { login } }
        labels(first: 20) { nodes { name } }
        milestone { title number dueOn }
        reactionGroups { content users { totalCount } }
        comments(first: 100) {
          totalCount
          nodes {
            author { login }
            body
            createdAt
            updatedAt
            reactionGroups { content users { totalCount } }
          }
        }
        timelineItems(first: 50, itemTypes: [CROSS_REFERENCED_EVENT, REFERENCED_EVENT, CLOSED_EVENT, REOPENED_EVENT, LABELED_EVENT, UNLABELED_EVENT, CONNECTED_EVENT]) {
          nodes {
            __typename
            ... on CrossReferencedEvent {
              createdAt
              source {
                __typename
                ... on PullRequest { number title state url }
                ... on Issue { number title state url }
              }
            }
            ... on LabeledEvent { label { name } createdAt }
            ... on UnlabeledEvent { label { name } createdAt }
            ... on ClosedEvent { createdAt }
            ... on ReopenedEvent { createdAt }
          }
        }
      }
    }
  }
  rateLimit { cost remaining resetAt }
}
"""


def gh_graphql(query: str, variables: dict) -> dict:
    """Execute a GraphQL query via the gh CLI, passing the full payload as JSON on stdin."""
    clean_vars = {k: v for k, v in variables.items() if v is not None}
    payload = json.dumps({"query": query, "variables": clean_vars})
    result = subprocess.run(
        ["gh", "api", "graphql", "--input", "-"],
        input=payload, capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"gh api graphql failed: {result.stderr}")
    return json.loads(result.stdout)


def transform_reactions(reaction_groups: list) -> dict:
    """Convert reactionGroups to a flat dict, dropping zeros."""
    reactions = {}
    for rg in reaction_groups:
        count = rg["users"]["totalCount"]
        if count > 0:
            reactions[rg["content"]] = count
    return reactions


def transform_timeline_event(event: dict) -> dict | None:
    """Flatten a timeline event node."""
    typename = event.get("__typename")
    if typename == "CrossReferencedEvent":
        source = event.get("source", {})
        return {
            "type": "CrossReferencedEvent",
            "created_at": event.get("createdAt"),
            "source_type": source.get("__typename"),
            "source_number": source.get("number"),
            "source_title": source.get("title"),
            "source_state": source.get("state"),
            "source_url": source.get("url"),
        }
    elif typename in ("LabeledEvent", "UnlabeledEvent"):
        return {
            "type": typename,
            "label": event.get("label", {}).get("name"),
            "created_at": event.get("createdAt"),
        }
    elif typename in ("ClosedEvent", "ReopenedEvent"):
        return {
            "type": typename,
            "created_at": event.get("createdAt"),
        }
    return None


def transform_issue(raw: dict) -> dict:
    """Transform a raw GraphQL issue node into our clean structure."""
    comments = []
    for c in raw["comments"]["nodes"]:
        comments.append({
            "author": c["author"]["login"] if c.get("author") else None,
            "body": c["body"],
            "created_at": c["createdAt"],
            "updated_at": c["updatedAt"],
            "reactions": transform_reactions(c.get("reactionGroups", [])),
        })

    timeline = []
    for t in raw["timelineItems"]["nodes"]:
        transformed = transform_timeline_event(t)
        if transformed:
            timeline.append(transformed)

    return {
        "number": raw["number"],
        "title": raw["title"],
        "body": raw["body"],
        "state": raw["state"],
        "author": raw["author"]["login"] if raw.get("author") else None,
        "created_at": raw["createdAt"],
        "updated_at": raw["updatedAt"],
        "closed_at": raw["closedAt"],
        "assignees": [a["login"] for a in raw["assignees"]["nodes"]],
        "labels": [l["name"] for l in raw["labels"]["nodes"]],
        "milestone": raw.get("milestone"),
        "reactions": transform_reactions(raw.get("reactionGroups", [])),
        "comment_count": raw["comments"]["totalCount"],
        "comments": comments,
        "timeline": timeline,
    }


def fetch_all_issues(owner: str, repo: str, states: list[str] | None = None) -> list[dict]:
    """Fetch issues with pagination and exponential backoff."""
    if states is None:
        states = ["OPEN"]
    all_issues = []
    cursor = None
    page = 1
    max_retries = 5
    label = "/".join(s.lower() for s in states)

    while True:
        for attempt in range(max_retries):
            try:
                print(f"Fetching {label} issues page {page}...", file=sys.stderr)
                data = gh_graphql(GRAPHQL_QUERY, {
                    "owner": owner, "repo": repo, "cursor": cursor, "states": states,
                })
                break
            except RuntimeError as e:
                wait = min(2 ** attempt, 60)
                print(f"Error on attempt {attempt + 1}: {e}", file=sys.stderr)
                if attempt < max_retries - 1:
                    print(f"Retrying in {wait}s...", file=sys.stderr)
                    time.sleep(wait)
                else:
                    raise

        rate = data["data"]["rateLimit"]
        print(f"  Rate limit: {rate['remaining']} remaining, cost: {rate['cost']}", file=sys.stderr)

        if rate["remaining"] < 100:
            reset_at = datetime.fromisoformat(rate["resetAt"].replace("Z", "+00:00"))
            wait_seconds = (reset_at - datetime.now(timezone.utc)).total_seconds() + 5
            if wait_seconds > 0:
                print(f"  Rate limit low, waiting {wait_seconds:.0f}s until reset...", file=sys.stderr)
                time.sleep(wait_seconds)

        issues_data = data["data"]["repository"]["issues"]
        raw_issues = issues_data["nodes"]
        total = issues_data["totalCount"]

        for raw in raw_issues:
            all_issues.append(transform_issue(raw))

        print(f"  Fetched {len(all_issues)}/{total} issues", file=sys.stderr)

        page_info = issues_data["pageInfo"]
        if not page_info["hasNextPage"]:
            break

        cursor = page_info["endCursor"]
        page += 1

    return all_issues


def main():
    parser = argparse.ArgumentParser(description="Fetch all GitHub issues into a JSON file.")
    parser.add_argument("--owner", default="bitsandbytes-foundation", help="Repository owner")
    parser.add_argument("--repo", default="bitsandbytes", help="Repository name")
    parser.add_argument("--open-only", action="store_true", help="Only fetch open issues")
    parser.add_argument("-o", "--output", default=None,
                        help="Output JSON file path (default: <repo>_issues.json in script dir)")
    args = parser.parse_args()

    output_path = args.output or str(Path(__file__).parent / f"{args.repo}_issues.json")

    open_issues = fetch_all_issues(args.owner, args.repo, ["OPEN"])
    print(file=sys.stderr)

    if args.open_only:
        closed_issues = []
    else:
        closed_issues = fetch_all_issues(args.owner, args.repo, ["CLOSED"])
        print(file=sys.stderr)

    result = {
        "repository": f"{args.owner}/{args.repo}",
        "fetched_at": datetime.now(timezone.utc).isoformat(),
        "open_issues": open_issues,
        "open_count": len(open_issues),
        "closed_issues": closed_issues,
        "closed_count": len(closed_issues),
    }

    with open(output_path, "w") as f:
        json.dump(result, f, indent=2, ensure_ascii=False)

    print(f"Wrote {len(open_issues)} open + {len(closed_issues)} closed issues to {output_path}",
          file=sys.stderr)


if __name__ == "__main__":
    main()
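Once the script has run, the JSON it writes can be consumed directly, e.g. for the "sort by reactions" triage step. A minimal sketch of ranking open issues by total reaction count (the sample records are made up for illustration; only the field names come from `transform_issue` above):

```python
# stand-in for the file fetch_issues.py writes; field names match transform_issue
data = {
    "repository": "bitsandbytes-foundation/bitsandbytes",
    "open_issues": [
        {"number": 101, "title": "CUDA setup fails", "reactions": {"THUMBS_UP": 5}},
        {"number": 202, "title": "Support torch 2.10", "reactions": {"THUMBS_UP": 2, "HEART": 1}},
        {"number": 303, "title": "Docs typo", "reactions": {}},
    ],
}

def total_reactions(issue: dict) -> int:
    # reactions is a flat {content: count} dict with zero counts already dropped
    return sum(issue["reactions"].values())

ranked = sorted(data["open_issues"], key=total_reactions, reverse=True)
for issue in ranked:
    print(f'#{issue["number"]} ({total_reactions(issue)} reactions) {issue["title"]}')
# → #101 (5 reactions) CUDA setup fails
# → #202 (3 reactions) Support torch 2.10
# → #303 (0 reactions) Docs typo
```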
