Commit 8bd2f25

Merge pull request #683 from meoyushi/feat-time-to-first-review
feat: add time_to_first_review metric for pull requests
2 parents 18503f1 + be94607 commit 8bd2f25

14 files changed: 327 additions & 19 deletions

README.md

Lines changed: 10 additions & 8 deletions
```diff
@@ -20,6 +20,7 @@ Feel free to inquire about its usage by creating an issue in this repository.
 | Metric | Description |
 | --------------------------------- | ------------------------------------------------------------------------------------------ |
 | Time to First Response | The duration from creation to the initial comment or review.\* |
+| Time to First Review (PRs Only) | The duration from creation to the first submitted review.\* |
 | Time to Close | The period from creation to closure.\* |
 | Time to Answer (Discussions Only) | The time from creation to an answer. |
 | Time in Label | The duration from label application to removal, requires `LABELS_TO_MEASURE` env variable. |
@@ -108,18 +109,18 @@ All feedback regarding our GitHub Actions, as a whole, should be communicated th
 ## Use as a GitHub Action

 1. Create a repository to host this GitHub Action or select an existing repository. This is easiest if it is the same repository as the one you want to measure metrics on.
-2. Select a best fit workflow file from the [examples directory](./docs/example-workflows.md) for your use case.
-3. Copy that example into your repository (from step 1) and into the proper directory for GitHub Actions: `.github/workflows/` directory with the file extension `.yml` (ie. `.github/workflows/issue-metrics.yml`)
-4. Edit the values (`SEARCH_QUERY`, `assignees`) from the sample workflow with your information. See the [SEARCH_QUERY](./docs/search-query.md) section for more information on how to configure the search query.
-5. If you are running metrics on a repository other than the one where the workflow file is going to be, then update the value of `GH_TOKEN`.
+1. Select a best fit workflow file from the [examples directory](./docs/example-workflows.md) for your use case.
+1. Copy that example into your repository (from step 1) and into the proper directory for GitHub Actions: `.github/workflows/` directory with the file extension `.yml` (ie. `.github/workflows/issue-metrics.yml`)
+1. Edit the values (`SEARCH_QUERY`, `assignees`) from the sample workflow with your information. See the [SEARCH_QUERY](./docs/search-query.md) section for more information on how to configure the search query.
+1. If you are running metrics on a repository other than the one where the workflow file is going to be, then update the value of `GH_TOKEN`.
    - Do this by creating a [GitHub API token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic) with permissions to read the repository and write issues.
    - Then take the value of the API token you just created, and [create a repository secret](https://docs.github.com/en/actions/security-guides/encrypted-secrets) where the name of the secret is `GH_TOKEN` and the value of the secret the API token.
    - Then finally update the workflow file to use that repository secret by changing `GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}` to `GH_TOKEN: ${{ secrets.GH_TOKEN }}`. The name of the secret can really be anything. It just needs to match between when you create the secret name and when you refer to it in the workflow file.
    - Help on verifying your token's access to your repository [in the docs directory](docs/verify-token-access-to-repository.md)
-6. If you want the resulting issue with the metrics in it to appear in a different repository other than the one the workflow file runs in, update the line `token: ${{ secrets.GITHUB_TOKEN }}` with your own GitHub API token stored as a repository secret.
+1. If you want the resulting issue with the metrics in it to appear in a different repository other than the one the workflow file runs in, update the line `token: ${{ secrets.GITHUB_TOKEN }}` with your own GitHub API token stored as a repository secret.
    - This process is the same as described in the step above. More info on creating secrets can be found [in the GitHub docs security guide on encrypted secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets).
-7. Commit the workflow file to the default branch (often `master` or `main`)
-8. Wait for the action to trigger based on the `schedule` entry or manually trigger the workflow as shown in the [documentation](https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow).
+1. Commit the workflow file to the default branch (often `master` or `main`)
+1. Wait for the action to trigger based on the `schedule` entry or manually trigger the workflow as shown in the [documentation](https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow).

 ### Configuration

@@ -157,6 +158,7 @@ This action can be configured to authenticate with GitHub App Installation or Pe
 | `HIDE_TIME_TO_ANSWER` | False | False | If set to `true`, the time to answer a discussion will not be displayed in the generated Markdown file. |
 | `HIDE_TIME_TO_CLOSE` | False | False | If set to `true`, the time to close will not be displayed in the generated Markdown file. |
 | `HIDE_TIME_TO_FIRST_RESPONSE` | False | False | If set to `true`, the time to first response will not be displayed in the generated Markdown file. |
+| `HIDE_TIME_TO_FIRST_REVIEW` | False | False | If set to `true`, the time to first review will not be displayed in the generated Markdown file. |
 | `HIDE_STATUS` | False | True | If set to `true`, the status column will not be shown |
 | `HIDE_CREATED_AT` | False | True | If set to `true`, the creation timestamp will not be displayed in the generated Markdown file. |
 | `HIDE_PR_STATISTICS` | False | True | If set to `true`, PR comment statistics (mean, median, 90th percentile, and individual PR comment counts) will not be displayed in the generated Markdown file. |
@@ -173,7 +175,7 @@ This action can be configured to authenticate with GitHub App Installation or Pe
 | `REPORT_TITLE` | False | `"Issue Metrics"` | Title to have on the report issue. |
 | `SEARCH_QUERY` | True | `""` | The query by which you can filter issues/PRs which must contain a `repo:`, `org:`, `owner:`, or a `user:` entry. For discussions, include `type:discussions` in the query. |
 | `GROUP_BY` | False | `""` | Group items in the report by the specified field. Supported values: `author`, `assignee`. When set, items will be grouped into separate sections by the chosen field. |
-| `SORT_BY` | False | `""` | Sort items in the report by the specified field. Supported values: `time_to_close`, `time_to_first_response`, `time_to_answer`, `time_in_draft`, `created_at`. When set, items will be sorted by the chosen metric. |
+| `SORT_BY` | False | `""` | Sort items in the report by the specified field. Supported values: `time_to_close`, `time_to_first_response`, `time_to_first_review`, `time_to_answer`, `time_in_draft`, `created_at`. When set, items will be sorted by the chosen metric. |
 | `SORT_ORDER` | False | `asc` | Sort order for the items. Supported values: `asc` (ascending), `desc` (descending). Only applies when `SORT_BY` is set. |

 ## Further Documentation
```
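Following the workflow shape described in the examples directory, the new options from the configuration table might be wired together like this; the repository in `SEARCH_QUERY` and the date range are placeholders, not values from this diff:

```yaml
env:
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  SEARCH_QUERY: 'repo:owner/repo is:pr created:2024-01-01..2024-01-31'
  HIDE_TIME_TO_FIRST_REVIEW: 'false'
  SORT_BY: 'time_to_first_review'
  SORT_ORDER: 'desc'
```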

classes.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -53,6 +53,7 @@ def __init__(
         self.assignee = assignee
         self.assignees = assignees or []
         self.time_to_first_response = time_to_first_response
+        self.time_to_first_review = None
         self.time_to_close = time_to_close
         self.time_to_answer = time_to_answer
         self.time_in_draft = time_in_draft
```
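The pattern is that the new attribute starts as `None` on every item and is only populated later for pull requests that received a review. A reduced sketch (the real constructor takes many more parameters; the two-argument form here is a simplification):

```python
from datetime import timedelta

class IssueWithMetrics:
    """Reduced sketch of the class; most constructor parameters omitted."""

    def __init__(self, title: str, html_url: str):
        self.title = title
        self.html_url = html_url
        self.time_to_first_response = None
        self.time_to_first_review = None  # new in this commit; filled in for PRs only

pr = IssueWithMetrics("feat: add metric", "https://github.com/example/repo/pull/1")
pr.time_to_first_review = timedelta(hours=5)  # later set by the measurement step
```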

config.py

Lines changed: 6 additions & 0 deletions
```diff
@@ -39,6 +39,7 @@ class EnvVars:
         hide_time_to_close (bool): If true, the time to close metric is hidden in the output
         hide_time_to_first_response (bool): If true, the time to first response metric is hidden
             in the output
+        hide_time_to_first_review (bool): If true, the time to first review metric is hidden in the output
         hide_created_at (bool): If true, the created at timestamp is hidden in the output
         hide_status (bool): If true, the status column is hidden in the output
         ignore_users (List[str]): List of usernames to ignore when calculating metrics
@@ -79,6 +80,7 @@ def __init__(
         hide_time_to_answer: bool,
         hide_time_to_close: bool,
         hide_time_to_first_response: bool,
+        hide_time_to_first_review: bool,
         hide_created_at: bool,
         hide_status: bool,
         ignore_user: List[str],
@@ -114,6 +116,7 @@ def __init__(
         self.hide_time_to_answer = hide_time_to_answer
         self.hide_time_to_close = hide_time_to_close
         self.hide_time_to_first_response = hide_time_to_first_response
+        self.hide_time_to_first_review = hide_time_to_first_review
         self.hide_created_at = hide_created_at
         self.hide_status = hide_status
         self.enable_mentor_count = enable_mentor_count
@@ -148,6 +151,7 @@ def __repr__(self):
             f"{self.hide_time_to_answer}, "
             f"{self.hide_time_to_close}, "
             f"{self.hide_time_to_first_response}, "
+            f"{self.hide_time_to_first_review}, "
             f"{self.hide_created_at}, "
             f"{self.hide_status}, "
             f"{self.ignore_users}, "
@@ -269,6 +273,7 @@ def get_env_vars(test: bool = False) -> EnvVars:
     hide_time_to_answer = get_bool_env_var("HIDE_TIME_TO_ANSWER", False)
     hide_time_to_close = get_bool_env_var("HIDE_TIME_TO_CLOSE", False)
     hide_time_to_first_response = get_bool_env_var("HIDE_TIME_TO_FIRST_RESPONSE", False)
+    hide_time_to_first_review = get_bool_env_var("HIDE_TIME_TO_FIRST_REVIEW", False)
     hide_created_at = get_bool_env_var("HIDE_CREATED_AT", True)
     hide_status = get_bool_env_var("HIDE_STATUS", True)
     hide_pr_statistics = get_bool_env_var("HIDE_PR_STATISTICS", True)
@@ -293,6 +298,7 @@ def get_env_vars(test: bool = False) -> EnvVars:
         hide_time_to_answer,
         hide_time_to_close,
         hide_time_to_first_response,
+        hide_time_to_first_review,
         hide_created_at,
         hide_status,
         ignore_users_list,
```
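The new flag is read through the same `get_bool_env_var` helper as its siblings, defaulting to `False`. The helper's exact semantics are not shown in this diff; a minimal sketch, assuming that only a case-insensitive `"true"` counts as true and a missing variable falls back to the default:

```python
import os

def get_bool_env_var(name: str, default: bool = False) -> bool:
    # Missing variable falls back to the default; only the literal "true"
    # (any casing, surrounding whitespace ignored) reads as True.
    return os.environ.get(name, str(default)).strip().lower() == "true"

hide_time_to_first_review = get_bool_env_var("HIDE_TIME_TO_FIRST_REVIEW", False)
```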

issue_metrics.py

Lines changed: 16 additions & 1 deletion
```diff
@@ -39,6 +39,10 @@
     get_stats_time_to_first_response,
     measure_time_to_first_response,
 )
+from time_to_first_review import (
+    get_stats_time_to_first_review,
+    measure_time_to_first_review,
+)
 from time_to_merge import measure_time_to_merge
 from time_to_ready_for_review import get_time_to_ready_for_review

@@ -159,7 +163,13 @@ def get_per_issue_metrics(
             issue_with_metrics.pr_comment_count = count_pr_comments(
                 issue, pull_request, ignore_users
             )
-
+            if not env_vars.hide_time_to_first_review and pull_request:
+                issue_with_metrics.time_to_first_review = measure_time_to_first_review(
+                    issue,
+                    pull_request,
+                    ready_for_review_at,
+                    ignore_users,
+                )
         if env_vars.hide_time_to_first_response is False:
             issue_with_metrics.time_to_first_response = (
                 measure_time_to_first_response(
@@ -305,6 +315,7 @@ def main():  # pragma: no cover
         write_to_markdown(
             issues_with_metrics=None,
             average_time_to_first_response=None,
+            average_time_to_first_review=None,
             average_time_to_close=None,
             average_time_to_answer=None,
             average_time_in_draft=None,
@@ -333,6 +344,7 @@ def main():  # pragma: no cover
         write_to_markdown(
             issues_with_metrics=None,
             average_time_to_first_response=None,
+            average_time_to_first_review=None,
             average_time_to_close=None,
             average_time_to_answer=None,
             average_time_in_draft=None,
@@ -365,6 +377,7 @@ def main():  # pragma: no cover
     )

     stats_time_to_first_response = get_stats_time_to_first_response(issues_with_metrics)
+    stats_time_to_first_review = get_stats_time_to_first_review(issues_with_metrics)
     stats_time_to_close = None
     if num_issues_closed > 0:
         stats_time_to_close = get_stats_time_to_close(issues_with_metrics)
@@ -385,6 +398,7 @@ def main():  # pragma: no cover
     write_to_json(
         issues_with_metrics=issues_with_metrics,
         stats_time_to_first_response=stats_time_to_first_response,
+        stats_time_to_first_review=stats_time_to_first_review,
         stats_time_to_close=stats_time_to_close,
         stats_time_to_answer=stats_time_to_answer,
         stats_time_in_draft=stats_time_in_draft,
@@ -400,6 +414,7 @@ def main():  # pragma: no cover
     write_to_markdown(
         issues_with_metrics=issues_with_metrics,
         average_time_to_first_response=stats_time_to_first_response,
+        average_time_to_first_review=stats_time_to_first_review,
         average_time_to_close=stats_time_to_close,
         average_time_to_answer=stats_time_to_answer,
         average_time_in_draft=stats_time_in_draft,
```
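The new `time_to_first_review.py` module itself is not shown in this excerpt, but the call site above suggests its shape: given a PR, a ready-for-review timestamp, and a list of users to ignore, return the delta to the earliest kept review. A hypothetical sketch with stand-in types (`Review`, `measure_time_to_first_review_sketch`, and the start-time fallback are all assumptions, not the actual implementation):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Optional

@dataclass
class Review:
    """Stand-in for a PyGithub review object (login + submission time)."""
    user_login: str
    submitted_at: datetime

def measure_time_to_first_review_sketch(
    created_at: datetime,
    reviews: List[Review],
    ready_for_review_at: Optional[datetime] = None,
    ignore_users: Optional[List[str]] = None,
) -> Optional[timedelta]:
    """Delta from creation (or draft exit, if later known) to the earliest
    review not authored by an ignored user; None when no review qualifies."""
    ignore_users = ignore_users or []
    start = ready_for_review_at or created_at
    kept = [r.submitted_at for r in reviews if r.user_login not in ignore_users]
    if not kept:
        return None
    return min(kept) - start
```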

json_writer.py

Lines changed: 15 additions & 0 deletions
```diff
@@ -4,6 +4,7 @@
     write_to_json(
         issues_with_metrics: Union[List[IssueWithMetrics], None],
         stats_time_to_first_response: Union[dict[str, timedelta], None],
+        stats_time_to_first_review: Union[dict[str, timedelta], None],
         stats_time_to_close: Union[dict[str, timedelta], None],
         stats_time_to_answer: Union[dict[str, timedelta], None],
         stats_time_in_draft: Union[dict[str, timedelta], None],
@@ -29,6 +30,7 @@
 def write_to_json(
     issues_with_metrics: Union[List[IssueWithMetrics], None],
     stats_time_to_first_response: Union[dict[str, timedelta], None],
+    stats_time_to_first_review: Union[dict[str, timedelta], None],
     stats_time_to_close: Union[dict[str, timedelta], None],
     stats_time_to_answer: Union[dict[str, timedelta], None],
     stats_time_in_draft: Union[dict[str, timedelta], None],
@@ -104,6 +106,15 @@ def write_to_json(
         med_time_to_first_response = stats_time_to_first_response["med"]
         p90_time_to_first_response = stats_time_to_first_response["90p"]

+    # time to first review
+    average_time_to_first_review = None
+    med_time_to_first_review = None
+    p90_time_to_first_review = None
+    if stats_time_to_first_review is not None:
+        average_time_to_first_review = stats_time_to_first_review["avg"]
+        med_time_to_first_review = stats_time_to_first_review["med"]
+        p90_time_to_first_review = stats_time_to_first_review["90p"]
+
     # time to close
     average_time_to_close = None
     med_time_to_close = None
@@ -155,16 +166,19 @@ def write_to_json(
     # Create a dictionary with the metrics
     metrics: dict[str, Any] = {
         "average_time_to_first_response": str(average_time_to_first_response),
+        "average_time_to_first_review": str(average_time_to_first_review),
         "average_time_to_close": str(average_time_to_close),
         "average_time_to_answer": str(average_time_to_answer),
         "average_time_in_draft": str(average_time_in_draft),
         "average_time_in_labels": average_time_in_labels,
         "median_time_to_first_response": str(med_time_to_first_response),
+        "median_time_to_first_review": str(med_time_to_first_review),
         "median_time_to_close": str(med_time_to_close),
         "median_time_to_answer": str(med_time_to_answer),
         "median_time_in_draft": str(med_time_in_draft),
         "median_time_in_labels": med_time_in_labels,
         "90_percentile_time_to_first_response": str(p90_time_to_first_response),
+        "90_percentile_time_to_first_review": str(p90_time_to_first_review),
         "90_percentile_time_to_close": str(p90_time_to_close),
         "90_percentile_time_to_answer": str(p90_time_to_answer),
         "90_percentile_time_in_draft": str(p90_time_in_draft),
@@ -193,6 +207,7 @@ def write_to_json(
             "assignee": issue.assignee,
             "assignees": issue.assignees,
             "time_to_first_response": str(issue.time_to_first_response),
+            "time_to_first_review": str(issue.time_to_first_review),
             "time_to_close": str(issue.time_to_close),
             "time_to_answer": str(issue.time_to_answer),
             "time_in_draft": str(issue.time_in_draft),
```
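The JSON writer follows the existing serialization pattern: unpack `avg`/`med`/`90p` when the stats dict exists, then pass each value through `str()`, which means a missing metric lands in the JSON as the literal string `"None"` rather than JSON `null`. A self-contained sketch of that pattern (the helper name is illustrative, not from the diff):

```python
from datetime import timedelta
from typing import Optional

def review_stats_to_json_fields(stats: Optional[dict]) -> dict:
    """Mirror the diff's pattern: unpack avg/med/90p when stats exist,
    then serialize with str(), so a missing metric becomes "None"."""
    avg = med = p90 = None
    if stats is not None:
        avg, med, p90 = stats["avg"], stats["med"], stats["90p"]
    return {
        "average_time_to_first_review": str(avg),
        "median_time_to_first_review": str(med),
        "90_percentile_time_to_first_review": str(p90),
    }
```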
