diff --git a/README.md b/README.md index a7aca17..ff585bc 100644 --- a/README.md +++ b/README.md @@ -20,6 +20,7 @@ Feel free to inquire about its usage by creating an issue in this repository. | Metric | Description | | --------------------------------- | ------------------------------------------------------------------------------------------ | | Time to First Response | The duration from creation to the initial comment or review.\* | +| Time to First Review (PRs Only) | The duration from creation to the first submitted review.\* | | Time to Close | The period from creation to closure.\* | | Time to Answer (Discussions Only) | The time from creation to an answer. | | Time in Label | The duration from label application to removal, requires `LABELS_TO_MEASURE` env variable. | @@ -108,18 +109,18 @@ All feedback regarding our GitHub Actions, as a whole, should be communicated th ## Use as a GitHub Action 1. Create a repository to host this GitHub Action or select an existing repository. This is easiest if it is the same repository as the one you want to measure metrics on. -2. Select a best fit workflow file from the [examples directory](./docs/example-workflows.md) for your use case. -3. Copy that example into your repository (from step 1) and into the proper directory for GitHub Actions: `.github/workflows/` directory with the file extension `.yml` (ie. `.github/workflows/issue-metrics.yml`) -4. Edit the values (`SEARCH_QUERY`, `assignees`) from the sample workflow with your information. See the [SEARCH_QUERY](./docs/search-query.md) section for more information on how to configure the search query. -5. If you are running metrics on a repository other than the one where the workflow file is going to be, then update the value of `GH_TOKEN`. +1. Select a best fit workflow file from the [examples directory](./docs/example-workflows.md) for your use case. +1. 
Copy that example into your repository (from step 1) and into the proper directory for GitHub Actions: the `.github/workflows/` directory, with the file extension `.yml` (e.g. `.github/workflows/issue-metrics.yml`). +1. Edit the values (`SEARCH_QUERY`, `assignees`) in the sample workflow with your information. See the [SEARCH_QUERY](./docs/search-query.md) section for more information on how to configure the search query. +1. If you are running metrics on a repository other than the one where the workflow file is going to be, then update the value of `GH_TOKEN`. - Do this by creating a [GitHub API token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic) with permissions to read the repository and write issues. - Then take the value of the API token you just created, and [create a repository secret](https://docs.github.com/en/actions/security-guides/encrypted-secrets) where the name of the secret is `GH_TOKEN` and the value of the secret is the API token. - Then finally update the workflow file to use that repository secret by changing `GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}` to `GH_TOKEN: ${{ secrets.GH_TOKEN }}`. The name of the secret can really be anything; it just needs to match between when you create the secret and when you refer to it in the workflow file. - Help on verifying your token's access to your repository is available [in the docs directory](docs/verify-token-access-to-repository.md). -6. If you want the resulting issue with the metrics in it to appear in a different repository other than the one the workflow file runs in, update the line `token: ${{ secrets.GITHUB_TOKEN }}` with your own GitHub API token stored as a repository secret. +1. 
If you want the resulting issue with the metrics in it to appear in a different repository than the one the workflow file runs in, update the line `token: ${{ secrets.GITHUB_TOKEN }}` with your own GitHub API token stored as a repository secret. - This process is the same as described in the step above. More info on creating secrets can be found [in the GitHub docs security guide on encrypted secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets). -7. Commit the workflow file to the default branch (often `master` or `main`) -8. Wait for the action to trigger based on the `schedule` entry or manually trigger the workflow as shown in the [documentation](https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow). +1. Commit the workflow file to the default branch (often `master` or `main`). +1. Wait for the action to trigger based on the `schedule` entry, or manually trigger the workflow as shown in the [documentation](https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow). ### Configuration @@ -157,6 +158,7 @@ This action can be configured to authenticate with GitHub App Installation or Pe | `HIDE_TIME_TO_ANSWER` | False | False | If set to `true`, the time to answer a discussion will not be displayed in the generated Markdown file. | | `HIDE_TIME_TO_CLOSE` | False | False | If set to `true`, the time to close will not be displayed in the generated Markdown file. | | `HIDE_TIME_TO_FIRST_RESPONSE` | False | False | If set to `true`, the time to first response will not be displayed in the generated Markdown file. | +| `HIDE_TIME_TO_FIRST_REVIEW` | False | False | If set to `true`, the time to first review will not be displayed in the generated Markdown file. | | `HIDE_STATUS` | False | True | If set to `true`, the status column will not be shown. | | `HIDE_CREATED_AT` | False | True | If set to `true`, the creation timestamp will not be displayed in the generated Markdown file. 
| | `HIDE_PR_STATISTICS` | False | True | If set to `true`, PR comment statistics (mean, median, 90th percentile, and individual PR comment counts) will not be displayed in the generated Markdown file. | @@ -173,7 +175,7 @@ This action can be configured to authenticate with GitHub App Installation or Pe | `REPORT_TITLE` | False | `"Issue Metrics"` | Title to have on the report issue. | | `SEARCH_QUERY` | True | `""` | The query by which you can filter issues/PRs which must contain a `repo:`, `org:`, `owner:`, or a `user:` entry. For discussions, include `type:discussions` in the query. | | `GROUP_BY` | False | `""` | Group items in the report by the specified field. Supported values: `author`, `assignee`. When set, items will be grouped into separate sections by the chosen field. | -| `SORT_BY` | False | `""` | Sort items in the report by the specified field. Supported values: `time_to_close`, `time_to_first_response`, `time_to_answer`, `time_in_draft`, `created_at`. When set, items will be sorted by the chosen metric. | +| `SORT_BY` | False | `""` | Sort items in the report by the specified field. Supported values: `time_to_close`, `time_to_first_response`, `time_to_first_review`, `time_to_answer`, `time_in_draft`, `created_at`. When set, items will be sorted by the chosen metric. | | `SORT_ORDER` | False | `asc` | Sort order for the items. Supported values: `asc` (ascending), `desc` (descending). Only applies when `SORT_BY` is set. 
| ## Further Documentation diff --git a/classes.py b/classes.py index bc2df19..fc575ef 100644 --- a/classes.py +++ b/classes.py @@ -53,6 +53,7 @@ def __init__( self.assignee = assignee self.assignees = assignees or [] self.time_to_first_response = time_to_first_response + self.time_to_first_review = None self.time_to_close = time_to_close self.time_to_answer = time_to_answer self.time_in_draft = time_in_draft diff --git a/config.py b/config.py index d756290..d2fe279 100644 --- a/config.py +++ b/config.py @@ -39,6 +39,7 @@ class EnvVars: hide_time_to_close (bool): If true, the time to close metric is hidden in the output hide_time_to_first_response (bool): If true, the time to first response metric is hidden in the output + hide_time_to_first_review (bool): If true, the time to first review metric is hidden in the output hide_created_at (bool): If true, the created at timestamp is hidden in the output hide_status (bool): If true, the status column is hidden in the output ignore_users (List[str]): List of usernames to ignore when calculating metrics @@ -79,6 +80,7 @@ def __init__( hide_time_to_answer: bool, hide_time_to_close: bool, hide_time_to_first_response: bool, + hide_time_to_first_review: bool, hide_created_at: bool, hide_status: bool, ignore_user: List[str], @@ -114,6 +116,7 @@ def __init__( self.hide_time_to_answer = hide_time_to_answer self.hide_time_to_close = hide_time_to_close self.hide_time_to_first_response = hide_time_to_first_response + self.hide_time_to_first_review = hide_time_to_first_review self.hide_created_at = hide_created_at self.hide_status = hide_status self.enable_mentor_count = enable_mentor_count @@ -148,6 +151,7 @@ def __repr__(self): f"{self.hide_time_to_answer}, " f"{self.hide_time_to_close}, " f"{self.hide_time_to_first_response}, " + f"{self.hide_time_to_first_review}, " f"{self.hide_created_at}, " f"{self.hide_status}, " f"{self.ignore_users}, " @@ -269,6 +273,7 @@ def get_env_vars(test: bool = False) -> EnvVars: 
hide_time_to_answer = get_bool_env_var("HIDE_TIME_TO_ANSWER", False) hide_time_to_close = get_bool_env_var("HIDE_TIME_TO_CLOSE", False) hide_time_to_first_response = get_bool_env_var("HIDE_TIME_TO_FIRST_RESPONSE", False) + hide_time_to_first_review = get_bool_env_var("HIDE_TIME_TO_FIRST_REVIEW", False) hide_created_at = get_bool_env_var("HIDE_CREATED_AT", True) hide_status = get_bool_env_var("HIDE_STATUS", True) hide_pr_statistics = get_bool_env_var("HIDE_PR_STATISTICS", True) @@ -293,6 +298,7 @@ def get_env_vars(test: bool = False) -> EnvVars: hide_time_to_answer, hide_time_to_close, hide_time_to_first_response, + hide_time_to_first_review, hide_created_at, hide_status, ignore_users_list, diff --git a/issue_metrics.py b/issue_metrics.py index 59ca338..5a45815 100755 --- a/issue_metrics.py +++ b/issue_metrics.py @@ -39,6 +39,10 @@ get_stats_time_to_first_response, measure_time_to_first_response, ) +from time_to_first_review import ( + get_stats_time_to_first_review, + measure_time_to_first_review, +) from time_to_merge import measure_time_to_merge from time_to_ready_for_review import get_time_to_ready_for_review @@ -159,7 +163,13 @@ def get_per_issue_metrics( issue_with_metrics.pr_comment_count = count_pr_comments( issue, pull_request, ignore_users ) - + if not env_vars.hide_time_to_first_review and pull_request: + issue_with_metrics.time_to_first_review = measure_time_to_first_review( + issue, + pull_request, + ready_for_review_at, + ignore_users, + ) if env_vars.hide_time_to_first_response is False: issue_with_metrics.time_to_first_response = ( measure_time_to_first_response( @@ -305,6 +315,7 @@ def main(): # pragma: no cover write_to_markdown( issues_with_metrics=None, average_time_to_first_response=None, + average_time_to_first_review=None, average_time_to_close=None, average_time_to_answer=None, average_time_in_draft=None, @@ -333,6 +344,7 @@ def main(): # pragma: no cover write_to_markdown( issues_with_metrics=None, average_time_to_first_response=None, + 
average_time_to_first_review=None, average_time_to_close=None, average_time_to_answer=None, average_time_in_draft=None, @@ -365,6 +377,7 @@ def main(): # pragma: no cover ) stats_time_to_first_response = get_stats_time_to_first_response(issues_with_metrics) + stats_time_to_first_review = get_stats_time_to_first_review(issues_with_metrics) stats_time_to_close = None if num_issues_closed > 0: stats_time_to_close = get_stats_time_to_close(issues_with_metrics) @@ -385,6 +398,7 @@ def main(): # pragma: no cover write_to_json( issues_with_metrics=issues_with_metrics, stats_time_to_first_response=stats_time_to_first_response, + stats_time_to_first_review=stats_time_to_first_review, stats_time_to_close=stats_time_to_close, stats_time_to_answer=stats_time_to_answer, stats_time_in_draft=stats_time_in_draft, @@ -400,6 +414,7 @@ def main(): # pragma: no cover write_to_markdown( issues_with_metrics=issues_with_metrics, average_time_to_first_response=stats_time_to_first_response, + average_time_to_first_review=stats_time_to_first_review, average_time_to_close=stats_time_to_close, average_time_to_answer=stats_time_to_answer, average_time_in_draft=stats_time_in_draft, diff --git a/json_writer.py b/json_writer.py index 5dcd288..67834f9 100644 --- a/json_writer.py +++ b/json_writer.py @@ -4,6 +4,7 @@ write_to_json( issues_with_metrics: Union[List[IssueWithMetrics], None], stats_time_to_first_response: Union[dict[str, timedelta], None], + stats_time_to_first_review: Union[dict[str, timedelta], None], stats_time_to_close: Union[dict[str, timedelta], None], stats_time_to_answer: Union[dict[str, timedelta], None], stats_time_in_draft: Union[dict[str, timedelta], None], @@ -29,6 +30,7 @@ def write_to_json( issues_with_metrics: Union[List[IssueWithMetrics], None], stats_time_to_first_response: Union[dict[str, timedelta], None], + stats_time_to_first_review: Union[dict[str, timedelta], None], stats_time_to_close: Union[dict[str, timedelta], None], stats_time_to_answer: Union[dict[str, 
timedelta], None], stats_time_in_draft: Union[dict[str, timedelta], None], @@ -104,6 +106,15 @@ def write_to_json( med_time_to_first_response = stats_time_to_first_response["med"] p90_time_to_first_response = stats_time_to_first_response["90p"] + # time to first review + average_time_to_first_review = None + med_time_to_first_review = None + p90_time_to_first_review = None + if stats_time_to_first_review is not None: + average_time_to_first_review = stats_time_to_first_review["avg"] + med_time_to_first_review = stats_time_to_first_review["med"] + p90_time_to_first_review = stats_time_to_first_review["90p"] + # time to close average_time_to_close = None med_time_to_close = None @@ -155,16 +166,19 @@ def write_to_json( # Create a dictionary with the metrics metrics: dict[str, Any] = { "average_time_to_first_response": str(average_time_to_first_response), + "average_time_to_first_review": str(average_time_to_first_review), "average_time_to_close": str(average_time_to_close), "average_time_to_answer": str(average_time_to_answer), "average_time_in_draft": str(average_time_in_draft), "average_time_in_labels": average_time_in_labels, "median_time_to_first_response": str(med_time_to_first_response), + "median_time_to_first_review": str(med_time_to_first_review), "median_time_to_close": str(med_time_to_close), "median_time_to_answer": str(med_time_to_answer), "median_time_in_draft": str(med_time_in_draft), "median_time_in_labels": med_time_in_labels, "90_percentile_time_to_first_response": str(p90_time_to_first_response), + "90_percentile_time_to_first_review": str(p90_time_to_first_review), "90_percentile_time_to_close": str(p90_time_to_close), "90_percentile_time_to_answer": str(p90_time_to_answer), "90_percentile_time_in_draft": str(p90_time_in_draft), @@ -193,6 +207,7 @@ def write_to_json( "assignee": issue.assignee, "assignees": issue.assignees, "time_to_first_response": str(issue.time_to_first_response), + "time_to_first_review": str(issue.time_to_first_review), 
"time_to_close": str(issue.time_to_close), "time_to_answer": str(issue.time_to_answer), "time_in_draft": str(issue.time_in_draft), diff --git a/markdown_writer.py b/markdown_writer.py index 4963987..49b0048 100644 --- a/markdown_writer.py +++ b/markdown_writer.py @@ -78,6 +78,10 @@ def get_non_hidden_columns(labels) -> List[str]: if not hide_time_to_first_response: columns.append("Time to first response") + hide_time_to_first_review = env_vars.hide_time_to_first_review + if not hide_time_to_first_review: + columns.append("Time to first review") + hide_time_to_close = env_vars.hide_time_to_close if not hide_time_to_close: columns.append("Time to close") @@ -129,6 +133,7 @@ def sort_issues( valid_fields = { "time_to_close", "time_to_first_response", + "time_to_first_review", "time_to_answer", "time_in_draft", "created_at", @@ -200,6 +205,7 @@ def group_issues( def write_to_markdown( issues_with_metrics: Union[List[IssueWithMetrics], None], average_time_to_first_response: Union[dict[str, timedelta], None], + average_time_to_first_review: Union[dict[str, timedelta], None], average_time_to_close: Union[dict[str, timedelta], None], average_time_to_answer: Union[dict[str, timedelta], None], average_time_in_draft: Union[dict[str, timedelta], None], @@ -268,6 +274,7 @@ def write_to_markdown( write_overall_metrics_tables( issues_with_metrics, average_time_to_first_response, + average_time_to_first_review, average_time_to_close, average_time_to_answer, average_time_in_draft, @@ -345,6 +352,8 @@ def write_to_markdown( ) if "Time to first response" in columns: file.write(f" {issue.time_to_first_response} |") + if "Time to first review" in columns: + file.write(f" {issue.time_to_first_review} |") if "Time to close" in columns: file.write(f" {issue.time_to_close} |") if "Time to answer" in columns: @@ -374,6 +383,7 @@ def write_to_markdown( def write_overall_metrics_tables( issues_with_metrics, stats_time_to_first_response, + stats_time_to_first_review, stats_time_to_close, 
stats_time_to_answer, average_time_in_draft, @@ -397,6 +407,7 @@ def write_overall_metrics_tables( column in columns for column in [ "Time to first response", + "Time to first review", "Time to close", "Time to answer", "Time in draft", @@ -417,6 +428,16 @@ def write_overall_metrics_tables( ) else: file.write("| Time to first response | None | None | None |\n") + if "Time to first review" in columns: + if stats_time_to_first_review is not None: + file.write( + f"| Time to first review " + f"| {stats_time_to_first_review['avg']} " + f"| {stats_time_to_first_review['med']} " + f"| {stats_time_to_first_review['90p']} |\n" + ) + else: + file.write("| Time to first review | None | None | None |\n") if "Time to close" in columns: if stats_time_to_close is not None: file.write( diff --git a/test_assignee_integration.py b/test_assignee_integration.py index 1af28e6..ab7f3ff 100644 --- a/test_assignee_integration.py +++ b/test_assignee_integration.py @@ -54,6 +54,7 @@ def test_assignee_in_markdown_output(self): try: write_to_markdown( issues_with_metrics=issues_with_metrics, + average_time_to_first_review=None, average_time_to_first_response={ "avg": timedelta(hours=3), "med": timedelta(hours=3), @@ -132,6 +133,7 @@ def test_assignee_in_json_output(self): try: json_output = write_to_json( issues_with_metrics=issues_with_metrics, + stats_time_to_first_review=None, stats_time_to_first_response={ "avg": timedelta(hours=3), "med": timedelta(hours=3), diff --git a/test_column_order_fix.py b/test_column_order_fix.py index 45fcfc6..54418b8 100644 --- a/test_column_order_fix.py +++ b/test_column_order_fix.py @@ -55,6 +55,7 @@ def test_status_and_created_at_columns_alignment(self): write_to_markdown( issues_with_metrics=issues_with_metrics, average_time_to_first_response=None, + average_time_to_first_review=None, average_time_to_close=None, average_time_to_answer=None, average_time_in_draft=None, @@ -80,7 +81,7 @@ def test_status_and_created_at_columns_alignment(self): # The table 
should have the columns in the correct order # and the data should be properly aligned expected_header = ( - "| Title | URL | Assignee | Author | Time to first response | " + "| Title | URL | Assignee | Author | Time to first response | Time to first review | " "Time to close | Time to answer | Created At | Status |" ) self.assertIn(expected_header, content) @@ -92,7 +93,7 @@ def test_status_and_created_at_columns_alignment(self): "| Test Issue | https://github.com/user/repo/issues/1 | " "[assignee1](https://github.com/assignee1) | " "[testuser](https://github.com/testuser) | 1 day, 0:00:00 | " - "2 days, 0:00:00 | 3 days, 0:00:00 | 2023-01-01T00:00:00Z | open |" + "None | 2 days, 0:00:00 | 3 days, 0:00:00 | 2023-01-01T00:00:00Z | open |" ) self.assertIn(expected_row, content) diff --git a/test_config.py b/test_config.py index 280d588..bbb84e6 100644 --- a/test_config.py +++ b/test_config.py @@ -131,6 +131,7 @@ def test_get_env_vars_with_github_app(self): hide_time_to_answer=False, hide_time_to_close=False, hide_time_to_first_response=False, + hide_time_to_first_review=False, hide_created_at=True, hide_status=True, ignore_user=[], @@ -187,6 +188,7 @@ def test_get_env_vars_with_token(self): hide_time_to_answer=False, hide_time_to_close=False, hide_time_to_first_response=False, + hide_time_to_first_review=False, hide_created_at=True, hide_status=True, ignore_user=[], @@ -292,6 +294,7 @@ def test_get_env_vars_optional_values(self): hide_time_to_answer=True, hide_time_to_close=True, hide_time_to_first_response=True, + hide_time_to_first_review=False, hide_created_at=True, hide_status=True, ignore_user=[], @@ -339,6 +342,7 @@ def test_get_env_vars_optionals_are_defaulted(self): hide_time_to_answer=False, hide_time_to_close=False, hide_time_to_first_response=False, + hide_time_to_first_review=False, hide_created_at=True, hide_status=True, ignore_user=[], diff --git a/test_json_writer.py b/test_json_writer.py index 3a6a24f..5924316 100644 --- a/test_json_writer.py +++ 
b/test_json_writer.py @@ -77,16 +77,19 @@ def test_write_to_json(self): expected_output = { "average_time_to_first_response": "2 days, 12:00:00", + "average_time_to_first_review": "None", "average_time_to_close": "5 days, 0:00:00", "average_time_to_answer": "1 day, 0:00:00", "average_time_in_draft": "1 day, 0:00:00", "average_time_in_labels": {"bug": "1 day, 16:24:12"}, "median_time_to_first_response": "2 days, 12:00:00", + "median_time_to_first_review": "None", "median_time_to_close": "4 days, 0:00:00", "median_time_to_answer": "2 days, 0:00:00", "median_time_in_draft": "1 day, 0:00:00", "median_time_in_labels": {"bug": "1 day, 16:24:12"}, "90_percentile_time_to_first_response": "1 day, 12:00:00", + "90_percentile_time_to_first_review": "None", "90_percentile_time_to_close": "3 days, 0:00:00", "90_percentile_time_to_answer": "3 days, 0:00:00", "90_percentile_time_in_draft": "1 day, 0:00:00", @@ -106,6 +109,7 @@ def test_write_to_json(self): "assignee": "charlie", "assignees": ["charlie"], "time_to_first_response": "3 days, 0:00:00", + "time_to_first_review": "None", "time_to_close": "6 days, 0:00:00", "time_to_answer": "None", "time_in_draft": "1 day, 0:00:00", @@ -120,6 +124,7 @@ def test_write_to_json(self): "assignee": None, "assignees": [], "time_to_first_response": "2 days, 0:00:00", + "time_to_first_review": "None", "time_to_close": "4 days, 0:00:00", "time_to_answer": "1 day, 0:00:00", "time_in_draft": "None", @@ -136,6 +141,7 @@ def test_write_to_json(self): write_to_json( issues_with_metrics=issues_with_metrics, stats_time_to_first_response=stats_time_to_first_response, + stats_time_to_first_review=None, stats_time_to_close=stats_time_to_close, stats_time_to_answer=stats_time_to_answer, stats_time_in_draft=stats_time_in_draft, @@ -194,16 +200,19 @@ def test_write_to_json_with_no_response(self): expected_output = { "average_time_to_first_response": "None", + "average_time_to_first_review": "None", "average_time_to_close": "None", "average_time_to_answer": 
"None", "average_time_in_draft": "None", "average_time_in_labels": {}, "median_time_to_first_response": "None", + "median_time_to_first_review": "None", "median_time_to_close": "None", "median_time_to_answer": "None", "median_time_in_draft": "None", "median_time_in_labels": {}, "90_percentile_time_to_first_response": "None", + "90_percentile_time_to_first_review": "None", "90_percentile_time_to_close": "None", "90_percentile_time_to_answer": "None", "90_percentile_time_in_draft": "None", @@ -223,6 +232,7 @@ def test_write_to_json_with_no_response(self): "assignee": None, "assignees": [], "time_to_first_response": "None", + "time_to_first_review": "None", "time_to_close": "None", "time_to_answer": "None", "time_in_draft": "None", @@ -237,6 +247,7 @@ def test_write_to_json_with_no_response(self): "assignee": None, "assignees": [], "time_to_first_response": "None", + "time_to_first_review": "None", "time_to_close": "None", "time_to_answer": "None", "time_in_draft": "None", @@ -253,6 +264,7 @@ def test_write_to_json_with_no_response(self): write_to_json( issues_with_metrics=issues_with_metrics, stats_time_to_first_response=stats_time_to_first_response, + stats_time_to_first_review=None, stats_time_to_close=stats_time_to_close, stats_time_to_answer=stats_time_to_answer, stats_time_in_draft=stats_time_in_draft, diff --git a/test_markdown_writer.py b/test_markdown_writer.py index 29129f2..46b199f 100644 --- a/test_markdown_writer.py +++ b/test_markdown_writer.py @@ -103,6 +103,7 @@ def test_write_to_markdown(self): write_to_markdown( issues_with_metrics=issues_with_metrics, average_time_to_first_response=time_to_first_response, + average_time_to_first_review=None, average_time_to_close=time_to_close, average_time_to_answer=time_to_answer, average_time_in_draft=time_in_draft, @@ -126,6 +127,7 @@ def test_write_to_markdown(self): "| Metric | Average | Median | 90th percentile |\n" "| --- | --- | --- | ---: |\n" "| Time to first response | 2 days, 0:00:00 | 2 days, 0:00:00 | 
2 days, 0:00:00 |\n" + "| Time to first review | None | None | None |\n" "| Time to close | 3 days, 0:00:00 | 3 days, 0:00:00 | 3 days, 0:00:00 |\n" "| Time to answer | 4 days, 0:00:00 | 4 days, 0:00:00 | 4 days, 0:00:00 |\n" "| Time in draft | 1 day, 0:00:00 | 1 day, 0:00:00 | 1 day, 0:00:00 |\n" @@ -137,13 +139,13 @@ def test_write_to_markdown(self): "| Number of items that remain open | 2 |\n" "| Number of items closed | 1 |\n" "| Total number of items created | 2 |\n\n" - "| Title | URL | Assignee | Author | Time to first response | Time to close | " + "| Title | URL | Assignee | Author | Time to first response | Time to first review | Time to close | " "Time to answer | Time in draft | Time spent in bug | Created At | Status |\n" - "| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n" + "| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n" "| Issue 1 | https://github.com/user/repo/issues/1 | [charlie](https://github.com/charlie) | " - "[alice](https://github.com/alice) | 1 day, 0:00:00 | 2 days, 0:00:00 | 3 days, 0:00:00 | " + "[alice](https://github.com/alice) | 1 day, 0:00:00 | None | 2 days, 0:00:00 | 3 days, 0:00:00 | " "1 day, 0:00:00 | 4 days, 0:00:00 | -5 days, 0:00:00 | None |\n" - "| Issue 2 | https://github.com/user/repo/issues/2 | None | [bob](https://github.com/bob) | 3 days, 0:00:00 | " + "| Issue 2 | https://github.com/user/repo/issues/2 | None | [bob](https://github.com/bob) | 3 days, 0:00:00 | None | " "4 days, 0:00:00 | 5 days, 0:00:00 | 1 day, 0:00:00 | 2 days, 0:00:00 | -5 days, 0:00:00 | None |\n\n" "_This report was generated with the [Issue Metrics Action](https://github.com/github-community-projects/issue-metrics)_\n" "Search query used to find these items: `is:issue is:open label:bug`\n" @@ -223,6 +225,7 @@ def test_write_to_markdown_with_vertical_bar_in_title(self): write_to_markdown( issues_with_metrics=issues_with_metrics, average_time_to_first_response=average_time_to_first_response, + 
average_time_to_first_review=None, average_time_to_close=average_time_to_close, average_time_to_answer=average_time_to_answer, average_time_in_draft=average_time_in_draft, @@ -244,6 +247,7 @@ def test_write_to_markdown_with_vertical_bar_in_title(self): "| Metric | Average | Median | 90th percentile |\n" "| --- | --- | --- | ---: |\n" "| Time to first response | 2 days, 0:00:00 | 2 days, 0:00:00 | 2 days, 0:00:00 |\n" + "| Time to first review | None | None | None |\n" "| Time to close | 3 days, 0:00:00 | 3 days, 0:00:00 | 3 days, 0:00:00 |\n" "| Time to answer | 4 days, 0:00:00 | 4 days, 0:00:00 | 4 days, 0:00:00 |\n" "| Time in draft | 1 day, 0:00:00 | 1 day, 0:00:00 | 1 day, 0:00:00 |\n" @@ -255,14 +259,14 @@ def test_write_to_markdown_with_vertical_bar_in_title(self): "| Number of items that remain open | 2 |\n" "| Number of items closed | 1 |\n" "| Total number of items created | 2 |\n\n" - "| Title | URL | Assignee | Author | Time to first response | Time to close | " + "| Title | URL | Assignee | Author | Time to first response | Time to first review | Time to close | " "Time to answer | Time in draft | Time spent in bug | Created At | Status |\n" - "| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n" + "| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n" "| Issue 1 | https://github.com/user/repo/issues/1 | [charlie](https://github.com/charlie) | " - "[alice](https://github.com/alice) | 1 day, 0:00:00 | 2 days, 0:00:00 | 3 days, 0:00:00 | " + "[alice](https://github.com/alice) | 1 day, 0:00:00 | None | 2 days, 0:00:00 | 3 days, 0:00:00 | " "1 day, 0:00:00 | 1 day, 0:00:00 | -5 days, 0:00:00 | None |\n" "| feat| Issue 2 | https://github.com/user/repo/issues/2 | None | " - "[bob](https://github.com/bob) | 3 days, 0:00:00 | " + "[bob](https://github.com/bob) | 3 days, 0:00:00 | None | " "4 days, 0:00:00 | 5 days, 0:00:00 | None | 2 days, 0:00:00 | -5 days, 0:00:00 | None |\n\n" "_This report was generated with the 
[Issue Metrics Action](https://github.com/github-community-projects/issue-metrics)_\n"
         )
@@ -284,6 +288,7 @@ def test_write_to_markdown_no_issues(self):
             None,
             None,
             None,
+            None,
             report_title="Issue Metrics",
         )
@@ -310,6 +315,7 @@ def test_write_to_markdown_no_issues(self):
             "GH_TOKEN": "test_token",
             "HIDE_CREATED_AT": "False",
             "HIDE_TIME_TO_FIRST_RESPONSE": "True",
+            "HIDE_TIME_TO_FIRST_REVIEW": "True",
             "HIDE_TIME_TO_CLOSE": "True",
             "HIDE_TIME_TO_ANSWER": "True",
             "HIDE_LABEL_METRICS": "True",
@@ -379,6 +385,7 @@ def test_writes_markdown_file_with_non_hidden_columns_only(self):
         write_to_markdown(
             issues_with_metrics=issues_with_metrics,
             average_time_to_first_response=average_time_to_first_response,
+            average_time_to_first_review=None,
             average_time_to_close=average_time_to_close,
             average_time_to_answer=average_time_to_answer,
             average_time_in_draft=average_time_in_draft,
@@ -428,6 +435,7 @@ def test_writes_markdown_file_with_non_hidden_columns_only(self):
             "GH_TOKEN": "test_token",
             "HIDE_CREATED_AT": "False",
             "HIDE_TIME_TO_FIRST_RESPONSE": "True",
+            "HIDE_TIME_TO_FIRST_REVIEW": "True",
             "HIDE_TIME_TO_CLOSE": "True",
             "HIDE_TIME_TO_ANSWER": "True",
             "HIDE_LABEL_METRICS": "True",
@@ -490,6 +498,7 @@ def test_writes_markdown_file_with_hidden_status_column(self):
         write_to_markdown(
             issues_with_metrics=issues_with_metrics,
             average_time_to_first_response=average_time_to_first_response,
+            average_time_to_first_review=None,
             average_time_to_close=average_time_to_close,
             average_time_to_answer=average_time_to_answer,
             average_time_in_draft=average_time_in_draft,
@@ -538,6 +547,7 @@ def test_writes_markdown_file_with_hidden_status_column(self):
             "GH_TOKEN": "test_token",
             "HIDE_CREATED_AT": "False",
             "HIDE_TIME_TO_FIRST_RESPONSE": "True",
+            "HIDE_TIME_TO_FIRST_REVIEW": "True",
             "HIDE_TIME_TO_CLOSE": "True",
             "HIDE_TIME_TO_ANSWER": "True",
             "HIDE_LABEL_METRICS": "True",
@@ -601,6 +611,7 @@ def test_writes_markdown_file_with_hidden_items_list(self):
         write_to_markdown(
             issues_with_metrics=issues_with_metrics,
             average_time_to_first_response=average_time_to_first_response,
+            average_time_to_first_review=None,
             average_time_to_close=average_time_to_close,
             average_time_to_answer=average_time_to_answer,
             average_time_in_draft=average_time_in_draft,
diff --git a/test_sorting_grouping.py b/test_sorting_grouping.py
index a080750..d50d437 100644
--- a/test_sorting_grouping.py
+++ b/test_sorting_grouping.py
@@ -297,6 +297,7 @@ def test_write_to_markdown_with_sorting(self):
         write_to_markdown(
             issues_with_metrics=issues_with_metrics,
             average_time_to_first_response=None,
+            average_time_to_first_review=None,
             average_time_to_close=None,
             average_time_to_answer=None,
             average_time_in_draft=None,
@@ -357,6 +358,7 @@ def test_write_to_markdown_with_grouping(self):
         write_to_markdown(
             issues_with_metrics=issues_with_metrics,
             average_time_to_first_response=None,
+            average_time_to_first_review=None,
             average_time_to_close=None,
             average_time_to_answer=None,
             average_time_in_draft=None,
diff --git a/test_time_to_first_review.py b/test_time_to_first_review.py
new file mode 100644
index 0000000..1a65df2
--- /dev/null
+++ b/test_time_to_first_review.py
@@ -0,0 +1,124 @@
+"""Unit tests for the time_to_first_review module."""
+
+import unittest
+from datetime import datetime, timedelta
+from unittest.mock import MagicMock
+
+from time_to_first_review import (
+    get_stats_time_to_first_review,
+    measure_time_to_first_review,
+)
+
+
+class TestMeasureTimeToFirstReview(unittest.TestCase):
+    """Test the measure_time_to_first_review function."""
+
+    def test_measure_time_to_first_review_basic(self):
+        """Test that the function calculates the correct review time."""
+        mock_issue = MagicMock()
+        mock_issue.created_at = "2023-01-01T00:00:00Z"
+
+        mock_review = MagicMock()
+        mock_review.submitted_at = datetime.fromisoformat("2023-01-02T00:00:00Z")
+
+        mock_pull_request = MagicMock()
+        mock_pull_request.reviews.return_value = [mock_review]
+
+        result = measure_time_to_first_review(mock_issue, mock_pull_request, None, [])
+        expected = timedelta(days=1)
+        self.assertEqual(result, expected)
+
+    def test_measure_time_to_first_review_no_reviews(self):
+        """Test that the function returns None if there are no reviews."""
+        mock_issue = MagicMock()
+        mock_issue.created_at = "2023-01-01T00:00:00Z"
+
+        mock_pull_request = MagicMock()
+        mock_pull_request.reviews.return_value = []
+
+        result = measure_time_to_first_review(mock_issue, mock_pull_request, None, [])
+        self.assertEqual(result, None)
+
+    def test_measure_time_to_first_review_ignore_pending(self):
+        """Test that pending reviews are ignored."""
+        mock_issue = MagicMock()
+        mock_issue.created_at = "2023-01-01T00:00:00Z"
+
+        pending_review = MagicMock()
+        pending_review.submitted_at = None
+
+        valid_review = MagicMock()
+        valid_review.submitted_at = datetime.fromisoformat("2023-01-03T00:00:00Z")
+
+        mock_pull_request = MagicMock()
+        mock_pull_request.reviews.return_value = [pending_review, valid_review]
+
+        result = measure_time_to_first_review(mock_issue, mock_pull_request, None, [])
+        expected = timedelta(days=2)
+        self.assertEqual(result, expected)
+
+    def test_get_stats_time_to_first_review_normal(self):
+        """Test a normal list of issues with review times."""
+        issue1 = MagicMock()
+        issue1.time_to_first_review = timedelta(days=1)
+        issue2 = MagicMock()
+        issue2.time_to_first_review = timedelta(days=3)
+
+        stats = get_stats_time_to_first_review([issue1, issue2])
+        self.assertIsNotNone(stats)
+        self.assertEqual(stats["avg"], timedelta(days=2))
+
+    def test_get_stats_time_to_first_review_all_none(self):
+        """Test a list where all review times are None."""
+        issue = MagicMock()
+        issue.time_to_first_review = None
+        self.assertIsNone(get_stats_time_to_first_review([issue]))
+
+    def test_get_stats_time_to_first_review_empty(self):
+        """Test an empty list."""
+        self.assertIsNone(get_stats_time_to_first_review([]))
+
+    def test_measure_time_to_first_review_ready_for_review_path(self):
+        """Test the ready_for_review_at path (start time logic)."""
+        mock_issue = MagicMock()
+        mock_issue.created_at = "2023-01-01T00:00:00Z"
+        ready_at = datetime.fromisoformat("2023-01-01T12:00:00Z")
+
+        mock_review = MagicMock()
+        mock_review.submitted_at = datetime.fromisoformat("2023-01-01T13:00:00Z")
+
+        mock_pr = MagicMock()
+        mock_pr.reviews.return_value = [mock_review]
+
+        result = measure_time_to_first_review(mock_issue, mock_pr, ready_at, [])
+        self.assertEqual(result, timedelta(hours=1))
+
+    def test_measure_time_to_first_review_ignore_users(self):
+        """Test filtering out a matching reviewer from ignore_users."""
+        mock_issue = MagicMock()
+        mock_issue.created_at = "2023-01-01T10:00:00Z"
+
+        bad_review = MagicMock()
+        bad_review.user.login = "bot-user"
+        bad_review.submitted_at = datetime.fromisoformat("2023-01-01T11:00:00Z")
+
+        good_review = MagicMock()
+        good_review.user.login = "human-user"
+        good_review.submitted_at = datetime.fromisoformat("2023-01-01T12:00:00Z")
+
+        mock_pr = MagicMock()
+        mock_pr.reviews.return_value = [bad_review, good_review]
+
+        result = measure_time_to_first_review(mock_issue, mock_pr, None, ["bot-user"])
+        self.assertEqual(result, timedelta(hours=2))
+
+    def test_measure_time_to_first_review_type_error_path(self):
+        """Test the except TypeError error handling path."""
+        mock_issue = MagicMock()
+        mock_issue.created_at = 12345
+
+        mock_pr = MagicMock()
+        mock_pr.reviews.return_value = [MagicMock()]
+
+        result = measure_time_to_first_review(mock_issue, mock_pr, None, [])
+        self.assertIsNone(result)
diff --git a/time_to_first_review.py b/time_to_first_review.py
new file mode 100644
index 0000000..6fec968
--- /dev/null
+++ b/time_to_first_review.py
@@ -0,0 +1,98 @@
+"""Utilities for measuring time to first review for pull requests."""
+
+from datetime import datetime, timedelta
+from typing import List, Union
+
+import github3
+import numpy
+from classes import IssueWithMetrics
+from time_to_first_response import ignore_comment
+
+
+def measure_time_to_first_review(
+    issue: Union[github3.issues.Issue, None],
+    pull_request: Union[github3.pulls.PullRequest, None],
+    ready_for_review_at: Union[datetime, None] = None,
+    ignore_users: Union[List[str], None] = None,
+) -> Union[timedelta, None]:
+    """Measure the duration between pull request creation time and the timestamp when the first review is submitted."""
+
+    if not issue or not pull_request:
+        return None
+
+    if ignore_users is None:
+        ignore_users = []
+
+    first_review_time = None
+
+    try:
+        # Start measuring from the ready-for-review time when it is available,
+        # otherwise from the pull request creation time
+        if ready_for_review_at:
+            pr_created_time = ready_for_review_at
+        else:
+            pr_created_time = datetime.fromisoformat(issue.created_at)
+
+        reviews = pull_request.reviews(number=50)
+        for review in reviews:
+            # Skip reviews that have not been submitted yet (pending reviews)
+            if review.submitted_at is None:
+                continue
+
+            if ignore_comment(
+                issue.issue.user,
+                review.user,
+                ignore_users,
+                review.submitted_at,
+                ready_for_review_at,
+            ):
+                continue
+
+            first_review_time = review.submitted_at
+            break
+
+    except TypeError as e:
+        print(
+            f"An error occurred processing review comments. Perhaps the review contains a ghost user. {e}"
+        )
+        return None
+
+    if first_review_time is None:
+        return None
+
+    return first_review_time - pr_created_time
+
+
+def get_stats_time_to_first_review(
+    issues: List[IssueWithMetrics],
+) -> Union[dict[str, timedelta], None]:
+    """Compute statistics (average, median, 90th percentile) for time to first review."""
+    review_times = []
+    none_count = 0
+    for issue in issues:
+        if issue.time_to_first_review:
+            review_times.append(issue.time_to_first_review.total_seconds())
+        else:
+            none_count += 1
+
+    if len(issues) - none_count <= 0:
+        return None
+
+    average_seconds_to_first_review = numpy.round(numpy.average(review_times))
+    med_seconds_to_first_review = numpy.round(numpy.median(review_times))
+    ninety_percentile_seconds_to_first_review = numpy.round(
+        numpy.percentile(review_times, 90, axis=0)
+    )
+
+    stats = {
+        "avg": timedelta(seconds=average_seconds_to_first_review),
+        "med": timedelta(seconds=med_seconds_to_first_review),
+        "90p": timedelta(seconds=ninety_percentile_seconds_to_first_review),
+    }
+
+    # Print the average time to first review, converting seconds to a readable format
+    print(
+        f"Average time to first review: {timedelta(seconds=average_seconds_to_first_review)}"
+    )
+
+    return stats
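As a sanity check of what the stats helper above computes, here is a dependency-free sketch of the same avg/med/90p aggregation over review times. It is an illustration only: it uses plain Python stand-ins (a list of `timedelta`-or-`None` values rather than `IssueWithMetrics` objects) and a nearest-rank 90th percentile, which can differ slightly from `numpy.percentile`'s default linear interpolation.

```python
from datetime import timedelta
from statistics import median


def stats_time_to_first_review(review_times):
    """Stdlib-only mirror of get_stats_time_to_first_review.

    review_times: list of timedelta or None (None means "never reviewed").
    Returns a dict of avg/med/90p timedeltas, or None if nothing was reviewed.
    """
    seconds = [t.total_seconds() for t in review_times if t is not None]
    if not seconds:
        return None
    seconds.sort()
    # Nearest-rank 90th percentile; numpy interpolates between ranks instead.
    idx = min(len(seconds) - 1, round(0.9 * (len(seconds) - 1)))
    return {
        "avg": timedelta(seconds=round(sum(seconds) / len(seconds))),
        "med": timedelta(seconds=round(median(seconds))),
        "90p": timedelta(seconds=seconds[idx]),
    }


if __name__ == "__main__":
    times = [timedelta(days=1), timedelta(days=3), None]
    print(stats_time_to_first_review(times)["avg"])  # average of 1 and 3 days
```

Like the module above, the sketch excludes unreviewed items from the averages instead of counting them as zero, which is why an all-`None` input returns `None` rather than a zero duration.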