fix: correct accuracy row alignment in classification report table (#951)
Merged
leestott merged 3 commits into microsoft:main on Apr 11, 2026
Conversation
Contributor
Author
@microsoft-github-policy-service agree
Contributor
Pull request overview
This PR updates lesson documentation to align reported metrics/tables with the actual outputs produced by scikit-learn and the code snippets shown in the tutorials.
Changes:
- Fixes the `accuracy` row column alignment in a `classification_report()` markdown table (leaving precision/recall blank, placing accuracy under `f1-score`, and support under `support`).
- Updates the linear regression lesson text/snippet to describe and name the computed error as RMSE (since it uses `sqrt(mean_squared_error(...))`).
Show a summary per file
| File | Description |
|---|---|
| 4-Classification/2-Classifiers-1/README.md | Corrects the accuracy row alignment in the classification report table to match scikit-learn output. |
| 2-Regression/3-Linear/README.md | Adjusts documentation/snippet terminology to RMSE to match the existing computation. |
Copilot's findings
- Files reviewed: 2/2 changed files
- Comments generated: 2
Diff context from 2-Regression/3-Linear/README.md:

> The `LinearRegression` object after `fit`-ting contains all the coefficients of the regression, which can be accessed using the `.coef_` property. In our case, there is just one coefficient, which should be around `-0.017`. It means that prices seem to drop a bit with time, but not too much, around 2 cents per day. We can also access the intersection point of the regression with the Y-axis using `lin_reg.intercept_` - it will be around `21` in our case, indicating the price at the beginning of the year.

Before:

> To see how accurate our model is, we can predict prices on a test dataset, and then measure how close our predictions are to the expected values. This can be done using the mean square error (MSE) metric, which is the mean of all squared differences between expected and predicted values.

After:

> To see how accurate our model is, we can predict prices on a test dataset, and then measure how close our predictions are to the expected values. This can be done using the root mean square error (RMSE) metric, which is the root of the mean of all squared differences between expected and predicted values.
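The renamed metric matches the computation the lesson already performs. A minimal sketch of that relationship, using synthetic data (the variable names and the toy dataset here are illustrative, not taken from the lesson):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data roughly shaped like the lesson's example:
# one feature (e.g. day of year) and a slightly declining price.
rng = np.random.default_rng(0)
X = rng.uniform(0, 365, size=(200, 1))
y = 21 - 0.017 * X[:, 0] + rng.normal(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lin_reg = LinearRegression().fit(X_train, y_train)
pred = lin_reg.predict(X_test)

# sqrt(mean_squared_error(...)) is RMSE, not MSE --
# the name in the text should match what is computed.
mse = mean_squared_error(y_test, pred)
rmse = np.sqrt(mse)
print(f"MSE:  {mse:.3f}")
print(f"RMSE: {rmse:.3f}")
```

Recent scikit-learn versions also ship `sklearn.metrics.root_mean_squared_error`, which computes the same quantity directly; the explicit `sqrt` form above mirrors the lesson's existing snippet.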
This README change (switching from MSE wording/variable name to RMSE) is unrelated to the PR title/description about classification report table alignment. Please either update the PR description to include this additional fix or split it into a separate PR to keep changes focused.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
The accuracy row in the classification report table had values in the wrong columns. The `precision` and `recall` columns were filled, but they should be empty for the accuracy row. The `0.80` value belongs under `f1-score` and `1199` belongs under `support`, which matches the actual output of sklearn's `classification_report()`.
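To see why the accuracy row leaves precision and recall blank, here is a minimal sketch with toy labels (not the lesson's dataset; the numbers below are illustrative):

```python
from sklearn.metrics import classification_report

# Toy ground-truth and predicted labels, just to show the report's shape.
y_true = ["a", "a", "b", "b", "b", "a"]
y_pred = ["a", "b", "b", "b", "a", "a"]

report = classification_report(y_true, y_pred)
print(report)
# In the printed table, the "accuracy" row carries a single value, aligned
# under the f1-score column, with the total sample count under support;
# the precision and recall cells are blank. A markdown rendering of the
# report should mirror that layout, which is exactly what this PR fixes.
```

A correctly aligned markdown row therefore looks like `| accuracy | | | 0.80 | 1199 |` rather than placing `0.80` under precision.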