
Commit d2ded2a
spec: add roc-curve specification (#2278)
## New Specification: `roc-curve`

Related to #2273

---

### specification.md

# roc-curve: ROC Curve with AUC

## Description

A Receiver Operating Characteristic (ROC) curve visualizes the performance of a binary classifier by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) at various classification thresholds. The Area Under the Curve (AUC) provides a single metric summarizing model performance, where 1.0 indicates perfect classification and 0.5 represents random guessing.

## Applications

- Evaluating binary classification model performance in machine learning pipelines
- Comparing multiple models to select the best classifier for production
- Selecting optimal classification thresholds based on sensitivity/specificity trade-offs
- Assessing diagnostic test accuracy in medical research

## Data

- `fpr` (numeric) - False Positive Rate values (0 to 1)
- `tpr` (numeric) - True Positive Rate values (0 to 1)
- `auc` (numeric) - Area Under the Curve score (0 to 1)
- Size: typically 100-1000 threshold points per curve
- Example: `sklearn.metrics.roc_curve` output from binary classification predictions

## Notes

- Include a diagonal reference line (y = x) representing random classifier performance
- Display AUC score in legend or annotation
- Use distinct colors/styles when comparing multiple models
- Axes should range from 0 to 1 with equal aspect ratio preferred

---

**Next:** Add `approved` label to the issue to merge this PR.

---

:robot: *[spec-create workflow](https://github.com/MarkusNeusinger/pyplots/actions/runs/20526502674)*

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
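The TPR and FPR definitions in the description above can be made concrete at a single threshold: TPR = TP / (TP + FN) and FPR = FP / (FP + TN). A minimal sketch, assuming score-based predictions where `score >= threshold` means "predicted positive" (the helper name `rates_at_threshold` is hypothetical, not part of this repo):

```python
def rates_at_threshold(y_true, scores, thr):
    """Return (TPR, FPR) for one classification threshold.

    y_true: iterable of 0/1 labels; scores: classifier scores;
    a sample is predicted positive when its score >= thr.
    """
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= thr)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < thr)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= thr)
    tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < thr)
    return tp / (tp + fn), fp / (fp + tn)

# Example: with labels [0, 0, 1, 1], scores [0.1, 0.4, 0.35, 0.8] and
# threshold 0.5, only the 0.8-scored positive is predicted positive,
# so TPR = 1/2 and FPR = 0/2.
print(rates_at_threshold([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8], 0.5))
```

Sweeping `thr` across all distinct score values traces out the full ROC curve described in the spec.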
1 parent 95e3f29 commit d2ded2a

2 files changed

Lines changed: 58 additions & 0 deletions

File tree

plots/roc-curve/specification.md

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
# roc-curve: ROC Curve with AUC

## Description

A Receiver Operating Characteristic (ROC) curve visualizes the performance of a binary classifier by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) at various classification thresholds. The Area Under the Curve (AUC) provides a single metric summarizing model performance, where 1.0 indicates perfect classification and 0.5 represents random guessing.

## Applications

- Evaluating binary classification model performance in machine learning pipelines
- Comparing multiple models to select the best classifier for production
- Selecting optimal classification thresholds based on sensitivity/specificity trade-offs
- Assessing diagnostic test accuracy in medical research

## Data

- `fpr` (numeric) - False Positive Rate values (0 to 1)
- `tpr` (numeric) - True Positive Rate values (0 to 1)
- `auc` (numeric) - Area Under the Curve score (0 to 1)
- Size: typically 100-1000 threshold points per curve
- Example: `sklearn.metrics.roc_curve` output from binary classification predictions

## Notes

- Include a diagonal reference line (y = x) representing random classifier performance
- Display AUC score in legend or annotation
- Use distinct colors/styles when comparing multiple models
- Axes should range from 0 to 1 with equal aspect ratio preferred
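The `fpr`/`tpr`/`auc` fields in the Data section are typically produced by `sklearn.metrics.roc_curve` and `roc_auc_score`. As a dependency-free illustration of what those values mean, here is a sketch of the threshold sweep and a trapezoidal AUC (the names `roc_points` and `auc_trapezoid` are hypothetical; unlike sklearn, this version does not group tied scores into a single step):

```python
def roc_points(y_true, scores):
    """Sweep thresholds from high to low, collecting (FPR, TPR) points."""
    pairs = sorted(zip(scores, y_true), reverse=True)
    pos = sum(y_true)              # total positives
    neg = len(y_true) - pos        # total negatives
    tp = fp = 0
    fpr, tpr = [0.0], [0.0]        # curve starts at the origin
    for _score, y in pairs:        # each sample crosses the threshold in turn
        if y == 1:
            tp += 1
        else:
            fp += 1
        fpr.append(fp / neg)
        tpr.append(tp / pos)
    return fpr, tpr                # curve ends at (1, 1)

def auc_trapezoid(fpr, tpr):
    """Area under the curve via the trapezoidal rule."""
    return sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
               for i in range(len(fpr) - 1))
```

For example, `roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` yields an AUC of 0.75, matching `sklearn.metrics.roc_auc_score` on the same inputs; a perfectly separating scorer yields 1.0 and random scores hover around 0.5, as the Description states.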

plots/roc-curve/specification.yaml

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
# Specification-level metadata for roc-curve
# Auto-synced to PostgreSQL on push to main

spec_id: roc-curve
title: ROC Curve with AUC

# Specification tracking
created: 2025-12-26T17:27:56Z
updated: 2025-12-26T17:27:56Z
issue: 2273
suggested: MarkusNeusinger

# Classification tags (applies to all library implementations)
# See docs/concepts/tagging-system.md for detailed guidelines
tags:
  plot_type:
    - line
    - roc
    - curve
  data_type:
    - numeric
    - continuous
  domain:
    - statistics
    - machine-learning
    - healthcare
  features:
    - basic
    - evaluation
    - classification
    - comparison
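The plotting conventions from the spec's Notes section (diagonal y = x reference line, AUC in the legend, 0-1 axes, equal aspect ratio) can be sketched with matplotlib. This is a minimal illustration, not a pyplots implementation; `plot_roc` is a hypothetical helper name:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

def plot_roc(fpr, tpr, auc, label="model"):
    """Draw one ROC curve following the spec's Notes section."""
    fig, ax = plt.subplots()
    # Model curve, with the AUC score displayed in the legend
    ax.plot(fpr, tpr, label=f"{label} (AUC = {auc:.2f})")
    # Diagonal reference line for a random classifier
    ax.plot([0, 1], [0, 1], linestyle="--", color="grey",
            label="random (AUC = 0.50)")
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_aspect("equal")       # equal aspect ratio, as the spec prefers
    ax.set_xlabel("False Positive Rate")
    ax.set_ylabel("True Positive Rate")
    ax.legend(loc="lower right")
    return fig, ax
```

Comparing multiple models would mean calling `ax.plot` once per model with distinct colors or line styles, as the Notes suggest.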
