WIP - Accuracy Checkers based on proposal #36

Closed
nv-alicheng wants to merge 12 commits into main from feature/alicheng-accuracyrun

Conversation

@nv-alicheng (Collaborator)

What does this PR do?

Type of change

  • Bug fix
  • New feature
  • Documentation update
  • Refactor/cleanup

Related issues

Testing

  • Tests added/updated
  • All tests pass locally
  • Manual testing completed

Checklist

  • Code follows project style
  • Pre-commit hooks pass
  • Documentation updated (if needed)

@nv-alicheng nv-alicheng requested a review from a team as a code owner December 1, 2025 22:34

github-actions Bot commented Dec 1, 2025

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅

@gemini-code-assist

Summary of Changes

Hello @nv-alicheng, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request lays the groundwork for comprehensive accuracy checking within the inference endpoint. It introduces a modular system for defining and managing datasets and evaluators, enabling the integration of various benchmarks like GPQA. The changes also adapt the existing load generation and session management to support accuracy tests, ensuring that model outputs can be correctly linked back to their source data for evaluation and reporting.

Highlights

  • Accuracy Dataset Framework: Introduction of AccuracyDataset base class and DatasetFormat enum to standardize how datasets are defined, generated, and loaded for accuracy evaluations (a minimal sketch of these pieces appears after this list).
  • GPQA Dataset Implementation: Added a concrete implementation for the GPQA (Graduate-Level Google-Proof Q&A) benchmark, including data loading from HuggingFace, processing, and saving.
  • Accuracy Evaluator Framework: Established Evaluator and Extractor base classes, along with ABCDExtractor for robust extraction of multiple-choice answers from model outputs using regex patterns.
  • Integration with Load Generator: Modified the load generation system to support accuracy testing by tracking sample UUIDs to original dataset indices and allowing for multiple load generators (performance and accuracy) within a session.
  • Report Generation for Accuracy: Enhanced session reporting to save the mapping between sample UUIDs and their dataset indices, facilitating post-inference accuracy evaluation.
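
To make the highlights concrete, here is a minimal sketch of the pieces named above. The class and method names follow the highlights, but every signature, body, and regex below is an assumption rather than the PR's actual code:

    import re
    from abc import ABC, abstractmethod
    from enum import Enum
    from pathlib import Path
    from typing import Any


    class DatasetFormat(Enum):
        # Hypothetical members; the PR defines the real set.
        PARQUET = "parquet"
        JSONL = "jsonl"
        PICKLE = "pickle"


    class AccuracyDataset(ABC):
        @abstractmethod
        def generate(self, datasets_dir: Path) -> None:
            """Download/process the raw data and save it under datasets_dir."""

        @abstractmethod
        def load(self, datasets_dir: Path) -> Any:
            """Load the processed dataset from datasets_dir."""


    class Extractor(ABC):
        @abstractmethod
        def extract(self, model_output: str) -> str | None:
            """Pull a comparable answer out of a raw model response."""


    class ABCDExtractor(Extractor):
        # Hypothetical pattern: matches e.g. "Answer: B" or "the answer is (C)".
        _PATTERN = re.compile(r"answer\s*(?:is)?\s*:?\s*\(?([ABCD])\)?", re.IGNORECASE)

        def extract(self, model_output: str) -> str | None:
            m = self._PATTERN.search(model_output)
            return m.group(1).upper() if m else None
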
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist Bot left a comment


Code Review

This pull request introduces the foundational components for accuracy checking, including base classes for datasets and evaluators, along with a concrete implementation for the GPQA dataset. The changes are a good step towards building out the accuracy evaluation framework.

My review includes a few key points:

  • A critical bug fix in session.py for iterating over accuracy test generators.
  • A high-severity bug fix in gpqa.py related to loading dataset variants from Hugging Face.
  • Suggestions to improve the design of dataset loading by making automatic generation more accessible through the base classes.
  • Minor fixes for logging and a typo.

Overall, the structure is well-thought-out, and with these fixes, it will be a solid foundation.

Comment on lines +80 to +82

    for _, generator in accuracy_test_generators:
        for _ in generator:
            pass

critical

There is a bug in how you iterate over accuracy_test_generators. The expression for _, generator in accuracy_test_generators: iterates over the dictionary's keys and tries to two-way unpack each key string, which raises a ValueError at runtime for any key whose length is not exactly 2 (and silently binds the wrong values otherwise). To iterate over the LoadGenerator instances, iterate over the dictionary's values.

Suggested change:

    - for _, generator in accuracy_test_generators:
    -     for _ in generator:
    -         pass
    + for generator in accuracy_test_generators.values():
    +     for _ in generator:
    +         pass
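
For reference, a minimal standalone illustration of the failure mode (the dictionary contents here are hypothetical):

    # Iterating a dict yields its keys, so two-target unpacking operates on
    # each key string rather than on (key, value) pairs.
    generators = {"gpqa": iter(()), "mmlu": iter(())}  # hypothetical contents

    try:
        for _, g in generators:  # tries to unpack the key "gpqa" into two targets
            pass
    except ValueError as e:
        print(e)  # too many values to unpack (expected 2)

    for g in generators.values():  # correct: yields the generator objects
        for _ in g:
            pass
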

    def generate(self, datasets_dir: Path):
        # Load the variant from HuggingFace
        try:
            raw_ds = hf_datasets.load_dataset("Idavidrein/gpqa", f"gpqa_{self.variant}")

high

There's a bug when loading the dataset from Hugging Face: when max_samples is provided, self.variant is modified to include the sample count (e.g., "diamond_100"), and that modified name is then used to load the dataset, which will fail because the Hugging Face config name does not include a sample count. Use self._variant_name, which stores the original variant name ("diamond", "extended", or "main").

Suggested change:

    - raw_ds = hf_datasets.load_dataset("Idavidrein/gpqa", f"gpqa_{self.variant}")
    + raw_ds = hf_datasets.load_dataset("Idavidrein/gpqa", f"gpqa_{self._variant_name}")
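
A sketch of how the two attributes presumably relate, reconstructed from this comment; the constructor below is an assumption, not the PR's code:

    def __init__(self, variant: str = "diamond", max_samples: int | None = None):
        self._variant_name = variant  # original HF config name: "diamond", "extended", or "main"
        self.variant = variant
        if max_samples is not None:
            # Hypothetical: sample-count suffix used for local artifact naming,
            # e.g. "diamond_100" -- valid locally but not as an HF config name.
            self.variant = f"{variant}_{max_samples}"
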

    variable.

    If <variant> is not specified or not applicable, the default value is 'full'. The
    variant should be specied as an instance variable: .variant.

medium

There is a typo in the docstring.

Suggested change:

    - variant should be specied as an instance variable: .variant.
    + variant should be specified as an instance variable: .variant.

        raise NotImplementedError

    @abstractmethod
    def load(self, datasets_dir: Path) -> Any:

medium

The signature of the abstract method load should include the create_if_not_exists parameter to match the implementation in subclasses like GPQA. This ensures consistency and allows callers using the AccuracyDataset interface to leverage this functionality.

Suggested change:

    - def load(self, datasets_dir: Path) -> Any:
    + def load(self, datasets_dir: Path, create_if_not_exists: bool = False) -> Any:
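
Under that signature, a subclass's load could generate the dataset on demand. A minimal sketch; the artifact path, file format, and loader call are assumptions:

    def load(self, datasets_dir: Path, create_if_not_exists: bool = False) -> Any:
        path = datasets_dir / f"gpqa_{self.variant}.parquet"  # hypothetical artifact path
        if not path.exists():
            if not create_if_not_exists:
                raise FileNotFoundError(path)
            self.generate(datasets_dir)  # produce the file, then fall through to load it
        return hf_datasets.load_dataset("parquet", data_files=str(path))
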

Comment on lines +81 to +83

    print(f"Error loading dataset: {e}")
    print("Note: This dataset may require HuggingFace authentication.")
    print("Run: huggingface-cli login")

medium

It's better to use the configured logger (logger.error or logger.warning) instead of print() for outputting error messages and instructions. This provides more consistent and controllable logging within the application.

Suggested change:

    - print(f"Error loading dataset: {e}")
    - print("Note: This dataset may require HuggingFace authentication.")
    - print("Run: huggingface-cli login")
    + logger.error(f"Error loading dataset: {e}")
    + logger.error("Note: This dataset may require HuggingFace authentication.")
    + logger.error("Run: huggingface-cli login")

    self.accuracy_dataset = accuracy_dataset
    self.dataset_dir = dataset_dir

    self.ds = self.accuracy_dataset.load(dataset_dir)

medium

To improve usability, consider allowing the Evaluator to create datasets that don't exist. You can do this by adding a create_if_not_exists: bool = False parameter to Evaluator.__init__ and passing it to the load method here. This change assumes that the AccuracyDataset.load abstract method is also updated to accept this parameter.

Suggested change:

    - self.ds = self.accuracy_dataset.load(dataset_dir)
    + self.ds = self.accuracy_dataset.load(dataset_dir, create_if_not_exists=create_if_not_exists)
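
Put together, the constructor would look roughly like this; the parameter set follows the Evaluator usage in this PR's tests, but the body is an assumption:

    def __init__(self, extractor, scorer, accuracy_dataset, dataset_dir: Path,
                 create_if_not_exists: bool = False):
        self.extractor = extractor
        self.scorer = scorer
        self.accuracy_dataset = accuracy_dataset
        self.dataset_dir = dataset_dir
        # Thread the flag through so a missing dataset can be generated on first use.
        self.ds = self.accuracy_dataset.load(dataset_dir, create_if_not_exists=create_if_not_exists)
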

@viraatc viraatc self-requested a review December 3, 2025 00:00
from typing import Any, ClassVar


class DatasetFormat(Enum):
Collaborator

I would recommend that we consolidate on a single standard dataset format and provide a conversion script to convert other formats to the standard (otherwise maintaining the behavior of so many formats is tough); a rough sketch of such a converter follows.

Pickle / parquet / jsonl seem like the most popular ones.
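
A rough sketch of such a converter, normalizing everything to JSONL; the function name and the pandas dependency are my assumptions:

    from pathlib import Path

    import pandas as pd  # round-trips both parquet and pickle


    def convert_to_jsonl(src: Path, dst: Path) -> None:
        # Normalize any supported on-disk format to one standard: JSON Lines.
        if src.suffix == ".parquet":
            df = pd.read_parquet(src)
        elif src.suffix in (".pkl", ".pickle"):
            df = pd.read_pickle(src)
        elif src.suffix == ".jsonl":
            dst.write_bytes(src.read_bytes())  # already standard; just copy
            return
        else:
            raise ValueError(f"Unsupported dataset format: {src.suffix}")
        df.to_json(dst, orient="records", lines=True)
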

correct_answer = f"choice{correct_index + 1}"

# Create processed row
processed_row = {
Collaborator

This seems to be missing the common f-string for question formatting; do you plan to add it somewhere else?

@nv-alicheng nv-alicheng force-pushed the feature/alicheng-accuracyrun branch 3 times, most recently from aa7c46d to 067fddd on December 11, 2025 21:44
@nv-alicheng nv-alicheng force-pushed the feature/alicheng-accuracyrun branch from d8352d1 to 189ebd3 on December 19, 2025 00:13
@nv-alicheng (Collaborator, Author)

Closing because this was merged in a separate PR.

@github-actions github-actions Bot locked and limited conversation to collaborators Feb 14, 2026
@arekay-nv arekay-nv deleted the feature/alicheng-accuracyrun branch April 2, 2026 03:06