REAL Loss (Rewards as Labels) for GRPO Training #8424

hjh0119 merged 12 commits into modelscope:main from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request integrates the REAL (Rewards as Labels) loss into the GRPO training framework. By treating rewards directly as labels and formulating policy optimization as a group-wise classification problem, REAL aims to enhance gradient behavior and training stability. This approach specifically addresses common issues in standard GRPO, such as gradient misassignment for positive samples and gradient domination by negative samples, leading to more robust and controlled model updates.
Code Review
This pull request introduces the REAL (Rewards as Labels) algorithm, including new documentation in both Chinese and English, an example training script, and the implementation of the REAL loss calculation within the GRPO trainer. Key feedback points out critical issues in the REAL loss implementation, specifically that it incorrectly uses advantages instead of raw rewards for classification and that the valid_mask logic improperly filters out groups containing only positive or negative samples, leading to incorrect loss computation. Additionally, documentation issues were noted, including incorrect links to the training script and an inaccurate docstring for real_tau regarding the gradient magnitude's upper bound.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `--loss_type` | `str` | - | Set to `real` |
| `--scale_rewards` | `str` | - | Set to `none` (disable normalization) |
Maybe we should add parameter checks or auto-set scale_rewards to None when loss_type="real"
pos_input = (-scaled_scores).masked_fill(~batch_pos_mask, float('-inf'))
pos_loss = torch.logsumexp(torch.cat([pos_input, zeros], dim=1), dim=1)
...
loss = (neg_loss + pos_loss).sum() / group_rewards.size(0)
should we account for the number of valid samples instead?
In the current implementation, invalid groups are not dropped but contribute zero loss (via masking + logsumexp). Therefore, the objective can be viewed as an expectation over the full data distribution, i.e. `loss = sum_g(loss_g) / num_groups`, where invalid groups naturally have `loss_g = 0`.
If we instead normalize by the number of valid samples, the objective becomes a conditional expectation over valid groups only, which introduces bias relative to the original sampling distribution.
In addition, since the number of valid samples can vary across batches and training stages, such normalization would lead to unstable gradient scaling (effectively changing the learning rate dynamically).
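To make the zero-contribution point concrete, here is a tiny standalone check (the tensor names mirror the snippet above; the values are made up): for a group with no positive samples, the positive branch reduces to `logsumexp([-inf, ..., -inf, 0]) = 0`.

```python
import torch

# One group of 3 samples, none of which is positive -> an "invalid" group
scaled_scores = torch.tensor([[1.2, -0.4, 0.7]])
batch_pos_mask = torch.tensor([[False, False, False]])
zeros = torch.zeros(scaled_scores.size(0), 1)

pos_input = (-scaled_scores).masked_fill(~batch_pos_mask, float('-inf'))
pos_loss = torch.logsumexp(torch.cat([pos_input, zeros], dim=1), dim=1)

print(pos_loss)  # tensor([0.]) -- the fully masked branch adds exactly zero loss
```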
Sorry for the late review. I've left a few minor comments for your reference.
# disable normalization, REAL https://arxiv.org/abs/2602.05630
if self.loss_type == 'real':
    self.scale_rewards = 'none'
Perhaps we should notify the user that `scale_rewards` has been overridden (e.g., via `logger.info`).
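For example, a minimal sketch of such a notice (the logger handle and the helper name are illustrative assumptions, not existing ms-swift code):

```python
import logging

logger = logging.getLogger(__name__)  # assumed handle; the trainer may expose its own

def resolve_scale_rewards(loss_type: str, scale_rewards: str) -> str:
    # Hypothetical helper: REAL uses raw rewards as labels, so normalization is
    # disabled and the user is informed when their setting is overridden.
    if loss_type == 'real' and scale_rewards != 'none':
        logger.info("loss_type='real': overriding scale_rewards=%r to 'none'.", scale_rewards)
        return 'none'
    return scale_rewards
```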
Thanks for pointing this out.
cool! Are there any experimental results?

We are currently reproducing the experimental results using models and datasets that differ from those in the paper. Once we obtain stable results, we will update them in the documentation~
Sorry for the delayed update on the experimental results. Due to limited GPU resources, it is currently infeasible for us to run full-parameter training on larger models under long-context settings. Therefore, we conducted experiments using Qwen2.5-0.5B and selected NuminaMath-TIR as the training dataset. We sampled 4k instances with a fixed random seed (42).

For a fair comparison, the two runs share identical training configurations, with the only difference being the `loss_type`. The experimental results are as follows:

Additionally, since the base capability of this relatively small model is limited, its performance on several standard evaluation benchmarks is initially quite low. After applying both GRPO and REAL training, we did not observe significant improvements on these benchmarks, so we have not included those results here.
Thanks for your contribution. Could you also add the corresponding implementation to the Megatron part?

I’m happy to add the corresponding implementation for the Megatron part as well. To keep this PR focused and easier to review, would it be acceptable if I implement the Megatron support in a separate follow-up PR?

sure, let's merge this one first
PR type
PR information
This PR introduces REAL (Rewards as Labels) as a new loss type for GRPO training, along with corresponding documentation.
REAL reformulates the policy optimization objective into a group-wise classification problem by directly treating rewards as labels, instead of relying on advantage estimation. This approach improves gradient behavior and training stability.
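For intuition, here is a minimal self-contained sketch of a group-wise "rewards as labels" objective in the spirit of the snippet quoted in the review above; the meaning of the inputs, the sign-based labeling, and the temperature-like role of `real_tau` are assumptions for illustration, not the trainer's exact code.

```python
import torch

def real_style_loss(scores: torch.Tensor, rewards: torch.Tensor, real_tau: float = 1.0) -> torch.Tensor:
    """Group-wise classification treating reward signs as labels.

    scores:  (num_groups, group_size) per-sample policy scores (assumed input).
    rewards: (num_groups, group_size) raw, unscaled rewards.
    """
    pos_mask = rewards > 0                 # samples labeled "positive"
    neg_mask = ~pos_mask                   # samples labeled "negative"
    scaled_scores = scores / real_tau      # assumed temperature-like role of real_tau
    zeros = torch.zeros(scores.size(0), 1)

    # Positive branch: encourage higher scores for positive samples.
    pos_input = (-scaled_scores).masked_fill(~pos_mask, float('-inf'))
    pos_loss = torch.logsumexp(torch.cat([pos_input, zeros], dim=1), dim=1)

    # Negative branch: discourage high scores for negative samples.
    neg_input = scaled_scores.masked_fill(~neg_mask, float('-inf'))
    neg_loss = torch.logsumexp(torch.cat([neg_input, zeros], dim=1), dim=1)

    # Average over all groups; all-positive / all-negative groups contribute zero.
    return (neg_loss + pos_loss).sum() / rewards.size(0)
```

Each branch is a logsumexp against a fixed zero logit, so a fully masked branch contributes exactly zero and each group's gradient contribution stays bounded.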
Motivation
The standard GRPO loss suffers from two issues:

- gradient misassignment for positive samples
- gradient domination by negative samples
REAL addresses these issues by introducing a classification-based objective with bounded and smoother gradient scaling.
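The bounded, smoother gradient scaling can be seen directly from the logsumexp-with-zero form used above: for a single sample, the gradient of `logsumexp([x, 0])` with respect to `x` is `sigmoid(x)`, so each sample's pull on the parameters is smoothly capped at 1. A small standalone check (not trainer code):

```python
import torch

x = torch.linspace(-6.0, 6.0, 7, requires_grad=True)

# logsumexp([x_i, 0]) == log(1 + exp(x_i)), i.e. a softplus per element
loss = torch.logsumexp(torch.stack([x, torch.zeros_like(x)], dim=1), dim=1).sum()
loss.backward()

print(x.grad)            # bounded in (0, 1) ...
print(torch.sigmoid(x))  # ... and equal to sigmoid(x)
```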
Changes
- `real` as a new option for `loss_type`
- new parameter `real_tau`

Usage