
chore: minor fix doc formula#793

Merged
garrett4wade merged 2 commits into inclusionAI:main from ZiyiTsang:fix_doc
Jan 4, 2026

Conversation

ZiyiTsang (Collaborator) commented on Jan 4, 2026

Description

Fix the GRPO formula in the docs and add PPO.md.

Please double-check the GRPO formula.
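
For context, this is the standard PPO clipped surrogate objective (Schulman et al., 2017) that the new PPO.md is expected to cover; the exact notation used in PPO.md may differ:

$$
J_{\text{PPO}}(\theta) = \mathbb{E}_t \left[ \min\left( r_t(\theta) \hat{A}_t,\ \text{clip}\left( r_t(\theta),\ 1-\epsilon,\ 1+\epsilon \right) \hat{A}_t \right) \right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}
$$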

Related Issue

Fixes #784.

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not
    work as expected)
  • Documentation update
  • Code refactoring (no functional changes)
  • Performance improvement
  • Test coverage improvement

Checklist

  • I have read the Contributing Guide
  • I have run formatting tools (pre-commit or manual)
  • I have run relevant unit tests and they pass
  • I have added tests for new functionality
  • I have updated documentation if needed
  • My branch is up to date with main
  • This PR introduces breaking changes (if yes, fill out details below)
  • If this PR changes documentation, I have built and previewed it locally with
    jb build docs
  • No critical issues raised by AI reviewers (/gemini review)

Breaking Change Details (if applicable):

Additional Context



gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @ZiyiTsang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's documentation by introducing a dedicated page for the foundational PPO algorithm, correcting a critical mathematical formula in the GRPO documentation, and standardizing parameter headings across various algorithm descriptions. These changes aim to improve the accuracy, clarity, and navigability of the technical documentation for reinforcement learning algorithms.

Highlights

  • New PPO Algorithm Documentation: Comprehensive documentation for the Proximal Policy Optimization (PPO) algorithm has been added, detailing its mathematical formulation, core parameters, and relevant references. This new page is now integrated into the algorithms section of the documentation.
  • GRPO Formula Correction and Parameter Enhancement: The mathematical objective formula for Group Relative Policy Optimization (GRPO) has been corrected. Additionally, the GRPO documentation now includes new parameters for advantage normalization (actor.adv_norm.mean_level, actor.adv_norm.std_level) and clarifies the default value for kl_ctl; see the sketch after this list.
  • Standardized Documentation Headings: The heading 'Algorithm Core Parameters' has been consistently updated to 'Core Parameters' across multiple algorithm documentation files, including Dr.GRPO, GSPO, Lite-PPO, M2PO, Proximal Approximation, RLOO, and SAPO, improving uniformity.
  • Table of Contents Update: The main table of contents (_toc.yml) has been updated to include the newly added PPO documentation, ensuring it is discoverable within the project's algorithm guides.
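
As a reviewer's aid, here is a minimal sketch of how group-relative advantage normalization could behave under mean_level/std_level knobs like the PR's actor.adv_norm.mean_level and actor.adv_norm.std_level. The function name and exact semantics are assumptions for illustration, not AReaL's actual implementation:

```python
import numpy as np

def normalize_advantages(adv, group_ids, mean_level="group", std_level="batch"):
    # Hypothetical sketch: center and scale advantages at either the group
    # level (the G responses sampled for one prompt) or the batch level.
    adv = np.asarray(adv, dtype=np.float64)
    group_ids = np.asarray(group_ids)
    out = adv.copy()
    if mean_level == "group":
        for g in np.unique(group_ids):
            m = group_ids == g
            out[m] -= adv[m].mean()  # the "relative" in group-relative
    elif mean_level == "batch":
        out -= adv.mean()
    if std_level == "group":
        for g in np.unique(group_ids):
            m = group_ids == g
            out[m] /= adv[m].std() + 1e-8  # std is shift-invariant
    elif std_level == "batch":
        out /= adv.std() + 1e-8
    return out

# Example: two prompts, two responses each.
print(normalize_advantages([1.0, 0.0, 0.5, 0.2], [0, 0, 1, 1]))
```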



@ZiyiTsang ZiyiTsang marked this pull request as ready for review January 4, 2026 11:21
ZiyiTsang (Collaborator, Author) commented:


/gemini review

gemini-code-assist (Bot) commented:

Code Review

This pull request provides several documentation improvements, primarily fixing the GRPO formula and adding a new page for PPO. The changes to the GRPO formula are a good step, and I've suggested one more refinement to improve its clarity and accuracy with respect to the GRPO algorithm. The other changes, such as adding a PPO documentation page and making headings consistent, are also beneficial.

Comment thread on docs/algorithms/grpo.md (diff context)

Removed (with the per-response length normalization 1/|o_i|):

$$
J_{\text{GRPO}}(\theta) = \mathbb{E}_{\substack{q \sim P(Q),\\ \{o_i\}_{i=1}^G \sim \pi_{\theta_{\text{old}}}(O|q)}} \left[ \frac{1}{G} \sum_{i=1}^G \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \min\left( r_{i,t}(\theta) \hat{A}_{i,t},\ \text{clip}\left( r_{i,t}(\theta),\ 1-\epsilon,\ 1+\epsilon \right) \hat{A}_{i,t} \right) - \beta D_{\mathrm{KL}}\left[ \pi_\theta \middle| \pi_{\text{ref}} \right] \right]
$$

Added:

$$
J_{\text{GRPO}}(\theta) = \mathbb{E}_{\substack{q \sim P(Q),\\ \{o_i\}_{i=1}^G \sim \pi_{\theta_{\text{old}}}(O|q)}} \left[ \frac{1}{G} \sum_{i=1}^G \sum_{t=1}^{|o_i|} \min\left( r_{i,t}(\theta) \hat{A}_{i,t},\ \text{clip}\left( r_{i,t}(\theta),\ 1-\epsilon,\ 1+\epsilon \right) \hat{A}_{i,t} \right) - \beta D_{\mathrm{KL}}\left[ \pi_\theta \middle| \pi_{\text{ref}} \right] \right]
$$
gemini-code-assist (Contributor) commented (severity: medium):

While the KL divergence term can be part of a PPO-style objective, it's not a core component of GRPO itself. As noted in the parameters section, kl_ctl defaults to 0.0 for GRPO. To make the formula more specific to GRPO and less confusing, I suggest removing the KL divergence term.

Suggested change

From:

$$
J_{\text{GRPO}}(\theta) = \mathbb{E}_{\substack{q \sim P(Q),\\ \{o_i\}_{i=1}^G \sim \pi_{\theta_{\text{old}}}(O|q)}} \left[ \frac{1}{G} \sum_{i=1}^G \sum_{t=1}^{|o_i|} \min\left( r_{i,t}(\theta) \hat{A}_{i,t},\ \text{clip}\left( r_{i,t}(\theta),\ 1-\epsilon,\ 1+\epsilon \right) \hat{A}_{i,t} \right) - \beta D_{\mathrm{KL}}\left[ \pi_\theta \middle| \pi_{\text{ref}} \right] \right]
$$

To:

$$
J_{\text{GRPO}}(\theta) = \mathbb{E}_{\substack{q \sim P(Q),\\ \{o_i\}_{i=1}^G \sim \pi_{\theta_{\text{old}}}(O|q)}} \left[ \frac{1}{G} \sum_{i=1}^G \sum_{t=1}^{|o_i|} \min\left( r_{i,t}(\theta) \hat{A}_{i,t},\ \text{clip}\left( r_{i,t}(\theta),\ 1-\epsilon,\ 1+\epsilon \right) \hat{A}_{i,t} \right) \right]
$$
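
To make the discussion concrete, here is a minimal sketch of the per-token clipped surrogate with the KL penalty as an optional add-on, assuming kl_ctl plays the role of β; with the default kl_ctl = 0.0 the penalty vanishes, which is the substance of the review comment. The function name, signature, and the choice of the k3-style KL estimator are illustrative assumptions, not AReaL's actual code:

```python
import torch

def grpo_token_loss(logprobs, old_logprobs, advantages, ref_logprobs=None,
                    eps=0.2, kl_ctl=0.0):
    # Hypothetical sketch of the per-token clipped surrogate discussed above.
    ratio = torch.exp(logprobs - old_logprobs)          # r_{i,t}(theta)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)  # clip(r, 1-eps, 1+eps)
    surrogate = torch.minimum(ratio * advantages, clipped * advantages)
    loss = -surrogate
    if kl_ctl > 0.0 and ref_logprobs is not None:
        # k3-style KL estimate against the reference policy (one common choice);
        # with kl_ctl = 0.0 (GRPO's default here) this branch never runs.
        log_r = ref_logprobs - logprobs
        kl = torch.exp(log_r) - log_r - 1.0
        loss = loss + kl_ctl * kl
    return loss.mean()  # token-mean reduction; the actual reduction depends on config
```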

ZiyiTsang (Collaborator, Author) replied:

Ignore

@garrett4wade garrett4wade merged commit ed55c51 into inclusionAI:main Jan 4, 2026
1 check passed
@ZiyiTsang ZiyiTsang deleted the fix_doc branch February 19, 2026 12:51
leandermaben pushed a commit to leandermaben/AReaL that referenced this pull request Mar 24, 2026
