
feat(archon): add EP-aware padding wrapper for MoE grouped_mm #836

Merged
garrett4wade merged 1 commit into main from rchardx/ep_pad on Jan 19, 2026

Conversation

@rchardx (Collaborator) commented Jan 19, 2026

Description

Apply indices_padding_wrapper automatically in GroupedExperts.forward() when Expert Parallelism (EP) is not used. This follows Torchtitan's approach where the padding wrapper handles token alignment for torch._grouped_mm, while EP hooks handle padding when EP is enabled.

The wrapper is applied when either of the following holds (see the sketch after this list):

  • the expert weights are not DTensors, OR
  • the weights are DTensors whose device_mesh.mesh_dim_names does not include "ep"
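
As a rough sketch of this check (illustrative only; the helper name _ep_enabled is hypothetical, while indices_padding_wrapper and _run_experts_grouped_mm are the names this PR refers to, and the actual GroupedExperts.forward() may structure the call differently):

```python
from torch.distributed.tensor import DTensor

def _ep_enabled(weight) -> bool:
    # EP is considered active only when the expert weights are DTensors whose
    # device mesh carries an "ep" dimension; plain tensors never have EP hooks.
    if not isinstance(weight, DTensor):
        return False
    dim_names = weight.device_mesh.mesh_dim_names
    return dim_names is not None and "ep" in dim_names

# Inside GroupedExperts.forward() (pseudocode; argument order may differ):
#   experts_fn = _run_experts_grouped_mm
#   if not _ep_enabled(self.w1):
#       # No EP hook will pad the tokens, so wrap the grouped_mm path with the
#       # padding wrapper that aligns per-expert token counts for torch._grouped_mm.
#       experts_fn = indices_padding_wrapper(_run_experts_grouped_mm)
```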

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not
    work as expected)
  • Documentation update
  • Code refactoring (no functional changes)
  • Performance improvement
  • Test coverage improvement

Checklist

  • I have read the Contributing Guide
  • I have run formatting tools (pre-commit or manual)
  • I have run relevant unit tests and they pass
  • I have added tests for new functionality
  • I have updated documentation if needed
  • My branch is up to date with main
  • This PR introduces breaking changes (if yes, fill out details below)
  • If this PR changes documentation, I have built and previewed it locally with
    jb build docs
  • No critical issues raised by AI reviewers (/gemini review)

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @rchardx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the GroupedExperts module by introducing an intelligent padding mechanism for grouped_mm operations. It automatically applies a token alignment wrapper when Expert Parallelism (EP) is not in use, ensuring robust and efficient processing of tokens regardless of their distribution. This change improves the correctness and performance of Mixture-of-Experts (MoE) models by streamlining the handling of torch._grouped_mm in diverse execution environments.

Highlights

  • Automatic Padding Wrapper for GroupedExperts: Implemented automatic application of indices_padding_wrapper within GroupedExperts.forward() for grouped_mm operations when Expert Parallelism (EP) is not in use. This ensures correct token alignment for torch._grouped_mm.
  • Conditional Wrapper Application: The padding wrapper is conditionally applied based on whether the weights are DTensors and whether the DTensor's device mesh includes an "ep" dimension for Expert Parallelism. If EP is enabled, EP hooks handle padding instead (see the sketch after this list).
  • Enhanced Test Coverage: Added new unit tests to validate gradient flow through the padding wrapper, confirm correct behavior with both aligned and unaligned token distributions, and verify the logic for detecting Expert Parallelism.
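
To make "token alignment" concrete, below is a minimal, hypothetical stand-in for what such a padding wrapper can do. It is not the PR's indices_padding_wrapper, and the alignment value of 16 is an assumption about torch._grouped_mm's per-group requirement:

```python
import torch

def hypothetical_padding_wrapper(run_experts, align: int = 16):
    # Pad each expert's token block with zero rows so every per-expert count is
    # a multiple of `align`, call the expert function on the padded tensor, then
    # drop the padded rows from the output so callers see the original shape.
    def wrapped(x, num_tokens_per_expert, *args, **kwargs):
        padded_counts = (num_tokens_per_expert + align - 1) // align * align
        chunks, keep, start, padded_start = [], [], 0, 0
        for n, p in zip(num_tokens_per_expert.tolist(), padded_counts.tolist()):
            chunk = x[start:start + n]
            chunks.append(torch.cat([chunk, chunk.new_zeros(p - n, x.shape[-1])]))
            keep.append(torch.arange(padded_start, padded_start + n, device=x.device))
            start, padded_start = start + n, padded_start + p
        out_padded = run_experts(torch.cat(chunks), padded_counts, *args, **kwargs)
        return out_padded[torch.cat(keep)]  # strip padding rows, original order
    return wrapped
```

Because the padding rows are zeros that are sliced away from the output, gradients flow only through the real tokens, which is what the gradient-flow tests exercise.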


@gemini-code-assist Bot (Contributor) left a comment

Code Review

This pull request introduces an EP-aware padding wrapper for MoE grouped_mm, which is a solid performance enhancement. The core logic change in GroupedExperts.forward is sound. The accompanying tests are thorough and cover many important cases. However, I've identified a few areas for improvement in the tests to make them more robust and accurate. Specifically, one test is incomplete, and another doesn't fully verify the intended behavior. I've also included a minor suggestion to enhance code readability. Overall, great work on this feature.

Comment threads:
  • areal/tests/experimental/archon/test_grouped_experts.py (outdated)
  • areal/experimental/models/archon/moe/grouped_experts.py
  • areal/tests/experimental/archon/test_grouped_experts.py
  • areal/tests/experimental/archon/test_grouped_experts.py
Copilot AI (Contributor) left a comment

Pull request overview

This PR adds EP-aware automatic padding for MoE grouped matrix multiplication operations. The padding wrapper from utils is now automatically applied in GroupedExperts.forward() when Expert Parallelism is not being used, following Torchtitan's approach for token alignment.

Changes:

  • Import and conditionally apply indices_padding_wrapper to _run_experts_grouped_mm based on EP detection
  • Add comprehensive tests for gradient flow, aligned/unaligned tokens, and EP detection logic
  • Update integration tests to use CUDA devices (required for permute_tokens and grouped_mm)

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

Files reviewed:

  • areal/experimental/models/archon/moe/grouped_experts.py: adds EP-aware conditional application of the padding wrapper in forward()
  • areal/tests/experimental/archon/test_grouped_experts.py: adds tests for the padding wrapper and updates integration tests for CUDA compatibility (see the test sketch below)
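
A sketch of what such a test could look like, reusing the hypothetical_padding_wrapper from above and a plain per-expert matmul in place of torch._grouped_mm so it also runs on CPU (names and structure are illustrative, not the PR's actual test code):

```python
import torch

def test_padding_wrapper_gradients_with_unaligned_tokens():
    dim = 32
    counts = torch.tensor([3, 17, 1, 11])  # deliberately not multiples of 16
    x = torch.randn(int(counts.sum()), dim, requires_grad=True)
    w = torch.randn(counts.numel(), dim, dim, requires_grad=True)

    def run_experts(tokens, tokens_per_expert):
        # Reference expert computation: a per-expert dense matmul standing in
        # for torch._grouped_mm so the sketch stays hardware-independent.
        outs, start = [], 0
        for e, n in enumerate(tokens_per_expert.tolist()):
            outs.append(tokens[start:start + n] @ w[e])
            start += n
        return torch.cat(outs)

    out = hypothetical_padding_wrapper(run_experts)(x, counts)
    assert out.shape == x.shape
    out.sum().backward()
    assert x.grad is not None and w.grad is not None
```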


Comment threads:
  • areal/tests/experimental/archon/test_grouped_experts.py (outdated)
  • areal/experimental/models/archon/moe/grouped_experts.py
Apply indices_padding_wrapper automatically in GroupedExperts.forward()
when Expert Parallelism (EP) is not used. This follows Torchtitan's
approach where the padding wrapper handles token alignment for
torch._grouped_mm, while EP hooks handle padding when EP is enabled.

The wrapper is applied when:
- Weights are not DTensor, OR
- DTensor doesn't have "ep" in device_mesh.mesh_dim_names
@garrett4wade (Collaborator) left a comment

LGTM

@garrett4wade garrett4wade merged commit 37e4f84 into main Jan 19, 2026
1 check passed
@garrett4wade garrett4wade deleted the rchardx/ep_pad branch January 19, 2026 08:15
leandermaben pushed a commit to leandermaben/AReaL that referenced this pull request Mar 24, 2026
…ionAI#836)
