feat(archon): add Expert Parallelism (EP) support for MoE models #833
garrett4wade merged 1 commit into main from
Conversation
Summary of Changes: Hello @rchardx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly extends the Archon engine's capabilities by introducing full support for Expert Parallelism (EP) in Mixture-of-Experts (MoE) models. It provides the foundational components for distributed MoE training, including new parallel styles, mesh management, and optimized communication patterns. The changes enable efficient scaling of MoE models across multiple devices and integrate seamlessly with existing parallelism strategies.
Pull request overview
This PR implements Expert Parallelism (EP) for Mixture-of-Experts models in the Archon engine. It adds comprehensive MoE support including router, grouped experts, token dispatch/combine, and EP-aware parallelization for Qwen3 MoE models.
Changes:
- Add complete MoE module implementation (router, experts, args, utils)
- Implement ExpertParallel style using PyTorch all-to-all collectives (see the sketch after this list)
- Add ArchonParallelDims with EP mesh dimension support
- Support Qwen3 MoE model with EP-aware parallelization
- Refactor Ulysses all-to-all to use all_to_all_single_autograd for torch.compile compatibility
- Add comprehensive test coverage for MoE and EP
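For readers new to EP, below is a minimal sketch of the token dispatch/combine pattern referenced in the change list above. It assumes the autograd-aware `all_to_all_single_autograd` functional collective available in recent PyTorch (the same primitive this PR adopts for the Ulysses refactor); the function name, split-size arguments, and placeholder expert computation are illustrative, not the PR's actual API.

```python
# Minimal sketch of EP token dispatch/combine with all-to-all collectives.
# `dispatch_and_combine`, `ep_group`, and the split-size arguments are
# illustrative names, not the API introduced by this PR.
import torch
import torch.distributed as dist
from torch.distributed._functional_collectives import all_to_all_single_autograd


def dispatch_and_combine(tokens: torch.Tensor,
                         input_splits: list[int],
                         output_splits: list[int],
                         ep_group: dist.ProcessGroup) -> torch.Tensor:
    """Send each token to the EP rank that owns its expert, then bring results back.

    tokens:        [num_local_tokens, hidden], already sorted by destination rank
    input_splits:  number of tokens this rank sends to each EP rank
    output_splits: number of tokens this rank receives from each EP rank
    """
    # Dispatch: all-to-all routes token groups to their expert-owning ranks.
    # The autograd-aware functional collective keeps the op traceable,
    # which is the motivation for moving Ulysses onto it as well.
    received = all_to_all_single_autograd(tokens, output_splits, input_splits, ep_group)

    # ... local grouped-expert computation would happen here on `received` ...
    expert_out = received  # placeholder for the grouped GEMM over local experts

    # Combine: the inverse all-to-all returns expert outputs to the ranks
    # that originally held the tokens (note the split sizes are swapped).
    combined = all_to_all_single_autograd(expert_out, input_splits, output_splits, ep_group)
    return combined
```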
Reviewed changes
Copilot reviewed 55 out of 55 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| areal/experimental/models/archon/moe/* | New MoE module implementation |
| areal/experimental/distributed/archon.py | EP support in ArchonParallelDims |
| areal/experimental/models/archon/qwen3/* | Qwen3 MoE model support |
| areal/models/fsdp/ulysses.py | Refactored all-to-all implementation |
| areal/tests/experimental/archon/* | Comprehensive MoE/EP tests |
| areal/utils/fsdp/parallel.py | Moved ReplicateParallel to distributed |
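As a rough illustration of the EP mesh dimension added in `areal/experimental/distributed/archon.py` (table above), the sketch below builds a 2-D device mesh whose inner axis backs the expert-parallel all-to-all group. It assumes EP borrows ranks from the data-parallel dimension; the function name and dimension layout are assumptions for illustration, not taken from ArchonParallelDims.

```python
# Illustrative construction of a device mesh with an "ep" axis; this is an
# assumption about the general pattern, not the ArchonParallelDims code.
from torch.distributed.device_mesh import init_device_mesh


def build_ep_mesh(world_size: int, ep_degree: int):
    """Return a 2-D mesh whose inner axis is used for expert parallelism."""
    assert world_size % ep_degree == 0, "world size must be divisible by the EP degree"
    mesh = init_device_mesh(
        "cuda",
        (world_size // ep_degree, ep_degree),
        mesh_dim_names=("dp", "ep"),
    )
    # Sub-meshes are obtained by name; the "ep" slice backs the all-to-all group.
    ep_mesh = mesh["ep"]
    return mesh, ep_mesh
```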
Code Review
This pull request introduces significant new functionality by adding Expert Parallelism (EP) support for Mixture-of-Experts (MoE) models within the Archon engine. The changes are extensive, including a new MoE module with a router and grouped experts, a refactored parallel dimension management system (ArchonParallelDims) to handle the new EP dimension, and MoE-aware parallelization logic for Qwen3 models. The implementation is robust, featuring a vectorized attention mask creation for performance, a refactoring of Ulysses all-to-all to be compatible with torch.compile, and a comprehensive suite of new tests. My review found a couple of minor points to address, primarily a documentation inconsistency in the MoE configuration and the removal of some shape assertions. Overall, this is a high-quality contribution that significantly enhances the engine's capabilities.
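For context on the router and grouped experts mentioned above, here is a minimal token-choice top-k router sketch. The class name, gate layout, and renormalization choice are assumptions for illustration and may differ from the MoE router this PR actually implements.

```python
# Minimal token-choice top-k router sketch; names and normalization choices
# are illustrative and may differ from the PR's MoE router.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKRouterSketch(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        # The gate is a plain linear projection; under EP it is typically
        # replicated on every rank (cf. the ReplicateParallel style in this PR).
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: [num_tokens, hidden_size]
        logits = self.gate(hidden_states)                    # [tokens, experts]
        probs = F.softmax(logits, dim=-1, dtype=torch.float32)
        top_probs, top_idx = probs.topk(self.top_k, dim=-1)  # [tokens, top_k]
        # Renormalize the selected weights so each token's weights sum to 1
        # (Qwen3-MoE-style routing; some models skip this step).
        top_probs = top_probs / top_probs.sum(dim=-1, keepdim=True)
        return top_probs.to(hidden_states.dtype), top_idx
```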
garrett4wade left a comment
Amazing work! I've left several comments to be addressed.
Implement EP for Mixture-of-Experts models in Archon engine, tested with PyTorch 2.9.1. Key changes:
- Add MoE module with router, grouped experts, and token dispatch/combine
- Implement ExpertParallel style using PyTorch all-to-all collectives
- Add ArchonParallelDims for managing DP/TP/CP/EP mesh dimensions
- Support Qwen3 MoE model with EP-aware parallelization
- Add ReplicateParallel style for router gate computation
- Refactor Ulysses all-to-all to support both FSDP and Archon backends
Description
Implement EP for Mixture-of-Experts models in Archon engine, tested with PyTorch 2.9.1. Key changes:
- Add MoE module with router, grouped experts, and token dispatch/combine
- Implement ExpertParallel style using PyTorch all-to-all collectives
- Add ArchonParallelDims for managing DP/TP/CP/EP mesh dimensions
- Support Qwen3 MoE model with EP-aware parallelization
- Add ReplicateParallel style for router gate computation
- Refactor Ulysses all-to-all to support both FSDP and Archon backends