[Fix] Use gloo group barriers for distributed synchronization #607

garrett4wade merged 6 commits into main
Conversation
Summary of Changes

Hello @fishcrap, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a significant refactoring of the distributed synchronization logic across the codebase. It transitions all instances of dist.barrier(device_ids=...) to group-based barriers backed by gloo.
Code Review
This pull request correctly refactors distributed synchronization by replacing deprecated dist.barrier(device_ids=...) calls with dist.barrier(group=...) using a gloo backend, which is a great improvement for robustness. The changes in the core library files also correctly adopt the pattern of calling current_platform.synchronize() before dist.barrier(). However, this improved pattern is not consistently applied across all the example files. I've added comments to address these inconsistencies for correctness and maintainability. Additionally, I've pointed out a few instances of redundant synchronization calls in the examples that can be removed.
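For reference, a minimal sketch of the pattern this review describes — flush device work first, then rendezvous on a CPU-side gloo group. Here actor.cpu_group is an assumed attribute holding a gloo process group, and current_platform is the platform abstraction from areal.platforms:

```python
import torch.distributed as dist

from areal.platforms import current_platform

# Wait for all queued device kernels to finish, so the barrier below
# really means "every rank has completed its device work".
current_platform.synchronize()

# Rendezvous on the CPU via the gloo group; this does not depend on a
# live GPU communicator, which makes it more robust across platforms.
dist.barrier(group=actor.cpu_group)
```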
Warning: Gemini encountered an error creating the review. You can try again by commenting /gemini review.

/gemini review
Code Review
This pull request is a great refactoring effort to replace the deprecated dist.barrier(device_ids=...) with the recommended dist.barrier(group=...) using a gloo process group for CPU-level synchronization. This change improves correctness and portability, especially for multi-node environments. The changes are applied consistently across the codebase. I've made a couple of suggestions to use the platform-agnostic current_platform.synchronize() for better consistency and portability, instead of torch.cuda.synchronize().
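For context, a minimal sketch of how such a CPU-level gloo group can be created alongside the default device group (the name cpu_group is illustrative; dist.init_process_group and dist.new_group are the standard PyTorch APIs):

```python
import torch.distributed as dist

# Initialize the default process group on the device backend (e.g. NCCL).
dist.init_process_group(backend="nccl")

# Create a companion group over the same ranks that uses gloo, so that
# barriers can run on the CPU even if GPU communication is unavailable.
cpu_group = dist.new_group(backend="gloo")

# Later, anywhere in the program: a CPU-level synchronization point.
dist.barrier(group=cpu_group)
```

The first review comment is anchored on this hunk from the guide: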
```python
# Prepare batch for training
batch = batch.to('cuda')
dist.barrier(device_ids=[actor.device.index])
torch.cuda.synchronize()
```
For consistency with the rest of the codebase and to promote platform-agnostic code, it would be better to use current_platform.synchronize() here instead of torch.cuda.synchronize(). The user of this guide would also need to import it: from areal.platforms import current_platform.
Suggested change:

```diff
- torch.cuda.synchronize()
+ current_platform.synchronize()
```
The second comment is anchored on this hunk:

```diff
  torch.cuda.synchronize()
+ dist.barrier(group=actor.cpu_group)

  # Upload statistics to the logger
  stats = stats_tracker.export_all(reduce_group=actor.data_parallel_group)
  stats_logger.commit(epoch, step, global_step, stats)

- dist.barrier(device_ids=[actor.device.index])
  torch.cuda.synchronize()
+ dist.barrier(group=actor.cpu_group)
```
For consistency and portability across different hardware platforms, it's better to use current_platform.synchronize() instead of torch.cuda.synchronize(). This aligns with the practice in other parts of the codebase. You'll need to add from areal.platforms import current_platform at the top of the file.
Suggested change:

```diff
- torch.cuda.synchronize()
+ current_platform.synchronize()
  dist.barrier(group=actor.cpu_group)

  # Upload statistics to the logger
  stats = stats_tracker.export_all(reduce_group=actor.data_parallel_group)
  stats_logger.commit(epoch, step, global_step, stats)

- torch.cuda.synchronize()
+ current_platform.synchronize()
  dist.barrier(group=actor.cpu_group)
```
…ionAI#607) * use gloo barrier
Description
This PR uses gloo as the dist.barrier backend across the codebase.
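Concretely, the refactoring replaces the deprecated device-scoped barrier with a device synchronize followed by a CPU-level group barrier, along the lines of this sketch assembled from the review hunks above (actor.cpu_group is assumed to hold a process group created with backend="gloo"):

```python
import torch
import torch.distributed as dist

# Before (deprecated, device-scoped barrier):
#   dist.barrier(device_ids=[actor.device.index])

# After: flush pending device work, then barrier on the gloo group.
torch.cuda.synchronize()
dist.barrier(group=actor.cpu_group)
```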
Related Issue

N/A
Type of Change
Checklist
(jb build docs, /gemini review)

Breaking Change Details (if applicable):
N/A
Additional Context
N/A
Need help? Check the Contributing Guide or ask in GitHub Discussions!