Releases: inclusionAI/AReaL
v1.0.1
Release Note
A patch release that fixes a dependency issue in the docker image and enriches the documentation and testing of the OpenClaw example.
What's Changed
- fix(config): Fix openclaw config typo and increase max_tokens_per_mb by @fishcrap in #959
- docs(openclaw): Replace hardcoded admin key with placeholder in README by @fishcrap in #967
- feat: Fully support MIS/TIS to stabilize rollout-training mismatch by @ZiyiTsang in #930
- refactor(api): move validation into config post_init methods by @rchardx in #970
- fix(openai-proxy): return None for empty trajectory in online mode by @fishcrap in #971
- Ray placement group refactor and preliminary architecture for multinode inference instances by @hlyli in #966
- feat: Add Chinese doc by @ZiyiTsang in #969
- fix(api): replace Literal type with str for SchedulingSpec.ray_placement_strategy by @garrett4wade in #976
- update readme by @xssstory in #974
- test(examples): Add OpenClaw online RL integration test by @fishcrap in #977
- fix: pinning torchao version to 0.15.0 by @garrett4wade in #981
- bump v1.0.1 by @garrett4wade in #982
Full Changelog: v1.0.0...v1.0.1
v1.0.0
🚀 Key Highlights
Release Notes
Online RL Training
- Seamlessly train any agent by configuring a `base_url` and `api_key` — no code changes required and no heavy dependencies.
- Check out the OpenClaw RL training example for more details.
Archon Engine
- A fully working, PyTorch-native 5D parallel training engine.
- Includes features like:
- Automatic HF format conversion
- Zero-bubble pipelining
- torch.compile
- FSDP (Fully Sharded Data Parallel)
- Selective activation support
AI-Assisted Coding
- Official commands and skills to streamline development and enable easy customization.
Infrastructure Upgrade
- Transition from the previous SPMD architecture to a more efficient single-controller architecture.
uv Installation Support
- Easily set up training environments by running a single command: `uv sync`.
What's Changed
- feat: replace legacy math parsing with math-verify by @rchardx in #739
- Add installation instructions for Ascend NPU by @HwVanICI in #748
- [Bug Fix] Fix Tools compatibility, max_token restrictions, and EOS token issues in Proxy mode by @yulangz in #736
- refactor: modify engine and controllers to support the single-controller mode with the same trainer by @garrett4wade in #753
- VLM Training on NPU by @HwVanICI in #746
- refactor: move device utilities to platform classes and io_struct by @garrett4wade in #757
- Fix: Implement get_device_stats() for train_controller by @HwVanICI in #762
- refactor: single-source task_id generation in submit methods by @garrett4wade in #759
- [Bug Fix] Camel example with wrong and missing agent arguments by @HwVanICI in #766
- feat: use `name_resolve` for worker discovery and fix perf_tracer in the single-controller mode by @garrett4wade in #764
- [Feature] Implement Single-Controller XCCL Weight Update by @HwVanICI in #754
- feat: Implement slurm scheduler by @garrett4wade in #767
- Ray Scheduler Implementation for Single Controller by @HwVanICI in #741
- refactor: use callbacks to implement xccl weight transfer and avoid busy waiting during rollout by @garrett4wade in #769
- [Testing] Update GCP image to accelerate CI testing by @nuzant in #772
- refactor: unifying launcher, scheduling spec, yaml configs, and training scripts by @garrett4wade in #770
- feat: improve logging by @garrett4wade in #771
- refactor: separate megatron imports and installation from FSDP by @garrett4wade in #773
- minor fix: vLLM LoRA request cleanup for issue #751 by @TinLongYu in #765
- fix: refactoring proximal logp recompute condition by @garrett4wade in #780
- [Feat] Add FP8 training support by @fishcrap in #758
- Enhance host IP detection in areal.utils.network by @HwVanICI in #778
- chore: remove the ad-hoc should_broadcast parameter in rpc servers by @garrett4wade in #774
- feat: support colocated engines in the single-controller mode by @garrett4wade in #779
- chore: update readme by @garrett4wade in #782
- docs: restructure AGENTS.md and add CLAUDE.md symlink by @rchardx in #783
- chore: Expose error when launching sglang server by @ZiyiTsang in #781
- refactor: simplifying the implementation of customized workflow with context management by @garrett4wade in #785
- chore: Expose error when launching vllm server by @garrett4wade in #790
- refactor: allow dynamic batch size without the `dynamic_filtering` function by @garrett4wade in #786
- fix: fix ray scheduler in the single-controller mode by @garrett4wade in #791
- fix inference engine addr resolving logic by @garrett4wade in #792
- chore: minor fix doc formula by @ZiyiTsang in #793
- doc: update docs for grpo and related algorithms by @garrett4wade in #794
- refactor: migrate grouped rollout from customized workflows to inference engines by @garrett4wade in #789
- Single-controller LoRA RL fine-tuning with vLLM by @gursimar in #735
- [Feature] Group-level data redistribution by @nuzant in #800
- critical fix: passing `is_eval` and `group_size` from rollout controller to engines by @garrett4wade in #801
- Update NPU doc by @HwVanICI in #803
- feat: add Archon Engine - PyTorch native FSDP2 training backend by @rchardx in #799
- [Feature] Tree training support (Megatron Engine) for agentic RL training by @nuzant in #804
- Add NPU RLVR example by @HwVanICI in #798
- [Bug Fix] XCCL weight synchronization fix for the single controller lora by @gursimar in #796
- [Bug Fix] Fix import error introduced by tree training PR by @nuzant in #808
- chore: remove legacy code, config, and documentation by @garrett4wade in #806
- fix: update tree_attn function name to patch_bridge_for_tree_training by @rchardx in #809
- fix: prevent fake PID killing in LocalScheduler tests by @rchardx in #810
- feat(archon): add torch.compile support and profiling tools by @rchardx in #807
- Add RayScheduler to sft.py by @HwVanICI in #814
- fix: add lm_head.weight into index when index file exists by @jwhj in #816
- feat: use subprocess to fork colocated workers by @garrett4wade in #815
- feat(archon): add Context Parallelism (Ulysses SP) support by @rchardx in #817
- refactor(data): simplify pad_mb_list alignment parameters by @rchardx in #820
- refactor: unify HTTP client management in workflow_context by @garrett4wade in #819
- feat(archon): enable TP + AC + compile compatibility with _WaitAsyncWrapper by @rchardx in #821
- [FEAT] Add direct TE FP8-PyTorch FP8 conversion by @fishcrap in #802
- refactor(core): simplify HTTP client lifecycle with event loop cleanup by @garrett4wade in #823
- feat: Add AgentWorkflow API and migrate workflow resolution to RemoteInfEngine by @garrett4wade in #825
- feat(scheduler): refactor fork_workers to public API with custom command support by @garrett4wade in #826
- [FIX] correct vLLM config defaults for chunked prefill and prefix caching by @fishcrap in #827
- refactor(openai): modularize proxy architecture and add inline mode by @garrett4wade in #829
- chore(doc): update readme by @garrett4wade in #830
- fix(test): Fix math-verify tests by @garrett4wade in #831
- feat(archon): add Expert Parallelism (EP) support for MoE models by @rchardx in #833
- fix(moe): correct histc max param by @rchardx in #835
- feat(archon): add explicit FSDP prefetching for EP by @rchardx in #834
- feat(archon): add EP-aware padding wrapper for MoE grouped_mm by @rchardx in #836
- testing: fix CI, skip tests that cannot run on A100 GPUs by @nuzant in #838
- feat(archon): add Expert Tensor Parallelism (ETP) support for MoE models by @rchardx in #839
- feat: support tree training for FSDP engine by @nuzant in #837
- fix: remove duplicate setup in gsm8k_rl by @v3nividiv1ci in #842
- refactor(archon): cleanup parallel dims and FSDP config for pipeline parallelism by @rchardx in #841
- refactor(tree_attn): decouple FSDP and Megatron implementations by @rchardx in https://github.com/inclusionAI/AReaL/pu...
v1.0.0.rc1
Pre-release for 1.0.0.
v0.5.3
Highlights
This is a patch release primarily for delivering the latest docker image for testing.
We will include well-documented features in the next major release.
v0.5.2
Highlights
This is a patch release primarily for delivering the latest docker image with torch 2.9.1, vllm 0.14.0, and sglang 0.5.7 support.
We will include well-documented features in the next major release.
v0.5.1
Highlights
This is a patch release on top of v0.5.0.
- A new docker image with `math-verify` and the latest `ruff`.
- PPO critic model support with the Megatron engine.
- Refactored FSDP/Megatron engine implementations.
- Implement efficient RPC tensor transfer with `RTensor` (aka the original `DistributedBatch`).
- Beam search support for vLLM.
What's Changed
- fix: change checkpoint cleanup flag to fix update_weights_from_disk in single-controller mode by @HwVanICI in #711
- fix: prevent port overflow in vLLM server with high data parallelism (fixes #652) by @HsiaoTsan in #653
- refactor: refactor train engine high level APIs by @aaaandychen in #658
- [Fix] Fix the bug that experiments cannot properly exit in the TIR example by @nuzant in #712
- chore: print more information in concat mode and handle empty tool calls for easy debugging by @nuzant in #713
- chore: trim tests in CI by @garrett4wade in #714
- refactor: enforce task_id creation, access, and manipulation in inference engines by @garrett4wade in #715
- refactor: redesign TrainEngine API with cleaner abstractions by @rchardx in #719
- [Testing] Add SFT/GRPO integration test for Megatron train engine. by @nuzant in #726
- [FEAT] VLLM support for VLM training by @HwVanICI in #698
- feat: Support beam_search in vllm backend by @ZiyiTsang in #721
- fix: update multi-turn math test configuration by @rchardx in #727
- fix: fix logic error in beam search support check by @rchardx in #728
- feat: add PPO Critic model support for MegatronEngine by @rchardx in #729
- feat: implement RTensor for metadata transfer in the single-controller mode by @garrett4wade in #731
- fix: fix multi-turn proxy example by @dhh1995 in #733
- minor fix: fix openai cache test, add it in CI test suite, and remove OOD todos/fixmes in Megatron engine by @garrett4wade in #732
- [Feat] XCCL-updates for single LoRA functionality for ascend-vLLM by @gursimar in #679
- fix: use group_size=1 for eval in proxy examples by @dhh1995 in #737
- feat: add ignore_eos and skip_special_tokens generation params by @rchardx in #738
- chore: update datasets to version 3.0.0 or higher for inner API compatibility by @ZiyiTsang in #720
- feat: build the docker image with math-verify and the latest ruff by @garrett4wade in #744
- bump v0.5.1 by @garrett4wade in #745
New Contributors
- @HsiaoTsan made their first contribution in #653
- @aaaandychen made their first contribution in #658
- @gursimar made their first contribution in #679
Full Changelog: v0.5.0...v0.5.1
v0.5.0
Highlights
The newly released v0.5.0 of AReaL introduces two core innovations, Seamless Agentic RL and the Single Controller architecture:
- Seamless Agentic RL: AReaL provides a seamless intelligent agent training service via OpenAI-compatible APIs. This facilitates collaboration among environment providers, algorithm developers, and system engineers, forming a zero-friction pipeline in complex engineering workflows and significantly boosting development efficiency and system maintainability.
- Single Controller Architecture: Eliminates long-tail latency and data imbalance issues inherent in SPMD (Single Program, Multiple Data) models. This layered design enhances inference scalability, enables fine-grained system-level control, and preserves algorithmic flexibility while minimizing code migration costs for algorithm developers.
Other changes include:
- Performance & Scalability: Major refactoring to streamline step detection, assignment logic, and workflow batching. Improved distributed training with fixes for NCCL timeouts, Gloo group barriers, and vocab-parallel logprobs for FSDP.
- Model & Hardware Support: Added single LoRA functionality for Ascend-vLLM and improved handling for Vision-Language Models (VLMs).
- Fixes & Refinements: Resolved numerous bugs related to data loading, reward timeouts, interaction caching, process cleanup, and tool call parsing. Significant code refactoring to merge duplicate logic, improve type hints, and centralize asset management. Project-wide code formatting switch to `ruff`.
Future Work
AReaL currently supports the basic Single Controller mode and Agentic RL training pipeline. Future enhancements include:
- Optimized data flow and distributed launch capabilities under Single Controller mode;
- Automatic scaling, fault recovery, and high-availability training;
- Improved training-inference performance in agent-centric scenarios.
What's Changed
- update readme for qwen3-vl by @garrett4wade in #578
- [FIX] add recipe directory to pre-commit checks by @fishcrap in #580
- [FIX] reduce reward timeout warning by @fishcrap in #579
- [FIX] fix compute logp temperature by @fishcrap in #581
- feat: rebuild step detection around global batches by @rchardx in #583
- chore: extend wait timeout and hardens config checks by @rchardx in #585
- feat: streamline step assignment logic by @rchardx in #584
- fix: Use background threads to commit tasks and fetch results in workflow executor by @garrett4wade in #587
- fix: reuse `aiohttp.ClientSession` in `agenerate` by @garrett4wade in #589
- chore: automates session tracing context by @rchardx in #591
- [feat] add Serializer for rpc server by @CormickKneey in #566
- doc: improve tracer documentation with custom phase support and improved plotting by @rchardx in #594
- [feature] Support concat export completions in proxy mode by @yulangz in #582
- Fix trainer to use backend information from allocation mode by @dhh1995 in #596
- fix: fix the issue of `rollout_batch` getting stuck by @garrett4wade in #595
- fix: extends NCCL group timeout coverage by @rchardx in #598
- chore: use typevar to type hint loaded config by @dhh1995 in #603
- fix: safely close all ClientSessions with ContextVar by @garrett4wade in #605
- chore: remove requirements.txt by @garrett4wade in #604
- [Feat] Add train/rollout offload support by @fishcrap in #590
- [FIx] Use gloo group barriers for distributed synchronization by @fishcrap in #607
- feat: adds scheduled profiler tracing by @rchardx in #608
- refactor: let WorkflowExecutor.wait return a list with `None` by @garrett4wade in #612
- [feat] add local scheduler for single controller mode by @daihaowz in #610
- refactor: separate `BatchTaskDispatcher` from `WorkflowExecutor` by @garrett4wade in #613
- chore: upload paper to the repo by @garrett4wade in #616
- chore: clarifies agent onboarding guide by @rchardx in #617
- refactor: improves async coordination by @rchardx in #618
- [FIX] fix `enable_offload` breaking change and add offload/onload API by @fishcrap in #625
- refact: update gconfig to update stop token ids in workflows instead of in example scripts by @dhh1995 in #626
- chore: improve workflow batching safeguards by @rchardx in #624
- chore: ensures worker threads exit cleanly by @rchardx in #630
- bug fix: correctly shuffling data with distributed sampler by @garrett4wade in #632
- rename CompletionCache to InteractionCache by @dhh1995 in #631
- refactor: merge base_hf_engine with fsdp_engine for code cleanup by @garrett4wade in #629
- chore: format all files under areal/utils with ruff by @garrett4wade in #635
- chore: format all tests with ruff by @garrett4wade in #636
- chore: format remaining files under `areal/` with ruff by @garrett4wade in #637
- ci: update ci formatter to ruff by @garrett4wade in #638
- chore: tunes NCCL IB settings by @rchardx in #640
- [feat] implement train controller for single controller by @daihaowz in #614
- fix: modify the default value of "shuffle" and "drop_last" for validation datasets by @garrett4wade in #633
- [Feat] Single LoRA functionality for ascend-vLLM by @HwVanICI in #621
- fix: prevent zombie vLLM processes when Ray launcher kills tasks by @zhshgmail in #623
- refactor: add `export_stats` as engine's method by @garrett4wade in #643
- [feat] impl rollout controller for single controller by @dingzhiqiang in #611
- feat: implement proximal log-probability approximation for decoupled PPO by @zhshgmail in #600
- fix: fixes CLI docs import order by @rchardx in #646
- refactor: refines PPO/GRPO loss by @rchardx in #650
- refactor: merge duplicate process termination functions into unified kill_process_tree by @garrett4wade in #648
- feat: simplify openAI agent integration and allow training with any customized agent by @garrett4wade in #657
- fix: tear down local inference servers when calling `destroy` by @garrett4wade in #659
- fix: vlm input slicing by @HwVanICI in #651
- refactor: move logprob and value computation into TrainEngine by @rchardx in #663
- fix: fix drop last for data loader with distributed sampler by @dhh1995 in #665
- [FIX] Initialize llm_addrs in Slurm launcher for SFT jobs by @fishcrap in #662
- refactor: apply PPOTrainer and SFTTrainer in example scripts by @garrett4wade in #660
- feature: implement vocab-parallel logprobs for FSDP by @rchardx in #667
- refact: expose workflow executor in inference engine by @dhh1995 in #676
- fix: raise AttributeError instead of returning None in Platform.getattr by @rchardx in #672
- fix: add missing device_control_env_var to CpuPlatform by @rchardx in #681
- fix: override workflow_executor property in MockInferenceEngine by @rchardx in #682
- refactor: make processing multi_modal_input generic by @HwVanICI in #678
- refactor: refactor attention mask generation logic for clarity by @rchardx in #685
- [Feat] Implement GRPO trainer and weight exchange for single-controller mode by @dingzhiqiang in #666
- refact: rename set_final_reward to set_last_reward, also fix openai gen args by excluding lora_name by @dhh1995 in #675
- fix: fix CPU offloading in FSDP grad clipping and weight updates by @rchardx in https://github.com/inclus...
v0.4.1
What's Changed
- feat: add `raise_timeout` parameter to allow quiet waiting for inference results by @garrett4wade in #547
- Fix batch size in example `examples/vlm/clevr_count_70k_grpo.yaml` by @wangruohui in #549
- chore: format dataset and reward folders with ruff by @garrett4wade in #551
- refactor: rename the `should_accept` argument in `rollout/prepare_batch` to `should_accept_fn` by @garrett4wade in #555
- chore: delete not-planned experimental features by @garrett4wade in #554
- feat: add grpo trainer and simplify gsm8k grpo example by @dhh1995 in #552
- feat: add launch_server and teardown_server in inference engine api by @garrett4wade in #550
- [Refactor] refactor `stats_tracker` usage in engines and examples by @nuzant in #556
- refactor: allow passing string paths and init kwargs as rollout workflows by @garrett4wade in #525
- feat: introduces session-centric tracing APIs by @rchardx in #539
- doc: Add notes about asynchronous RL training by @garrett4wade in #558
- format: ruff format examples directory by @fishcrap in #559
- feat: support proxy server and client for training openai-compatible agents by @dhh1995 in #500
- chore: change type annotations and minor fixes for single-controller mode by @garrett4wade in #560
- docs: add "Performance Profiling" guide to best practices by @rchardx in #538
- add README for proxy_agent by @yulangz in #561
- chore: extends engine perf instrumentation by @rchardx in #562
- [FEAT] add pause/resume generation for vLLM server by @fishcrap in #563
- doc: update AReaL design doc with the current dev status by @garrett4wade in #568
- doc: update documentation to align the current dev status by @garrett4wade in #570
- refactor: extend allocation mode to support allocation naming and composition by @garrett4wade in #565
- feat: align perf_tracer with task hierarchy by @rchardx in #569
- chore: add hint for the breaking change of allocation mode by @garrett4wade in #572
- [FIX] fix atrace_session_phase in workflow by @fishcrap in #573
- chore: Quick fix for GSPO missing in doc by @ZiyiTsang in #576
- ci: build docker images with GCP by @garrett4wade in #564
- refactor: restrict the usage scope of the `rollout_batch` method by @garrett4wade in #567
- chore: add issue template for questions by @garrett4wade in #571
- ci: automatically tag the dev image upon new releases by @garrett4wade in #574
- chore: remove the old script used for validating installation by @garrett4wade in #575
- [FEAT] Add Qwen3-VL model support for fsdp by @fishcrap in #557
- bump v0.4.1 by @garrett4wade in #577
New Contributors
- @wangruohui made their first contribution in #549
Full Changelog: v0.4.0...v0.4.1
v0.4.0
AReaL v0.4.0 Release Notes
We're excited to announce AReaL v0.4.0, a major release that brings stable infrastructure support for RL training of MoE models.
Overview
MoE Training
While we introduced the Megatron backend as an experimental feature last month, several critical issues prevented us from offering it as a stable release. These challenges included:
- Training precision alignment between inference and training
- Weight transfer complications
- Lack of validated end-to-end MoE model training in production
AReaL v0.4.0 comprehensively addresses these issues. In our experiments, we ran fully asynchronous agentic RL training (GRPO) of the Qwen 235B model on just six H200 nodes, without a single crash over several days of training.
Transitioning from FSDP to Megatron requires only 3-5 lines of code changes in your training script. For detailed guidance, see our tutorial on fine-tuning large MoE models.
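As a rough illustration only (the field names below are hypothetical, not AReaL's actual config schema), the switch typically amounts to pointing the training backend at Megatron and declaring its parallelism sizes:

```yaml
# Hypothetical sketch -- field names are illustrative, not AReaL's real schema.
# Before (FSDP):
#   train_engine: fsdp
# After (Megatron):
train_engine: megatron
tensor_model_parallel_size: 4
pipeline_model_parallel_size: 2
```

See the tutorial on fine-tuning large MoE models for the actual configuration fields.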
Agent Framework Support
Beyond stable MoE training, we're expanding native support for agent frameworks like Camel-AI and openai-agents. Previously, AReaL's trainable agent was encapsulated in a RolloutWorkflow object, which required users to manually manipulate token IDs for each LLM interaction. While agent frameworks abstract away this complexity, they cannot capture token IDs or maintain the execution order of LLM interactions.
To solve this, AReaL v0.4.0 introduces ArealOpenAI, a drop-in client that mimics the AsyncOpenAI API. This client acts as an agent proxy that:
- Secretly captures token IDs from your agent
- Maintains execution order to ensure trajectory consistency
- Supports assigning rewards to individual conversations
- Enables reward discounting across conversation turns
While this feature is currently experimental, we encourage users to explore our latest documentation on agentic RL and give it a try.
Key Highlights
Stable MoE Training
- bf16 Megatron training recipes with aligned precision across components
- NCCL-based weight updates
Agent Framework Integration
- Native support for openai-agents SDK and Camel
Developer Experience
- Integration with modern tooling: `ruff` and `uv`
- Simplified installation: `uv pip install -e .[all]` installs all dependencies
New Algorithm
- GSPO support added
We're grateful for your continued support and feedback. Happy training!
What's Changed
- chore: add comprehensive agent operations guide to AGENTS.md by @rchardx in #440
- fix boba_grpo bug by @shun001 in #439
- fix KeyError: "full_loss_mask" without ulysses and synchronize the boba GRPO yaml config by @garrett4wade in #441
- Support weights update from distributed sources for Megatron by @rchardx in #413
- [Feature] Add patch to accelerate SGLang weight loading by @nuzant in #324
- fix update weights from disk in FSDP engine by @garrett4wade in #443
- feat. Add more optimizer choice by @ZiyiTsang in #431
- fix: move pause/continue_generation operations into update_weights by @rchardx in #446
- chore: add optimizer content within Doc by @ZiyiTsang in #447
- [Fix] Fix `examples/env/setup-pip-deps.sh` by @nuzant in #455
- feat: support additional bash cmds before running the training commands when using slurm by @dhh1995 in #452
- fix: Use DistributedSampler for dataloader instead of splitting dataset by @dhh1995 in #456
- fix: update weight format handling for MegatronEngine in recover.py by @rchardx in #458
- [FIX] fix default values of args in cli_args by @fishcrap in #448
- [Feature] Add AEnt support by @hanshen95 in #403
- refactor: migrate experimental Megatron components to main API; fix: fix several bugs in FSDP engine by @rchardx in #459
- fix: fix the deprecated usage of tuple slicing, wandb's start_method, and transformers's dtype by @rchardx in #462
- fix: fix a bug in critic model's forward when Ulysses SP is enabled by @rchardx in #461
- fix: fix the NaN ppl values in FSDP SFT when Ulysses is enabled by @rchardx in #463
- refactor: separate staleness control from workflow execution by @garrett4wade in #444
- refactor: move WorkflowExecutor from areal.api to areal.core by @garrett4wade in #467
- refactor: simplify rollout in training scripts with the `connect_engine` API by @garrett4wade in #451
- doc: add pull request template and contribution guide by @garrett4wade in #468
- refactor: merge duplicate codes in SGLang/vLLM engines by @garrett4wade in #445
- chore dev: use ruff in pre-commit and remove unused files by @garrett4wade in #471
- chore: fix CI formatting check by @garrett4wade in #472
- Fix ruff formatting in CI and replace isort in CI with ruff by @garrett4wade in #477
- fix: update import paths for math_parser in multiple files by @rchardx in #478
- refactor the async task submission logic in workflow executor into task runner by @garrett4wade in #473
- fix: access weight_update_mode from config.actor for GRPOConfig (#482) by @zhshgmail in #483
- [Feature] Use `uv` for package management and installation by @nuzant in #485
- [Feature] Add SkyPilot examples by @nuzant in #422
- fix: use unfused optimizers for VLM with tensor parallelism by @rchardx in #486
- [FEAT] integrates openai-agents by @fishcrap in #470
- feat: add performance tracing support by @rchardx in #487
- [Bug Fix] Revert generating `requirements.txt` step using `uv` in pre-commit by @nuzant in #488
- chore: log request timeout details by @rchardx in #490
- chore: format `areal/api` with ruff by @garrett4wade in #491
- chore: refreshes agent handbook by @rchardx in #492
- [FEAT] Integrate Camel-AI by @fishcrap in #474
- [Testing] Add auto CI on GCP and fix tests by @nuzant in #494
- [Doc] Add information about CI and tests in CONTRIBUTING.md by @nuzant in #499
- [FEAT] Support PyTorch DCP for FSDP by @fishcrap in #497
- add math_reflection_en notebook by @samjia2000 in #496
- Implement M2PO algorithm by @tsjyma in #480
- feat: add examples which traces performance data by @rchardx in #498
- [Feature] Automatically split layers into pipeline stages in MegatronEngine. by @nuzant in #504
- feat: support request-level tracing by @rchardx in #509
- [FEAT] Add experiment metadata (git commit) tracking and refactor version info by @fishcrap in #511
- fix: use per-rank jsonl instead of file lock in case that NFS does not support it by @rchardx in #513
- fix: remove the usage of _acquire_file_lock by @rchardx in #515
- feat: extract tool output from openai-agents sdk by @CormickKneey in #507
- [FEAT] Add pipeline parallel support for vLLM inference engine by @fishcrap in #510
- fix(launcher): improve error handling and node calculation in ray.py by @dafu-wu in #518
- feat: add metadata extraction and remapping for process and thread IDs in trace events by @rchardx in #519
- fix: enhance rollout statistics tracking with enqueued state by @rchardx in #522
- feat: implement GSPO (Group-level Sequential Policy Optimization) by @zhshgmail in #501
- [Doc] Add Megatron training tutorial doc. by @nuzant in #521
- [Bug Fix] Fix deploy docs CI by @nuzant in #526
- chore: refactor boba GRPO for tracing by @rchardx in #527
- Docs: Add CAMEL training tutorial and improve variable naming by @fishcrap in https://github.com/inc...
v0.3.4.post1
v0.3.4.post1 Patch Fix
- Fixed a "full_loss_mask" KeyError introduced in #434. The original PR was tested with Ulysses enabled but caused errors when Ulysses was disabled.
- Updated configuration and scripts in `boba_grpo.py` to reproduce legacy results.
What's Changed
- chore: add comprehensive agent operations guide to AGENTS.md by @rchardx in #440
- fix boba_grpo bug by @shun001 in #439
- fix KeyError: "full_loss_mask" without ulysses and synchronize the boba GRPO yaml config by @garrett4wade in #441
Full Changelog: v0.3.4...v0.3.4.post1