
feat: integrate FP8 weight transfer format#2038

Merged
samsja merged 10 commits into main from feature/fp8-glm5-weight-transfer on Mar 22, 2026
Conversation

@S1ro1 (Collaborator) commented Mar 17, 2026

Note

Medium Risk
Adds a new NCCL weight transfer mode that changes the on-the-wire weight format and in-place loading behavior in vLLM workers, which could break updates for unsupported models or mismatched tensor shapes. Guardrails/validators reduce misconfiguration risk but correctness depends on model-specific conversion logic.

Overview
Adds an opt-in quantize_in_weight_transfer flag for NCCL weight broadcast across shared RLConfig, TrainerConfig, and OrchestratorConfig, and wires it through orchestrator → HTTP /init_broadcaster → vLLM worker initialization.
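A rough sketch of how such a flag could select the worker-side load path once it reaches the inference process; the function and payload names below are illustrative assumptions, not the PR's actual API.

```python
# Illustrative sketch only: handler and loader names are assumptions,
# not the actual functions added by this PR.

def load_kernel_format_inplace(name, tensor):
    """Copy a broadcast vLLM kernel-format tensor in place (stub)."""

def load_checkpoint_format(name, tensor):
    """Load a broadcast HF checkpoint-format tensor (stub)."""

def init_broadcaster(payload: dict):
    """Handle an /init_broadcaster request and pick a weight loader.

    When quantize_in_weight_transfer is set, weights arrive already in
    vLLM kernel format (FP8 e4m3), so they can be copied in place into
    the live parameters; otherwise the default HF checkpoint path runs.
    """
    if payload.get("quantize_in_weight_transfer", False):
        return load_kernel_format_inplace
    return load_checkpoint_format
```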

When enabled, the trainer broadcasts vLLM kernel-format FP8 (e4m3) weights instead of HF checkpoint-format tensors, using new per-layer conversion (PreTrainedModelPrimeRL.convert_layer_to_vllm_kernel) and FP8 blockwise quantization utilities. The inference-side NCCL worker now selects between checkpoint loading and a new kernel-mode in-place copy loader (weight_transfer.py) with extra post-processing for certain kernels (e.g., MLA absorbed KV weights).
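The idea behind FP8 blockwise quantization can be sketched in plain Python: each block of weights gets its own scale chosen so the block's absolute maximum maps onto the e4m3 range. The block size, helper names, and the pure-Python representation below are assumptions for illustration; the PR's actual utilities operate on tensors and cast to a real fp8 dtype.

```python
# Minimal sketch of per-block FP8 (e4m3) scaling; names and block size
# are illustrative, not the PR's actual API.

E4M3_MAX = 448.0  # largest finite value representable in FP8 e4m3

def quantize_blockwise(weights, block_size=128):
    """Split a flat weight list into blocks and scale each block so its
    absolute maximum maps to the e4m3 range. Returns (blocks, scales)."""
    blocks, scales = [], []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        amax = max(abs(w) for w in block) or 1.0  # avoid div-by-zero
        scale = amax / E4M3_MAX
        scales.append(scale)
        # Scaled values lie in [-E4M3_MAX, E4M3_MAX]; real code would
        # cast to an fp8 dtype here instead of keeping Python floats.
        blocks.append([w / scale for w in block])
    return blocks, scales

def dequantize_blockwise(blocks, scales):
    """Invert the per-block scaling back to full-precision values."""
    out = []
    for block, scale in zip(blocks, scales):
        out.extend(v * scale for v in block)
    return out
```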

Also tightens config behavior: validates that quantized transfer is only used with weight_broadcast.type='nccl', with inference configured, and with trainer.model.impl='custom', and adjusts shared model name propagation/validation to keep inference and orchestrator model names consistent.
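The guardrails described above amount to three preconditions checked before quantized transfer is allowed. A minimal sketch, assuming simplified dataclass stand-ins (the class and field names below are illustrative, not the repository's actual config types):

```python
# Illustrative validation sketch; types and field names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WeightBroadcastConfig:
    type: str = "nccl"
    quantize_in_weight_transfer: bool = False

@dataclass
class RLConfigSketch:
    weight_broadcast: WeightBroadcastConfig
    model_impl: str = "custom"
    inference: Optional[dict] = None

    def validate(self) -> None:
        """Reject quantized transfer unless all preconditions hold."""
        if not self.weight_broadcast.quantize_in_weight_transfer:
            return
        if self.weight_broadcast.type != "nccl":
            raise ValueError("quantized transfer requires weight_broadcast.type='nccl'")
        if self.inference is None:
            raise ValueError("quantized transfer requires inference to be configured")
        if self.model_impl != "custom":
            raise ValueError("quantized transfer requires trainer.model.impl='custom'")
```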

Written by Cursor Bugbot for commit a997302. This will update automatically on new commits.


@cursor (Bot) left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


"When disabled, uses default HF checkpoint-format transfer."
),
),
] = False


Missing CHANGELOG entry for new config field

Low Severity

A new quantize_in_weight_transfer config field is added to NCCLWeightBroadcastConfig in the orchestrator, trainer, and RL config files, as well as SharedWeightBroadcastConfig, but CHANGELOG.md has no corresponding entry. This violates the rule requiring changelog updates when configuration structures are modified.

Additional Locations (2)

Triggered by project rule: BugBot Instructions

@S1ro1 changed the title from "feat: integrate GLM5 FP8 kernel-format weight transfer" to "feat: integrate FP8 weight transfer format" on Mar 18, 2026
Comment thread tests/unit/test_configs.py Outdated
Comment thread tests/unit/test_configs.py Outdated
Comment thread tests/unit/test_configs.py Outdated
samsja and others added 2 commits March 21, 2026 23:36
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@samsja samsja merged commit 3dd48be into main Mar 22, 2026
8 of 9 checks passed
2 participants