
Single-controller LoRA RL fine-tuning with vLLM#735

Merged
garrett4wade merged 4 commits into inclusionAI:main from gursimar:single_controller_lora
Jan 5, 2026

Conversation

@gursimar (Contributor)

Description

This PR adds working, tested examples for running single-controller LoRA training with the vLLM backend.
It builds on the existing LoRA + vLLM support (RFC #609) and demonstrates how to configure and launch a single-controller GRPO workflow.


What’s included

  • New GRPO example for single-controller LoRA with vLLM
  • Corresponding YAML config illustrating minimal wiring
  • No changes to core engine or runtime behavior

Files changed

Kept the files in the examples/lora folder on purpose; IMO, all LoRA examples should live under this folder.

  • examples/lora/gsm8k_grpo_vllm_single_controller.py — single-controller GRPO LoRA example
  • examples/lora/gsm8k_grpo_vllm_single_controller.yaml — config for vLLM backend

Running instructions

python examples/lora/gsm8k_grpo_vllm_single_controller.py --config examples/lora/gsm8k_grpo_vllm_single_controller.yaml
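The YAML itself is not reproduced in this thread, but the kind of "minimal wiring" the description mentions can be illustrated with a short sketch. Everything below — the field names, the base-model path, and the LoRA hyperparameters — is a hypothetical illustration, not the repository's actual config schema:

```python
# Hypothetical sketch of the "minimal wiring" a LoRA + vLLM GRPO config
# might encode. All field names and values here are illustrative
# assumptions, not the repository's actual schema.
config = {
    "experiment_name": "gsm8k-grpo-lora-single-controller",
    "actor": {
        "path": "Qwen/Qwen2.5-1.5B-Instruct",       # base model (assumed)
        "lora": {"rank": 16, "alpha": 32},          # adapter sizes (assumed)
    },
    "rollout": {"backend": "vllm"},                 # inference backend from this PR
    "algorithm": {"name": "grpo", "group_size": 8}, # group size (assumed)
}

def lora_param_fraction(hidden_size: int, rank: int) -> float:
    """Back-of-envelope fraction of trainable parameters a rank-r LoRA
    pair (two hidden_size x rank matrices) adds, relative to the single
    hidden_size x hidden_size linear layer it adapts."""
    return (2 * hidden_size * rank) / (hidden_size ** 2)

print(lora_param_fraction(4096, 16))  # 0.0078125
```

At rank 16 on a 4096-wide layer, under 1% as many parameters are trainable per adapted matrix, which is why a LoRA RL run coexists with a vLLM inference engine far more easily than full-parameter tuning.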

Testing

  • Verified the example runs end-to-end with the vLLM backend.
  • No regressions to existing LoRA or non-LoRA flows.

Type of Change

  • New feature (non-breaking change that adds functionality)
  • Bug fix
  • Breaking change
  • Documentation update
  • Code refactoring
  • Performance improvement
  • Test coverage improvement

Checklist

  • I have read the Contributing Guide
  • I have run formatting tools (pre-commit or manual)
  • I have run relevant unit tests and they pass
  • I have added tests for new functionality
  • I have updated documentation if needed
  • My branch is up to date with main
  • This PR introduces breaking changes
  • No critical issues raised by AI reviewers (/gemini review)

Need help? Check the Contributing Guide or ask in
GitHub Discussions!

@gemini-code-assist (Contributor)

Summary of Changes

Hello @gursimar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a practical and verified example for fine-tuning models using LoRA with a single controller and the vLLM inference engine. It serves as a blueprint for users looking to implement GRPO workflows with these specific technologies, providing both the Python script and the necessary YAML configuration to get started. The primary goal is to expand the existing LoRA + vLLM capabilities with a concrete, runnable demonstration.

Highlights

  • New Example for LoRA Fine-tuning: This pull request introduces a new, fully tested example demonstrating single-controller LoRA (Low-Rank Adaptation) training utilizing the vLLM backend.
  • GRPO Workflow Configuration: It provides a clear example of how to configure and launch a single-controller GRPO (Group Relative Policy Optimization) workflow, building on existing LoRA + vLLM support.
  • Dedicated Configuration File: A corresponding YAML configuration file is included, illustrating the minimal wiring required to set up this specific LoRA fine-tuning scenario.
  • No Core Engine Changes: The changes are confined to examples and configuration, ensuring no modifications to the core engine or runtime behavior of the system.

@gemini-code-assist bot left a comment
Code Review

This pull request introduces a new example for single-controller LoRA fine-tuning with the vLLM backend. The changes include a Python script for the training workflow and a corresponding YAML configuration file. The code is well-structured for an example script. My review includes a couple of suggestions for the Python script to improve maintainability by removing a magic number and to add a placeholder for an evaluation step, which seems intended by the configuration but is currently missing.

@gursimar gursimar force-pushed the single_controller_lora branch from 12d9725 to fd4fd16 Compare December 17, 2025 00:19
@garrett4wade (Collaborator) left a comment
While the implementation looks great, I'd still like to confirm the details about learning performance.

The previous SPMD LoRA code has an unresolved bug: if multiple inference engines submit rollout requests concurrently, learning performance drops significantly. As a workaround, we only submit requests on rank 0 (code). Only then does the learning curve basically match full-parameter tuning.

I wonder whether the bug still exists in single-controller mode. Can you provide learning curves comparing this new script with the default SPMD, full-parameter tuning script? Hopefully there is no longer a performance drop.
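The rank-0 workaround described above can be sketched as a toy function. The names and structure here are assumptions for illustration only; the real implementation lives behind the linked reference:

```python
# Toy sketch (assumed names) of the SPMD workaround: only rank 0 submits
# the full batch of rollout requests, so the inference engine never sees
# concurrent request streams from multiple trainer ranks.
from typing import Callable, Optional

def submit_rollouts(
    rank: int,
    prompts: list[str],
    generate: Callable[[str], str],
) -> Optional[list[str]]:
    if rank != 0:
        # Non-zero ranks submit nothing; they receive their shard of the
        # results via a broadcast/scatter from rank 0 (not shown here).
        return None
    # Rank 0 is the sole submitter for the whole data-parallel group,
    # avoiding the concurrent-submission path that degraded learning.
    return [generate(p) for p in prompts]
```

In single-controller mode there is only one submitting process by construction, which is why the question above is whether the concurrency bug can arise at all.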

@gursimar gursimar force-pushed the single_controller_lora branch from fd4fd16 to 90b7da1 Compare December 17, 2025 23:03
@github-actions

github-actions bot commented Jan 1, 2026

This pull request has been automatically marked as stale because it has not had recent activity within the last 14 days.

Please add a comment or push new commits to keep it active.

Thank you for your contribution!

@github-actions github-actions bot added the stale label Jan 1, 2026
@gursimar gursimar force-pushed the single_controller_lora branch from 90b7da1 to 3309483 Compare January 2, 2026 19:22
@gursimar (Contributor, Author)

gursimar commented Jan 2, 2026

This shows average training reward vs. epoch for full-parameter RL fine-tuning.
[image: full_reward_vs_epoch]

@gursimar (Contributor, Author)

gursimar commented Jan 2, 2026

Here is the reward vs. global epoch for LoRA fine-tuning.
[image: lora_reward_vs_epoch]

@gursimar (Contributor, Author)

gursimar commented Jan 2, 2026

Due to recent refactoring (e.g., PPOTrainer) and the new examples folder, we have made the following adjustments:

  1. Added examples/math/gsm8k_grpo_lora.yaml in the examples/math folder, which runs via the universal entry point:

python -u examples/math/gsm8k_rl.py --config examples/math/gsm8k_grpo_lora.yaml

This aligns with the new single-runner design, where one script runs with different YAML files.

  2. For now, we keep the examples/lora folder, as it still contains some useful examples. We recommend eventually phasing out the lora folder once LoRA is supported and tested with SGLang using the same script/YAML file.

@garrett4wade

@github-actions github-actions bot removed the stale label Jan 3, 2026
@garrett4wade (Collaborator) left a comment

LGTM. Merging.

@garrett4wade garrett4wade merged commit 7c9e470 into inclusionAI:main Jan 5, 2026
1 check passed