Merge remote-tracking branch 'upstream/main' into zsl/vllm-spmd
Showing 73 changed files with 1,837 additions and 399 deletions.
`.github/workflows/dataset.yml` (modified):
```diff
@@ -16,6 +16,8 @@ on:
       - "**/*.py"
       - .github/workflows/dataset.yml
 
 jobs:
   ray:
     runs-on: [self-hosted, gpu]
```
`.github/workflows/e2e_lora.yml` (new file, 48 lines):
```yaml
name: e2e_lora

on:
  # Trigger the workflow on push or pull request,
  # but only for the main branch
  push:
    branches:
      - main
    paths:
      - "**/*.py"
      - .github/workflows/e2e_lora.yml
  pull_request:
    branches:
      - main
    paths:
      - "**/*.py"
      - .github/workflows/e2e_lora.yml
      - "tests/e2e/*.sh"

jobs:
  e2e_lora:
    runs-on: [self-hosted, l20-1]
    env:
      HTTP_PROXY: ${{ secrets.PROXY_HTTP }}
      HTTPS_PROXY: ${{ secrets.PROXY_HTTPS }}
      NO_PROXY: "localhost,127.0.0.1"
      HF_HUB_ENABLE_HF_TRANSFER: 1
    container:
      image: verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3
      options: --gpus all --shm-size=10g
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
      - name: Install the current repository
        run: |
          pip3 install hf_transfer peft
          pip3 install -e .[test]
      - name: Prepare gsm8k dataset
        run: |
          ray stop --force
          python3 examples/data_preprocess/gsm8k.py
      - name: Running gsm8k e2e training tests with LoRA
        run: |
          ray stop --force
          bash tests/sft/run_sft_qwen05_peft.sh 8 $HOME/ckpts/
```
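The `container` entry above is what the runner launches the job in, so the same steps can be reproduced locally with a plain `docker run`. The sketch below is an assumption-laden local equivalent, not part of the commit: the bind mount, working directory, and `--rm` flag are guesses (CI checks out the repo with `actions/checkout` instead), while the commands inside come verbatim from the workflow steps.

```bash
# Hypothetical local reproduction of the e2e_lora job (a sketch, not part of
# the commit). Assumes Docker with NVIDIA GPU support and the verl repository
# checked out in the current directory.
docker run --rm --gpus all --shm-size=10g \
  -e HF_HUB_ENABLE_HF_TRANSFER=1 \
  -v "$PWD":/workspace/verl \
  -w /workspace/verl \
  verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3 \
  bash -c '
    pip3 install hf_transfer peft                # peft provides the LoRA adapters
    pip3 install -e .[test]                      # install verl with test extras
    ray stop --force                             # clear stale Ray state, as each CI step does
    python3 examples/data_preprocess/gsm8k.py    # build the gsm8k dataset
    ray stop --force
    bash tests/sft/run_sft_qwen05_peft.sh 8 "$HOME/ckpts/"   # 8 GPUs, checkpoint dir
  '
```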
`.github/workflows/e2e_sft.yml` (new file, 56 lines):
```yaml
name: e2e_sft

on:
  # Trigger the workflow on push or pull request,
  # but only for the main branch
  push:
    branches:
      - main
    paths:
      - "**/*.py"
      - .github/workflows/e2e_sft.yml
  pull_request:
    branches:
      - main
    paths:
      - "**/*.py"
      - .github/workflows/e2e_sft.yml
      - "tests/e2e/*.sh"

jobs:
  e2e_sft:
    runs-on: [self-hosted, l20-1]
    env:
      HTTP_PROXY: ${{ secrets.PROXY_HTTP }}
      HTTPS_PROXY: ${{ secrets.PROXY_HTTPS }}
      NO_PROXY: "localhost,127.0.0.1"
      HF_HUB_ENABLE_HF_TRANSFER: 1
    container:
      image: verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3
      options: --gpus all --shm-size=10g
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
      - name: Install the current repository
        run: |
          pip3 install hf_transfer
          pip3 install -e .[test]
      - name: Prepare gsm8k dataset
        run: |
          ray stop --force
          python3 examples/data_preprocess/gsm8k.py
      - name: Running gsm8k e2e training tests on 8 L20 GPUs with rmpad using function rm
        run: |
          ray stop --force
          bash tests/sft/run_sft.sh
      - name: Running gsm8k e2e training tests on 8 L20 GPUs with sequence parallelism
        run: |
          ray stop --force
          bash examples/sft/gsm8k/run_qwen_05_sp2.sh 8 $HOME/ckpts/
      - name: Check loss difference between sequence parallel vs. default implementation
        run: |
          ray stop --force
          bash tests/sft/run_sft_sp_loss_match.sh
```
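The last three steps amount to a consistency check: run SFT with the default implementation, run it again with sequence parallelism, then verify the two losses match. Below is a condensed local sketch of that sequence, assuming a machine with 8 GPUs and the container's dependencies; the `set -euo pipefail` guard is an addition, everything else follows the workflow's own commands.

```bash
# Condensed sketch of the e2e_sft test sequence (assumes 8 GPUs and the
# workflow's dependencies; not part of the commit itself).
set -euo pipefail                      # stop at the first failing check

export HF_HUB_ENABLE_HF_TRANSFER=1
pip3 install hf_transfer
pip3 install -e .[test]

python3 examples/data_preprocess/gsm8k.py       # prepare the gsm8k dataset

ray stop --force                                # each step starts from a clean Ray state
bash tests/sft/run_sft.sh                       # baseline SFT run (rmpad, function rm)

ray stop --force
bash examples/sft/gsm8k/run_qwen_05_sp2.sh 8 "$HOME/ckpts/"   # sequence-parallel run (sp=2)

ray stop --force
bash tests/sft/run_sft_sp_loss_match.sh         # assert SP loss matches the default path
```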
`.github/workflows/model.yml` (modified):
```diff
@@ -16,6 +16,8 @@ on:
       - "**/*.py"
       - .github/workflows/model.yml
 
 jobs:
   model_rmpad:
     runs-on: [self-hosted, l20-1]
```
`.github/workflows/ray_test.yml` (modified):
```diff
@@ -16,6 +16,8 @@ on:
       - "**/*.py"
       - .github/workflows/ray_test.yml
 
 jobs:
   ray:
     runs-on: [self-hosted, l20-0]
```
`README.md` (modified):
```diff
@@ -39,12 +39,15 @@ veRL is fast with:
 - **vLLM** and **TGI** for rollout generation, **SGLang** support coming soon.
 - huggingface models support
 - Supervised fine-tuning
-- Reward model training
-- Reinforcement learning from human feedback with PPO
-- flash-attention integration, sequence packing, and long context support
+- Reinforcement learning from human feedback with [PPO](https://github.com/volcengine/verl/tree/main/examples/ppo_trainer) and [GRPO](https://github.com/volcengine/verl/tree/main/examples/grpo_trainer)
+- Support for model-based rewards and function-based (verifiable) rewards
+- flash-attention integration, sequence packing, and long context support via DeepSpeed Ulysses
 - scales up to 70B models and hundreds of GPUs
 - experiment tracking with wandb and mlflow
 
+## Upcoming Features
+- Reward model training
+- DPO training
+
 ## Getting Started
@@ -54,7 +57,7 @@ Checkout this [Jupyter Notebook](https://github.com/volcengine/verl/tree/main/ex
 - [Installation](https://verl.readthedocs.io/en/latest/start/install.html)
 - [Quickstart](https://verl.readthedocs.io/en/latest/start/quickstart.html)
 
-**Running an PPO example step-by-step:**
+**Running a PPO example step-by-step:**
 - Data and Reward Preparation
   - [Prepare Data (Parquet) for Post-Training](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)
   - [Implement Reward Function for Dataset](https://verl.readthedocs.io/en/latest/preparation/reward_function.html)
@@ -77,6 +80,8 @@ Checkout this [Jupyter Notebook](https://github.com/volcengine/verl/tree/main/ex
 - [Add models with the FSDP backend](https://verl.readthedocs.io/en/latest/advance/fsdp_extension.html)
 - [Add models with the Megatron-LM backend](https://verl.readthedocs.io/en/latest/advance/megatron_extension.html)
 
+## Performance Tuning Guide
+Performance is essential for on-policy RL algorithms. We provide a detailed performance tuning guide; see [here](https://verl.readthedocs.io/en/latest/perf/perf_tuning.html) for more details.
+
 ## Citation and acknowledgement
@@ -95,9 +100,10 @@ If you find the project helpful, please cite:
 
 verl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. The project is adopted and supported by Anyscale, Bytedance, LMSys.org, Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, and University of Hong Kong.
 
-## Publications Using veRL
+## Awesome work using veRL
 - [Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization](https://arxiv.org/abs/2410.09302)
 - [Flaming-hot Initiation with Regular Execution Sampling for Large Language Models](https://arxiv.org/abs/2410.21236)
+- [Process Reinforcement Through Implicit Rewards](https://github.com/PRIME-RL/PRIME/)
+- [TinyZero](https://github.com/Jiayi-Pan/TinyZero): a reproduction of DeepSeek R1 Zero in countdown and multiplication tasks
 
 We are HIRING! Send us an [email](mailto:[email protected]) if you are interested in internship/FTE opportunities in MLSys/LLM reasoning/multimodal alignment.
```