[VeRL] Multi-Turn RL Training Source Code Walkthrough (2)
In Part 1, we covered verl's initialization process. Here we move on to the training process itself, covering the rollout phase, the make-experience phase, and the training phase. In GRPO, a single step consists of four stages: load data -> rollout -> make experience -> update model. Unlike the detailed walkthrough of the previous part, this part explains each stage with pseudocode alongside the source code.

```mermaid
flowchart LR
    subgraph W2["Initialize"]
        direction TB
        WP[Process Data] --> A
        D1[Data Prepare] --> A
        A[TaskRunner] --> B1[RayPPOTrainer]
        B1 --> Workers
        subgraph Workers["Workers"]
            direction TB
            WA[ActorRolloutWorker] --> WD[FSDP Engine]
            WB[CriticWorker] --> WD
            WC[RewardModelWorker] --> WD
            WD --> WE[SGLang Engine]
        end
        Workers --> C1[Hybrid Engine]
    end
    subgraph W3["Train Loop"]
        direction TB
        E[DataLoader] --> RolloutBox
        subgraph RolloutBox["Rollout"]
            F1[Prepare Data] --> F2[SGLang Async Rollout]
            F2 --> F3[Multi-turn Chat Process]
        end
        RolloutBox --> ExpBox
        subgraph ExpBox["Make Experience"]
            G1[Recompute Log Probs] --> G2[Compute Reward]
            G2 --> G3[Compute Advantage]
        end
        ExpBox --> UpdateBox
        subgraph UpdateBox["Train The Model"]
            H1[Load FSDP Model Weight] --> H2[Compute Gradient]
            H2 --> H3[Weights Update]
            H3 --> H4[Sync Weights]
        end
        UpdateBox --> E
    end
    W2 --> W3
```

Data Loading and Preprocessing

verl implements data handling through DataProto and RLHFDataset. Concretely, in main_ppo.py, we examine this function: ...
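Before diving into the source, the four-stage step described above can be sketched end to end. This is a minimal illustrative skeleton, not verl's actual API: every function name here (`load_batch`, `rollout`, `make_experience`, `update_model`) and the length-based reward are stand-ins chosen to make the control flow concrete. The one GRPO-specific detail it does reflect is that advantages are computed relative to the mean reward of each prompt's response group.

```python
# Hypothetical sketch of one GRPO training step:
# load data -> rollout -> make experience -> update model.
# All names and the toy reward are illustrative, not verl's real API.

def load_batch(dataloader):
    # Stage 1: pull a batch of prompts from the dataloader.
    return next(dataloader)

def rollout(prompts, group_size=4):
    # Stage 2: sample `group_size` responses per prompt. GRPO samples a
    # group so that advantages can be normalized within the group.
    return [[p + ":" + "x" * i for i in range(group_size)] for p in prompts]

def make_experience(prompts, groups):
    # Stage 3: score each response and derive group-relative advantages
    # (reward minus the group's mean reward). A real trainer would also
    # recompute log-probs here; we use response length as a toy reward.
    experiences = []
    for p, group in zip(prompts, groups):
        rewards = [float(len(r)) for r in group]
        mean_r = sum(rewards) / len(rewards)
        advantages = [r - mean_r for r in rewards]
        experiences.append({"prompt": p, "responses": group,
                            "rewards": rewards, "advantages": advantages})
    return experiences

def update_model(experiences):
    # Stage 4: a real trainer would take a policy-gradient step on the
    # advantages; here we just report how many samples were consumed.
    return sum(len(e["responses"]) for e in experiences)

def grpo_step(dataloader):
    prompts = load_batch(dataloader)
    groups = rollout(prompts)
    experiences = make_experience(prompts, groups)
    return update_model(experiences)

if __name__ == "__main__":
    n = grpo_step(iter([["p1", "p2"]]))
    print(n)  # -> 8 (2 prompts x group of 4)
```

Note that because advantages are centered per group, they sum to zero within each prompt's group; this is the "group-relative" part of GRPO that removes the need for a learned value baseline.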