Issues: volcengine/verl
- #21: Basic Tutorial: Adding a New LLM Inference/Serving Backend (opened Nov 22, 2024 by PeterSH6)
- #181: "AssertionError: Expects tensor to be on the compute device cuda:0, was on cpu" with FSDP (opened Feb 1, 2025 by bebetterest)
- #180: If rollout.n is doubled, will the number of samples used for training be doubled too? (opened Feb 1, 2025 by StarDewXXX)
- #176: [Question] Is vLLMRollout.generate_sequences the right place to implement tool calling? [enhancement, question, vllm related] (opened Jan 31, 2025 by accupham)
- #160: Add an assertion to ensure the reward in GRPO is generated by an ORM (opened Jan 29, 2025 by vermouth1992)
- #97: Fused CE loss integration [help wanted] (opened Jan 12, 2025 by eric-haibin-lin)
- #96: Liger kernel integration [help wanted] (opened Jan 12, 2025 by eric-haibin-lin)
- #64: Actor model didn't update correctly after upgrading Megatron to core-r0.6.0 (opened Dec 24, 2024 by Wodswos)
- #24: Unexpected increase in rollout time after reducing num_hidden_layers in the deepseek-llm-7b-chat model (opened Nov 25, 2024 by metaqiang)