vllm-project/vllm projects
20 open and 0 closed projects found.
#31: PRs and issues related to NVIDIA hardware (updated Dec 20, 2025)
#12: torch.compile integration (updated Dec 20, 2025)
#20: Tracking failures occurring in CI (updated Dec 19, 2025)
#42: Tracks CPU-related issues & tasks (updated Dec 19, 2025)
#35: Backlog of CI feature requests (updated Dec 19, 2025)
#5: DeepSeek V3/R1 is supported with optimized block FP8 kernels, MLA, MTP spec decode, multi-node PP, EP, and W4A16 quantization (as of 2025-02-25; updated Dec 18, 2025)
#10: Community requests for multi-modal models (updated Dec 17, 2025)
#8: Main tasks for the multi-modality workstream (#4194) (updated Dec 16, 2025)
#7: Tracks Ray issues and pull requests in vLLM (updated Dec 12, 2025)
#6: Onboarding tasks for first-time contributors getting started with vLLM (updated Dec 10, 2025)
#14: Tracker of known issues and bugs for serving Llama on vLLM (updated Nov 24, 2025)
#13: Enhancements to the Llama herd of models; see also https://github.com/vllm-project/vllm/issues/16114 (updated Nov 20, 2025)
#1: [Testing] Optimize V1 PP efficiency (updated Oct 6, 2025)
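A listing like this can also be retrieved programmatically. The following is a minimal sketch that queries GitHub's GraphQL API for the organization's Projects (v2) boards; it assumes the `requests` package is installed and that a personal access token with read scope is available in the GITHUB_TOKEN environment variable.

```python
import os

import requests

# GraphQL query against GitHub's projectsV2 connection; "vllm-project"
# is the organization shown in this listing.
QUERY = """
query {
  organization(login: "vllm-project") {
    projectsV2(first: 20) {
      nodes { number title updatedAt closed }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": QUERY},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()

# Print open project boards in the same "#number: title (updated ...)"
# shape used above; closed boards are skipped.
for node in resp.json()["data"]["organization"]["projectsV2"]["nodes"]:
    if not node["closed"]:
        print(f"#{node['number']}: {node['title']} (updated {node['updatedAt']})")
```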