CVE-2025-62426: Allocation of Resources Without Limits or Throttling in vLLM

Severity: 6.5 MEDIUM (NVD)
EPSS: 0.1% (top 80.54%)
CISA KEV: Not in KEV
Exploit: No known exploits
Timeline: Published Nov 21, 2025

Description

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 up to but not including 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a `chat_template_kwargs` request parameter whose values are fed into chat-template rendering before they are validated against the chat template. With suitably crafted `chat_template_kwargs`, a request can block the API server's processing for long periods, delaying all other requests. This issue has been patched in version 0.11.1.
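Until an upgrade to 0.11.1 lands, one stopgap is to keep untrusted `chat_template_kwargs` away from the server entirely. Below is a minimal sketch of the kind of pre-filter an operator might run in a proxy in front of a vulnerable vLLM; the `SAFE_TEMPLATE_KWARGS` allowlist is a hypothetical placeholder and must be chosen per deployed chat template.

```python
import json

# Hypothetical allowlist: keys the operator has verified are safe for the
# chat template actually deployed. "enable_thinking" is only an example.
SAFE_TEMPLATE_KWARGS = {"enable_thinking"}

def filter_chat_template_kwargs(raw_body: bytes) -> dict:
    """Reject or narrow chat_template_kwargs before a request reaches a
    vulnerable vLLM (< 0.11.1). Raises ValueError on disallowed input."""
    body = json.loads(raw_body)
    kwargs = body.get("chat_template_kwargs")
    if kwargs is None:
        return body  # nothing to validate
    if not isinstance(kwargs, dict):
        raise ValueError("chat_template_kwargs must be a JSON object")
    unexpected = set(kwargs) - SAFE_TEMPLATE_KWARGS
    if unexpected:
        raise ValueError(f"disallowed chat_template_kwargs: {sorted(unexpected)}")
    return body

# A request smuggling an unvetted template variable is refused up front
# instead of tying up the vLLM API server in template rendering.
payload = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "hi"}],
    "chat_template_kwargs": {"unvetted_key": "x" * 1_000_000},
}
try:
    filter_chat_template_kwargs(json.dumps(payload).encode())
except ValueError as exc:
    print(f"rejected: {exc}")
```

An allowlist (rather than a blocklist) is in the spirit of the patched behavior described above, which validates the parameter against the chat template before use.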

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Exploitability: 2.8 | Impact: 3.6

The vector reads as: network attack vector, low attack complexity, low privileges required, no user interaction, unchanged scope, and availability impact only (C:N/I:N/A:H), which matches a denial of service reachable by any authenticated API client.
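The published subscores and base score follow directly from the CVSS 3.1 base equations; a short worked check, with metric weights taken from the CVSS 3.1 specification:

```python
import math

# CVSS v3.1 base-score arithmetic for AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H.
AV, AC, PR, UI = 0.85, 0.77, 0.62, 0.85  # Network / Low / Low (scope unchanged) / None
C, I, A = 0.0, 0.0, 0.56                 # None / None / High

iss = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * iss                      # scope-unchanged impact formula
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    """Simplified CVSS Roundup: ceiling at one decimal place."""
    return math.ceil(x * 10) / 10

base = roundup(min(impact + exploitability, 10.0)) if impact > 0 else 0.0
print(f"Impact={impact:.1f} Exploitability={exploitability:.1f} Base={base}")
# -> Impact=3.6 Exploitability=2.8 Base=6.5
```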

Affected Packages (3)

Source     Package            Affected range
NVD        vllm/vllm          >= 0.5.5, < 0.11.1 (+1 more range)
PyPI       vllm/vllm          >= 0.5.5, < 0.11.1
CVEListV5  vllm-project/vllm  >= 0.5.5, < 0.11.1
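Given the >= 0.5.5, < 0.11.1 range above, a deployment can be checked mechanically; a minimal sketch, assuming the `packaging` library is available alongside the installed `vllm` distribution:

```python
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("vllm"))

# CVE-2025-62426 affects vllm >= 0.5.5, < 0.11.1 (see ranges above).
vulnerable = Version("0.5.5") <= installed < Version("0.11.1")
status = "VULNERABLE: upgrade to >= 0.11.1" if vulnerable else "not in affected range"
print(f"vllm {installed}: {status}")
```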

Patches

Fixed in vLLM 0.11.1.

🔴 Vulnerability Details (2)

OSV: vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs` (2025-11-20)
GHSA: vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs` (2025-11-20)

📋 Vendor Advisories (1)

Red Hat: vllm: vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs` (2025-11-21)