CVE-2026-22773: Allocation of Resources Without Limits or Throttling in vLLM

Severity: 7.5 HIGH (NVD)
EPSS: 0.0% (top 94.51%)
CISA KEV: Not in KEV
Exploit: No known exploits
Timeline: Published Jan 10; latest update Jan 13

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
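The failure mode described above can be sketched in isolation. The snippet below is a hypothetical illustration, not vLLM's actual Idefics3 code: it assumes a ViT-style encoder that splits an image into fixed-size patches (a patch size of 14 is assumed for illustration). A degenerate 1x1 input yields a zero-size patch grid, so downstream tensors disagree on sequence length; without a guard, that surfaces as an unhandled runtime error that takes down the whole serving process.

```python
# Hypothetical sketch of the bug class (not actual vLLM/Idefics3 code).
PATCH_SIZE = 14  # assumed patch size for illustration


def patch_grid(height: int, width: int) -> tuple[int, int]:
    # Floor division: a 1x1 image produces a 0x0 patch grid.
    return height // PATCH_SIZE, width // PATCH_SIZE


def encode(height: int, width: int) -> int:
    rows, cols = patch_grid(height, width)
    num_patches = rows * cols
    if num_patches == 0:
        # Without a validation step like this, the encoder would build an
        # empty tensor and later ops would fail with a dimension-mismatch
        # RuntimeError, terminating the server instead of rejecting the
        # single bad request.
        raise ValueError("image too small to produce any patches")
    return num_patches


print(encode(224, 224))  # 256 patches (16 x 16 grid)
try:
    encode(1, 1)
except ValueError as exc:
    print(f"rejected degenerate image: {exc}")
```

The patched behavior corresponds to rejecting the malformed input at the request level rather than letting the exception propagate and kill the engine.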

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Exploitability: 3.9 | Impact: 3.6

Affected Packages (3)

NVD: vllm/vllm, introduced 0.6.4, fixed 0.12.0
PyPI: vllm/vllm, introduced 0.6.4, fixed 0.12.0
CVEListV5: vllm-project/vllm, >= 0.6.4, < 0.12.0
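The affected range above (>= 0.6.4, < 0.12.0) can be checked with a minimal version comparison. This is a sketch using plain tuple comparison on numeric dot-separated versions; it assumes simple `X.Y.Z` version strings and would need a real version parser (e.g. PEP 440 semantics) for suffixes like `.post1` or release candidates.

```python
def parse(version: str) -> tuple[int, ...]:
    # Assumes plain numeric dot-separated versions like "0.11.2";
    # pre/post-release suffixes are not handled in this sketch.
    return tuple(int(part) for part in version.split("."))


def is_vulnerable(installed: str) -> bool:
    # Affected range per the advisory: >= 0.6.4, < 0.12.0
    return parse("0.6.4") <= parse(installed) < parse("0.12.0")


print(is_vulnerable("0.11.2"))  # True: inside the affected range
print(is_vulnerable("0.12.0"))  # False: first patched release
```

In practice, upgrading to vLLM 0.12.0 or later is the remediation stated in the advisory.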

🔴 Vulnerability Details (2)

OSV: vLLM is vulnerable to DoS in Idefics3 vision models via image payload with ambiguous dimensions (2026-01-13)
GHSA: vLLM is vulnerable to DoS in Idefics3 vision models via image payload with ambiguous dimensions (2026-01-13)

📋 Vendor Advisories (1)

Red Hat: vllm: vLLM: Denial of Service via specially crafted image in multimodal model serving (2026-01-10)

🕵️ Threat Intelligence (1)

Wiz: CVE-2026-22773 Impact, Exploitability, and Mitigation Steps