CVE-2026-22773 — Allocation of Resources Without Limits or Throttling in vLLM
Severity: 7.5 High (NVD)
EPSS: 0.0% (top 94.51%)
CISA KEV: Not in KEV
Exploit: No known exploits
Timeline: Published Jan 10 · Latest update Jan 13
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
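Beyond upgrading to vLLM 0.12.0, a deployment can reject degenerate images before they reach the vision encoder. The sketch below is a hypothetical pre-filter, not part of vLLM's API: the function name `is_patchable` and the `min_dim` threshold are assumptions chosen to block the 1x1-pixel trigger described above.

```python
# Hypothetical input-validation pre-filter (not vLLM code): reject images
# too small for the Idefics3 patching step before they reach the engine.
MIN_DIM = 2  # assumed threshold; the reported crash trigger is a 1x1 image


def is_patchable(width: int, height: int, min_dim: int = MIN_DIM) -> bool:
    """Return True if the image is large enough to be safely processed."""
    return width >= min_dim and height >= min_dim


# A gateway would drop requests where is_patchable(...) is False
# instead of forwarding them to the model server.
```

Filtering at the gateway keeps a malformed request from terminating the shared engine process, which is the availability impact (A:H) scored below.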
CVSS vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H (Exploitability: 3.9, Impact: 3.6)
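The vector string above encodes each CVSS metric as a `KEY:VALUE` pair separated by slashes. A minimal sketch of splitting it into a dictionary (the helper name `parse_cvss_vector` is an assumption, not a standard library API):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector string into its component metrics."""
    header, *metrics = vector.split("/")          # "CVSS:3.1", then "AV:N", ...
    parsed = {"version": header.split(":")[1]}    # -> "3.1"
    for metric in metrics:
        key, value = metric.split(":")
        parsed[key] = value
    return parsed


v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H")
```

Reading the parsed metrics confirms the profile: no confidentiality or integrity impact (`C:N`, `I:N`) but high availability impact (`A:H`), matching a crash-only denial of service.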
Affected Packages: 3 packages
Vulnerability Details
Vendor Advisories: 2 (Red Hat: 1)