Severity
9.8 CRITICAL (NVD)
EPSS
0.1% (top 75.06%)
CISA KEV
Not in KEV
Exploit
No known exploits
Affected products
Timeline
Published: Feb 2

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to (but not including) 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error, and vLLM returns the error text to the client, leaking a heap address. With this leak, ASLR is reduced from roughly 4 billion guesses to about 8. The leak can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
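The leak class described above can be illustrated without vLLM itself: PIL's `UnidentifiedImageError` message embeds the `repr()` of the input stream, which contains a heap pointer. The sketch below is a minimal illustration under assumptions; the exact PIL message wording and the helper names (`error_message_for`, `client_safe_error`) are hypothetical, not vLLM's actual code. The mitigation shown (returning a generic message to the client) reflects the general fix class, not the literal 0.14.1 patch.

```python
import io
import re

# PIL raises errors like:
#   "cannot identify image file <_io.BytesIO object at 0x7f...>"
# The embedded repr() of the stream object discloses a heap address.
# Simulated here so the example runs without Pillow installed.
def error_message_for(data: bytes) -> str:
    return f"cannot identify image file {io.BytesIO(data)!r}"

msg = error_message_for(b"not an image")
leak = re.search(r"0x[0-9a-fA-F]+", msg)
heap_addr = int(leak.group(0), 16) if leak else None  # attacker's ASLR hint

# Mitigation sketch: log the detailed error server-side only and return
# a generic, address-free message to the client.
def client_safe_error(raw_message: str) -> str:
    return "invalid image data"
```

Echoing `str(exc)` for any decoder exception is the anti-pattern; scrubbing or replacing the message before it reaches the HTTP response removes the address disclosure.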

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Exploitability: 3.9 | Impact: 5.9

Affected Packages (3 packages)

Source      Package             Introduced   Fixed
NVD         vllm/vllm           0.8.3        0.14.1
PyPI        vllm/vllm           0.8.3        0.14.1
CVEListV5   vllm-project/vllm   >= 0.8.3, < 0.14.1

Patches

🔴 Vulnerability Details (2)

OSV: vLLM has RCE In Video Processing (2026-02-02)
GHSA: vLLM has RCE In Video Processing (2026-02-02)

📋 Vendor Advisories (1)

Red Hat: vLLM: Remote code execution via invalid image processing in the multimodal endpoint (2026-02-02)

🕵️ Threat Intelligence (1)

Wiz: CVE-2026-22778 Impact, Exploitability, and Mitigation Steps