Severity
7.3 HIGH (NVD)
EPSS
0.2% (top 54.03%)
CISA KEV
Not in KEV
Exploit
No known exploits
Timeline
Published: May 29, 2025

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.7.0 up to but not including 0.9.0, the MultiModalHasher class in vllm/multimodal/hasher.py has a security and data-integrity issue in its image-hashing method: it serializes PIL.Image.Image objects using only obj.tobytes(), which returns the raw pixel data without metadata such as the image's shape (width, height, mode). As a result, two images of different sizes (e.g., 30x100 and 100x30) with the same pixel byte sequence can produce the same hash value, leading to hash collisions and incorrect cache hits. This issue is fixed in version 0.9.0.
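The flaw described above can be illustrated with a minimal, self-contained sketch using only hashlib. The helper names and the metadata encoding are hypothetical (not vLLM's actual fix); the point is that hashing raw pixel bytes alone collides for differently shaped images, while mixing in (width, height, mode) does not:

```python
import hashlib

# Simulated raw pixel buffer; two "images" of different shapes can
# share an identical byte sequence (e.g., 32x32 vs. 64x16 grayscale).
pixels = bytes(range(256)) * 4  # 1024 bytes of pixel data

def hash_pixels_only(pixel_bytes: bytes) -> str:
    # Vulnerable approach: equivalent to hashing obj.tobytes() alone,
    # as MultiModalHasher did before vLLM 0.9.0.
    return hashlib.sha256(pixel_bytes).hexdigest()

def hash_with_metadata(pixel_bytes: bytes, width: int, height: int, mode: str) -> str:
    # Sketch of a fix: fold the shape and mode into the digest so that
    # differently shaped images with identical bytes hash differently.
    h = hashlib.sha256()
    h.update(f"{width}x{height}:{mode}".encode())
    h.update(pixel_bytes)
    return h.hexdigest()

# Collision under the vulnerable scheme: shape is invisible to the hash.
collides = hash_pixels_only(pixels) == hash_pixels_only(pixels)  # True
# No collision once metadata is included.
distinct = hash_with_metadata(pixels, 32, 32, "L") != hash_with_metadata(pixels, 64, 16, "L")  # True
```

Because the hash is used for cache lookups, a collision means one image's cached result can be served for a different image, which is why this is an integrity issue and not just a correctness bug.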

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L
Exploitability: 3.9 | Impact: 3.4

Affected Packages (11 packages)

NVD: vllm/vllm, >= 0.7.0, < 0.9.0
PyPI: vllm/vllm, >= 0.7.0, < 0.9.0 (+2 more)
CVEListV5: vllm-project/vllm, >= 0.7.0, < 0.9.0
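Given the affected range above (>= 0.7.0, < 0.9.0), a minimal check of whether a vLLM version string falls inside it can be sketched as follows; the naive tuple-based parser is an assumption for illustration and ignores pre-release and local-version suffixes:

```python
def parse_version(v: str) -> tuple:
    # Naive parse: "0.8.5" -> (0, 8, 5). Strips any local suffix
    # after "+" and ignores pre-release tags (illustrative only).
    return tuple(int(part) for part in v.split("+")[0].split(".")[:3])

def is_affected(version: str) -> bool:
    # Affected range per the advisory: >= 0.7.0, < 0.9.0
    return (0, 7, 0) <= parse_version(version) < (0, 9, 0)

print(is_affected("0.8.5"))  # True: inside the affected range
print(is_affected("0.9.0"))  # False: first fixed release
```

For real deployments, a proper version library (e.g. packaging's Version) handles pre-releases correctly; upgrading to 0.9.0 or later removes the issue regardless.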

Patches

🔴 Vulnerability Details (3)
OSV: CVE-2025-46722: vLLM is an inference and serving engine for large language models (LLMs) (2025-05-29)
OSV: vLLM has a Weakness in MultiModalHasher Image Hashing Implementation (2025-05-28)
GHSA: vLLM has a Weakness in MultiModalHasher Image Hashing Implementation (2025-05-28)

📋 Vendor Advisories (2)
Red Hat: vllm: vLLM has a Weakness in MultiModalHasher Image Hashing Implementation (2025-05-29)
Microsoft: drm/amdgpu: fix mc_data out-of-bounds read warning (2024-09-10)