CVE-2025-25183: Improper Validation of Integrity Check Value in vLLM

Severity
2.6 LOW (NVD)

EPSS
0.3% (top 44.57%)

CISA KEV
Not in KEV

Exploit
No known exploits

Timeline
Published: Feb 7
Latest update: Feb 11

Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) returns a predictable constant value, which makes it more feasible for someone to try to exploit hash collisions. The impact of …
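The root cause is easy to demonstrate. In CPython, small integers hash to themselves, and from Python 3.12 onward hash(None) returns a fixed constant (0xFCA86420 in CPython) instead of an address-derived value, so a cache key built with hash() over tuples of token IDs and None sentinels is predictable across runs and machines. A minimal sketch, assuming a toy key function (the prefix_block_key helper below is illustrative, not vLLM's actual code):

```python
import sys

def prefix_block_key(parent_hash, token_ids):
    """Illustrative stand-in for a hash()-based prefix-cache key.

    vLLM's pre-0.7.2 prefix cache keyed blocks with Python's built-in
    hash(); this toy version shows why such keys are predictable.
    (Hypothetical helper, not vLLM's real implementation.)
    """
    # parent_hash is None for the first block of a prompt.
    return hash((parent_hash, tuple(token_ids)))

# Small ints hash to themselves in CPython, so the only run-to-run
# variation in this key could come from hash(None) -- and since
# Python 3.12 that is a fixed constant, making the whole key
# reproducible by anyone who knows (or guesses) the token IDs.
key = prefix_block_key(None, [101, 2054, 2003])
print(hash(1) == 1)  # True in CPython
if sys.version_info >= (3, 12):
    print(hash(None) == 0xFCA86420)  # constant None hash since 3.12
```

Note that PYTHONHASHSEED randomizes str/bytes hashing but does not affect hash(None) or integer hashes, which is why the 3.12 change matters here.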

CVSS vector

CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N
Exploitability: 1.2 | Impact: 1.4
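The listed subscores and the 2.6 (LOW) base score follow directly from the CVSS v3.1 base-score equations applied to this vector. A quick sketch of the arithmetic, using the metric weights from the CVSS v3.1 specification:

```python
import math

# CVSS v3.1 base score for AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N,
# reproducing the listed subscores and the 2.6 (LOW) rating.

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= x (spec Appendix A)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

# Metric weights from the CVSS v3.1 specification (scope unchanged):
av, ac, pr, ui = 0.85, 0.44, 0.62, 0.62   # AV:N, AC:H, PR:L, UI:R
c, i_, a = 0.0, 0.22, 0.0                 # C:N, I:L, A:N

iss = 1 - (1 - c) * (1 - i_) * (1 - a)    # 0.22
impact = 6.42 * iss                       # scope unchanged
exploitability = 8.22 * av * ac * pr * ui
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

print(round(exploitability, 1))  # 1.2
print(round(impact, 1))          # 1.4
print(base)                      # 2.6
```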

Affected Packages (4 packages)

NVD: vllm/vllm < 0.7.2
PyPI: vllm/vllm < 0.7.2+1
CVEListV5: vllm-project/vllm < 0.7.2
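Per the NVD and CVEListV5 ranges above, any vllm release below 0.7.2 is affected. A minimal sketch for checking a version string against the fixed release (the is_patched helper is illustrative; a real check should use packaging.version for full PEP 440 handling of pre-releases and local segments):

```python
def is_patched(version_string: str) -> bool:
    """Return True if the given vllm version is >= 0.7.2, the fix release.

    Illustrative helper: strips any local segment ("+...") and compares
    the first three numeric components, which is enough for plain
    releases but is not a full PEP 440 comparison.
    """
    release = version_string.split("+")[0]
    parts = tuple(int(p) for p in release.split(".")[:3])
    return parts >= (0, 7, 2)

print(is_patched("0.7.1"))  # False: in the affected range
print(is_patched("0.7.2"))  # True: fix release per NVD/CVEListV5
```

In practice, upgrading with `pip install --upgrade vllm` to 0.7.2 or later is the remediation.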

🔴 Vulnerability Details (3)

OSV (2025-02-07): CVE-2025-25183: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs
GHSA (2025-02-06): vLLM uses Python 3.12 built-in hash() which leads to predictable hash collisions in prefix cache
OSV (2025-02-06): vLLM uses Python 3.12 built-in hash() which leads to predictable hash collisions in prefix cache

📋 Vendor Advisories (2)

Microsoft (2025-02-11): vLLM using built-in hash() from Python 3.12 leads to predictable hash collisions in vLLM prefix cache
Red Hat (2025-02-06): vllm: vLLM uses Python 3.12 built-in hash() which leads to predictable hash collisions in prefix cache