CVE-2025-62164 — Improper Input Validation in vLLM
Severity
8.8 HIGH (NVD)
EPSS
0.1% (top 67.69%)
CISA KEV
Not in KEV
Exploit
No known exploits
Affected products
Timeline
Published: Nov 21
Latest update: Jan 8
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability exists in the Completions API endpoint that could lead to a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled …
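torch.load() is built on Python's pickle protocol, so the core risk described above is untrusted deserialization. A minimal stdlib-only sketch (using pickle directly rather than PyTorch, and a benign eval() call as a stand-in for attacker code) shows why deserializing attacker-supplied bytes can execute arbitrary logic:

```python
import pickle

# During unpickling, pickle invokes any callable recorded via
# __reduce__ with attacker-chosen arguments -- before the caller
# ever sees the resulting object.
class Payload:
    def __reduce__(self):
        # Hypothetical benign stand-in for attacker code: eval()
        # runs an attacker-supplied expression at load time.
        return (eval, ("6 * 7",))

# Bytes an attacker could submit as a "serialized tensor".
malicious_bytes = pickle.dumps(Payload())

# The victim merely deserializes -- yet eval() has already run.
result = pickle.loads(malicious_bytes)
print(result)  # -> 42
```

This is why endpoints accepting serialized tensors are expected to validate input and avoid full-pickle loading paths; the exact validation vLLM applies in the patched release is not shown here.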
CVSS vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Exploitability: 2.8 | Impact: 5.9