CVE-2025-24357: Deserialization of Untrusted Data in vLLM

Severity
8.8 HIGH (NVD)
EPSS
1.0% (top 22.88%)
CISA KEV
Not in KEV
Exploit
No known exploits
Affected products
Timeline
Published: Jan 27
Latest update: Apr 23

Description

vLLM is a library for LLM inference and serving. In vllm/model_executor/weight_utils.py, hf_model_weights_iterator loads model checkpoints downloaded from Hugging Face using torch.load, whose weights_only parameter defaults to False. When torch.load deserializes malicious pickle data, it executes arbitrary code during unpickling. This vulnerability is fixed in v0.7.0.
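The root cause is a property of Python's pickle format itself: a serialized object can name any callable (via __reduce__) that the deserializer will invoke at load time, and torch.load with weights_only=False inherits this behavior. A minimal sketch of the mechanism using the stdlib pickle module, with a harmless eval call standing in for an attacker's payload (a real exploit would use something like os.system):

```python
import pickle

class Payload:
    # __reduce__ tells pickle what to call when the object is loaded.
    # Here it is a harmless eval("2 + 2"); an attacker shipping a
    # "model checkpoint" can substitute any callable and arguments.
    def __reduce__(self):
        return (eval, ("2 + 2",))

blob = pickle.dumps(Payload())          # what a malicious checkpoint contains
result = pickle.loads(blob)             # eval("2 + 2") runs during unpickling
print(result)
```

Note that nothing about the Payload class needs to be present on the victim's side: the pickle stream only records the callable and its arguments, so merely loading the file is enough.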

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Exploitability: 2.8 | Impact: 5.9

Affected Packages (3 packages)

NVD        vllm/vllm          < 0.7.0
PyPI       vllm/vllm          < d3d6bb13fb62da3234addf6574922a4ec0513d04+2
CVEListV5  vllm-project/vllm  < 0.7.0

Patches
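The fix in v0.7.0 is to stop trusting the default deserializer when loading checkpoints; torch.load(weights_only=True) applies a restricted unpickler that only allows an allowlist of tensor-related types. A minimal sketch of that restricted-unpickler idea using only the stdlib (SafeUnpickler is a hypothetical name, and it rejects every global rather than allowlisting, to keep the sketch short):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Hypothetical restricted unpickler: refuses to resolve any global.
    torch.load(weights_only=True) follows the same idea, but with an
    allowlist of tensor-related types instead of a blanket ban."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

# A payload that would invoke eval() under a plain pickle.loads().
class Payload:
    def __reduce__(self):
        return (eval, ("2 + 2",))

blob = pickle.dumps(Payload())

try:
    SafeUnpickler(io.BytesIO(blob)).load()
    blocked = False
except pickle.UnpicklingError:
    # find_class raised before eval could ever be resolved.
    blocked = True
print("payload blocked:", blocked)
```

The later GHSA/OSV entries above note a bypass of this fix on PyTorch < 2.6.0, so the safe-unpickling guarantee also depends on the PyTorch version in use.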

🔴 Vulnerability Details (5)
GHSA
CVE-2025-24357 Malicious model remote code execution fix bypass with PyTorch < 2.6.0 (2025-04-23)
OSV
CVE-2025-24357 Malicious model remote code execution fix bypass with PyTorch < 2.6.0 (2025-04-23)
OSV
CVE-2025-24357: vLLM is a library for LLM inference and serving (2025-01-27)
OSV
vllm: Malicious model to RCE by torch.load in hf_model_weights_iterator (2025-01-27)
GHSA
vllm: Malicious model to RCE by torch.load in hf_model_weights_iterator (2025-01-27)

📋 Vendor Advisories (1)
Red Hat
vllm: vLLM allows a malicious model RCE by torch.load in hf_model_weights_iterator (2025-01-27)