CVE-2025-66448: Code Injection in vLLM

Severity: 8.8 HIGH (NVD)
EPSS: 0.3% (top 47.48%)
CISA KEV: Not in KEV
Exploit: No known exploits
Timeline: Published Dec 1; latest update Mar 27

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the user has explicitly opted out of remote code execution (trust_remote_code=False).
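To make the pattern concrete, here is a minimal, self-contained sketch of the vulnerable flow. It is not vLLM's actual code: `resolve_auto_map` is a hypothetical stand-in for transformers' `get_class_from_dynamic_module`, and a benign stdlib class (`json.JSONDecoder`) stands in for the remote module. The key point it illustrates is that resolving an `auto_map` entry imports a module (running its top-level code) and the resolved class is instantiated immediately.

```python
# Illustrative sketch of the vulnerable pattern (NOT vLLM's actual code).
import importlib


def resolve_auto_map(auto_map_entry: str):
    """Hypothetical stand-in for transformers' get_class_from_dynamic_module.

    Splits a 'module.ClassName' string, imports the module, and returns the
    class. In the real vulnerability, the module named in the auto_map string
    is fetched from a *remote* repository before being imported, so arbitrary
    attacker-controlled Python runs at import time.
    """
    module_name, class_name = auto_map_entry.rsplit(".", 1)
    module = importlib.import_module(module_name)  # module-level code executes here
    return getattr(module, class_name)


# A model config with an auto_map entry; a benign stdlib class stands in
# for the attacker-controlled remote class.
config = {"auto_map": {"AutoConfig": "json.JSONDecoder"}}

cls = resolve_auto_map(config["auto_map"]["AutoConfig"])
instance = cls()  # instantiated immediately, as the vulnerable config class does
print(type(instance).__name__)  # JSONDecoder
```

Because both the import and the instantiation happen as a side effect of merely loading the config, the victim never gets a chance to inspect or approve the fetched code.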

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Exploitability: 2.8 | Impact: 5.9

Affected Packages (3 packages)

NVD: vllm/vllm < 0.11.1
PyPI: vllm/vllm 0.10.10.18.0+1
CVE List V5: vllm-project/vllm < 0.11.1
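Since every listed source agrees the fix landed in 0.11.1, a quick way to check exposure is to compare the installed version against that threshold. The helper below is a hypothetical sketch (not part of vLLM or this advisory) using a naive three-component comparison; real deployments should prefer `packaging.version.parse` for full PEP 440 handling.

```python
# Hypothetical helper: check whether an installed vllm version predates
# the 0.11.1 fix. Naive numeric comparison of the first three components.


def vllm_is_patched(version: str) -> bool:
    """Return True if `version` is at least 0.11.1 (the patched release)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= (0, 11, 1)


print(vllm_is_patched("0.10.2"))  # False: affected
print(vllm_is_patched("0.11.1"))  # True: patched
```

In practice, upgrading to vllm >= 0.11.1 is the remediation indicated by the affected-version ranges above.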

Patches

🔴 Vulnerability Details (4)

OSV: vLLM has Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out (2026-03-27)
GHSA: vLLM has Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out (2026-03-27)
GHSA: vLLM vulnerable to remote code execution via transformers_utils/get_config (2025-12-02)
OSV: vLLM vulnerable to remote code execution via transformers_utils/get_config (2025-12-02)

📋 Vendor Advisories (1)

Red Hat: vllm: vLLM: Remote Code Execution via malicious model configuration (2025-12-01)