CVE-2026-22807 — Code Injection in vLLM
Severity
NVD: 9.8 (Critical)
GHSA: 8.8
OSV: 8.8
EPSS
Score: 0.0% (percentile: top 93.92%)
CISA KEV
Not in KEV
Exploit
No known exploits
Affected products
vLLM >= 0.10.1, < 0.14.0
Timeline
Published: Jan 21
Latest update: Mar 27
Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model l…
CVSS vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Exploitability: 3.9 | Impact: 5.9
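The 9.8 base score follows from the CVSS v3.1 formulas for a Scope: Unchanged vector. A quick check using the metric weights from the specification (the variable and function names are mine):

```python
import math

# CVSS v3.1 metric weights for this vector (values from the spec):
# AV:N = 0.85, AC:L = 0.77, PR:N = 0.85, UI:N = 0.85, C/I/A:H = 0.56
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85
C = I = A = 0.56

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= x,
    using the integer arithmetic prescribed by the spec."""
    i = round(x * 100000)
    if i % 10000 == 0:
        return i / 100000
    return round(math.floor(i / 10000) / 10 + 0.1, 1)

iss = 1 - (1 - C) * (1 - I) * (1 - A)             # Impact Sub-Score
impact = 6.42 * iss                               # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI
base = roundup(min(impact + exploitability, 10))  # → 9.8
```

Rounding the sub-scores to one decimal reproduces the Exploitability 3.9 and Impact 5.9 shown above.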