CVE-2026-27893: Protection Mechanism Failure in vLLM

Severity: 8.8 HIGH (NVD)
EPSS: 0.0% (top 90.72%)
CISA KEV: Not in KEV
Exploit: No known exploits
Timeline: Published Mar 27

Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
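The flaw is a classic trust-override anti-pattern. The sketch below is illustrative only: the function name, the config shape, and the loading logic are assumptions for demonstration, not the actual vLLM internals. It contrasts the vulnerable behavior (the caller's flag is silently ignored) with a loader that honors the user's `--trust-remote-code=False` opt-out.

```python
# Illustrative sketch only: `load_subcomponent` and the repo_config dict
# are hypothetical names, not taken from the vLLM source tree.

def load_subcomponent(repo_config: dict, trust_remote_code: bool) -> str:
    """Load a model sub-component while honoring the user's opt-out.

    The vulnerability: two model implementation files passed
    trust_remote_code=True unconditionally to the underlying loader,
    so a malicious repository could ship Python that executes even
    when the user started vLLM with --trust-remote-code=False.
    """
    if repo_config.get("has_remote_code") and not trust_remote_code:
        # Correct behavior: refuse to load instead of silently trusting.
        raise PermissionError(
            "repository ships custom code but trust_remote_code is False"
        )
    return "loaded"
```

The key design point is that the trust decision must flow from the user-facing flag down to every sub-component load; any call site that hardcodes `True` reopens the remote-code-execution path.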

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Exploitability: 2.8 | Impact: 5.9

Affected Packages (3)

Source      Package              Affected range
NVD         vllm/vllm            >= 0.10.1, < 0.18.0
PyPI        vllm/vllm            >= 0.10.1, < 0.18.0
CVEListV5   vllm-project/vllm    >= 0.10.1, < 0.18.0
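A quick way to check whether a given vLLM version falls inside the affected window (>= 0.10.1, < 0.18.0) is a simple tuple comparison. This sketch assumes plain `X.Y.Z` version strings; it does not handle full PEP 440 forms such as pre-releases.

```python
# Minimal version-range check for the affected window (>= 0.10.1, < 0.18.0).
# Assumes plain X.Y.Z versions; not a full PEP 440 parser.

def is_affected(version: str) -> bool:
    v = tuple(int(part) for part in version.split("."))
    return (0, 10, 1) <= v < (0, 18, 0)
```

For example, `is_affected("0.17.5")` is true, while `0.10.0` and the patched `0.18.0` fall outside the range.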

Patches

Fixed in vLLM 0.18.0.

🔴 Vulnerability Details (2)

OSV: vLLM has Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out (2026-03-27)
GHSA: vLLM has Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out (2026-03-27)

📋 Vendor Advisories (1)

Red Hat: vllm: vLLM: Remote code execution due to hardcoded trust_remote_code setting (2026-03-26)

🕵️ Threat Intelligence (6)

Wiz: CVE-2026-34755 Impact, Exploitability, and Mitigation Steps
Wiz: CVE-2026-25960 Impact, Exploitability, and Mitigation Steps
Wiz: CVE-2026-27893 Impact, Exploitability, and Mitigation Steps
Wiz: GHSA-mcmc-2m55-j8jj Impact, Exploitability, and Mitigation Steps
Wiz: CVE-2026-34753 Impact, Exploitability, and Mitigation Steps