fix: verify blob digests and sandbox Python backends (#848)
Addresses two security vulnerabilities reported against Model Runner:
- Blob digest verification: after downloading each layer, hash the
complete file and reject it if the SHA256 does not match the digest
declared in the manifest. A malicious registry could previously serve
arbitrary bytes under any digest name and they would be stored without
error. The full file is hashed (not just the streamed bytes) so that
resumed downloads are also verified correctly.
- Python backend sandboxing: add ConfigurationPython to the sandbox
package and apply it to all Python-based inference backends (vLLM,
vllm-metal, SGLang, MLX, Diffusers). On Darwin this uses sandbox-exec
with a deny-by-default seatbelt profile matching the existing
llama.cpp policy; on Windows the backends run under the same job
object limits already applied to llama.cpp.
Previously these backends ran completely unsandboxed, meaning
attacker-controlled model files could execute arbitrary code on the
host.
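On Darwin, wrapping a backend in a seatbelt sandbox amounts to launching it through `sandbox-exec` with a profile file. The sketch below illustrates that shape only; `wrapWithSandbox`, the profile path, and taking `goos` as a parameter are all illustrative choices, not the actual `ConfigurationPython` API (and the real change applies Windows job object limits after process creation, which is omitted here).

```go
package main

import (
	"os/exec"
)

// wrapWithSandbox builds the command used to launch a Python inference
// backend. On darwin it prefixes sandbox-exec with a deny-by-default
// seatbelt profile (-f profilePath); elsewhere the backend is launched
// directly. goos is passed in (rather than read from runtime.GOOS) so
// the behavior is easy to exercise on any platform.
func wrapWithSandbox(goos, profilePath, backend string, args ...string) *exec.Cmd {
	if goos == "darwin" {
		sbArgs := append([]string{"-f", profilePath, backend}, args...)
		return exec.Command("sandbox-exec", sbArgs...)
	}
	// Non-darwin: run the backend as-is. On Windows, job object limits
	// would be attached to the process after it is created.
	return exec.Command(backend, args...)
}
```

The deny-by-default profile means the Python process gets no filesystem or network access beyond what the profile explicitly allows, so a malicious model file that achieves code execution inside the backend is still confined.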