## Running a MagneX / AMReX Application with a PyTorch Model (LibTorch + TorchScript)
This guide explains how to run an AMReX-based application (e.g., MagneX) that evaluates a pre-trained PyTorch model using the LibTorch C++ API.
---
## Overview
The typical workflow is:
1. Initialize an AMReX `MultiFab` with input data (e.g., different `dt` values).
2. Copy the `MultiFab` data into a `torch::Tensor`.
3. Load a TorchScript model (`model.pt`).
4. Run inference on CPU or GPU.
5. Copy the model outputs back into a `MultiFab`.
6. Write AMReX plotfiles for visualization.
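
Steps 2–5 of this workflow can be sketched in LibTorch C++. This is an illustrative sketch only, not MagneX's actual code: the function and variable names (`infer_on_multifab`, `mf_in`, `mf_out`) are invented, it assumes a model that takes and returns double-precision values of shape `(N, 1)`, and it batches one grid box at a time.

```cpp
// Sketch: evaluate a TorchScript model on AMReX MultiFab data.
// Assumptions: single-component MultiFabs, a model with (N,1) double
// input/output, and an existing AMReX + LibTorch build.
#include <AMReX_MultiFab.H>
#include <torch/script.h>

void infer_on_multifab (amrex::MultiFab& mf_in,
                        amrex::MultiFab& mf_out,
                        const std::string& model_path)
{
    // Step 3: load the TorchScript model (model.pt) once, up front.
    torch::jit::script::Module module = torch::jit::load(model_path);
    const bool use_gpu = torch::cuda::is_available();
    if (use_gpu) { module.to(torch::kCUDA); }

    for (amrex::MFIter mfi(mf_in); mfi.isValid(); ++mfi)
    {
        const amrex::Box& bx = mfi.validbox();
        amrex::Array4<amrex::Real> const& in  = mf_in.array(mfi);
        amrex::Array4<amrex::Real> const& out = mf_out.array(mfi);

        // Step 2: copy the box's cell values into a flat (N,1) tensor.
        torch::Tensor t = torch::empty({bx.numPts(), 1}, torch::kDouble);
        auto acc = t.accessor<double,2>();
        long idx = 0;
        amrex::LoopOnCpu(bx, [&] (int i, int j, int k) {
            acc[idx++][0] = in(i,j,k);
        });

        // Step 4: run inference, on GPU if available.
        if (use_gpu) { t = t.to(torch::kCUDA); }
        torch::Tensor y = module.forward({t}).toTensor().to(torch::kCPU);

        // Step 5: copy the model outputs back into the output MultiFab.
        auto yacc = y.accessor<double,2>();
        idx = 0;
        amrex::LoopOnCpu(bx, [&] (int i, int j, int k) {
            out(i,j,k) = yacc[idx++][0];
        });
    }
}
```

Batching per box keeps host/device transfers coarse-grained; a real implementation would also want to cast to the model's training dtype (often `float32`) before calling `forward`.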
The model **must** be exported as TorchScript (`model.pt`).
Do not use a Python-only `.pth` checkpoint.
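
Export happens on the Python side. As a minimal sketch, assuming a toy stand-in model (`Net` here is hypothetical, not the actual MagneX surrogate):

```python
# Sketch: export a trained PyTorch model to TorchScript (model.pt).
import torch
import torch.nn as nn

class Net(nn.Module):
    """Hypothetical stand-in: maps one dt value to one output."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1, 1)

    def forward(self, x):
        return self.fc(x)

model = Net().eval()

# torch.jit.trace records the ops for a sample input; use
# torch.jit.script instead if the model has data-dependent control flow.
example = torch.rand(4, 1)
scripted = torch.jit.trace(model, example)
scripted.save("model.pt")

# The saved file reloads without the Python class definition,
# which is what lets LibTorch load it from C++.
reloaded = torch.jit.load("model.pt")
assert torch.allclose(reloaded(example), model(example))
```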
---
## Requirements
- MagneX (or another AMReX-based application)
- A TorchScript model file:

  ```
  model.pt
  ```

- LibTorch (CUDA build recommended for GPU runs)
- CUDA version tested: **13.0**
- LibTorch version tested: **2.9.1+cu126**
---
## Directory Layout (Recommended)
Place LibTorch in the same parent directory as MagneX:
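
For example (the folder name `libtorch_cuda` is an assumption for the extracted LibTorch archive; any name works as long as the build configuration points to it):

```
parent_dir/
├── MagneX/
└── libtorch_cuda/    # extracted LibTorch distribution
```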