Deploying PyTorch models to embedded edge devices is a critical step in bringing AI applications to life. NVIDIA's Jetson platform, with its powerful GPU computing capabilities and comprehensive AI software stack, has become an ideal choice for running PyTorch models.
However, because Jetson is based on the ARM architecture, which differs from common x86 server environments, setting up a PyTorch environment on it cannot be accomplished with a simple pip install command. Developers often face challenges such as finding the correct version of pre-compiled packages, managing complex dependencies, and performing necessary performance optimizations.
This article aims to provide a clear and practical guide, focusing on how to quickly and correctly configure the PyTorch environment on the Jetson platform, helping you kickstart your PyTorch development journey on Jetson.
- JetPack 5/6: Make sure you have NVIDIA JetPack 5 or 6 installed on your reComputer. JetPack includes the necessary libraries and tools for developing on NVIDIA Jetson platforms.
- CUDA: Verify that CUDA is installed and properly configured. PyTorch relies on CUDA for GPU acceleration. Ensure that the installed CUDA version is compatible with the PyTorch version you plan to install.
Run `cat /etc/nv_tegra_release` and `nvcc -V` in your terminal. If the output is similar to the screenshot below, the corresponding environment has been properly installed on your Jetson.
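If you want to read the L4T release programmatically rather than eyeballing the terminal output, the release line can be parsed with the standard library. This is a minimal sketch: the sample line below follows the usual `/etc/nv_tegra_release` format (trailing fields omitted), but exact fields vary between releases.

```python
import re

def parse_l4t_release(line: str) -> str:
    """Extract the L4T version (e.g. '36.4.3') from an /etc/nv_tegra_release line."""
    m = re.search(r"# R(\d+) \(release\), REVISION: (\d+\.\d+)", line)
    if not m:
        raise ValueError("unrecognized nv_tegra_release format")
    return f"{m.group(1)}.{m.group(2)}"

# Sample line in the usual format (fields after REVISION omitted here):
sample = "# R36 (release), REVISION: 4.3, GCID: 38968081, BOARD: generic"
print(parse_l4t_release(sample))  # '36.4.3' -> the L4T R36.4 / JetPack 6.x family
```

On a Jetson you would feed the first line of `/etc/nv_tegra_release` to this function; the major number maps to the L4T releases listed in the wheel table below.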
To install PyTorch on your reComputer with the specified JetPack and CUDA versions, follow these steps:
Choose the correct wheel file based on your JetPack, CUDA, and Python versions:
- JetPack 7:
- JetPack 6.1 & 6.2 (L4T R36.4) + CUDA 12.6:
  - If you see `ImportError: libcusparseLt.so.0: cannot open shared object file: No such file or directory`, install cuSPARSELt 0.8.1 (select Linux > aarch64-jetson > Native > Ubuntu > 22.04 > deb (Local)) and CUDA 12.6 (select Linux > aarch64-jetson > Native > Ubuntu > 22.04 > deb (Local)).
  - If torchvision reports an error, uninstall it and refer to the subsequent steps to compile torchvision 0.20.0 from source.
- JetPack 6.0 (L4T R36.2 / R36.3) + CUDA 12.2:
  - PyTorch 2.3: rename the downloaded file to `torch-2.3.0-cp310-cp310-linux_aarch64.whl`
  - torchvision 0.18: rename the downloaded file to `torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl`
- JetPack 6.0 DP (L4T R36.2.0):
- JetPack 5.x:
  - JetPack 5.1 (L4T R35.2.1) / JetPack 5.1.1 (L4T R35.3.1) / JetPack 5.1.2 (L4T R35.4.1):
  - JetPack 5.1 (L4T R35.2.1) / JetPack 5.1.1 (L4T R35.3.1):
  - JetPack 5.0 (L4T R34.1) / JetPack 5.0.2 (L4T R35.1) / JetPack 5.1 (L4T R35.2.1) / JetPack 5.1.1 (L4T R35.3.1):
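If you are unsure whether a downloaded wheel matches your interpreter, the expected tags can be derived from the running Python. This is a stdlib-only sketch (the filename in the example is the JetPack 6.0 wheel named in this guide; `wheel_matches` is a helper written for illustration, not a pip API):

```python
import sys
import platform

def expected_tags() -> tuple:
    """Return the (python tag, platform suffix) this interpreter expects."""
    py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    plat = f"linux_{platform.machine()}"  # 'linux_aarch64' on Jetson
    return py_tag, plat

def wheel_matches(filename: str, py_tag: str, plat: str) -> bool:
    """Check that a wheel filename carries the expected Python and platform tags."""
    stem = filename.removesuffix(".whl")
    return py_tag in stem.split("-") and stem.endswith(plat)

# Example with the JetPack 6.0 wheel name used in this guide:
print(wheel_matches("torch-2.3.0-cp310-cp310-linux_aarch64.whl",
                    "cp310", "linux_aarch64"))  # True
```

Run `expected_tags()` on the Jetson itself and compare against the wheel name before installing; a `cp310` wheel will not install into, say, a Python 3.8 environment.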
- Open a terminal and navigate to the directory where you downloaded the `.whl` file.
- Install:

```shell
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev libomp-dev
pip3 install 'Cython<3'
pip3 install numpy
sudo pip3 install <filename>.whl
```

Replace `<filename>` with the name of the downloaded `.whl` file.
To verify that PyTorch has been installed correctly on your system, launch an interactive Python interpreter from the terminal and run the following commands:
```python
import torch
print(torch.__version__)
print('CUDA available: ' + str(torch.cuda.is_available()))
print('cuDNN version: ' + str(torch.backends.cudnn.version()))
a = torch.cuda.FloatTensor(2).zero_()
print('Tensor a = ' + str(a))
b = torch.randn(2).cuda()
print('Tensor b = ' + str(b))
c = a + b
print('Tensor c = ' + str(c))
```
```python
import torchvision
print(torchvision.__version__)
```
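torch and torchvision are released in lockstep, so a mismatched pair is a common source of import errors on Jetson. A minimal sketch of a pairing check, assuming only the JetPack 6.0 pair documented in this guide (torch 2.3 with torchvision 0.18); the `PAIRINGS` table is illustrative and should be extended for other releases:

```python
# Known-good torch -> torchvision major.minor pairings; only the JetPack 6.0
# pair from this guide is listed here -- extend for other JetPack releases.
PAIRINGS = {"2.3": "0.18"}

def minor(version: str) -> str:
    """Reduce a version string like '0.18.0a0+6043bc2' to 'major.minor'."""
    return ".".join(version.split("+")[0].split(".")[:2])

def compatible(torch_version: str, torchvision_version: str) -> bool:
    """Check a torch/torchvision pair against the table above."""
    return PAIRINGS.get(minor(torch_version)) == minor(torchvision_version)

print(compatible("2.3.0", "0.18.0a0+6043bc2"))  # True
print(compatible("2.3.0", "0.20.0"))            # False
```

On the device you would pass `torch.__version__` and `torchvision.__version__` (printed by the verification snippets above) into `compatible()`.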
| Tutorial | Type | Description |
|---|---|---|
| Official PyTorch Tutorial | doc | The official PyTorch tutorials, providing a complete learning path. |
| PyTorch Development Documentation | doc | The official PyTorch API and developer documentation. |

