The Jetson Nano is a small computer, similar in size to a Raspberry Pi, but with a CUDA-capable GPU that makes it well suited for running neural networks. ONNX is an open format for machine-learning models, and ONNX Runtime executes these models across a wide variety of platforms and vendors with hardware acceleration. So running ONNX models on a Jetson Nano seems like a good fit.
For a project, I was looking to run an object detection model with ONNX on a Jetson Nano. At the beginning of the project I thought this would be straightforward, but it ended up taking me a couple of days to figure out how to get it running.
First I took the official image from Nvidia and started testing. I quickly ran into issues with NumPy under Python, the same as described here. The suggested workaround of exporting the OpenBLAS core type didn't work for me:
OPENBLAS_CORETYPE=ARMV8 python
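For reference, a quick way to see whether the workaround takes effect is a minimal NumPy import test, since on the affected setups the crash happens as soon as NumPy loads OpenBLAS. This is just a sketch; the file name is made up for illustration.

# numpy_check.py - minimal sketch: if the OPENBLAS_CORETYPE workaround
# works, importing NumPy and doing a trivial BLAS operation should not crash.
import numpy as np

print("NumPy version:", np.__version__)
print(np.dot(np.ones((2, 2)), np.ones((2, 2))))  # exercises the BLAS backend

It can be run with the workaround applied, e.g. OPENBLAS_CORETYPE=ARMV8 python3 numpy_check.py.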
Upgrading to a newer Python version also threw several errors, which I didn't dive deeper into. This cost me several days, and I finally decided to upgrade to Ubuntu 20.04 to avoid the issue. That, however, brought new challenges, since the Jetson Nano only supports CUDA 10.2. I found instructions on how to upgrade, which also provide a pre-compiled image that I used as a starting point. I was careful not to upgrade any packages, since that would likely break compatibility: the application would probably still run, but without GPU support, because the required CUDA version would exceed 10.2.
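To make sure the upgraded system still has the expected toolkit, it helps to check the installed CUDA version. The sketch below assumes the standard JetPack install location for the version file.

# cuda_check.py - hypothetical helper: print the installed CUDA toolkit
# version; /usr/local/cuda/version.txt is the usual JetPack location.
from pathlib import Path

version_file = Path("/usr/local/cuda/version.txt")
if version_file.exists():
    print(version_file.read_text().strip())  # should report CUDA 10.2
else:
    print("No CUDA version file found at", version_file)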
Next, it was time to install the ONNX Runtime GPU package. According to the documentation, ONNX Runtime versions 1.5-1.6 are compatible with CUDA 10.2, and ONNX version 1.8 is compatible with ONNX Runtime 1.6.
Ubuntu 20.04 ships with Python 3.8.2 by default, and I planned to stay on this version since my previous attempts at upgrading Python were unsuccessful. ONNX 1.8 can be installed via pip:
pip install onnx==1.8.1
ONNX Runtime GPU 1.6 is not available via pip, but Jetson Zoo has pre-compiled packages for download:
# Download pip wheel from Nvidia
$ wget https://nvidia.box.com/shared/static/jy7nqva7l88mq9i8bw3g3sklzf4kccn2.whl -O onnxruntime_gpu-1.6.0-cp36-cp36m-linux_aarch64.whl
# Install pip wheel
$ pip3 install onnxruntime_gpu-1.6.0-cp36-cp36m-linux_aarch64.whl
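A quick way to verify the install, sketched below under the assumption that both wheels installed cleanly, is to print the versions and check that the CUDA execution provider is available:

# verify_install.py - minimal sketch to confirm the installed versions
# and that the GPU (CUDA) execution provider was built into the wheel.
import onnx
import onnxruntime as ort

print("onnx version:", onnx.__version__)          # expected: 1.8.x
print("onnxruntime version:", ort.__version__)    # expected: 1.6.0
print("available providers:", ort.get_available_providers())
# 'CUDAExecutionProvider' should show up in this list for GPU support.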
After installing ONNX Runtime I was able to run my object detection model with ONNX and hardware acceleration on my Jetson Nano.
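For completeness, here is a minimal inference sketch. The model path, input shape, and dummy data are placeholders, not the exact detection model from this post.

# run_model.py - minimal sketch: load an ONNX model and run one dummy
# inference. With the GPU build of ONNX Runtime 1.6, the CUDA execution
# provider is registered by default, which get_providers() confirms.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")         # placeholder path
print("active providers:", session.get_providers())  # expect CUDAExecutionProvider first

input_meta = session.get_inputs()[0]
print("input:", input_meta.name, input_meta.shape)

# Dummy image-shaped input just to exercise the GPU path; replace with
# real, preprocessed image data (and the model's actual input shape)
# to get meaningful detections.
dummy = np.random.rand(1, 3, 320, 320).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print("output shapes:", [o.shape for o in outputs])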