
Build for Jetson

In this chapter, we introduce how to install MMDeploy on NVIDIA Jetson platforms, which we have verified on the following modules:

  • Jetson Nano

  • Jetson TX2

  • Jetson AGX Xavier

Hardware recommendation: Seeed reComputer built with Jetson Nano module

Prerequisites

  • JetPack SDK must be installed on the Jetson device.

  • The Model Converter of MMDeploy requires an environment with PyTorch for converting PyTorch models to ONNX models.

  • Regarding the toolchain, CMake and GCC have to be upgraded to version 3.14 and 7.0 or above, respectively; you can check the installed versions with the commands below.
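
You can check the versions currently installed on the device:

cmake --version
gcc --version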

JetPack SDK

JetPack SDK provides a full development environment for hardware-accelerated AI-at-the-edge development. All Jetson modules and developer kits are supported by JetPack SDK.

There are two major installation methods:

  1. SD Card Image Method

  2. NVIDIA SDK Manager Method

You can find a very detailed installation guide on the NVIDIA official website.

Note

Please select the option to install “Jetson SDK Components” when using NVIDIA SDK Manager, as this includes CUDA and TensorRT, which are needed for this guide.

Here we have chosen JetPack 4.6.1 as our best practice for setting up Jetson platforms. MMDeploy has been tested on JetPack 4.6 (rev. 3) and above with TensorRT 8.0.1.6 and above. Earlier JetPack versions have incompatibilities with TensorRT 7.x.
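
If you are unsure which release is flashed on your device, you can usually check the L4T/JetPack version as below (this file is present on standard L4T images, but treat the path as an assumption for your setup):

cat /etc/nv_tegra_release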

Conda

Install Archiconda instead of Anaconda, because the latter does not provide a distribution built for Jetson's aarch64 architecture.

wget https://github.com/Archiconda/build-tools/releases/download/0.2.3/Archiconda3-0.2.3-Linux-aarch64.sh
bash Archiconda3-0.2.3-Linux-aarch64.sh -b

echo -e '\n# set environment variable for conda' >> ~/.bashrc
echo ". ~/archiconda3/etc/profile.d/conda.sh" >> ~/.bashrc
echo 'export PATH=$PATH:~/archiconda3/bin' >> ~/.bashrc

echo -e '\n# set environment variable for pip' >> ~/.bashrc
echo 'export OPENBLAS_CORETYPE=ARMV8' >> ~/.bashrc

source ~/.bashrc
conda --version

After the installation, create a conda environment and activate it.

# get the version of python3 installed by default
export PYTHON_VERSION=`python3 --version | cut -d' ' -f 2 | cut -d'.' -f1,2`
conda create -y -n mmdeploy python=${PYTHON_VERSION}
conda activate mmdeploy
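
You can confirm that the new environment picked up the expected interpreter:

python --version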

Note

JetPack SDK 4+ provides Python 3.6. We strongly recommend using the default Python. Trying to upgrade it will probably ruin the JetPack environment.

If a higher version of Python is necessary, you can install JetPack 5+, which ships with Python 3.8.

PyTorch

Download the PyTorch wheel for Jetson from here and save it to the /home/username directory. Build torchvision from source as there is no prebuilt torchvision for Jetson platforms.

Take torch 1.10.0 and torchvision 0.11.1 as an example. You can install them as below:

# pytorch
wget https://nvidia.box.com/shared/static/fjtbno0vpo676a25cgvuqc1wty0fkkg6.whl -O torch-1.10.0-cp36-cp36m-linux_aarch64.whl
pip3 install torch-1.10.0-cp36-cp36m-linux_aarch64.whl

# torchvision
sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev libopenblas-base libopenmpi-dev  libopenblas-dev -y
git clone --branch v0.11.1 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.11.1
pip install -e .

Note

It takes about 30 minutes to install torchvision on a Jetson Nano. So, please be patient until the installation is complete.

If you install other versions of PyTorch and torchvision, make sure the versions are compatible. Refer to the compatibility chart listed here.
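
After installing both packages, a quick sanity check run inside the mmdeploy environment can confirm the installed versions and that CUDA is visible to PyTorch:

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"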

CMake

We use CMake v3.23.1, the latest release as of April 2022.

# purge existing
sudo apt-get purge cmake -y

# install prebuilt binary
export CMAKE_VER=3.23.1
export ARCH=aarch64
wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VER}/cmake-${CMAKE_VER}-linux-${ARCH}.sh
chmod +x cmake-${CMAKE_VER}-linux-${ARCH}.sh
sudo ./cmake-${CMAKE_VER}-linux-${ARCH}.sh --prefix=/usr --skip-license
cmake --version

Install Dependencies

On Jetson platforms, the Model Converter of MMDeploy depends on MMCV and the TensorRT inference engine, while the MMDeploy C/C++ Inference SDK relies on TensorRT as well as spdlog, OpenCV, ppl.cv and so on. Thus, in the following sections, we first describe how to prepare TensorRT, and then present how to install the dependencies of the Model Converter and the C/C++ Inference SDK respectively.

Prepare TensorRT

TensorRT is already packed into the JetPack SDK. But in order to import it successfully in the conda environment, we need to copy the tensorrt package into the conda environment created before.

cp -r /usr/lib/python${PYTHON_VERSION}/dist-packages/tensorrt* ~/archiconda3/envs/mmdeploy/lib/python${PYTHON_VERSION}/site-packages/
conda deactivate
conda activate mmdeploy
python -c "import tensorrt; print(tensorrt.__version__)" # Will print the version of TensorRT

# set environment variable for building mmdeploy later on
export TENSORRT_DIR=/usr/include/aarch64-linux-gnu

# append cuda path and libraries to PATH and LD_LIBRARY_PATH, which is also used for building mmdeploy later on.
# this is not needed if you use NVIDIA SDK Manager with "Jetson SDK Components" for installing JetPack.
# this is only needed if you install JetPack using SD Card Image Method.
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64

You can also make the above environment variables permanent by adding them to ~/.bashrc.

echo -e '\n# set environment variable for TensorRT' >> ~/.bashrc
echo 'export TENSORRT_DIR=/usr/include/aarch64-linux-gnu' >> ~/.bashrc

# this is not needed if you use NVIDIA SDK Manager with "Jetson SDK Components" for installing JetPack.
# this is only needed if you install JetPack using SD Card Image Method.
echo -e '\n# set environment variable for CUDA' >> ~/.bashrc
echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64' >> ~/.bashrc

source ~/.bashrc
conda activate mmdeploy

Install Dependencies for Model Converter

Install MMCV

MMCV does not provide prebuilt packages for Jetson platforms, so we have to build it from source.

sudo apt-get install -y libssl-dev
git clone --branch 2.x https://github.com/open-mmlab/mmcv.git
cd mmcv
MMCV_WITH_OPS=1 pip install -e .

Note

It takes about 1 hour 40 minutes to install MMCV on a Jetson Nano. So, please be patient until the installation is complete.
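
Once the build finishes, you can verify that MMCV imports correctly:

python -c "import mmcv; print(mmcv.__version__)"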

Install ONNX

Don’t install the latest ONNX. The recommended ONNX version is 1.10.0.

# Execute one of the following commands
pip install onnx==1.10.0
conda install -c conda-forge onnx

If the installation fails with the following error:

CMake Error at CMakeLists.txt:299 (message):
      Protobuf compiler not found

Please install dependencies:

sudo apt-get install protobuf-compiler libprotoc-dev

Install ONNX Runtime [Optional]

Go to Jetson_Zoo#ONNX_Runtime to find the right version of ONNX Runtime, then download and install the package.

For example:

# Download pip wheel from location mentioned above
$ wget https://nvidia.box.com/shared/static/jy7nqva7l88mq9i8bw3g3sklzf4kccn2.whl -O onnxruntime_gpu-1.10.0-cp36-cp36m-linux_aarch64.whl

# Install pip wheel
$ pip3 install onnxruntime_gpu-1.10.0-cp36-cp36m-linux_aarch64.whl
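
You can then verify the installation and check which device ONNX Runtime will use:

python3 -c "import onnxruntime; print(onnxruntime.__version__, onnxruntime.get_device())"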

Install h5py and pycuda

Model Converter employs HDF5 to save the calibration data for TensorRT INT8 quantization and needs pycuda to copy device memory.

sudo apt-get install -y pkg-config libhdf5-100 libhdf5-dev
pip install versioned-hdf5 pycuda

Note

It takes about 6 minutes to install versioned-hdf5 on a Jetson Nano. So, please be patient until the installation is complete.
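
A quick import check confirms that both packages are usable from the mmdeploy environment:

python -c "import h5py, pycuda.driver; print(h5py.__version__)"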

Install Dependencies for C/C++ Inference SDK

Install spdlog

spdlog is a very fast, header-only/compiled C++ logging library.

sudo apt-get install -y libspdlog-dev

Install ppl.cv

ppl.cv is a high-performance image processing library from OpenPPL.

git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
export PPLCV_DIR=$(pwd)
echo -e '\n# set environment variable for ppl.cv' >> ~/.bashrc
echo "export PPLCV_DIR=$(pwd)" >> ~/.bashrc
./build.sh cuda

Note

It takes about 15 minutes to install ppl.cv on a Jetson Nano. So, please be patient until the installation is complete.
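
After the build, you can confirm that the CMake package files expected by the MMDeploy SDK build (passed later via -Dpplcv_DIR) are in place:

ls ${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl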

Install MMDeploy

git clone -b main --recursive https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
export MMDEPLOY_DIR=$(pwd)

Install Model Converter

Since some operators adopted by OpenMMLab codebases are not supported by TensorRT, we build custom TensorRT plugins, such as roi_align, scatternd, etc., to fill the gap. You can find the full list of custom plugins from here.

# build TensorRT custom operators
mkdir -p build && cd build
cmake .. -DMMDEPLOY_TARGET_BACKENDS="trt"
make -j$(nproc) && make install

# install model converter
cd ${MMDEPLOY_DIR}
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without re-installation.

Note

It takes about 5 minutes to install model converter on a Jetson Nano. So, please be patient until the installation is complete.
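
You can verify the installation by importing the package from the mmdeploy environment:

python -c "import mmdeploy; print(mmdeploy.__version__)"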

Install C/C++ Inference SDK

Build the SDK libraries and demos as below:

mkdir -p build && cd build
cmake .. \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
    -DMMDEPLOY_BUILD_EXAMPLES=ON \
    -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
    -DMMDEPLOY_TARGET_BACKENDS="trt" \
    -DMMDEPLOY_CODEBASES=all \
    -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl
make -j$(nproc) && make install

Note

It takes about 9 minutes to build SDK libraries on a Jetson Nano. So, please be patient until the installation is complete.

Run a Demo

Object Detection demo

Before running this demo, you need to convert model files so that they can be used with this SDK.

  1. Install MMDetection, which is needed for model conversion.

MMDetection is an open source object detection toolbox based on PyTorch

git clone -b 3.x https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
  2. Follow this document on how to convert model files.

For this example, we have used retinanet_r18_fpn_1x_coco.py as the model config and this file as the corresponding checkpoint file. For the deploy config, we have used detection_tensorrt_dynamic-320x320-1344x1344.py.

python ./tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    $PATH_TO_MMDET/configs/retinanet/retinanet_r18_fpn_1x_coco.py \
    retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth \
    $PATH_TO_MMDET/demo/demo.jpg \
    --work-dir work_dir \
    --show \
    --device cuda:0 \
    --dump-info
  3. Finally, run inference on an image:

./object_detection cuda ${directory/to/the/converted/models} ${path/to/an/image}
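
For instance, assuming the converted model from the previous step was written to work_dir under ${MMDEPLOY_DIR} and the demo binaries were generated in build/bin (both are assumptions based on the default layout of the commands above), the call might look like:

cd ${MMDEPLOY_DIR}/build/bin
./object_detection cuda ${MMDEPLOY_DIR}/work_dir $PATH_TO_MMDET/demo/demo.jpg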

The above inference was done on a Seeed reComputer built with a Jetson Nano module.

Troubleshooting

Installation

  • pip install throws an error like Illegal instruction (core dumped)

    Check whether you are using any conda mirror; if so, try this:

    rm .condarc
    conda clean -i
    conda create -n xxx python=${PYTHON_VERSION}
    

Runtime

  • #assertion/root/workspace/mmdeploy/csrc/backend_ops/tensorrt/batched_nms/trt_batched_nms.cpp,98 or pre_top_k need to be reduced for devices with arch 7.2

    1. Set MAX N mode and perform sudo nvpmodel -m 0 && sudo jetson_clocks.

    2. Reduce pre_top_k in the deploy config file, as mmdet pre_top_k does, e.g., to 1000.

    3. Convert the model again and try SDK demo again.

FAQ

  • Error: cannot import name 'ProcessGroup' from 'torch.distributed'.
