How to convert models
This tutorial briefly introduces how to export an OpenMMLab model to a specific backend using MMDeploy tools. Notes:
Supported backends are ONNXRuntime, TensorRT, ncnn, PPLNN, OpenVINO.
Supported codebases are MMClassification, MMDetection, MMSegmentation, MMOCR, MMEditing.
How to convert models from PyTorch to other backends
Prerequisites
Install and build your target backend. You can refer to ONNXRuntime-install, TensorRT-install, ncnn-install, PPLNN-install, OpenVINO-install for more information.
Install and build your target codebase. You can refer to MMClassification-install, MMDetection-install, MMSegmentation-install, MMOCR-install, MMEditing-install.
Usage
python ./tools/deploy.py \
${DEPLOY_CFG_PATH} \
${MODEL_CFG_PATH} \
${MODEL_CHECKPOINT_PATH} \
${INPUT_IMG} \
--test-img ${TEST_IMG} \
--work-dir ${WORK_DIR} \
--calib-dataset-cfg ${CALIB_DATA_CFG} \
--device ${DEVICE} \
--log-level INFO \
--show \
--dump-info
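For scripting, the command line above can be assembled programmatically. A minimal Python sketch; the `build_convert_cmd` helper and all paths are illustrative, not part of MMDeploy:

```python
import shlex


def build_convert_cmd(deploy_cfg, model_cfg, checkpoint, img,
                      work_dir="work_dir", device="cpu", log_level="INFO",
                      show=False, dump_info=False):
    """Assemble the tools/deploy.py invocation from the arguments above."""
    cmd = ["python", "./tools/deploy.py",
           deploy_cfg, model_cfg, checkpoint, img,
           "--work-dir", work_dir,
           "--device", device,
           "--log-level", log_level]
    if show:
        cmd.append("--show")
    if dump_info:
        cmd.append("--dump-info")
    return cmd


# Placeholder paths, for illustration only.
cmd = build_convert_cmd(
    "configs/mmdet/detection/detection_onnxruntime_dynamic.py",
    "path/to/model_cfg.py",
    "path/to/checkpoint.pth",
    "demo.jpg",
    device="cuda:0",
    dump_info=True,
)
print(shlex.join(cmd))
```

The list form is suitable for passing directly to `subprocess.run(cmd)`.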
Description of all arguments
deploy_cfg
: The deployment configuration of MMDeploy for the model, including the type of inference framework, whether to quantize, whether the input shape is dynamic, etc. There may be reference relationships between configuration files; mmdeploy/mmcls/classification_ncnn_static.py is an example.

model_cfg
: The model configuration of the algorithm codebase, e.g. mmclassification/configs/vision_transformer/vit-base-p32_ft-64xb64_in1k-384.py, regardless of the path to MMDeploy.

checkpoint
: The path to the torch checkpoint. It can start with http/https; see the implementation of mmcv.FileClient for details.

img
: The path to the image or point cloud file used for testing during model conversion.

--test-img
: The path of the image file used to test the model. If not specified, it will be set to None.

--work-dir
: The path of the working directory used to save logs and models.

--calib-dataset-cfg
: Only valid in int8 mode. The config used for calibration. If not specified, it will be set to None and the "val" dataset in the model config will be used for calibration.

--device
: The device used for model conversion. If not specified, it will be set to cpu. For TensorRT, use the cuda:0 format.

--log-level
: The log level, chosen from 'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'. If not specified, it will be set to INFO.

--show
: Whether to show detection outputs.

--dump-info
: Whether to output information for the SDK.
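Since checkpoint may start with http/https, a caller can tell remote checkpoints from local ones before handing them to the tool. A hypothetical sketch; `is_remote_checkpoint` is not an MMDeploy API, and the actual fetching is done by mmcv.FileClient inside MMDeploy:

```python
from urllib.parse import urlparse


def is_remote_checkpoint(path: str) -> bool:
    """Return True when the checkpoint should be fetched over HTTP(S)
    rather than read from the local filesystem."""
    return urlparse(path).scheme in ("http", "https")


# example.com URL is a placeholder, not a real checkpoint location
remote = is_remote_checkpoint("https://example.com/checkpoints/yolov3.pth")
local = is_remote_checkpoint("checkpoints/yolov3.pth")
print(remote, local)  # True False
```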
How to find the corresponding deployment config of a PyTorch model
1. Find the model's codebase folder in configs/. For example, to convert a YOLOv3 model, you need to find the configs/mmdet folder.
2. Find the model's task folder in configs/codebase_folder/. For the YOLOv3 model, that is the configs/mmdet/detection folder.
3. Find the deployment config file in configs/codebase_folder/task_folder/. To deploy the YOLOv3 model, you can use configs/mmdet/detection/detection_onnxruntime_dynamic.py.
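The deployment config filenames follow a task_backend_shape-policy.py pattern. A hypothetical helper illustrating that naming convention; it is not an MMDeploy API:

```python
def deploy_cfg_path(codebase: str, task: str, backend: str, shape: str) -> str:
    """Compose a deployment config path following the
    configs/<codebase>/<task>/<task>_<backend>_<shape>.py convention."""
    return f"configs/{codebase}/{task}/{task}_{backend}_{shape}.py"


cfg = deploy_cfg_path("mmdet", "detection", "onnxruntime", "dynamic")
print(cfg)  # configs/mmdet/detection/detection_onnxruntime_dynamic.py
```

The same pattern covers static and range-shaped TensorRT configs, e.g. shape="dynamic-320x320-1344x1344" as in the example below.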
Example
python ./tools/deploy.py \
configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
$PATH_TO_MMDET/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py \
$PATH_TO_MMDET/checkpoints/yolo/yolov3_d53_mstrain-608_273e_coco.pth \
$PATH_TO_MMDET/demo/demo.jpg \
--work-dir work_dir \
--show \
--device cuda:0
How to evaluate the exported models
You can try to evaluate the exported model by referring to how_to_evaluate_a_model.
List of supported models exportable to other backends
Refer to the Support model list.