How to get partitioned ONNX models

MMDeploy supports exporting PyTorch models to partitioned onnx models. With this feature, users can define their own partition policy and get partitioned onnx models with ease. In this tutorial, we will briefly introduce how to partition a model step by step. As an example, we will break the YOLOV3 model into two parts and extract the first part, without the post-processing (such as anchor generation and NMS), as an onnx model.

Step 1: Mark inputs/outputs

To support model partition, we need to add Mark nodes in the ONNX model. This can be done with mmdeploy's @mark decorator. Note that for a mark to take effect, the marking operation must be included in a rewriting function.

First, we mark the model input, which can be done by marking the input tensor img in the forward method of the BaseDetector class, the parent class of all detector classes. We name this marking point detector_forward and mark its input as input. Since a detector such as Mask R-CNN can have three outputs, the outputs are marked as dets, labels, and masks. The following code shows the idea of adding mark functions and calling them in the rewrite. For the source code, you could refer to mmdeploy/codebase/mmdet/models/detectors/

from mmdeploy.core import FUNCTION_REWRITER, mark


@mark(
    'detector_forward', inputs=['input'], outputs=['dets', 'labels', 'masks'])
def __forward_impl(self, img, img_metas=None, **kwargs):
    ...


@FUNCTION_REWRITER.register_rewriter(
    'mmdet.models.detectors.base.BaseDetector.forward')
def base_detector__forward(self, img, img_metas=None, **kwargs):
    # call the mark function
    return __forward_impl(...)
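To build intuition for what @mark does, here is a toy stand-in (explicitly not mmdeploy's implementation, which inserts real Mark ops during ONNX tracing): it wraps a function, records which tensors are designated as partition inputs/outputs, and leaves the function's behavior unchanged.

```python
import functools


def toy_mark(name, inputs=(), outputs=()):
    """Toy stand-in for mmdeploy's @mark (illustration only).

    The real decorator inserts Mark ops into the traced ONNX graph;
    this sketch merely records the boundary metadata on the wrapper
    and leaves the wrapped function's behavior unchanged.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.mark_info = dict(
            name=name, inputs=list(inputs), outputs=list(outputs))
        return wrapper
    return decorator


@toy_mark('detector_forward', inputs=['input'],
          outputs=['dets', 'labels', 'masks'])
def forward(img):
    return img  # behavior unchanged; only metadata is attached


print(forward.mark_info['name'])  # detector_forward
```

The partition tool later looks up these named boundaries, which is why the mark names (detector_forward, yolo_head) reappear in the partition config below.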

Then, we have to mark the output feature of YOLOV3Head, which is the input argument pred_maps of the get_bboxes method of the YOLOV3Head class. We could add an internal function that only marks pred_maps inside the yolov3_head__get_bboxes function, as follows.

from mmdeploy.core import FUNCTION_REWRITER, mark


@FUNCTION_REWRITER.register_rewriter(
    func_name='mmdet.models.dense_heads.YOLOV3Head.get_bboxes')
def yolov3_head__get_bboxes(self, pred_maps, **kwargs):
    # mark pred_maps
    @mark('yolo_head', inputs=['pred_maps'])
    def __mark_pred_maps(pred_maps):
        return pred_maps

    pred_maps = __mark_pred_maps(pred_maps)
    ...

Note that pred_maps is a list of Tensor with three elements. Thus, three Mark nodes named pred_maps.0, pred_maps.1, and pred_maps.2 will be added to the onnx model.
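The per-element naming convention can be sketched as follows (a toy illustration; the strings stand in for real feature-map tensors):

```python
# Toy illustration of how a marked list argument yields per-element
# mark names: each element gets the argument name plus its index.
pred_maps = ['feat_level_0', 'feat_level_1', 'feat_level_2']  # stand-ins
mark_names = [f'pred_maps.{i}' for i in range(len(pred_maps))]
print(mark_names)  # ['pred_maps.0', 'pred_maps.1', 'pred_maps.2']
```

These are the same names used later as output_names in the partition config.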

Step 2: Add partition config

After marking the necessary nodes that will be used to split the model, we can add a deployment config file under configs/mmdet/detection/ If you are not familiar with how to write a config, you could check the related documentation.

In the config file, we need to add partition_config. The key part is partition_cfg, a list of dicts, each of which designates the start nodes and end nodes of one model segment. Since we only want to keep YOLOV3 without post-processing, we can set start to ['detector_forward:input'] and end to ['yolo_head:input']. Note that start and end can contain multiple marks.

_base_ = ['./']

onnx_config = dict(input_shape=[608, 608])
partition_config = dict(
    type='yolov3_partition',  # the partition policy name
    apply_marks=True,  # should always be set to True
    partition_cfg=[
        dict(
            save_file='yolov3.onnx',  # filename to save the partitioned onnx model
            start=['detector_forward:input'],  # [mark_name:input/output, ...]
            end=['yolo_head:input'],  # [mark_name:input/output, ...]
            output_names=[f'pred_maps.{i}' for i in range(3)])  # output names
    ])
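As a quick sanity check on the mark_name:input/output syntax, a small hypothetical helper (illustration only, not a mmdeploy API) could validate such a config before export:

```python
# Hypothetical validation helper (illustration only, not part of mmdeploy):
# checks that apply_marks is set, that each segment names a save file, and
# that every start/end entry follows the 'mark_name:input|output' pattern.
def validate_partition_config(partition_config):
    assert partition_config.get('apply_marks') is True
    for seg in partition_config['partition_cfg']:
        assert seg.get('save_file'), 'each segment needs a save_file'
        for spec in list(seg['start']) + list(seg['end']):
            mark_name, io = spec.split(':')
            assert mark_name and io in ('input', 'output'), spec
    return True


partition_config = dict(
    type='yolov3_partition',
    apply_marks=True,
    partition_cfg=[
        dict(
            save_file='yolov3.onnx',
            start=['detector_forward:input'],
            end=['yolo_head:input'],
            output_names=[f'pred_maps.{i}' for i in range(3)])
    ])

print(validate_partition_config(partition_config))  # True
```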

Step 3: Get partitioned onnx models

Once the nodes are marked and the deployment config has partition_config set properly, we can use the torch2onnx tool to export the model to onnx and get the partitioned onnx files.

python tools/ \
configs/mmdet/detection/ \
../mmdetection/configs/yolo/ \ \
../mmdetection/demo/demo.jpg \
--work-dir ./work-dirs/mmdet/yolov3/ort/partition

After running the script above, we will have the partitioned onnx file yolov3.onnx under the work-dir. You can use the visualization tool netron to check the model structure.

With the partitioned onnx file, you could proceed to subsequent conversion procedures such as mmdeploy_onnx2ncnn and onnx2tensorrt.
