Welcome to MMDeploy's documentation!

You can switch between Chinese and English documents in the lower-left corner of the layout.

Get Started
- Get Started
- Introduction
- Prerequisites
- Installation
- Convert Model
- Inference Model
- Evaluate Model

Build
- Build from Source
- Use Docker Image
- Build from Script
- CMake Build Option Spec

Run & Test
- How to convert model
- How to write config
- How to evaluate model
- Quantize model
- Useful Tools

SDK Usage
- SDK Documentation

Benchmark
- Supported models
- Benchmark
- Test on embedded device
- Test on TVM
- Quantization test result

OpenMMLab Codebase Support
- MMPretrain Deployment
- MMDetection Deployment
- MMSegmentation Deployment
- MMagic Deployment
- MMOCR Deployment
- MMPose Deployment
- MMDetection3d Deployment
- MMRotate Deployment
- MMAction2 Deployment

Backend Support
- Supported ncnn feature
- onnxruntime Support
- OpenVINO Support
- PPLNN Support
- SNPE feature support
- TensorRT Support
- TorchScript support
- Supported RKNN feature
- TVM feature support
- Core ML feature support

Custom Ops
- ONNX Runtime Ops
- TensorRT Ops
- ncnn Ops

Developer Guide
- mmdeploy Architecture
- How to support new models
- How to support new backends
- How to add test units for backend ops
- How to test rewritten models
- How to get partitioned ONNX models
- How to do regression test

Experimental feature
- ONNX export Optimizer

Appendix
- Cross compile snpe inference server on Ubuntu 18

FAQ
- Frequently Asked Questions

API Reference
- apis
- apis/tensorrt
- apis/onnxruntime
- apis/ncnn
- apis/pplnn