Here is an example of how to deploy and run inference with the Faster R-CNN model from MMDetection, starting from scratch.
Please run the following commands in an Anaconda environment to install MMDetection.
conda create -n openmmlab python=3.7 -y
conda activate openmmlab
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y

# install mmcv
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html

# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
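As a quick sanity check (optional, not part of the original steps), you can confirm that MMCV and MMDetection are importable and print their versions from Python:

# optional sanity check: confirm mmcv and mmdet are importable
import mmcv
import mmdet

print(mmcv.__version__)   # expect 1.4.0
print(mmdet.__version__)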
Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.
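If you prefer to script the download, here is a minimal sketch that fetches the checkpoint used later in this tutorial. The URL follows the standard MMDetection model-zoo layout and is an assumption on my part, so verify it against the model zoo before relying on it:

# a sketch, assuming the standard MMDetection model-zoo URL for this checkpoint;
# run it from {MMDET_ROOT}
import urllib.request
from pathlib import Path

url = ("https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/"
       "faster_rcnn_r50_fpn_1x_coco/"
       "faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth")
dest = Path("checkpoints")
dest.mkdir(exist_ok=True)
urllib.request.urlretrieve(url, dest / url.rsplit("/", 1)[-1])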
Please run the following commands in the Anaconda environment to install MMDeploy.
conda activate openmmlab
git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
pip install -e .  # install MMDeploy
Once we have installed MMDeploy, we need to select an inference engine for model inference. Here we take ONNX Runtime as an example. Run the following command to install the ONNX Runtime Python package:
pip install onnxruntime==1.8.1
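To verify that the package is installed correctly, you can print its version and the available execution providers (a quick check, not part of the original steps):

# verify the onnxruntime installation
import onnxruntime as ort

print(ort.__version__)                 # expect 1.8.1
print(ort.get_available_providers())   # e.g. ['CPUExecutionProvider']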
Then download the ONNX Runtime library and build the MMDeploy plugin for ONNX Runtime:
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH  # these two export lines can also go into ~/.bashrc

cd ${MMDEPLOY_DIR}  # go to the MMDeploy root directory
mkdir -p build && cd build

# build the ONNX Runtime custom ops
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)
# build the MMDeploy SDK
cmake -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=mmdet ..
make -j$(nproc) && make install

# a concrete example of the SDK build, with OpenCV and spdlog installed via apt-get
cmake -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
      -Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=mmdet ..

# ${MMDEPLOY_DIR}, ${MMDET_DIR} and ${ONNXRUNTIME_DIR} can all be set in ~/.bashrc;
# run `source ~/.bashrc` afterwards to make them take effect
After building, run the environment check script to confirm that everything is in place:

python ${MMDEPLOY_DIR}/tools/check_env.py
Once we have installed MMDetection, MMDeploy and ONNX Runtime, and built the plugin for ONNX Runtime, we can convert Faster R-CNN to a .onnx model file that ONNX Runtime can consume. Run the following commands to use the deploy tool:
# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}.
# If you do not know where to find the paths, just type `pip show mmdeploy` and `pip show mmdet` in your console.
# Note: ${MMDEPLOY_DIR} and ${MMDET_DIR} have already been written into ~/.bashrc.
#
# --work-dir : directory where the converted model is saved
# --show     : display both the backend inference result and the original PyTorch result
# --dump-info: dump the extra files needed by the SDK
python ${MMDEPLOY_DIR}/tools/deploy.py \
    ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
    ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    ${MMDET_DIR}/demo/demo.jpg \
    --work-dir work_dirs \
    --device cpu \
    --show \
    --dump-info

After the conversion, you can also run inference on the converted model through the Python API. Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you:

from mmdeploy.apis import inference_model

result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
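For a concrete call, the sketch below fills in the arguments with the paths used in this tutorial; the relative paths assume you run it from ${MMDEPLOY_DIR} with MMDetection checked out alongside, so adjust them to your layout:

# a sketch with concrete arguments; paths are assumptions based on the steps above
from mmdeploy.apis import inference_model

deploy_cfg = 'configs/mmdet/detection/detection_onnxruntime_dynamic.py'          # under ${MMDEPLOY_DIR}
model_cfg = '../mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'  # i.e. ${MMDET_DIR}/...
backend_files = ['work_dirs/end2end.onnx']                                        # produced by deploy.py
img = '../mmdetection/demo/demo.jpg'

result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device='cpu')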
If the script runs successfully, two images will be displayed on the screen one by one: the first is the inference result of ONNX Runtime, and the second is the result of PyTorch. At the same time, an ONNX model file end2end.onnx and three JSON files (SDK config files) will be generated in the work directory work_dirs.
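To confirm that the exported file is a valid ONNX graph, you can load it and list its inputs and outputs. The names shown in the comments are what I would expect from this export config, not guaranteed values:

# inspect the exported model; run from the directory containing work_dirs
import onnx

model = onnx.load('work_dirs/end2end.onnx')
print([i.name for i in model.graph.input])   # e.g. ['input']
print([o.name for o in model.graph.output])  # e.g. ['dets', 'labels']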
After model conversion, the SDK model is saved in the directory ${work_dir}. Here is a recipe for building and running the object detection demo.
cd build/install/example

# path to the ONNX Runtime libraries
export LD_LIBRARY_PATH=/path/to/onnxruntime/lib
# example:
# export LD_LIBRARY_PATH=/home/zranguai/Deploy/Backend/ONNXRuntime/onnxruntime-linux-x64-1.8.1/lib

mkdir -p build && cd build
cmake -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
make object_detection
# example:
# cmake -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
#       -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..

# suppress verbose logs
export SPDLOG_LEVEL=warn

# run the object detection example
./object_detection cpu ${work_dirs} ${path/to/an/image}
# example:
# ./object_detection cpu ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg
If the demo runs successfully, an image named "output_detection.png" will be generated in the current directory, showing the detected objects.