
MMDeployment

Installation

  • Build backend ops

• Update the submodules

      git submodule update --init
      
• Build with ONNX Runtime support

      mkdir build
      cd build
      cmake -DBUILD_ONNXRUNTIME_OPS=ON -DONNXRUNTIME_DIR=${PATH_TO_ONNXRUNTIME} ..
      make -j10
      
• Build with TensorRT support

      mkdir build
      cd build
      cmake -DBUILD_TENSORRT_OPS=ON -DTENSORRT_DIR=${PATH_TO_TENSORRT} ..
      make -j10
      
    • Build with ncnn support

      mkdir build
      cd build
      cmake -DBUILD_NCNN_OPS=ON -DNCNN_DIR=${PATH_TO_NCNN} ..
      make -j10
      
• Alternatively, combine multiple flags in a single cmake invocation to build several backend ops at once.

• Set up the project

    python setup.py develop
    
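A combined build with multiple flags could look like the following sketch. It simply merges the per-backend cmake options shown above; `${PATH_TO_ONNXRUNTIME}` and `${PATH_TO_TENSORRT}` are the same placeholder paths used in the individual examples.

```shell
mkdir build
cd build
# Build ONNX Runtime and TensorRT ops in one pass by passing both flag sets.
cmake -DBUILD_ONNXRUNTIME_OPS=ON -DONNXRUNTIME_DIR=${PATH_TO_ONNXRUNTIME} \
      -DBUILD_TENSORRT_OPS=ON -DTENSORRT_DIR=${PATH_TO_TENSORRT} ..
make -j10
```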

Usage

python ./tools/deploy.py \
    ${DEPLOY_CFG_PATH} \
    ${MODEL_CFG_PATH} \
    ${MODEL_CHECKPOINT_PATH} \
    ${INPUT_IMG} \
    --work-dir ${WORK_DIR} \
    --device ${DEVICE} \
    --log-level INFO
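Filled in, an invocation could look like the sketch below. All file names here are illustrative placeholders, not files shipped with the repository; substitute your own deploy config, model config, checkpoint, and test image.

```shell
# Hypothetical end-to-end run (file names are illustrative only):
python ./tools/deploy.py \
    deploy_cfg.py \
    model_cfg.py \
    checkpoint.pth \
    demo.jpg \
    --work-dir work_dir \
    --device cuda:0 \
    --log-level INFO
```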