# Build for Windows
- [Build for Windows](#build-for-windows)
  - [Build From Source](#build-from-source)
    - [Install Toolchains](#install-toolchains)
    - [Install Dependencies](#install-dependencies)
      - [Install Dependencies for Model Converter](#install-dependencies-for-model-converter)
      - [Install Dependencies for SDK](#install-dependencies-for-sdk)
      - [Install Inference Engines for MMDeploy](#install-inference-engines-for-mmdeploy)
    - [Build MMDeploy](#build-mmdeploy)
      - [Build Options Spec](#build-options-spec)
      - [Build Model Converter](#build-model-converter)
        - [Build Custom Ops](#build-custom-ops)
        - [Install Model Converter](#install-model-converter)
      - [Build SDK](#build-sdk)
      - [Build Demo](#build-demo)
    - [Note](#note)
---

Currently, MMDeploy provides only the build-from-source method for the Windows platform. A prebuilt package will be released in the future.
## Build From Source

All the commands listed in the following sections are verified on **Windows 10**.
### Install Toolchains

1. Download and install [Visual Studio 2019](https://visualstudio.microsoft.com)
2. Add the path of `cmake` to the environment variable `PATH`, e.g., "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin" (see the sketch after this list)
3. Install the CUDA Toolkit if an NVIDIA GPU is available. You can refer to the official [guide](https://developer.nvidia.com/cuda-downloads).
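A minimal PowerShell sketch of step 2, assuming the default Visual Studio 2019 Community install location (adjust the path if you chose another edition or directory):

```powershell
# Make the CMake bundled with Visual Studio 2019 visible in the current session
$env:path = "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;" + $env:path

# Verify that cmake can now be resolved
cmake --version
```

Note that this only affects the current session; if you prefer, add the path permanently through the system environment variable settings.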
### Install Dependencies

#### Install Dependencies for Model Converter
<table class="docutils">
<thead>
<tr>
  <th>NAME </th>
  <th>INSTALLATION </th>
</tr>
</thead>
<tbody>
<tr>
  <td>conda </td>
  <td>Please install conda according to the official <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html">guide</a>. <br>
  After installation, open <code>anaconda powershell prompt</code> under the Start Menu <b>as the administrator</b>, because: <br>
  1. <b>All the commands listed in the following text are verified in anaconda powershell.</b><br>
  2. <b>As an administrator, you can install the third-party libraries to the system path so as to simplify the MMDeploy build commands.</b><br>
  Note: if you are familiar with how cmake works, you can also use <code>anaconda powershell prompt</code> as an ordinary user.
  </td>
</tr>
<tr>
  <td>PyTorch <br>(>=1.8.0) </td>
  <td>
  Install PyTorch>=1.8.0 by following the <a href="https://pytorch.org/">official instructions</a>. Make sure the CUDA version required by PyTorch matches the CUDA version installed on your host.
<pre><code>
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
</code></pre>
  </td>
</tr>
<tr>
  <td>mmcv-full </td>
  <td>Install mmcv-full as follows. Refer to the <a href="https://github.com/open-mmlab/mmcv#installation">guide</a> for details.
<pre><code>
$env:cu_version="cu111"
$env:torch_version="torch1.8.0"
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/$env:cu_version/$env:torch_version/index.html
</code></pre>
  </td>
</tr>
</tbody>
</table>
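You can verify the model converter dependencies with a quick import check (a minimal sketch; the printed versions depend on what you installed above):

```powershell
# Confirm that PyTorch and mmcv-full are importable in the current environment
python -c "import torch; print('torch', torch.__version__, 'cuda available:', torch.cuda.is_available())"
python -c "import mmcv; print('mmcv-full', mmcv.__version__)"
```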
#### Install Dependencies for SDK

You can skip this section if you are only interested in the model converter.
<table class="docutils">
<thead>
<tr>
  <th>NAME </th>
  <th>INSTALLATION </th>
</tr>
</thead>
<tbody>
<tr>
  <td>OpenCV<br>(>=3.0) </td>
  <td>
  1. Find and download the OpenCV 3+ package for Windows from <a href="https://github.com/opencv/opencv/releases">here</a>.<br>
  2. You can either install the prebuilt package to a target directory or build OpenCV from source. <br>
  3. Find where <code>OpenCVConfig.cmake</code> is located in the installation directory, and export its path to the environment variable <code>PATH</code> like this:
  <pre><code>$env:path = "\the\path\where\OpenCVConfig.cmake\locates;" + "$env:path"</code></pre>
  </td>
</tr>
<tr>
  <td>pplcv </td>
  <td>A high-performance image processing library of openPPL.<br>
  <b>It is optional and only needed when the <code>cuda</code> platform is required.
  Currently, MMDeploy supports ppl.cv v0.6.2, and it has to be downloaded with <code>git clone</code>.</b><br>
<pre><code>
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
git checkout tags/v0.6.2 -b v0.6.2
$env:PPLCV_DIR = "$pwd"
mkdir pplcv-build
cd pplcv-build
cmake .. -G "Visual Studio 16 2019" -T v142 -A x64 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=install -DHPCC_USE_CUDA=ON -DHPCC_MSVC_MD=ON
cmake --build . --config Release -- /m
cmake --install . --config Release
cd ../..
</code></pre>
  </td>
</tr>
</tbody>
</table>
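As a quick check of the SDK dependencies (a minimal sketch; the actual values depend on where you installed OpenCV and built ppl.cv):

```powershell
# PATH should contain the directory that holds OpenCVConfig.cmake
$env:path -split ';' | Select-String -Pattern "OpenCV"

# PPLCV_DIR was exported in the ppl.cv build steps above (cuda platform only)
echo "PPLCV_DIR = $env:PPLCV_DIR"
```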
#### Install Inference Engines for MMDeploy

Both MMDeploy's model converter and SDK share the same inference engines.
You can select the inference engines you are interested in and install them by following the commands below.

**Currently, MMDeploy has only verified ONNXRuntime and TensorRT for the Windows platform.**
Support for the rest will be added in the future.
<table class="docutils">
<thead>
<tr>
  <th>NAME</th>
  <th>PACKAGE</th>
  <th>INSTALLATION </th>
</tr>
</thead>
<tbody>
<tr>
  <td>ONNXRuntime</td>
  <td>onnxruntime<br>(>=1.8.1) </td>
  <td>
  1. Install the python package
  <pre><code>pip install onnxruntime==1.8.1</code></pre>
  2. Download the Windows prebuilt binary package from <a href="https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1">here</a>. Extract it and export the environment variables as below:
<pre><code>
Invoke-WebRequest -Uri https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-win-x64-1.8.1.zip -OutFile onnxruntime-win-x64-1.8.1.zip
Expand-Archive onnxruntime-win-x64-1.8.1.zip .
$env:ONNXRUNTIME_DIR = "$pwd\onnxruntime-win-x64-1.8.1"
$env:path = "$env:ONNXRUNTIME_DIR\lib;" + $env:path
</code></pre>
  </td>
</tr>
<tr>
  <td rowspan="2">TensorRT<br> </td>
  <td>TensorRT <br> </td>
  <td>
  1. Log in to <a href="https://www.nvidia.com/">NVIDIA</a> and download the TensorRT zip file that matches the CPU architecture and CUDA version you are using from <a href="https://developer.nvidia.com/nvidia-tensorrt-download">here</a>. Follow the <a href="https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar">guide</a> to install TensorRT. <br>
  2. Here is an example of installing TensorRT 8.2 GA Update 2 for Windows x86_64 with CUDA 11.x that you can refer to. <br> First of all, click <a href="https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.2.3.0/zip/TensorRT-8.2.3.0.Windows10.x86_64.cuda-11.4.cudnn8.2.zip">here</a> to download the CUDA 11.x TensorRT 8.2.3.0 package, then install it and the other dependencies as below:
<pre><code>
cd \the\path\of\tensorrt\zip\file
Expand-Archive TensorRT-8.2.3.0.Windows10.x86_64.cuda-11.4.cudnn8.2.zip .
$env:TENSORRT_DIR = "$pwd\TensorRT-8.2.3.0"
$env:path = "$env:TENSORRT_DIR\lib;" + $env:path
pip install $env:TENSORRT_DIR\python\tensorrt-8.2.3.0-cp37-none-win_amd64.whl
pip install pycuda
</code></pre>
  </td>
</tr>
<tr>
  <td>cuDNN </td>
  <td>
  1. Download the cuDNN package that matches the CPU architecture, CUDA version and TensorRT version you are using from the <a href="https://developer.nvidia.com/rdp/cudnn-archive">cuDNN Archive</a>. <br>
  The above TensorRT installation example requires cuDNN 8.2, so you can download <a href="https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.1.32/11.3_06072021/cudnn-11.3-windows-x64-v8.2.1.32.zip">CUDA 11.x cuDNN 8.2</a>.<br>
  2. Extract the zip file and set the environment variables:
<pre><code>
cd \the\path\of\cudnn\zip\file
Expand-Archive cudnn-11.3-windows-x64-v8.2.1.32.zip .
$env:CUDNN_DIR="$pwd\cuda"
$env:path = "$env:CUDNN_DIR\bin;" + $env:path
</code></pre>
  </td>
</tr>
<tr>
  <td>PPL.NN</td>
  <td>ppl.nn </td>
  <td>TODO </td>
</tr>
<tr>
  <td>OpenVINO</td>
  <td>openvino </td>
  <td>TODO </td>
</tr>
<tr>
  <td>ncnn </td>
  <td>ncnn </td>
  <td>TODO </td>
</tr>
</tbody>
</table>
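Before moving on, it can help to confirm that the environment variables referenced by the later cmake commands are set and that the Python packages import correctly (a minimal sanity-check sketch; skip the lines for engines you did not install):

```powershell
# Paths consumed by the cmake options in the next sections
echo "ONNXRUNTIME_DIR = $env:ONNXRUNTIME_DIR"
echo "TENSORRT_DIR    = $env:TENSORRT_DIR"
echo "CUDNN_DIR       = $env:CUDNN_DIR"

# Python packages used by the model converter
python -c "import onnxruntime; print('onnxruntime', onnxruntime.__version__)"
python -c "import tensorrt; print('tensorrt', tensorrt.__version__)"
```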
### Build MMDeploy

```powershell
cd \the\root\path\of\MMDeploy
$env:MMDEPLOY_DIR="$pwd"
```
#### Build Options Spec

<table class="docutils">
<thead>
<tr>
  <th>NAME</th>
  <th>VALUE</th>
  <th>DEFAULT</th>
  <th>REMARK</th>
</tr>
</thead>
<tbody>
<tr>
  <td>MMDEPLOY_BUILD_SDK</td>
  <td>{ON, OFF}</td>
  <td>OFF</td>
  <td>Switch to build MMDeploy SDK</td>
</tr>
<tr>
  <td>MMDEPLOY_BUILD_SDK_PYTHON_API</td>
  <td>{ON, OFF}</td>
  <td>OFF</td>
  <td>Switch to build MMDeploy SDK Python package</td>
</tr>
<tr>
  <td>MMDEPLOY_BUILD_TEST</td>
  <td>{ON, OFF}</td>
  <td>OFF</td>
  <td>Switch to build MMDeploy SDK unittest cases</td>
</tr>
<tr>
  <td>MMDEPLOY_TARGET_DEVICES</td>
  <td>{"cpu", "cuda"}</td>
  <td>cpu</td>
  <td>Enable target devices. You can enable more than one by passing a semicolon separated list of device names to the <code>MMDEPLOY_TARGET_DEVICES</code> variable, e.g. <code>-DMMDEPLOY_TARGET_DEVICES="cpu;cuda"</code> </td>
</tr>
<tr>
  <td>MMDEPLOY_TARGET_BACKENDS</td>
  <td>{"trt", "ort", "pplnn", "ncnn", "openvino"}</td>
  <td>N/A</td>
  <td>Enable inference engines. <b>By default, no target inference engine is set, since it highly depends on the use case.</b> When more than one engine is specified, pass a semicolon separated list of inference backend names, e.g. <pre><code>-DMMDEPLOY_TARGET_BACKENDS="trt;ort;pplnn;ncnn;openvino"</code></pre>
  After specifying an inference engine, its package path has to be passed to cmake as follows: <br>
  1. <b>trt</b>: TensorRT. <code>TENSORRT_DIR</code> and <code>CUDNN_DIR</code> are needed.
<pre><code>
-DTENSORRT_DIR=$env:TENSORRT_DIR
-DCUDNN_DIR=$env:CUDNN_DIR
</code></pre>
  2. <b>ort</b>: ONNXRuntime. <code>ONNXRUNTIME_DIR</code> is needed.
  <pre><code>-DONNXRUNTIME_DIR=$env:ONNXRUNTIME_DIR</code></pre>
  3. <b>pplnn</b>: PPL.NN. <code>pplnn_DIR</code> is needed. MMDeploy hasn't verified it yet.
  4. <b>ncnn</b>: ncnn. <code>ncnn_DIR</code> is needed. MMDeploy hasn't verified it yet.
  5. <b>openvino</b>: OpenVINO. <code>InferenceEngine_DIR</code> is needed. MMDeploy hasn't verified it yet.
  </td>
</tr>
<tr>
  <td>MMDEPLOY_CODEBASES</td>
  <td>{"mmcls", "mmdet", "mmseg", "mmedit", "mmocr", "all"}</td>
  <td>all</td>
  <td>Enable codebases' postprocess modules. You can provide a semicolon separated list of codebase names to enable them, or pass <code>all</code> to enable them all, i.e., <code>-DMMDEPLOY_CODEBASES=all</code></td>
</tr>
<tr>
  <td>MMDEPLOY_SHARED_LIBS</td>
  <td>{ON, OFF}</td>
  <td>ON</td>
  <td>Switch to build MMDeploy SDK as shared libraries or static libraries</td>
</tr>
</tbody>
</table>
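To tie the options together, here is an illustrative configure command (a sketch only, assuming the SDK is built with the ONNXRuntime backend on CPU; the verified step-by-step commands follow in the Build Model Converter and Build SDK sections):

```powershell
cd $env:MMDEPLOY_DIR
mkdir build -ErrorAction SilentlyContinue
cd build
# Illustrative only: SDK + ONNXRuntime backend on CPU. Swap the backend/device
# flags and directories (e.g. "trt", "cuda", TENSORRT_DIR, CUDNN_DIR) as needed.
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_TARGET_DEVICES="cpu" -DMMDEPLOY_TARGET_BACKENDS="ort" -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
cmake --build . --config Release -- /m
```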
#### Build Model Converter

##### Build Custom Ops

If one of the inference engines among ONNXRuntime, TensorRT and ncnn is selected, you have to build the corresponding custom ops.

- **ONNXRuntime** Custom Ops

```powershell
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="ort" -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
cmake --build . --config Release -- /m
```
- **TensorRT** Custom Ops

```powershell
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="trt" -DTENSORRT_DIR="$env:TENSORRT_DIR" -DCUDNN_DIR="$env:CUDNN_DIR"
cmake --build . --config Release -- /m
* fix docstring
* Added mask overlay to output image, changed fprintf info messages to … (#55)
* Added mask overlay to output image, changed fprintf info messages to stdout
* Improved box filtering (filter area/score), make sure roi coordinates stay within bounds
* clang-format
* Support UNet in mmseg (#77)
* Repeatdataset in train has no CLASSES & PALETTE
* update result for unet
* update docstring for mmdet
* remove ppl for unet in docs
* fix ort wrap about input type (#81)
* Fix memleak (#86)
* delete []
* fix build error when enble MMDEPLOY_ACTIVE_LEVEL
* fix lint
* [Doc] Nano benchmark and tutorial (#71)
* add cls benchmark
* add nano zh-cn benchmark and en tutorial
* add device row
* add doc path to index.rst
* fix typo
* [Fix] fix missing deploy_core (#80)
* fix missing deploy_core
* mv flag to demo
* target link
* [Docs] Fix links in Chinese doc (#84)
* Fix docs in Chinese link
* Fix links
* Delete symbolic link and add links to html
* delete files
* Fix link
* [Feature] Add docker files (#67)
* add gpu and cpu dockerfile
* fix lint
* fix cpu docker and remove redundant
* use pip instead
* add build arg and readme
* fix grammar
* update readme
* add chinese doc for dockerfile and add docker build to build.md
* grammar
* refine dockerfiles
* add FAQs
* update Dpplcv_DIR for SDK building
* remove mmcls
* add sdk demos
* fix typo and lint
* update FAQs
* [Fix]fix check_env (#101)
* fix check_env
* update
* Replace convert_syncbatchnorm in mmseg (#93)
* replace convert_syncbatchnorm with revert_sync_batchnorm from mmcv
* change logger
* [Doc] Update FAQ for TensorRT (#96)
* update FAQ
* comment
* [Docs]: Update doc for openvino installation (#102)
* fix docs
* fix docs
* fix docs
* fix mmcv version
* fix docs
* rm blank line
* simplify non batch nms (#99)
* [Enhacement] Allow test.py to save evaluation results (#108)
* Add log file
* Delete debug code
* Rename logger
* resolve comments
* [Enhancement] Support mmocr v0.4+ (#115)
* support mmocr v0.4+
* 0.4.0 -> 0.4.1
* fix onnxruntime wrapper for gpu inference (#123)
* fix ncnn wrapper for ort-gpu
* resolve comment
* fix lint
* Fix typo (#132)
* lock mmcls version (#131)
* [Enhancement] upgrade isort in pre-commit config (#141)
* [Enhancement] upgrade isort in pre-commit config by refering to mmflow pr #87
* fix lint
* remove .isort.cfg and put its known_third_party to setup.cfg
* Fix ci for mmocr (#144)
* fix mmocr unittests
* remove useless
* lock mmdet maximum version to 2.20
* pip install -U numpy
* Fix capture_output (#125)
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
* configs for all tasks
* use torchvision roi align
* remote unnecessary code
* fix ut
* fix ut
* export
* det dynamic
* det dynamic
* add ut
* fix ut
* add ut and docs
* fix ut
* skip torchscript ut if no ops available
* add torchscript option to build.md
* update benchmark and resolve comments
* resolve conflicts
* rename configs
* fix mrcnn cuda test
* remove useless
* add version requirements to docs and comments to codes
* enable empty image exporting for torchscript and accelerate ORT inference for MRCNN
* rebase
* update example for torchscript.md
* update FAQs for torchscript.md
* resolve comments
* only use torchvision roi_align for torchscript
* fix ut
* use torchvision roi align when pool model is avg
* resolve comments
Co-authored-by: grimoire <streetyao@live.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
* Update supported mmseg models (#181)
* fix ocrnet cascade decoder
* update mmseg support models
* update mmseg configs
* support emanet and icnet
* set max K of TopK for tensorrt
* update supported models for mmseg in docs
* add test for emamodule
* add configs and update docs
* Update docs
* update benchmark
* [Features]Support mmdet3d (#103)
* add mmdet3d code
* add code
* update code
* [log]This commit finish pointpillar export and evaluate on onnxruntime.The model is sample with nvidia repo model
* add tensorrt config
* fix config
* update
* support for tensorrt
* add config
* fix config`
* fix apis about torch2onnx
* update
* mmdet3d deploy version1.0
* map is ok
* fix code
* version1.0
* fix code
* fix visual
* fix bug
* tensorrt support success
* add docstring
* add docs
* fix docs
* fix comments
* fix comment
* fix comment
* fix openvino wrapper
* add unit test
* fix device about cpu
* fix comment
* fix show_result
* fix lint
* fix requirments
* remove ci about det3d
* fix ut
* add ut data
* support for new version pointpillars
* fix comment
* fix support_list
* fix comments
* fix config name
* [Enhancement] Update pad logic in detection heads (#168)
* pad with register
* fix lint
Co-authored-by: AllentDan <dongchunyu@sensetime.com>
* [Enhancement] Additional arguments support for OpenVINO Model Optimizer (#178)
* Add mo args.
* [Docs]: update docs and argument descriptions (#196)
* bump version to v0.4.0
* update docs and argument descriptions
* revert version change
* fix unnecessary change of config for dynamic exportation (#199)
* fix mmcls get classes (#215)
* fix mmcls get classes
* resolve comment
* resolve comment
* Add ModelOptimizerOptions.
* Fix merge bugs.
* Update mmpose.md (#224)
* [Dostring]add example in apis docstring (#214)
* add example in apis docstring
* add backend example in docstring
* rm blank line
* Fixed get_mo_options_from_cfg args
* fix l2norm test
Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: Haofan Wang <frankmiracle@outlook.com>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
* [Enhancement] Switch to statically typed Value::Any (#209)
* replace std::any with StaticAny
* fix __compare_typeid
* remove fallback id support
* constraint on traits::TypeId<T>::value
* fix includes
* support for centerpoint
* [Enhancement] TensorRT DCN support (#205)
* add tensorrt dcn support
* fix lint
* add docstring and dcn model support
* add centerpoint ut and docs
* add config and fix input rank
* fix merge error
* fix a bug
* fix comment
* [Doc] update benchmark add supported-model-list (#286)
* update benchmark add supported-model-list
* fix lint
* fix lint
* loc mmocr maximum version
* fix ut
Co-authored-by: maningsheng <mnsheng@yeah.net>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: grimoire <streetyao@live.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
Co-authored-by: AllentDan <dongchunyu@sensetime.com>
Co-authored-by: Haofan Wang <frankmiracle@outlook.com>
Co-authored-by: lzhangzz <lzhang329@gmail.com>
Co-authored-by: maningsheng <mnsheng@yeah.net>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: grimoire <streetyao@live.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
Co-authored-by: AllentDan <dongchunyu@sensetime.com>
Co-authored-by: Haofan Wang <frankmiracle@outlook.com>
Co-authored-by: lzhangzz <lzhang329@gmail.com>
Co-authored-by: Chen Xin <xinchen.tju@gmail.com>
Co-authored-by: chenxin2 <chenxin2@sensetime.com>
2022-04-01 18:14:23 +08:00
|
|
|
```

- **ncnn** Custom Ops

  TODO

##### Install Model Converter

```powershell
cd $env:MMDEPLOY_DIR
pip install -e .
```

**Note**

- Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements.
  To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify the desired extras when calling `pip` (e.g. `pip install -e .[optional]`); see the snippet after this list.
  Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.
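For instance, here is a minimal sketch of an editable install that also pulls in one of the extras listed above (`optional` is used here; any of the other keys can be substituted):

```powershell
cd $env:MMDEPLOY_DIR
# editable install plus the "optional" extra declared by the package
pip install -e .[optional]
```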

#### Build SDK

MMDeploy provides two recipes, as shown below, for building the SDK with ONNXRuntime and TensorRT as the inference engines respectively.
You can also activate other inference engines by adjusting the corresponding build options.

- cpu + ONNXRuntime

  ```PowerShell
  cd $env:MMDEPLOY_DIR
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
      -DMMDEPLOY_BUILD_SDK=ON `
      -DMMDEPLOY_TARGET_DEVICES="cpu" `
      -DMMDEPLOY_TARGET_BACKENDS="ort" `
      -DMMDEPLOY_CODEBASES="all" `
      -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"

  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```

- cuda + TensorRT

  ```PowerShell
  cd $env:MMDEPLOY_DIR
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
      -DMMDEPLOY_BUILD_SDK=ON `
      -DMMDEPLOY_TARGET_DEVICES="cuda" `
      -DMMDEPLOY_TARGET_BACKENDS="trt" `
      -DMMDEPLOY_CODEBASES="all" `
      -Dpplcv_DIR="$env:PPLCV_DIR/pplcv-build/install/lib/cmake/ppl" `
      -DTENSORRT_DIR="$env:TENSORRT_DIR" `
      -DCUDNN_DIR="$env:CUDNN_DIR"

  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```
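If the `cmake` configuration step cannot locate one of the dependencies, first confirm that the environment variables referenced in the recipes above are visible in the current `anaconda powershell prompt` session. A minimal check (print whichever variables your recipe uses):

```PowerShell
# an empty line means the variable is not set for this session
echo $env:ONNXRUNTIME_DIR
echo $env:TENSORRT_DIR
echo $env:CUDNN_DIR
echo $env:PPLCV_DIR
```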

#### Build Demo

```PowerShell
cd $env:MMDEPLOY_DIR\build\install\example
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
    -DMMDeploy_DIR="$env:MMDEPLOY_DIR/build/install/lib/cmake/MMDeploy"

cmake --build . --config Release -- /m

$env:path = "$env:MMDEPLOY_DIR/build/install/bin;" + $env:path
```
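Once the build finishes and `$env:path` has been extended as above, the demo executables under the build's `Release` folder can be run directly. The invocation below is only an illustrative sketch: the `object_detection` demo name, the device string, and the paths are assumptions, so substitute the demo and files you actually have:

```PowerShell
# hypothetical example: demo name, model directory and image path are placeholders
.\Release\object_detection.exe cpu C:\workspace\converted_model C:\workspace\demo.jpg
```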

### Note

1. Release / Debug libraries cannot be mixed. If MMDeploy is built in Release mode, all of its dependent third-party libraries have to be built in Release mode too, and vice versa.
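For example, if you need Debug binaries for development, keep the configuration consistent across every step of the recipes above; this assumes the dependent third-party libraries were also built in Debug mode:

```PowerShell
# configure as before, then build and install everything with the same configuration
cmake --build . --config Debug -- /m
cmake --install . --config Debug
```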