bump version to v1.0.0rc0 (#1469)

* bump version to v1.0.0rc0

* fix typo

* resolve review comments

* revert changes to CMakeLists
pull/1800/head v1.0.0rc0
RunningLeon 2022-11-30 19:10:37 +08:00 committed by GitHub
parent e52d7c42ca
commit ab421f82d2
13 changed files with 75 additions and 75 deletions

View File

@@ -97,7 +97,7 @@ make -j$(nproc) install
<tr>
<td>OpenJDK </td>
<td>It is necessary for building Java API.</br>
-See <a href='https://github.com/open-mmlab/mmdeploy/blob/1.x/csrc/mmdeploy/apis/java/README.md'> Java API build </a> for building tutorials.
+See <a href='https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/csrc/mmdeploy/apis/java/README.md'> Java API build </a> for building tutorials.
</td>
</tr>
</tbody>

View File

@@ -26,7 +26,7 @@ Note:
- If it fails when `git clone` via `SSH`, you can try the `HTTPS` protocol like this:
```shell
-git clone -b 1.x https://github.com/open-mmlab/mmdeploy.git --recursive
+git clone -b dev-1.x https://github.com/open-mmlab/mmdeploy.git --recursive
```
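If SSH access is blocked outright, a global URL rewrite is another option (a minimal sketch using a standard git feature; it is not part of this changeset):

```shell
# make git fetch every github.com remote, including submodules, over HTTPS
git config --global url."https://github.com/".insteadOf "git@github.com:"
```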
## Build

View File

@@ -223,7 +223,7 @@ It takes about 15 minutes to install ppl.cv on a Jetson Nano. So, please be patient
## Install MMDeploy
```shell
-git clone -b 1.x --recursive https://github.com/open-mmlab/mmdeploy.git
+git clone -b dev-1.x --recursive https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
export MMDEPLOY_DIR=$(pwd)
```
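A quick way to confirm the clone ended up on the renamed branch with submodules initialized (a minimal sketch, standard git commands only):

```shell
git rev-parse --abbrev-ref HEAD   # expect: dev-1.x
git submodule status              # entries prefixed with '-' are uninitialized
```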

View File

@@ -21,7 +21,7 @@
______________________________________________________________________
-This tutorial takes `mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
+This tutorial takes `mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
The directory structure of the prebuilt package is as follows: the `dist` folder contains the model converter, and the `sdk` folder covers model inference.
@@ -47,7 +47,7 @@ In order to use the prebuilt package, you need to install some third-party dependencies
2. Clone the mmdeploy repository
```bash
-git clone -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone -b dev-1.x https://github.com/open-mmlab/mmdeploy.git
```
:point_right: The main purpose here is to use the configs, so there is no need to compile `mmdeploy`.
@@ -80,9 +80,9 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_python` (SDK Python API).
```bash
-# download mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1.zip
-pip install .\mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.10.0-py38-none-win_amd64.whl
-pip install .\mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.10.0-cp38-none-win_amd64.whl
+# download mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1.zip
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-1.0.0rc0-py38-none-win_amd64.whl
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-1.0.0rc0-cp38-none-win_amd64.whl
```
:point_right: If you have installed it before, please uninstall it first.
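The uninstall the note asks for is the usual pip flow (a hedged sketch; package names as installed above):

```shell
pip uninstall -y mmdeploy mmdeploy_python
```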
@@ -107,9 +107,9 @@ In order to use `TensorRT` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_python` (SDK Python API).
```bash
-# download mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip
-pip install .\mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.10.0-py38-none-win_amd64.whl
-pip install .\mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.10.0-cp38-none-win_amd64.whl
+# download mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-1.0.0rc0-py38-none-win_amd64.whl
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-1.0.0rc0-cp38-none-win_amd64.whl
```
:point_right: If you have installed it before, please uninstall it first.
@@ -138,7 +138,7 @@ After preparation work, the structure of the current working directory should be
```
..
-|-- mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1
+|-- mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -186,7 +186,7 @@ After installation of mmdeploy-tensorrt prebuilt package, the structure of the current working directory
```
..
-|-- mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+|-- mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -249,8 +249,8 @@ The structure of current working directory
```
.
-|-- mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
-|-- mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1
+|-- mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+|-- mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
|-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -311,7 +311,7 @@ The following describes how to use the SDK's C API for inference
1. Build examples
-Under the `mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory
+Under the `mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\example` directory
```
// Path should be modified according to the actual location
@@ -319,7 +319,7 @@ The following describes how to use the SDK's C API for inference
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
--DMMDeploy_DIR=C:\workspace\mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
+-DMMDeploy_DIR=C:\workspace\mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
-DONNXRUNTIME_DIR=C:\Deps\onnxruntime\onnxruntime-win-gpu-x64-1.8.1
cmake --build . --config Release
@@ -329,7 +329,7 @@ The following describes how to use the SDK's C API for inference
:point_right: The purpose is to make the exe find the relevant dll
-If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to `PATH`.
+If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to `PATH`.
If you choose to copy the dynamic libraries instead, copy the dlls from the bin directory to the directory of the newly built exe (build/Release).
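For the copy option, something like the following should do (a hedged sketch; run from the directory containing the unpacked package, with the example build located as in this tutorial):

```shell
copy mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\bin\*.dll ^
  mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\example\build\Release\
```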
@@ -337,7 +337,7 @@ The following describes how to use the SDK's C API for inference
It is recommended to use `CMD` here.
-Under the `mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release` directory
+Under the `mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release` directory
```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmclassification\demo\demo.JPEG
@@ -347,7 +347,7 @@ The following describes how to use the SDK's C API for inference
1. Build examples
-Under the `mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example` directory
+Under the `mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example` directory
```
// Path should be modified according to the actual location
@@ -355,7 +355,7 @@ The following describes how to use the SDK's C API for inference
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
--DMMDeploy_DIR=C:\workspace\mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
+-DMMDeploy_DIR=C:\workspace\mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
-DTENSORRT_DIR=C:\Deps\tensorrt\TensorRT-8.2.3.0 `
-DCUDNN_DIR=C:\Deps\cudnn\8.2.1
cmake --build . --config Release
@@ -365,7 +365,7 @@ The following describes how to use the SDK's C API for inference
:point_right: The purpose is to make the exe find the relevant dll
-If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to `PATH`.
+If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to `PATH`.
If you choose to copy the dynamic libraries instead, copy the dlls from the bin directory to the directory of the newly built exe (build/Release).
@@ -373,7 +373,7 @@ The following describes how to use the SDK's C API for inference
It is recommended to use `CMD` here.
-Under the `mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release` directory
+Under the `mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release` directory
```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmclassification\demo\demo.JPEG

View File

@@ -27,7 +27,7 @@ There are several methods to install mmdeploy, among which you can choose an appropriate one
**Method I:** Install precompiled package
-> **TODO**. MMDeploy hasn't been released based on the 1.x branch.
+> **TODO**. MMDeploy hasn't been released based on the dev-1.x branch.
**Method II:** Build using scripts
@@ -35,7 +35,7 @@ If your target platform is **Ubuntu 18.04 or later version**, we encourage you to
[scripts](../01-how-to-build/build_from_script.md). For example, the following commands install mmdeploy as well as inference engine - `ONNX Runtime`.
```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b dev-1.x https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH
@@ -48,7 +48,7 @@ If neither **I** nor **II** meets your requirements, [building mmdeploy from sou
## Convert model
-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/1.x/tools/deploy.py) to convert mmdet models to the specified backend models. Its detailed usage can be learned from [here](../02-how-to-run/convert_model.md).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/tools/deploy.py) to convert mmdet models to the specified backend models. Its detailed usage can be learned from [here](../02-how-to-run/convert_model.md).
The command below shows an example about converting `Faster R-CNN` model to onnx model that can be inferred by ONNX Runtime.
@@ -68,7 +68,7 @@ python tools/deploy.py \
--dump-info
```
-It is crucial to specify the correct deployment config during model conversion. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmdet) of all supported backends for mmdetection, under which the config file path follows the pattern:
+It is crucial to specify the correct deployment config during model conversion. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/dev-1.x/configs/mmdet) of all supported backends for mmdetection, under which the config file path follows the pattern:
```
{task}/{task}_{backend}-{precision}_{static | dynamic}_{shape}.py
```
@@ -90,7 +90,7 @@ It is crucial to specify the correct deployment config during model conversion.
- **{shape}:** input shape or shape range of a model
-Therefore, in the above example, you can also convert `faster r-cnn` to other backend models by changing the deployment config file `detection_onnxruntime_dynamic.py` to [others](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmdet/detection), e.g., converting to tensorrt-fp16 model by `detection_tensorrt-fp16_dynamic-320x320-1344x1344.py`.
+Therefore, in the above example, you can also convert `faster r-cnn` to other backend models by changing the deployment config file `detection_onnxruntime_dynamic.py` to [others](https://github.com/open-mmlab/mmdeploy/tree/dev-1.x/configs/mmdet/detection), e.g., converting to tensorrt-fp16 model by `detection_tensorrt-fp16_dynamic-320x320-1344x1344.py`.
```{tip}
When converting mmdet models to tensorrt models, --device should be set to "cuda"
```
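Putting the naming pattern to work, converting the same model to a TensorRT fp16 engine would look roughly like this (a hedged sketch: the positional-argument order follows the usual `tools/deploy.py` convention, and the model config, checkpoint, and image paths are placeholders):

```shell
python tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt-fp16_dynamic-320x320-1344x1344.py \
    $MODEL_CFG $CHECKPOINT $IMG \
    --work-dir work_dir/trt \
    --device cuda \
    --dump-info
```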

View File

@@ -118,11 +118,11 @@ Take the latest precompiled package as example, you can install it as follows:
```shell
# install MMDeploy
-wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.10.0/mmdeploy-0.10.0-linux-x86_64-onnxruntime1.8.1.tar.gz
-tar -zxvf mmdeploy-0.10.0-linux-x86_64-onnxruntime1.8.1.tar.gz
-cd mmdeploy-0.10.0-linux-x86_64-onnxruntime1.8.1
-pip install dist/mmdeploy-0.10.0-py3-none-linux_x86_64.whl
-pip install sdk/python/mmdeploy_python-0.10.0-cp38-none-linux_x86_64.whl
+wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.0.0rc0/mmdeploy-1.0.0rc0-linux-x86_64-onnxruntime1.8.1.tar.gz
+tar -zxvf mmdeploy-1.0.0rc0-linux-x86_64-onnxruntime1.8.1.tar.gz
+cd mmdeploy-1.0.0rc0-linux-x86_64-onnxruntime1.8.1
+pip install dist/mmdeploy-1.0.0rc0-py3-none-linux_x86_64.whl
+pip install sdk/python/mmdeploy_python-1.0.0rc0-cp38-none-linux_x86_64.whl
cd ..
# install inference engine: ONNX Runtime
pip install onnxruntime==1.8.1
@@ -139,11 +139,11 @@ export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```shell
# install MMDeploy
-wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.10.0/mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
-tar -zxvf mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
-cd mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
-pip install dist/mmdeploy-0.10.0-py3-none-linux_x86_64.whl
-pip install sdk/python/mmdeploy_python-0.10.0-cp38-none-linux_x86_64.whl
+wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.0.0rc0/mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+tar -zxvf mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+cd mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+pip install dist/mmdeploy-1.0.0rc0-py3-none-linux_x86_64.whl
+pip install sdk/python/mmdeploy_python-1.0.0rc0-cp38-none-linux_x86_64.whl
cd ..
# install inference engine: TensorRT
# !!! Download TensorRT-8.2.3.0 CUDA 11.x tar package from NVIDIA, and extract it to the current directory
@@ -232,7 +232,7 @@ result = inference_model(
You can directly run MMDeploy demo programs in the precompiled package to get inference results.
```shell
-cd mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+cd mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
# run python demo
python sdk/example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
# run C/C++ demo
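# --- a minimal verification sketch (assumes the wheels above installed cleanly) ---
python -c "import mmdeploy; print(mmdeploy.__version__)"  # expect 1.0.0rc0
pip show mmdeploy_python | grep ^Version                  # expect Version: 1.0.0rc0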

View File

@@ -98,7 +98,7 @@ make -j$(nproc) install
<tr>
<td>OpenJDK </td>
<td>An OpenJDK development environment must be prepared before building the Java API.</br>
-See <a href='https://github.com/open-mmlab/mmdeploy/blob/1.x/csrc/mmdeploy/apis/java/README.md'> Java API build </a> for build instructions.
+See <a href='https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/csrc/mmdeploy/apis/java/README.md'> Java API build </a> for build instructions.
</td>
</tr>
</tbody>

View File

@@ -27,7 +27,7 @@ git clone -b master git@github.com:open-mmlab/mmdeploy.git --recursive
- If `git clone` over `SSH` fails, you can try downloading the code with the `HTTPS` protocol:
```bash
-git clone -b 1.x https://github.com/open-mmlab/mmdeploy.git MMDeploy
+git clone -b dev-1.x https://github.com/open-mmlab/mmdeploy.git MMDeploy
cd MMDeploy
git submodule update --init --recursive
```

View File

@@ -199,7 +199,7 @@ conda activate mmdeploy
## Install MMDeploy
```shell
-git clone -b 1.x --recursive https://github.com/open-mmlab/mmdeploy.git
+git clone -b dev-1.x --recursive https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
export MMDEPLOY_DIR=$(pwd)
```

View File

@@ -23,7 +23,7 @@ ______________________________________________________________________
Currently, `MMDeploy` provides two kinds of prebuilt packages for the `Windows` platform, `TensorRT` and `ONNX Runtime`, which can be obtained from [Releases](https://github.com/open-mmlab/mmdeploy/releases).
-This tutorial takes `mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
+This tutorial takes `mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
To help users get started quickly, this tutorial takes a classification model (mmclassification) as an example to show how to use both kinds of prebuilt packages.
@@ -55,7 +55,7 @@ ______________________________________________________________________
2. Clone the mmdeploy repository
```bash
-git clone -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone -b dev-1.x https://github.com/open-mmlab/mmdeploy.git
```
:point_right: The main purpose here is to use the config files, so `--recursive` is not added to download submodules, and there is no need to compile `mmdeploy` either.
@@ -88,9 +88,9 @@ ______________________________________________________________________
5. Install the prebuilt packages of `mmdeploy` (model conversion) and `mmdeploy_python` (SDK Python API for model inference).
```bash
-# download mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1.zip first
-pip install .\mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.10.0-py38-none-win_amd64.whl
-pip install .\mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.10.0-cp38-none-win_amd64.whl
+# download mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1.zip first
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-1.0.0rc0-py38-none-win_amd64.whl
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-1.0.0rc0-cp38-none-win_amd64.whl
```
:point_right: If you have installed them before, uninstall them first and then reinstall.
@@ -115,9 +115,9 @@ ______________________________________________________________________
5. Install the prebuilt packages of `mmdeploy` (model conversion) and `mmdeploy_python` (SDK Python API for model inference).
```bash
-# download mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip first
-pip install .\mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.10.0-py38-none-win_amd64.whl
-pip install .\mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.10.0-cp38-none-win_amd64.whl
+# download mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip first
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-1.0.0rc0-py38-none-win_amd64.whl
+pip install .\mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-1.0.0rc0-cp38-none-win_amd64.whl
```
:point_right: If you have installed them before, uninstall them first and then reinstall.
@@ -146,7 +146,7 @@ ______________________________________________________________________
```
..
-|-- mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1
+|-- mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -194,7 +194,7 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device)
```
..
-|-- mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+|-- mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -257,8 +257,8 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device)
```
.
-|-- mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
-|-- mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1
+|-- mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+|-- mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
|-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -327,7 +327,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
1. Build examples
-Under the `mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory
+Under the `mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\example` directory
```
// Modify the paths according to their actual locations
@@ -335,7 +335,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
--DMMDeploy_DIR=C:\workspace\mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
+-DMMDeploy_DIR=C:\workspace\mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
-DONNXRUNTIME_DIR=C:\Deps\onnxruntime\onnxruntime-win-gpu-x64-1.8.1
cmake --build . --config Release
@@ -345,7 +345,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
:point_right: The purpose is to let the exe correctly find the relevant dlls at runtime
-If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to `PATH`, following the same procedure as for onnxruntime.
+If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to `PATH`, following the same procedure as for onnxruntime.
If you choose to copy the dynamic libraries instead, copy the dlls from the bin directory into the directory of the just-built exe (build/Release).
@@ -353,7 +353,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
It is recommended to use `CMD` here, so that a pop-up appears if the exe cannot find the relevant dlls at runtime.
-Under the mmdeploy-0.10.0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release directory
+Under the mmdeploy-1.0.0rc0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release directory
```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmclassification\demo\demo.JPEG
@@ -363,7 +363,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
1. Build examples
-Under the mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example directory
+Under the mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example directory
```
// Modify the paths according to where they are located on your disk
@@ -371,7 +371,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
--DMMDeploy_DIR=C:\workspace\mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
+-DMMDeploy_DIR=C:\workspace\mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
-DTENSORRT_DIR=C:\Deps\tensorrt\TensorRT-8.2.3.0 `
-DCUDNN_DIR=C:\Deps\cudnn\8.2.1
cmake --build . --config Release
@@ -381,7 +381,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
:point_right: The purpose is to let the exe correctly find the relevant dlls at runtime
-If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to `PATH`, following the same procedure as for onnxruntime.
+If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to `PATH`, following the same procedure as for onnxruntime.
If you choose to copy the dynamic libraries instead, copy the dlls from the bin directory into the directory of the just-built exe (build/Release).
@@ -389,7 +389,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet
It is recommended to use `CMD` here, so that a pop-up appears if the exe cannot find the relevant dlls at runtime.
-Under the mmdeploy-0.10.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release directory
+Under the mmdeploy-1.0.0rc0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release directory
```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmclassification\demo\demo.JPEG

View File

@@ -113,11 +113,11 @@ mim install "mmcv>=2.0.0rc2"
```shell
# install the MMDeploy ONNX Runtime custom ops library and the inference SDK
-wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.10.0/mmdeploy-0.10.0-linux-x86_64-onnxruntime1.8.1.tar.gz
-tar -zxvf mmdeploy-0.10.0-linux-x86_64-onnxruntime1.8.1.tar.gz
-cd mmdeploy-0.10.0-linux-x86_64-onnxruntime1.8.1
-pip install dist/mmdeploy-0.10.0-py3-none-linux_x86_64.whl
-pip install sdk/python/mmdeploy_python-0.10.0-cp38-none-linux_x86_64.whl
+wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.0.0rc0/mmdeploy-1.0.0rc0-linux-x86_64-onnxruntime1.8.1.tar.gz
+tar -zxvf mmdeploy-1.0.0rc0-linux-x86_64-onnxruntime1.8.1.tar.gz
+cd mmdeploy-1.0.0rc0-linux-x86_64-onnxruntime1.8.1
+pip install dist/mmdeploy-1.0.0rc0-py3-none-linux_x86_64.whl
+pip install sdk/python/mmdeploy_python-1.0.0rc0-cp38-none-linux_x86_64.whl
cd ..
# install the inference engine: ONNX Runtime
pip install onnxruntime==1.8.1
@@ -134,11 +134,11 @@ export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```shell
# install the MMDeploy TensorRT custom ops library and the inference SDK
-wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.10.0/mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
-tar -zxvf mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
-cd mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
-pip install dist/mmdeploy-0.10.0-py3-none-linux_x86_64.whl
-pip install sdk/python/mmdeploy_python-0.10.0-cp38-none-linux_x86_64.whl
+wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.0.0rc0/mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+tar -zxvf mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+cd mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+pip install dist/mmdeploy-1.0.0rc0-py3-none-linux_x86_64.whl
+pip install sdk/python/mmdeploy_python-1.0.0rc0-cp38-none-linux_x86_64.whl
cd ..
# install the inference engine: TensorRT
# !!! Download the TensorRT-8.2.3.0 CUDA 11.x tar package from the NVIDIA website and extract it to the current directory
@@ -226,7 +226,7 @@ result = inference_model(
You can directly run the demo programs in the precompiled package, feeding them an SDK model and an image, to run inference and view the results.
```shell
-cd mmdeploy-0.10.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+cd mmdeploy-1.0.0rc0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
# run python demo
python sdk/example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
# run C/C++ demo

View File

@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Tuple
-__version__ = '0.10.0'
+__version__ = '1.0.0rc0'
short_version = __version__

View File

@@ -1,2 +1,2 @@
# Copyright (c) OpenMMLab. All rights reserved.
-__version__ = '0.10.0'
+__version__ = '1.0.0rc0'
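One property of the new version string worth noting (a hedged sanity check using the third-party `packaging` library, which PEP 440 tooling relies on): rc releases sort before the final release, so a later upgrade to 1.0.0 behaves as expected.

```shell
python -c "from packaging.version import Version as V; \
assert V('0.10.0') < V('1.0.0rc0') < V('1.0.0'); print('ok')"
```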