change docs from 1.x to main

parent: 0196cd0048
commit: 980ed3b5cd

@@ -33,7 +33,7 @@ workflows:
 third_party/.* lint_only false
 tools/.* lint_only false
 setup.py lint_only false
-base-revision: dev-1.x
+base-revision: main
 # this is the path of the configuration we should trigger once
 # path filtering and pipeline parameter value updates are
 # complete. In this case, we are using the parent dynamic

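For anyone mirroring this change in their own pipeline, the updated continuation config can be sanity-checked locally before pushing; a minimal sketch, assuming the CircleCI CLI is installed and the config lives at the conventional path:

```shell
# Validate the config that now points at base-revision: main (path assumed)
circleci config validate .circleci/config.yml
```
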
@@ -2,7 +2,7 @@ blank_issues_enabled: false
 contact_links:
 - name: 💥 FAQ
-url: https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/faq.md
+url: https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/faq.md
 about: Check if your issue already has solutions
 - name: 💬 Forum
 url: https://github.com/open-mmlab/mmdeploy/discussions

@@ -18,10 +18,10 @@
 </div>
 <div> </div>

-[](https://mmdeploy.readthedocs.io/en/1.x/)
+[](https://mmdeploy.readthedocs.io/en/main/)
 [](https://github.com/open-mmlab/mmdeploy/actions)
-[](https://codecov.io/gh/open-mmlab/mmdeploy)
-[](https://github.com/open-mmlab/mmdeploy/tree/1.x/LICENSE)
+[](https://codecov.io/gh/open-mmlab/mmdeploy)
+[](https://github.com/open-mmlab/mmdeploy/tree/main/LICENSE)
 [](https://github.com/open-mmlab/mmdeploy/issues)
 [](https://github.com/open-mmlab/mmdeploy/issues)

@@ -90,7 +90,7 @@ The benchmark can be found from [here](docs/en/03-benchmark/benchmark.md)

 All kinds of modules in the SDK can be extended, such as `Transform` for image processing, `Net` for Neural Network inference, `Module` for postprocessing and so on

-## [Documentation](https://mmdeploy.readthedocs.io/en/1.x/)
+## [Documentation](https://mmdeploy.readthedocs.io/en/main/)

 Please read [getting_started](docs/en/get_started.md) for the basic usage of MMDeploy. We also provide tutorials about:

@@ -19,10 +19,10 @@
 <div> </div>
 </div>

-[](https://mmdeploy.readthedocs.io/zh_CN/1.x/)
+[](https://mmdeploy.readthedocs.io/zh_CN/main/)
 [](https://github.com/open-mmlab/mmdeploy/actions)
-[](https://codecov.io/gh/open-mmlab/mmdeploy)
-[](https://github.com/open-mmlab/mmdeploy/tree/1.x/LICENSE)
+[](https://codecov.io/gh/open-mmlab/mmdeploy)
+[](https://github.com/open-mmlab/mmdeploy/tree/main/LICENSE)
 [](https://github.com/open-mmlab/mmdeploy/issues)
 [](https://github.com/open-mmlab/mmdeploy/issues)

@@ -75,7 +75,7 @@ MMDeploy is the [OpenMMLab](https://openmmlab.com/) model deployment toolbox, **for
 - Net inference
 - Module postprocessing

-## [Chinese documentation](https://mmdeploy.readthedocs.io/zh_CN/1.x/)
+## [Chinese documentation](https://mmdeploy.readthedocs.io/zh_CN/main/)

 - [Get started](docs/zh_cn/get_started.md)
 - [Build](docs/zh_cn/01-how-to-build/build_from_source.md)

@@ -119,7 +119,7 @@ MMDeploy is the [OpenMMLab](https://openmmlab.com/) model deployment toolbox, **for

 ## Benchmark and model zoo

-The benchmark and the list of supported models are available in the [benchmark](https://mmdeploy.readthedocs.io/zh_CN/1.x/03-benchmark/benchmark.html) and the [model list](https://mmdeploy.readthedocs.io/en/1.x/03-benchmark/supported_models.html).
+The benchmark and the list of supported models are available in the [benchmark](https://mmdeploy.readthedocs.io/zh_CN/main/03-benchmark/benchmark.html) and the [model list](https://mmdeploy.readthedocs.io/en/main/03-benchmark/supported_models.html).

 ## Contributing

@@ -176,9 +176,9 @@ MMDeploy is the [OpenMMLab](https://openmmlab.com/) model deployment toolbox, **for
 Scan the QR codes below to follow the OpenMMLab team's [official Zhihu account](https://www.zhihu.com/people/openmmlab), join the [official QQ group](https://jq.qq.com/?_wv=1027&k=MSMAfWOe), or add the WeChat assistant "OpenMMLabwx" to join the official WeChat group.

 <div align="center">
-<img src="https://raw.githubusercontent.com/open-mmlab/mmcv/master/docs/en/_static/zhihu_qrcode.jpg" height="400" />
-<img src="resources/qq_group_qrcode.jpg" height="400" />
-<img src="https://raw.githubusercontent.com/open-mmlab/mmcv/master/docs/en/_static/wechat_qrcode.jpg" height="400" />
+<img src="https://user-images.githubusercontent.com/25839884/205870927-39f4946d-8751-4219-a4c0-740117558fd7.jpg" height="400" />
+<img src="https://user-images.githubusercontent.com/25839884/203904835-62392033-02d4-4c73-a68c-c9e4c1e2b07f.jpg" height="400" />
+<img src="https://user-images.githubusercontent.com/25839884/205872898-e2e6009d-c6bb-4d27-8d07-117e697a3da8.jpg" height="400" />
 </div>

 In the OpenMMLab community, we will

@@ -6,7 +6,7 @@
 "id": "mAWHDEbr6Q2i"
 },
 "source": [
-"[](https://colab.research.google.com/github/open-mmlab/mmdeploy/tree/1.x/demo/tutorials_1.ipynb)\n",
+"[](https://colab.research.google.com/github/open-mmlab/mmdeploy/tree/main/demo/tutorials_1.ipynb)\n",
 "# Preface\n",
 "How to deploy OpenMMLab algorithms has puzzled many community users. The open-sourcing of the model deployment toolbox [MMDeploy](https://zhuanlan.zhihu.com/p/450342651) bridges the \"last mile\" from algorithm model to application!\n",
 "Today we kick off an introductory tutorial series on model deployment. With the help of the open-source deployment library MMDeploy, it will cover the following topics:\n",

@@ -85,9 +85,9 @@ ENV PATH="/root/workspace/ncnn/build/tools/quantize/:${PATH}"
 ### install mmdeploy
 WORKDIR /root/workspace
 ARG VERSION
-RUN git clone -b 1.x https://github.com/open-mmlab/mmdeploy.git &&\
+RUN git clone -b main https://github.com/open-mmlab/mmdeploy.git &&\
 cd mmdeploy &&\
-if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on 1.x" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
+if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on main" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
 git submodule update --init --recursive &&\
 rm -rf build &&\
 mkdir build &&\

@@ -114,4 +114,4 @@ RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
 -DMMDEPLOY_CODEBASES=all &&\
 cmake --build . -- -j$(nproc) && cmake --install . &&\
 export SPDLOG_LEVEL=warn &&\
-if [ -z ${VERSION} ] ; then echo "Built MMDeploy 1.x for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
+if [ -z ${VERSION} ] ; then echo "Built MMDeploy main for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi

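With the clone now tracking `main`, the `ARG VERSION` escape hatch above still lets a release be pinned at image build time; a minimal sketch, where the image tag, Dockerfile path, and version number are assumptions for illustration:

```shell
# Build from the default branch (now main)
docker build -t mmdeploy:cpu -f docker/CPU/Dockerfile .

# Or pin a tagged release via the VERSION build arg (version number hypothetical)
docker build -t mmdeploy:cpu -f docker/CPU/Dockerfile --build-arg VERSION=1.0.0 .
```
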
@@ -65,9 +65,9 @@ RUN cp -r /usr/local/lib/python${PYTHON_VERSION}/dist-packages/tensorrt* /opt/co
 ENV ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}
 ENV TENSORRT_DIR=/workspace/tensorrt
 ARG VERSION
-RUN git clone -b 1.x https://github.com/open-mmlab/mmdeploy &&\
+RUN git clone -b main https://github.com/open-mmlab/mmdeploy &&\
 cd mmdeploy &&\
-if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on 1.x" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
+if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on main" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
 git submodule update --init --recursive &&\
 mkdir -p build &&\
 cd build &&\

@@ -101,6 +101,6 @@ RUN cd /root/workspace/mmdeploy &&\
 -DMMDEPLOY_CODEBASES=all &&\
 make -j$(nproc) && make install &&\
 export SPDLOG_LEVEL=warn &&\
-if [ -z ${VERSION} ] ; then echo "Built MMDeploy dev-1.x for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi
+if [ -z ${VERSION} ] ; then echo "Built MMDeploy for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi

 ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${BACKUP_LD_LIBRARY_PATH}"

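To exercise the GPU image after building, the container needs access to the host GPUs; a minimal sketch, with the image tag assumed:

```shell
# Start an interactive shell in the GPU image (requires the NVIDIA container toolkit)
docker run --gpus all -it mmdeploy:gpu bash
```
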
@@ -97,7 +97,7 @@ make -j$(nproc) install
 <tr>
 <td>OpenJDK </td>
 <td>It is necessary for building Java API.</br>
-See <a href='https://github.com/open-mmlab/mmdeploy/tree/1.x/csrc/mmdeploy/apis/java/README.md'> Java API build </a> for building tutorials.
+See <a href='https://github.com/open-mmlab/mmdeploy/tree/main/csrc/mmdeploy/apis/java/README.md'> Java API build </a> for building tutorials.
 </td>
 </tr>
 </tbody>

@@ -3,7 +3,7 @@
 ## Download

 ```shell
-git clone -b 1.x git@github.com:open-mmlab/mmdeploy.git --recursive
+git clone -b main git@github.com:open-mmlab/mmdeploy.git --recursive
 ```

 Note:

@@ -26,7 +26,7 @@ Note:
 - If it fails when `git clone` via `SSH`, you can try the `HTTPS` protocol like this:

 ```shell
-git clone -b 1.x https://github.com/open-mmlab/mmdeploy.git --recursive
+git clone -b main https://github.com/open-mmlab/mmdeploy.git --recursive
 ```

 ## Build

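If a clone was made without `--recursive`, the submodules can still be fetched afterwards with the same command the Dockerfiles above use:

```shell
cd mmdeploy
# Pull in the third-party submodules that --recursive would have fetched
git submodule update --init --recursive
```
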
@@ -237,7 +237,7 @@ It takes about 15 minutes to install ppl.cv on a Jetson Nano. So, please be pati
 ## Install MMDeploy

 ```shell
-git clone -b 1.x --recursive https://github.com/open-mmlab/mmdeploy.git
+git clone -b main --recursive https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 export MMDEPLOY_DIR=$(pwd)
 ```

@@ -305,7 +305,7 @@ pip install -v -e . # or "python setup.py develop"

 2. Follow [this document](../02-how-to-run/convert_model.md) on how to convert model files.

-For this example, we have used [retinanet_r18_fpn_1x_coco.py](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/retinanet/retinanet_r18_fpn_1x_coco.py) as the model config, and [this file](https://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r18_fpn_1x_coco/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth) as the corresponding checkpoint file. Also for deploy config, we have used [detection_tensorrt_dynamic-320x320-1344x1344.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py)
+For this example, we have used [retinanet_r18_fpn_1x_coco.py](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/retinanet/retinanet_r18_fpn_1x_coco.py) as the model config, and [this file](https://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r18_fpn_1x_coco/retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth) as the corresponding checkpoint file. Also for deploy config, we have used [detection_tensorrt_dynamic-320x320-1344x1344.py](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py)

 ```shell
 python ./tools/deploy.py \

@@ -140,7 +140,7 @@ label: 65, score: 0.95

 - MMDet models.

-YOLOV3 & YOLOX: you may paste the following partition configuration into [detection_rknn_static-320x320.py](https://github.com/open-mmlab/mmdeploy/blob/1.x/configs/mmdet/detection/detection_rknn-int8_static-320x320.py):
+YOLOV3 & YOLOX: you may paste the following partition configuration into [detection_rknn_static-320x320.py](https://github.com/open-mmlab/mmdeploy/blob/main/configs/mmdet/detection/detection_rknn-int8_static-320x320.py):

 ```python
 # yolov3, yolox for rknn-toolkit and rknn-toolkit2

@@ -156,7 +156,7 @@ label: 65, score: 0.95
 ])
 ```

-RTMDet: you may paste the following partition configuration into [detection_rknn-int8_static-640x640.py](https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/configs/mmdet/detection/detection_rknn-int8_static-640x640.py):
+RTMDet: you may paste the following partition configuration into [detection_rknn-int8_static-640x640.py](https://github.com/open-mmlab/mmdeploy/blob/main/configs/mmdet/detection/detection_rknn-int8_static-640x640.py):

 ```python
 # rtmdet for rknn-toolkit and rknn-toolkit2

@@ -172,7 +172,7 @@ label: 65, score: 0.95
 ])
 ```

-RetinaNet & SSD & FSAF with rknn-toolkit2, you may paste the following partition configuration into [detection_rknn_static-320x320.py](https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/configs/mmdet/detection/detection_rknn-int8_static-320x320.py). Users with rknn-toolkit can directly use default config.
+RetinaNet & SSD & FSAF with rknn-toolkit2, you may paste the following partition configuration into [detection_rknn_static-320x320.py](https://github.com/open-mmlab/mmdeploy/blob/main/configs/mmdet/detection/detection_rknn-int8_static-320x320.py). Users with rknn-toolkit can directly use default config.

 ```python
 # retinanet, ssd for rknn-toolkit2

@@ -48,7 +48,7 @@ In order to use the prebuilt package, you need to install some third-party depen
 2. Clone the mmdeploy repository

 ```bash
-git clone -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone -b main https://github.com/open-mmlab/mmdeploy.git
 ```

 :point_right: The main purpose here is to use the configs, so there is no need to compile `mmdeploy`.

@@ -56,7 +56,7 @@ In order to use the prebuilt package, you need to install some third-party depen
 3. Install mmclassification

 ```bash
-git clone -b 1.x https://github.com/open-mmlab/mmclassification.git
+git clone -b main https://github.com/open-mmlab/mmclassification.git
 cd mmclassification
 pip install -e .
 ```

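A quick import check confirms the editable install took effect; a minimal sketch, assuming the package exposes a `__version__` attribute as other OpenMMLab packages do:

```shell
python -c "import mmcls; print(mmcls.__version__)"
```
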
@@ -37,7 +37,7 @@ If your target platform is **Ubuntu 18.04 or later version**, we encourage you t
 [scripts](../01-how-to-build/build_from_script.md). For example, the following commands install mmdeploy as well as inference engine - `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

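After exporting `PYTHONPATH`, it is worth verifying that the freshly built package is the one being imported; a minimal sketch, assuming the package exposes a `__version__` attribute:

```shell
python3 -c "import mmdeploy; print(mmdeploy.__version__)"
```
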
@@ -50,9 +50,9 @@ If neither **I** nor **II** meets your requirements, [building mmdeploy from sou

 ## Convert model

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmaction2 models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmaction2 models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

-When using `tools/deploy.py`, it is crucial to specify the correct deployment config. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmaction) of all supported backends for mmaction2, under which the config file path follows the pattern:
+When using `tools/deploy.py`, it is crucial to specify the correct deployment config. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmaction) of all supported backends for mmaction2, under which the config file path follows the pattern:

 ```
 {task}/{task}_{backend}-{precision}_{static | dynamic}_{shape}.py

@@ -178,7 +178,7 @@ for label_id, score in result:
 print(label_id, score)
 ```

-Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo).
+Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo).

 > MMAction2 only supports the C, C++ and Python APIs for now.

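Tying the convert-model hunks above together, a conversion run picks one of the pattern-named configs and hands it to `tools/deploy.py`; a minimal sketch, where the deploy config name, model config, checkpoint, and test video are placeholders rather than values taken from this commit:

```shell
python tools/deploy.py \
    configs/mmaction/video-recognition/video-recognition_onnxruntime_static.py \
    $MMACTION2_CONFIG \
    $MMACTION2_CHECKPOINT \
    demo_video.mp4 \
    --work-dir mmdeploy_models/mmaction2 \
    --device cpu \
    --dump-info
```
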
@@ -36,7 +36,7 @@ If your target platform is **Ubuntu 18.04 or later version**, we encourage you t
 [scripts](../01-how-to-build/build_from_script.md). For example, the following commands install mmdeploy as well as inference engine - `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -49,9 +49,9 @@ If neither **I** nor **II** meets your requirements, [building mmdeploy from sou

 ## Convert model

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmedit models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmedit models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

-When using `tools/deploy.py`, it is crucial to specify the correct deployment config. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmedit) of all supported backends for mmedit, under which the config file path follows the pattern:
+When using `tools/deploy.py`, it is crucial to specify the correct deployment config. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmedit) of all supported backends for mmedit, under which the config file path follows the pattern:

 ```
 {task}/{task}_{backend}-{precision}_{static | dynamic}_{shape}.py

@@ -91,7 +91,7 @@ python tools/deploy.py \
 --dump-info
 ```

-You can also convert the above model to other backend models by changing the deployment config file `*_onnxruntime_dynamic.py` to [others](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmedit), e.g., converting to tensorrt model by `super-resolution/super-resolution_tensorrt-_dynamic-32x32-512x512.py`.
+You can also convert the above model to other backend models by changing the deployment config file `*_onnxruntime_dynamic.py` to [others](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmedit), e.g., converting to tensorrt model by `super-resolution/super-resolution_tensorrt-_dynamic-32x32-512x512.py`.

 ```{tip}
 When converting mmedit models to tensorrt models, --device should be set to "cuda"

@@ -180,7 +180,7 @@ result = result[..., ::-1]
 cv2.imwrite('output_restorer.bmp', result)
 ```

-Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo).
+Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo).

 ## Supported models

@@ -35,7 +35,7 @@ If your target platform is **Ubuntu 18.04 or later version**, we encourage you t
 [scripts](../01-how-to-build/build_from_script.md). For example, the following commands install mmdeploy as well as inference engine - `ONNX Runtime`.

 ```shell
-git clone --recursive -b dev-1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -53,7 +53,7 @@ If neither **I** nor **II** meets your requirements, [building mmdeploy from sou

 ## Convert model

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/tools/deploy.py) to convert mmrotate models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/main/tools/deploy.py) to convert mmrotate models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/02-how-to-run/convert_model.md#usage).

 The command below shows an example of converting the `rotated-faster-rcnn` model to an onnx model that can be inferred by ONNX Runtime.

@@ -76,7 +76,7 @@ python tools/deploy.py \
 --dump-info
 ```

-It is crucial to specify the correct deployment config during model conversion. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/dev-1.x/configs/mmrotate) of all supported backends for mmrotate. The config filename pattern is:
+It is crucial to specify the correct deployment config during model conversion. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmrotate) of all supported backends for mmrotate. The config filename pattern is:

 ```
 rotated_detection-{backend}-{precision}_{static | dynamic}_{shape}.py

@@ -87,7 +87,7 @@ rotated_detection-{backend}-{precision}_{static | dynamic}_{shape}.py
 - **{static | dynamic}:** static shape or dynamic shape
 - **{shape}:** input shape or shape range of a model

-Therefore, in the above example, you can also convert `rotated-faster-rcnn` to other backend models by changing the deployment config file `rotated-detection_onnxruntime_dynamic` to [others](https://github.com/open-mmlab/mmdeploy/tree/dev-1.x/configs/mmrotate), e.g., converting to tensorrt-fp16 model by `rotated-detection_tensorrt-fp16_dynamic-320x320-1024x1024.py`.
+Therefore, in the above example, you can also convert `rotated-faster-rcnn` to other backend models by changing the deployment config file `rotated-detection_onnxruntime_dynamic` to [others](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmrotate), e.g., converting to tensorrt-fp16 model by `rotated-detection_tensorrt-fp16_dynamic-320x320-1024x1024.py`.

 ```{tip}
 When converting mmrotate models to tensorrt models, --device should be set to "cuda"

@@ -172,7 +172,7 @@ detector = RotatedDetector(model_path='./mmdeploy_models/mmrotate/ort', device_n
 det = detector(img)
 ```

-Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/dev-1.x/demo).
+Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo).

 ## Supported models

@@ -36,7 +36,7 @@ If your target platform is **Ubuntu 18.04 or later version**, we encourage you t
 [scripts](../01-how-to-build/build_from_script.md). For example, the following commands install mmdeploy as well as inference engine - `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -54,7 +54,7 @@ If neither **I** nor **II** meets your requirements, [building mmdeploy from sou

 ## Convert model

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmseg models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmseg models to the specified backend models. Its detailed usage can be learned from [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

 The command below shows an example of converting the `unet` model to an onnx model that can be inferred by ONNX Runtime.

@@ -76,7 +76,7 @@ python tools/deploy.py \
 --dump-info
 ```

-It is crucial to specify the correct deployment config during model conversion. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmseg) of all supported backends for mmsegmentation. The config filename pattern is:
+It is crucial to specify the correct deployment config during model conversion. We've already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmseg) of all supported backends for mmsegmentation. The config filename pattern is:

 ```
 segmentation_{backend}-{precision}_{static | dynamic}_{shape}.py

@@ -87,7 +87,7 @@ segmentation_{backend}-{precision}_{static | dynamic}_{shape}.py
 - **{static | dynamic}:** static shape or dynamic shape
 - **{shape}:** input shape or shape range of a model

-Therefore, in the above example, you can also convert `unet` to other backend models by changing the deployment config file `segmentation_onnxruntime_dynamic.py` to [others](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmseg), e.g., converting to tensorrt-fp16 model by `segmentation_tensorrt-fp16_dynamic-512x1024-2048x2048.py`.
+Therefore, in the above example, you can also convert `unet` to other backend models by changing the deployment config file `segmentation_onnxruntime_dynamic.py` to [others](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmseg), e.g., converting to tensorrt-fp16 model by `segmentation_tensorrt-fp16_dynamic-512x1024-2048x2048.py`.

 ```{tip}
 When converting mmseg models to tensorrt models, --device should be set to "cuda"

@@ -184,7 +184,7 @@ img = img.astype(np.uint8)
 cv2.imwrite('output_segmentation.png', img)
 ```

-Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo).
+Besides python API, mmdeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo).

 ## Supported models

@@ -2,7 +2,7 @@

 Currently, MMDeploy has only been tested with rk3588 and rv1126 on the Linux platform.

-The following features cannot be automatically enabled by mmdeploy and you need to manually modify the configuration in MMDeploy like [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/_base_/backends/rknn.py).
+The following features cannot be automatically enabled by mmdeploy and you need to manually modify the configuration in MMDeploy like [here](https://github.com/open-mmlab/mmdeploy/tree/main/configs/_base_/backends/rknn.py).

 - target_platform other than default
 - quantization settings

@@ -105,7 +105,7 @@ html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
 # documentation.
 #
 html_theme_options = {
-    'logo_url': 'https://mmdeploy.readthedocs.io/en/1.x/',
+    'logo_url': 'https://mmdeploy.readthedocs.io/en/main/',
     'menu': [{
         'name': 'GitHub',
         'url': 'https://github.com/open-mmlab/mmdeploy'

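To see the new `logo_url` take effect, the docs can be rebuilt locally; a minimal sketch, assuming the usual Sphinx layout with a Makefile and a docs requirements file (paths assumed):

```shell
cd docs/en
pip install -r ../../requirements/docs.txt  # doc build dependencies (path assumed)
make html                                   # output lands in _build/html
```
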
@@ -170,7 +170,7 @@ Based on the above settings, we provide an example to convert the Faster R-CNN i

 ```shell
 # clone mmdeploy to get the deployment config. `--recursive` is not necessary
-git clone -b dev-1.x https://github.com/open-mmlab/mmdeploy.git
+git clone -b main https://github.com/open-mmlab/mmdeploy.git

 # clone mmdetection repo. We have to use the config file to build PyTorch nn module
 git clone -b 3.x https://github.com/open-mmlab/mmdetection.git

@@ -269,7 +269,7 @@ for index, bbox, label_id in zip(indices, bboxes, labels):
 cv2.imwrite('output_detection.png', img)
 ```

-You can find more examples from [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo/python).
+You can find more examples from [here](https://github.com/open-mmlab/mmdeploy/tree/main/demo/python).

 #### C++ API

@@ -321,9 +321,9 @@ find_package(MMDeploy REQUIRED)
 target_link_libraries(${name} PRIVATE mmdeploy ${OpenCV_LIBS})
 ```

-For more SDK C++ API usages, please read these [samples](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo/csrc/cpp).
+For more SDK C++ API usages, please read these [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo/csrc/cpp).

-For the rest C, C# and Java API usages, please read [C demos](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo/csrc/c), [C# demos](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo/csharp) and [Java demos](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo/java) respectively.
+For the rest C, C# and Java API usages, please read [C demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/csrc/c), [C# demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/csharp) and [Java demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo/java) respectively.
 We'll talk about them more in our next release.

 #### Accelerate preprocessing(Experimental)

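Given the `find_package(MMDeploy REQUIRED)` line above, an out-of-tree SDK example can be configured by pointing CMake at the installed package; a minimal sketch, with the install prefix path assumed:

```shell
# Configure and build an SDK example against an installed MMDeploy
cmake -S . -B build -DMMDeploy_DIR=/path/to/mmdeploy/install/lib/cmake/MMDeploy
cmake --build build -j $(nproc)
```
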
@@ -1,3 +1,3 @@
-## <a href='https://mmdeploy.readthedocs.io/en/1.x/'>English</a>
+## <a href='https://mmdeploy.readthedocs.io/en/main/'>English</a>

-## <a href='https://mmdeploy.readthedocs.io/zh_CN/1.x/'>简体中文</a>
+## <a href='https://mmdeploy.readthedocs.io/zh_CN/main/'>简体中文</a>

@@ -136,7 +136,7 @@ python tools/deploy.py \

 - RTMDet

-Write the following model partition config into [detection_rknn-int8_static-640x640.py](https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/configs/mmdet/detection/detection_rknn-int8_static-640x640.py)
+Write the following model partition config into [detection_rknn-int8_static-640x640.py](https://github.com/open-mmlab/mmdeploy/blob/main/configs/mmdet/detection/detection_rknn-int8_static-640x640.py)

 ```python
 # rtmdet for rknn-toolkit and rknn-toolkit2

@@ -37,7 +37,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -50,10 +50,10 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmaction2 models to inference backend models in one step.
-For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmaction2 models to inference backend models in one step.
+For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmaction) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmaction) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -181,7 +181,7 @@ for label_id, score in result:
 ```

 Besides the Python API, the mmdeploy SDK also provides interfaces in other languages such as C, C++, C# and Java.
-You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo) to learn how to use the other language interfaces.
+You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo) to learn how to use the other language interfaces.

 > The C# and Java interfaces for mmaction2 are still to be developed.

@@ -35,7 +35,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -48,8 +48,8 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmcls models to inference backend models in one step.
-For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/zh_cn/02-how-to-run/convert_model.md#使用方法).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmcls models to inference backend models in one step.
+For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/zh_cn/02-how-to-run/convert_model.md#使用方法).

 Below, we demonstrate how to convert `resnet18` to an onnx model.

@@ -71,7 +71,7 @@ python tools/deploy.py \
 --dump-info
 ```

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmcls) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmcls) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -173,7 +173,7 @@ for label_id, score in result:
 ```

 Besides the Python API, the mmdeploy SDK also provides interfaces in other languages such as C, C++, C# and Java.
-You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo) to learn how to use the other language interfaces.
+You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo) to learn how to use the other language interfaces.

 ## Supported models

@@ -35,7 +35,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -48,8 +48,8 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/1.x/tools/deploy.py) to convert mmdet models to inference backend models in one step.
-For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/main/tools/deploy.py) to convert mmdet models to inference backend models in one step.
+For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

 Below, we demonstrate how to convert `Faster R-CNN` to an onnx model.

@@ -69,7 +69,7 @@ python tools/deploy.py \
 --dump-info
 ```

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmdet) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmdet) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -188,7 +188,7 @@ cv2.imwrite('output_detection.png', img)
 ```

 Besides the Python API, the mmdeploy SDK also provides interfaces in other languages such as C, C++, C# and Java.
-You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo) to learn how to use the other language interfaces.
+You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo) to learn how to use the other language interfaces.

 ## Supported models

@@ -36,7 +36,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -49,10 +49,10 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmedit models to inference backend models in one step.
-For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/zh_cn/02-how-to-run/convert_model.md#使用方法).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmedit models to inference backend models in one step.
+For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/zh_cn/02-how-to-run/convert_model.md#使用方法).

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmedit) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmedit) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -185,7 +185,7 @@ cv2.imwrite('output_restorer.bmp', result)
 ```

 Besides the Python API, the mmdeploy SDK also provides interfaces in other languages such as C, C++, C# and Java.
-You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo) to learn how to use the other language interfaces.
+You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo) to learn how to use the other language interfaces.

 ## Supported models

@@ -40,7 +40,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -53,10 +53,10 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmocr models to inference backend models in one step.
-For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmocr models to inference backend models in one step.
+For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmocr) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmocr) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -234,7 +234,7 @@ print(texts)
 ```

 Besides the Python API, the mmdeploy SDK also provides interfaces in other languages such as C, C++, C# and Java.
-You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo) to learn how to use the other language interfaces.
+You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo) to learn how to use the other language interfaces.

 ## Supported models

@@ -35,7 +35,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -48,8 +48,8 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmpose models to inference backend models in one step.
-For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmpose models to inference backend models in one step.
+For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

 Below, we demonstrate how to convert `hrnet` to an onnx model.

@@ -68,7 +68,7 @@ python tools/deploy.py \
 --show
 ```

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmpose) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmpose) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -35,7 +35,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b dev-1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -52,7 +52,7 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/dev-1.x/tools/deploy.py) to convert mmrotate models to inference backend models in one step.
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/blob/main/tools/deploy.py) to convert mmrotate models to inference backend models in one step.
 For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/02-how-to-run/convert_model.md#usage).

 Below, we demonstrate how to convert `rotated-faster-rcnn` to an onnx model.

@@ -76,7 +76,7 @@ python tools/deploy.py \
 --dump-info
 ```

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/dev-1.x/configs/mmrotate) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmrotate) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -176,7 +176,7 @@ det = detector(img)
 ```

 Besides the Python API, the mmdeploy SDK also provides interfaces in other languages such as C, C++, C# and Java.
-You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/dev-1.x/demo) to learn how to use the other language interfaces.
+You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo) to learn how to use the other language interfaces.

 ## Supported models

@@ -36,7 +36,7 @@ mmdeploy can be installed in several ways:
 For example, the following commands install mmdeploy together with its inference engine, `ONNX Runtime`.

 ```shell
-git clone --recursive -b 1.x https://github.com/open-mmlab/mmdeploy.git
+git clone --recursive -b main https://github.com/open-mmlab/mmdeploy.git
 cd mmdeploy
 python3 tools/scripts/build_ubuntu_x64_ort.py $(nproc)
 export PYTHONPATH=$(pwd)/build/lib:$PYTHONPATH

@@ -53,8 +53,8 @@ export LD_LIBRARY_PATH=$(pwd)/../mmdeploy-dep/onnxruntime-linux-x64-1.8.1/lib/:$

 ## Model conversion

-You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/1.x/tools/deploy.py) to convert mmseg models to inference backend models in one step.
-For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/02-how-to-run/convert_model.md#usage).
+You can use [tools/deploy.py](https://github.com/open-mmlab/mmdeploy/tree/main/tools/deploy.py) to convert mmseg models to inference backend models in one step.
+For detailed usage of this tool, please refer to [here](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/02-how-to-run/convert_model.md#usage).

 Below, we demonstrate how to convert `unet` to an onnx model.

@@ -76,7 +76,7 @@ python tools/deploy.py \
 --dump-info
 ```

-One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmseg) for all supported backends are built into the project.
+One key to the conversion is using the correct config file. Deployment [config files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmseg) for all supported backends are built into the project.
 The file naming pattern is:

 ```

@@ -188,7 +188,7 @@ cv2.imwrite('output_segmentation.png', img)
 ```

 Besides the Python API, the mmdeploy SDK also provides interfaces in other languages such as C, C++, C# and Java.
-You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo) to learn how to use the other language interfaces.
+You can refer to the [samples](https://github.com/open-mmlab/mmdeploy/tree/main/demo) to learn how to use the other language interfaces.

 ## Supported models

@@ -2,7 +2,7 @@

 Currently, MMDeploy has only been tested with rk3588 and rv1126 on the Linux platform.

-The following features need to be configured manually in MMDeploy, as shown [here](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/_base_/backends/rknn.py).
+The following features need to be configured manually in MMDeploy, as shown [here](https://github.com/open-mmlab/mmdeploy/tree/main/configs/_base_/backends/rknn.py).

 - target_platform != default
 - quantization settings

@@ -165,7 +165,7 @@ export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

 ```shell
 # Clone the mmdeploy repo. Its config files are needed to set up the conversion pipeline; `--recursive` is not required
-git clone -b dev-1.x --recursive https://github.com/open-mmlab/mmdeploy.git
+git clone -b main --recursive https://github.com/open-mmlab/mmdeploy.git

 # Install mmdetection. Its model config files are needed to build the PyTorch nn module
 git clone -b 3.x https://github.com/open-mmlab/mmdetection.git

@@ -44,7 +44,7 @@ python -c "import tensorrt;print(tensorrt.__version__)"

 ### Jetson

-For the Jetson platform, we have a very detailed environment setup tutorial; see the [MMDeploy installation documentation](https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/zh_cn/01-how-to-build/jetsons.md). Note that on Jetson the CUDA and TensorRT versions are tightly coupled to JetPack, so pick the versions that match your hardware. After setting up the environment, check that the TensorRT version is correct with `python -c "import tensorrt;print(tensorrt.__version__)"`.
+For the Jetson platform, we have a very detailed environment setup tutorial; see the [MMDeploy installation documentation](https://github.com/open-mmlab/mmdeploy/tree/main/docs/zh_cn/01-how-to-build/jetsons.md). Note that on Jetson the CUDA and TensorRT versions are tightly coupled to JetPack, so pick the versions that match your hardware. After setting up the environment, check that the TensorRT version is correct with `python -c "import tensorrt;print(tensorrt.__version__)"`.

 ## Build the model