[Enhancement] Switch pip to mim in Docs and Dockerfile (#1591)

* change pip install to mim install in docs and Dockerfile

* merge with latest dev-1.x
Xin Li 2023-01-04 23:11:13 +08:00 committed by GitHub
parent 8a05b8d62d
commit 14550aac38
8 changed files with 26 additions and 21 deletions

View File

@@ -1,5 +1,5 @@
FROM openvino/ubuntu18_dev:2021.4.2
ARG PYTHON_VERSION=3.7
ARG PYTHON_VERSION=3.8
ARG TORCH_VERSION=1.10.0
ARG TORCHVISION_VERSION=0.11.0
ARG ONNXRUNTIME_VERSION=1.8.1
@@ -114,4 +114,4 @@ RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
-DMMDEPLOY_CODEBASES=all &&\
cmake --build . -- -j$(nproc) && cmake --install . &&\
export SPDLOG_LEVEL=warn &&\
if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
if [ -z ${VERSION} ] ; then echo "Built MMDeploy 1.x for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
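As an aside, a minimal sketch of how this CPU image is typically built and tagged; the `docker/CPU/Dockerfile` path, the image tag, and the `VERSION` value are assumptions inferred from the snippet above.

```bash
# build from the root of the mmdeploy checkout (path and tag are assumptions)
docker build -t mmdeploy:cpu -f docker/CPU/Dockerfile .

# optionally pass VERSION so the final echo reports a pinned release (1.0.0 is a placeholder)
docker build -t mmdeploy:cpu -f docker/CPU/Dockerfile --build-arg VERSION=1.0.0 .
```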

View File

@@ -101,6 +101,6 @@ RUN cd /root/workspace/mmdeploy &&\
-DMMDEPLOY_CODEBASES=all &&\
make -j$(nproc) && make install &&\
export SPDLOG_LEVEL=warn &&\
if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi
if [ -z ${VERSION} ] ; then echo "Built MMDeploy dev-1.x for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi
ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${BACKUP_LD_LIBRARY_PATH}"
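Likewise, a hedged sketch for building and running the GPU image; the `docker/GPU/Dockerfile` path and tag are assumptions, and `--gpus all` requires the NVIDIA Container Toolkit on the host.

```bash
# build and run the GPU image (path and tag are assumptions)
docker build -t mmdeploy:gpu -f docker/GPU/Dockerfile .
docker run --gpus all -it --rm mmdeploy:gpu
```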

View File

@@ -75,7 +75,8 @@ conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c c
export cu_version=cu111 # cuda 11.1
export torch_version=torch1.8
pip install -U openmim
mim install "mmcv>=2.0.0rc1"
mim install mmengine
mim install "mmcv>=2.0.0rc2"
</code></pre>
</td>
</tr>
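For readers curious what mim does under the hood here: a rough plain-pip equivalent of the `mmcv` line above, reusing the `cu_version` and `torch_version` exports. The wheel index URL pattern is an assumption based on mmcv's release convention; mim resolves the matching index automatically, which is the point of the switch.

```bash
# rough pip equivalent of `mim install "mmcv>=2.0.0rc2"`; mim normally picks the
# prebuilt wheel index matching your torch/CUDA combination for you
pip install "mmcv>=2.0.0rc2" -f https://download.openmmlab.com/mmcv/dist/${cu_version}/${torch_version}/index.html
```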
@@ -326,7 +327,7 @@ Please check [cmake build option](cmake_option.md).
```bash
cd ${MMDEPLOY_DIR}
pip install -e .
mim install -e .
```
**Note**

View File

@@ -37,7 +37,8 @@ Please refer to [get_started](../get_started.md) to install conda.
# install pytorch & mmcv
conda install pytorch==1.9.0 torchvision==0.10.0 -c pytorch
pip install -U openmim
mim install "mmcv>=2.0.0rc1"
mim install mmengine
mim install "mmcv>=2.0.0rc2"
```
### Install Dependencies for SDK
@@ -146,7 +147,7 @@ conda install grpcio
```bash
cd ${MMDEPLOY_DIR}
pip install -v -e .
mim install -v -e .
```
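A quick way to confirm the editable install above succeeded is an import check; this is only a suggested sanity check (assuming the package exposes `__version__` like its sibling OpenMMLab projects), not part of the documented steps.

```bash
# verify the converter is importable after `mim install -v -e .`
python -c "import mmdeploy; print(mmdeploy.__version__)"
```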
**Note**

View File

@@ -64,6 +64,7 @@ We recommend that users follow our best practices installing MMDeploy.
```shell
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0rc2"
```
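Once mmengine and mmcv are installed through mim, a short sanity check such as the following can confirm which versions were resolved; it is a suggested check, not part of the original guide.

```shell
# list the OpenMMLab packages mim knows about and print the resolved versions
mim list
python -c "import mmengine, mmcv; print(mmengine.__version__, mmcv.__version__)"
```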
@@ -172,12 +173,12 @@ Based on the above settings, we provide an example to convert the Faster R-CNN i
```shell
# clone mmdeploy to get the deployment config. `--recursive` is not necessary
git clone https://github.com/open-mmlab/mmdeploy.git
git clone -b dev-1.x https://github.com/open-mmlab/mmdeploy.git
# clone mmdetection repo. We have to use the config file to build PyTorch nn module
git clone https://github.com/open-mmlab/mmdetection.git
git clone -b 3.x https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -v -e .
mim install -v -e .
cd ..
# download Faster R-CNN checkpoint
@@ -186,7 +187,7 @@ wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/
# run the command to start model conversion
python mmdeploy/tools/deploy.py \
mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
mmdetection/demo/demo.jpg \
--work-dir mmdeploy_model/faster-rcnn \
@@ -201,7 +202,7 @@ For more details about model conversion, you can read [how_to_convert_model](02-
```{tip}
If MMDeploy-ONNXRuntime prebuilt package is installed, you can convert the above model to onnx model and perform ONNX Runtime inference
just by 'changing detection_tensorrt_dynamic-320x320-1344x1344.py' to 'detection_onnxruntime_dynamic.py' and making '--device' as 'cpu'.
just by changing 'detection_tensorrt_dynamic-320x320-1344x1344.py' to 'detection_onnxruntime_dynamic.py' and making '--device' as 'cpu'.
```
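Following the tip above, a hedged sketch of the same conversion retargeted to ONNX Runtime: only the deploy config and the device change, and the `faster-rcnn-ort` work dir is just an illustrative name.

```shell
# same conversion as above, but targeting ONNX Runtime on CPU
python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    mmdetection/demo/demo.jpg \
    --work-dir mmdeploy_model/faster-rcnn-ort \
    --device cpu
```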
## Inference Model

View File

@@ -76,7 +76,8 @@ conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c c
export cu_version=cu111 # cuda 11.1
export torch_version=torch1.8
pip install -U openmim
mim install "mmcv>=2.0.0rc1"
mim install mmengine
mim install "mmcv>=2.0.0rc2"
</code></pre>
</td>
</tr>
@@ -323,7 +324,7 @@ export MMDEPLOY_DIR=$(pwd)
```bash
cd ${MMDEPLOY_DIR}
pip install -e .
mim install -e .
```
**Note**

View File

@@ -40,7 +40,8 @@
# install pytorch & mmcv
conda install pytorch==1.9.0 torchvision==0.10.0 -c pytorch
pip install -U openmim
mim install "mmcv>=2.0.0rc1"
mim install mmengine
mim install "mmcv>=2.0.0rc2"
```
#### Install Dependencies for MMDeploy SDK
@@ -147,7 +148,7 @@ conda install grpcio
```bash
cd ${MMDEPLOY_DIR}
pip install -v -e .
mim install -v -e .
```
**Note**

View File

@@ -167,13 +167,13 @@ export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH
Taking `Faster R-CNN` from [MMDetection](https://github.com/open-mmlab/mmdetection) as an example, we can use the following commands to convert the PyTorch model to a TensorRT model and deploy it on an NVIDIA GPU.
```shell
# clone the mmdeploy repo. The deployment configs in it are needed to build the conversion pipeline
git clone --recursive https://github.com/open-mmlab/mmdeploy.git
# clone the mmdeploy repo. The deployment configs in it are needed to build the conversion pipeline; `--recursive` is not necessary
git clone -b dev-1.x --recursive https://github.com/open-mmlab/mmdeploy.git
# install mmdetection. The model configs in it are needed to build the PyTorch nn module
git clone https://github.com/open-mmlab/mmdetection.git
git clone -b 3.x https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -v -e .
mim install -v -e .
cd ..
# download the Faster R-CNN checkpoint
@@ -182,7 +182,7 @@ wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/
# run the conversion command for end-to-end model conversion
python mmdeploy/tools/deploy.py \
mmdeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
mmdetection/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
mmdetection/demo/demo.jpg \
--work-dir mmdeploy_model/faster-rcnn \