mirror of https://github.com/open-mmlab/mmdeploy.git

[Enhance]: update installation docs (#189)

* update doc
* resolve comments

parent b8e5ef00ea
commit e2c9af0bb7
@@ -29,6 +29,17 @@ export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```

Note:

- If you want to save the ONNX Runtime environment variables to `~/.bashrc`, you can run:

```bash
echo '# set env for onnxruntime' >> ~/.bashrc
echo "export ONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}" >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```
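
To verify that the variables are picked up by new shells, here is a quick check (assuming the tar layout above, with the shared libraries under `$ONNXRUNTIME_DIR/lib`):

```bash
# Open a new shell (or run `source ~/.bashrc`) and check that the paths resolve.
echo $ONNXRUNTIME_DIR                        # should print the extraction directory
ls $ONNXRUNTIME_DIR/lib/libonnxruntime.so*   # shared libraries shipped in the tarball
```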

#### Build on Linux

```bash
@@ -46,7 +57,7 @@ make -j10

### List of supported custom ops

| Operator                                                                     | CPU | GPU | MMDeploy Releases |
|:-----------------------------------------------------------------------------|:---:|:---:|:------------------|
| [RoIAlign](../ops/onnxruntime.md#roialign)                                   |  Y  |  N  | master            |
| [grid_sampler](../ops/onnxruntime.md#grid_sampler)                           |  Y  |  N  | master            |
| [MMCVModulatedDeformConv2d](../ops/onnxruntime.md#mmcvmodulateddeformconv2d) |  Y  |  N  | master            |
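
These ops become available to ONNX Runtime once the built custom-op library is registered with a session. Below is a minimal sketch of loading it from Python; the library path `build/lib/libmmdeploy_onnxruntime_ops.so` and `model.onnx` are placeholder assumptions, so substitute the artifacts from your own build and export:

```bash
python - <<'EOF'
import onnxruntime as ort

opts = ort.SessionOptions()
# Load the custom-op kernels into this session (hypothetical library path).
opts.register_custom_ops_library("build/lib/libmmdeploy_onnxruntime_ops.so")
# model.onnx: a model exported with one of the ops listed above.
sess = ort.InferenceSession("model.onnx", opts)
EOF
```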
@@ -6,7 +6,19 @@

Please install TensorRT 8 following the [install-guide](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing).

**Note**:

- `pip Wheel File Installation` is not supported yet in this repo.
- We strongly suggest you install TensorRT through the [tar file](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar).
- After installation, you should add the TensorRT environment variables to `~/.bashrc`:

```bash
cd ${TENSORRT_DIR} # To TensorRT root directory
echo '# set env for TensorRT' >> ~/.bashrc
echo "export TENSORRT_DIR=${TENSORRT_DIR}" >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$TENSORRT_DIR/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```
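
As a quick sanity check that the tar install is usable (this assumes the TensorRT Python wheel bundled in the tarball has also been installed):

```bash
ls $TENSORRT_DIR/lib/libnvinfer.so*                       # core runtime library
python -c "import tensorrt; print(tensorrt.__version__)"  # should print 8.x
```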

#### Build custom ops

@@ -1,15 +1,41 @@

## Build MMDeploy

### Preparation

- Download MMDeploy

```bash
git clone -b master git@github.com:grimoire/deploy_prototype.git MMDeploy
cd MMDeploy
git submodule update --init --recursive
```

Note:

- If fetching the submodules fails, you can get them manually with the following commands:

```bash
git clone git@github.com:NVIDIA/cub.git third_party/cub
cd third_party/cub
git checkout c3cceac115
```
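
Either way, you can confirm the checkout succeeded (`cub/cub.cuh` is the header path in the cub repository layout):

```bash
# Run from the MMDeploy root directory.
ls third_party/cub/cub/cub.cuh            # header should exist after a successful fetch
git -C third_party/cub log -1 --oneline   # should show commit c3cceac115
```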

- Install cmake

Install cmake>=3.14.0. You can refer to the [cmake website](https://cmake.org/install) for more details.

```bash
apt-get install -y libssl-dev
wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0.tar.gz
tar -zxvf cmake-3.20.0.tar.gz
cd cmake-3.20.0
./bootstrap
make
make install
```
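
A quick check that the freshly built cmake is the one on your `PATH`:

```bash
cmake --version   # should report 3.20.0 (any version >= 3.14.0 works)
```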

### Build backend support

Build the inference engine extension libraries you need.

- [ONNX Runtime](backends/onnxruntime.md)
@@ -21,5 +47,6 @@ Build the inference engine extension libraries you need.

### Install mmdeploy

```bash
cd ${MMDEPLOY_DIR} # To mmdeploy root directory
pip install -e .
```
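
To confirm the editable install worked (assuming the package is importable as `mmdeploy`):

```bash
pip show mmdeploy             # metadata for the editable install
python -c "import mmdeploy"   # import should succeed without error
```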