modify the links & fix some typos (#1150)
* modify the links & fix some typos
* typo fixed
parent 51f4e65185
commit 60f50a1789
@ -35,20 +35,20 @@ python ./tools/deploy.py \
- `deploy_cfg` : The deployment configuration of mmdeploy for the model, including the type of inference framework, whether to quantize, whether the input shape is dynamic, etc. Configuration files may reference each other; `mmdeploy/mmcls/classification_ncnn_static.py` is an example.
- `model_cfg` : The model configuration of the algorithm library, e.g. `mmclassification/configs/vision_transformer/vit-base-p32_ft-64xb64_in1k-384.py`, regardless of the path to mmdeploy.
- `checkpoint` : The path of the torch model. It can start with http/https; see the implementation of `mmcv.FileClient` for details.
- `img` : The path to the image or point cloud file used for testing during the model conversion.
- `--test-img` : The path of the image file that is used to test the model. If not specified, it will be set to `None`.
- `--work-dir` : The path of the work directory that is used to save logs and models.
- `--calib-dataset-cfg` : Only valid in int8 mode. The config used for calibration. If not specified, it will be set to `None` and use the "val" dataset in the model config for calibration.
- `--device` : The device used for model conversion. If not specified, it will be set to `cpu`. For TensorRT, use the `cuda:0` format.
- `--log-level` : The log level, one of `'CRITICAL', 'FATAL', 'ERROR', 'WARN', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'`. If not specified, it will be set to `INFO`.
- `--show` : Whether to show detection outputs.
- `--dump-info` : Whether to output information for SDK.
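Putting the arguments together, a minimal sketch of a full conversion command is shown below. All paths are illustrative placeholders; the deploy and model configs are the examples cited above.

```python
# Build and run a deploy.py conversion command from Python.
# Every path here is a placeholder for illustration, not a file shipped with mmdeploy.
import subprocess

cmd = [
    "python", "./tools/deploy.py",
    "configs/mmcls/classification_ncnn_static.py",  # deploy_cfg (assumed location)
    "../mmclassification/configs/vision_transformer/"
    "vit-base-p32_ft-64xb64_in1k-384.py",           # model_cfg
    "checkpoints/vit.pth",                          # checkpoint: local path or http/https URL
    "demo.jpg",                                     # img used for testing during conversion
    "--work-dir", "work_dir/vit_ncnn",
    "--device", "cpu",
    "--log-level", "INFO",
    "--dump-info",                                  # also emit the SDK info files
]
subprocess.run(cmd, check=True)
```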
### How to find the corresponding deployment config of a PyTorch model
1. Find the model's codebase folder in `configs/`. To convert a yolov3 model, you need to check the `configs/mmdet` folder.
2. Find the model's task folder in `configs/codebase_folder/`. For a yolov3 model, you need to check the `configs/mmdet/detection` folder.
3. Find the deployment config file in `configs/codebase_folder/task_folder/`. To deploy a yolov3 model to the ONNX backend, you could use `configs/mmdet/detection/detection_onnxruntime_dynamic.py`.
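Since deployment configs follow this fixed `configs/<codebase>/<task>/` layout, you can also enumerate the candidates programmatically. A small sketch, assuming it is run from the mmdeploy repository root:

```python
# List candidate deployment configs following the configs/<codebase>/<task>/ layout.
from pathlib import Path

def list_deploy_configs(codebase: str, task: str, root: str = "configs") -> list:
    return sorted(Path(root, codebase, task).glob("*.py"))

# e.g. all detection configs usable for a yolov3 model:
for cfg in list_deploy_configs("mmdet", "detection"):
    print(cfg)  # ... configs/mmdet/detection/detection_onnxruntime_dynamic.py ...
```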
### Example
@ -11,8 +11,6 @@
- [Example](#示例)
- [How to evaluate models](#如何评测模型)
- [List of models supported by each backend](#各后端已支持导出的模型列表)
- [Notes](#注意事项)
- [FAQ](#问答)
<!-- TOC -->
@ -20,15 +18,15 @@
Note:
- The currently supported backends include [ONNXRuntime](../05-supported-backends/onnxruntime.md), [TensorRT](../05-supported-backends/tensorrt.md), [ncnn](../05-supported-backends/ncnn.md), [PPLNN](../05-supported-backends/pplnn.md), [OpenVINO](../05-supported-backends/openvino.md).
- The currently supported codebases include [MMClassification](../04-supported-codebases/mmcls.md), [MMDetection](../04-supported-codebases/mmdet.md), [MMSegmentation](../04-supported-codebases/mmseg.md), [MMOCR](../04-supported-codebases/mmocr.md), [MMEditing](../04-supported-codebases/mmedit.md).
## How to convert models from PyTorch to other backends
### Prerequisites
1. Install your target backend. You can refer to [ONNXRuntime-install](../05-supported-backends/onnxruntime.md), [TensorRT-install](../05-supported-backends/tensorrt.md), [ncnn-install](../05-supported-backends/ncnn.md), [PPLNN-install](../05-supported-backends/pplnn.md), [OpenVINO-install](../05-supported-backends/openvino.md).
2. Install your target codebase. You can refer to [MMClassification-install](https://github.com/open-mmlab/mmclassification/blob/master/docs/zh_CN/install.md), [MMDetection-install](https://github.com/open-mmlab/mmdetection/blob/master/docs/zh_cn/get_started.md), [MMSegmentation-install](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/get_started.md#installation), [MMOCR-install](https://mmocr.readthedocs.io/zh_CN/latest/install.html), [MMEditing-install](https://github.com/open-mmlab/mmediting/blob/master/docs/zh_cn/install.md).
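After installation, a quick sanity check can confirm that the backend packages import cleanly. A sketch, assuming the usual pip import names for these backends:

```python
# Check which target backend packages are importable in the current environment.
# The package names below are the common import names and may differ per install.
import importlib.util

for pkg in ("onnxruntime", "tensorrt", "ncnn"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'missing'}")
```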
### Usage
@ -22,12 +22,12 @@ def get_output_model_file(onnx_path: str,
    """Returns the paths to the .param and .bin files with the export result.

    Args:
        onnx_path (str): The path of the onnx model.
        work_dir (str|None): The path of the directory for saving the results.
            Defaults to `None`, which means using the directory of onnx_path.

    Returns:
        List[str]: The paths of the files where the export result will be
            located.
    """
    if work_dir is None:
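Based only on the docstring above, the body could look like the following sketch; the actual implementation in mmdeploy may differ.

```python
# Sketch consistent with the docstring: derive <name>.param / <name>.bin
# next to the onnx file, or inside work_dir when one is given.
import os.path as osp
from typing import List, Optional

def get_output_model_file(onnx_path: str,
                          work_dir: Optional[str] = None) -> List[str]:
    if work_dir is None:
        work_dir = osp.dirname(onnx_path)
    file_name = osp.splitext(osp.basename(onnx_path))[0]
    return [osp.join(work_dir, file_name + '.param'),
            osp.join(work_dir, file_name + '.bin')]
```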
@ -44,7 +44,7 @@ def from_onnx(onnx_model: Union[onnx.ModelProto, str],
    """Convert ONNX to ncnn.

    The inputs of ncnn include a model file and a weight file. We need to use
    an executable program to convert the `.onnx` file to a `.param` file and
    a `.bin` file. The output files will be saved to work_dir.

    Example:
|
||||
|
|
Loading…
Reference in New Issue
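A usage sketch for the Example section, which the excerpt above truncates. The paths are hypothetical, and the import path plus the second `output_file_prefix` parameter are assumptions inferred from the `.param`/`.bin` outputs described in the docstring:

```python
# Hypothetical usage: convert end2end.onnx into end2end.param / end2end.bin.
from mmdeploy.backend.ncnn import from_onnx  # assumed import path

from_onnx('work_dir/end2end.onnx', 'work_dir/end2end')
# expected outputs: work_dir/end2end.param and work_dir/end2end.bin
```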