From b2cb5722f732c4789a11e282d13b00b6e25e2b0d Mon Sep 17 00:00:00 2001
From: Mengyang Liu <49838178+liu-mengyang@users.noreply.github.com>
Date: Sat, 8 Oct 2022 09:16:54 +0800
Subject: [PATCH] Update get_started.md (#1155)

---
 docs/en/get_started.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/get_started.md b/docs/en/get_started.md
index 9582b8169..12a9dfac0 100644
--- a/docs/en/get_started.md
+++ b/docs/en/get_started.md
@@ -200,7 +200,7 @@ And they make up of MMDeploy Model that can be fed to MMDeploy SDK to do model i
 
 For more details about model conversion, you can read [how_to_convert_model](02-how-to-run/convert_model.md). If you want to customize the conversion pipeline, you can edit the config file by following [this](02-how-to-run/write_config.md) tutorial.
 
 ```{tip}
-If MMDeploy-ONNXRuntime prebuild package is installed, you can convert the above model to onnx model and perform ONNX Runtime inference
+If MMDeploy-ONNXRuntime prebuilt package is installed, you can convert the above model to onnx model and perform ONNX Runtime inference
 just by 'changing detection_tensorrt_dynamic-320x320-1344x1344.py' to 'detection_onnxruntime_dynamic.py' and making '--device' as 'cpu'.
 ```
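
For illustration only, not part of the patch above: a minimal Python sketch of what the tip describes, converting the detector to ONNX and running ONNX Runtime inference on CPU through MMDeploy's `torch2onnx` and `inference_model` APIs. The config, checkpoint, image, and work-dir paths are assumptions carried over from the surrounding get_started walkthrough, not values stated in this patch.

```python
# Illustrative sketch only. Assumes the MMDeploy-ONNXRuntime prebuilt package is
# installed and that the Faster R-CNN config/checkpoint and demo image exist at
# the placeholder paths below.
from mmdeploy.apis import inference_model, torch2onnx

deploy_cfg = 'mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py'  # ONNX Runtime deploy config
model_cfg = 'mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'      # detector config (assumed path)
checkpoint = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'      # detector weights (assumed path)
img = 'mmdetection/demo/demo.jpg'                                                 # demo image (assumed path)
work_dir = 'mmdeploy_model/faster-rcnn-ort'                                       # output directory (placeholder)

# Convert the PyTorch model to ONNX on CPU, mirroring the config/device swap in the tip.
torch2onnx(img, work_dir, 'end2end.onnx', deploy_cfg, model_cfg,
           model_checkpoint=checkpoint, device='cpu')

# Run ONNX Runtime inference with the exported model.
result = inference_model(model_cfg, deploy_cfg,
                         backend_files=[f'{work_dir}/end2end.onnx'],
                         img=img, device='cpu')
```

The same swap applies to the `tools/deploy.py` command line: pass `detection_onnxruntime_dynamic.py` instead of `detection_tensorrt_dynamic-320x320-1344x1344.py` and set `--device` to `cpu`, as the tip notes.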