mmdeploy/demo/csharp

README.md

Usage

Step 0. Install the local NuGet package

You should build the C# API first; it will generate a NuGet package. Alternatively, you can download our prebuilt package. You may refer to this on how to install a local NuGet package.
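One common way to consume a locally built package is to register the folder containing the `.nupkg` as a package source in a `nuget.config` next to the solution. A minimal sketch, in which the source key and the path are placeholders you should adapt to your own build output:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Placeholder path: point this at the folder containing the built .nupkg -->
    <add key="mmdeploy-local" value="D:\mmdeploy\build\csharp\nupkg" />
  </packageSources>
</configuration>
```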

Step 1. Add the runtime DLL to the system path

If you built the C# API from source and didn't build a static lib, you should add the built DLL to your system path. The same applies to OpenCV, etc.
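The step above can be sketched as follows (Linux shell syntax shown for illustration; on Windows use the Environment Variables dialog or `setx`). The directory is a placeholder for your actual build output location:

```shell
# Placeholder: directory containing the built MMDeploy runtime libraries.
MMDEPLOY_BIN_DIR="$PWD/mmdeploy/build/bin"

# Append it to the search path so the demo executables can find the DLLs.
export PATH="$PATH:$MMDEPLOY_BIN_DIR"
echo "$PATH"
```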

Don't forget to install the backend dependencies. Take the TensorRT backend as an example: you have to install CUDA Toolkit, cuDNN, and TensorRT. The versions of the backend dependencies used by our prebuilt NuGet package are given in the release notes.

backend       dependencies
-----------   ------------------------------
tensorrt      cudatoolkit, cudnn, tensorrt
onnxruntime   onnxruntime / onnxruntime-gpu

Step 2. Open Demo.sln and build the solution.

Step 3. Prepare the model.

You can either convert your model according to this tutorial or download the test models from OneDrive or BaiduYun. The web drive contains ONNX and TensorRT models; the test models were converted in an environment of CUDA 11.1 + cuDNN 8.2.1 + TensorRT 8.2.3.0 + GTX 2070s.

Note:

  • a) If you want to use the TensorRT model from the link, make sure your environment and your GPU architecture are the same as above.
  • b) When you use the downloaded ONNX model, you have to edit deploy.json: change end2end.engine to end2end.onnx and tensorrt to onnxruntime.
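The deploy.json edit in note b) can be sketched as below. The inline JSON is only an illustrative fragment, not the exact mmdeploy schema, so the substitution can be demonstrated end to end:

```shell
# Illustrative deploy.json fragment (the real file contains more fields).
cat > deploy.json <<'EOF'
{"models": [{"net": "end2end.engine", "backend": "tensorrt"}]}
EOF

# Point the config at the ONNX file and switch the backend to onnxruntime.
sed -i 's/end2end\.engine/end2end.onnx/; s/tensorrt/onnxruntime/' deploy.json
cat deploy.json
```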

Step 4. Set one project as the startup project and run it.