Merge pull request #306 from TingquanGao/master

Add English version README about using Paddle-Lite to deploy
littletomatodonkey 2020-10-14 19:21:34 +08:00 committed by GitHub
commit 203476ef1c
3 changed files with 1328 additions and 55 deletions


@@ -26,13 +26,13 @@ Paddle Lite is PaddlePaddle's lightweight inference engine, providing efficient inference for mobile and IoT devices
|Platform|Inference Library Download Link|
|-|-|
|Android|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv7.gcc.c++_static.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv8.gcc.c++_static.with_extra.CV_ON.tar.gz)|
|iOS|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios.armv7.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios64.armv8.with_extra.CV_ON.tar.gz)|
**Note**: 1. If you download the inference library from the Paddle-Lite [official documentation](https://paddle-lite.readthedocs.io/zh/latest/user_guides/release_lib.html#android-toolchain-gcc), be sure to choose the download link with `with_extra=ON` and `with_cv=ON`. 2. If you deploy a quantized model on the mobile side, it is recommended to compile the inference library from the Paddle-Lite develop branch.
2. Compile Paddle-Lite to obtain the inference library. The compilation steps are as follows:
```shell
git clone https://github.com/PaddlePaddle/Paddle-Lite.git
cd Paddle-Lite
# if you build the library yourself, it is recommended to use the develop branch
@@ -40,10 +40,9 @@ git checkout develop
./lite/tools/build_android.sh --arch=armv8 --with_cv=ON --with_extra=ON
```
**Note**: when compiling Paddle-Lite to obtain the inference library, the two options `--with_cv=ON --with_extra=ON` must be enabled; `--arch` specifies the `arm` architecture version, set to armv8 here. For more compilation commands, please refer to this [link](https://paddle-lite.readthedocs.io/zh/latest/user_guides/Compile/Android.html#id2).
If you download the prebuilt inference library and extract it, you get the `inference_lite_lib.android.armv8/` folder; if you compile Paddle-Lite yourself, the inference library is located in the `Paddle-Lite/build.lite.android.armv8.gcc/inference_lite_lib.android.armv8/` folder.
The directory structure of the inference library is as follows:
```
@@ -75,13 +74,13 @@ inference_lite_lib.android.armv8/
### 2.1 Model Optimization
Paddle-Lite provides a variety of strategies to automatically optimize the original model, including quantization, sub-graph fusion, hybrid scheduling, kernel selection, and so on. Paddle-Lite's `opt` tool can automatically optimize an inference model; two optimization approaches are currently supported. The optimized model is more lightweight and runs faster.
**Note**: if you have already prepared a model file ending in `.nb`, you can skip this step.
#### 2.1.1 [Recommended] Install paddlelite via pip and convert the model
Install `paddlelite` for Python; currently Python 3.7 is the highest supported version.
```shell
pip install paddlelite
@@ -104,8 +103,8 @@ pip install paddlelite
#### 2.1.2 Compile Paddle-Lite from source to generate the opt tool
Optimizing the model requires Paddle-Lite's `opt` executable, which can be obtained by compiling the Paddle-Lite source code. The compilation steps are as follows:
```shell
# if Paddle-Lite was already cloned while preparing the environment, there is no need to clone it again
git clone https://github.com/PaddlePaddle/Paddle-Lite.git
cd Paddle-Lite
@@ -114,130 +113,145 @@ git checkout develop
./lite/tools/build.sh build_optimize_tool
```
After the compilation is complete, the `opt` file is located under `build.opt/lite/api/`. You can check the options and usage of `opt` as follows:
```shell
cd build.opt/lite/api/
./opt
```
`opt` is used in exactly the same way and with the same parameters as `paddle_lite_opt` described above.
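For instance, a sketch of invoking the compiled `opt` binary directly, using the same arguments as the `paddle_lite_opt` command shown in section 2.1.3 below (paths assume the exported inference model is in the current directory):
```shell
# invoke the locally built opt binary; equivalent to the paddle_lite_opt command in section 2.1.3
./opt --model_file=./MobileNetV3_large_x1_0_inference/model \
      --param_file=./MobileNetV3_large_x1_0_inference/params \
      --optimize_out=./MobileNetV3_large_x1_0
```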
<a name="2.1.3"></a>
#### 2.1.3 转换示例
下面以PaddleClas的`MobileNetV3_large_x1_0`模型为例,介绍使用`paddle_lite_opt`完成预训练模型到inference模型再到Paddle-Lite优化模型的转换。
下面以PaddleClas的 `MobileNetV3_large_x1_0` 模型为例,介绍使用`paddle_lite_opt`完成预训练模型到inference模型再到Paddle-Lite优化模型的转换。
```shell
# 进入PaddleClas根目录
cd PaddleClas_root_path
export PYTHONPATH=$PWD
# 下载并解压预训练模型
wget https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_pretrained.tar
tar -xf MobileNetV3_large_x1_0_pretrained
tar -xf MobileNetV3_large_x1_0_pretrained.tar
# 将预训练模型导出为inference模型
python tools/export_model.py -m MobileNetV3_large_x1_0 -p ./MobileNetV3_large_x1_0_pretrained/ -o ./MobileNetV3_large_x1_0_inference/
# 将inference模型转化为Paddle-Lite优化模型
paddle_lite_opt --model_file=./MobileNetV3_large_x1_0_inference/model --param_file=./MobileNetV3_large_x1_0_inference/params --optimize_out=./MobileNetV3_large_x1_0
```
最终在当前文件夹下生成`MobileNetV3_large_x1_0.nb`的文件。
**注意**`--optimize_out` 参数为优化后模型的保存路径,无需加后缀`.nb``--model_file` 参数为模型结构信息文件的路径,`--param_file` 参数为模型权重信息文件的路径,请注意文件名。
<a name="2.2与手机联调"></a>
### 2.2 与手机联调
首先需要进行一些准备工作。
1. 准备一台arm8的安卓手机如果编译的预测库和opt文件是armv7则需要arm7的手机并修改Makefile中`ARM_ABI = arm7`。
2. 打开手机的USB调试选项选择文件传输模式连接电脑。
3. 电脑上安装adb工具用于调试。 adb安装方式如下
2. 电脑上安装ADB工具用于调试。 ADB安装方式如下
3.1. MAC电脑安装ADB:
```
```shell
brew cask install android-platform-tools
```
3.2. Linux安装ADB
```
```shell
sudo apt update
sudo apt install -y wget adb
```
3.3. Window安装ADB
win上安装需要去谷歌的安卓平台下载adb软件包进行安装:[链接](https://developer.android.com/studio)
win上安装需要去谷歌的安卓平台下载ADB软件包进行安装:[链接](https://developer.android.com/studio)
打开终端,手机连接电脑,在终端中输入
```
adb devices
```
如果有device输出则表示安装成功。
```
List of devices attached
744be294 device
```
3. After connecting the phone to the computer, enable the `USB debugging` option on the phone, select the `file transfer` mode, and run the following in the computer terminal:
```shell
adb devices
```
If a device is listed in the output, the installation was successful, as shown below:
```
List of devices attached
744be294 device
```
4. Prepare the optimized model, the inference library files, the test image, and the category mapping file.
```shell
cd PaddleClas_root_path
cd deploy/lite/
# run prepare.sh
# prepare.sh puts the inference library files, the test image and the dictionary file into the demo/cxx/clas folder of the inference library
sh prepare.sh /{lite prediction library path}/inference_lite_lib.android.armv8
# enter the working directory of the lite demo
cd /{lite prediction library path}/inference_lite_lib.android.armv8/
cd demo/cxx/clas/
# copy the C++ inference dynamic library (.so file) into the debug folder
cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
```
`prepare.sh` uses `PaddleClas/deploy/lite/imgs/tabby_cat.jpg` as the test image and copies it into the `demo/cxx/clas/debug/` folder.
Place the model file optimized by the `paddle_lite_opt` tool into the `/{lite prediction library path}/inference_lite_lib.android.armv8/demo/cxx/clas/debug/` folder. In this example, the `MobileNetV3_large_x1_0.nb` model file generated in [2.1.3](#2.1.3) is used.
After the above steps are completed, the clas folder has the following structure:
```
demo/cxx/clas/
|-- debug/
| |--MobileNetV3_large_x1_0.nb          optimized classifier model file
| |--tabby_cat.jpg                      test image
| |--imagenet1k_label_list.txt          category mapping file
| |--libpaddle_light_api_shared.so      C++ inference library file
| |--config.txt                         hyperparameter configuration for classification
|-- config.txt                          hyperparameter configuration for classification
|-- image_classfication.cpp             image classification source file
|-- Makefile                            compile file
```
* Among the files above, `imagenet1k_label_list.txt` is the category mapping file of the ImageNet1k dataset. If you use custom categories, this category mapping file needs to be replaced.
* `config.txt` contains the hyperparameters of the classifier, as follows:
```shell
clas_model_file ./MobileNetV3_large_x1_0.nb # path of the model file
label_path ./imagenet1k_label_list.txt # path of the category mapping file
resize_short_size 256 # length of the short side after resizing
crop_size 224 # side length used for inference after cropping
visualize 0 # whether to visualize; if enabled, an image file named clas_result.png is generated in the current directory
```
5. Launch debugging. After the above steps are completed, you can use ADB to push the `debug/` folder to the phone and run the demo. The steps are as follows:
```shell
# compile to get the executable file clas_system
make -j
# move the compiled executable into the debug folder
mv clas_system ./debug/
# push the debug folder to the phone
adb push debug /data/local/tmp/
adb shell
cd /data/local/tmp/debug
export LD_LIBRARY_PATH=/data/local/tmp/debug:$LD_LIBRARY_PATH
# clas_system is used as follows:
# ./clas_system <path of config file> <path of test image>
./clas_system ./config.txt ./tabby_cat.jpg
```
If you modify the code, you need to recompile and push the new files to the phone again.
The result is as follows:
<div align="center">
<img src="./imgs/lite_demo_result.png" width="600">
@@ -246,7 +260,7 @@ export LD_LIBRARY_PATH=/data/local/tmp/debug:$LD_LIBRARY_PATH
## FAQ
Q1: What if I want to change the model? Do I need to go through the whole process again?
A1: If you have already completed the steps above, switching to another model only requires replacing the `.nb` model file. At the same time, remember to update the `.nb` file path in the configuration file and, if necessary, the category mapping file.
Q2换一个图测试怎么做
A2替换debug下的测试图像为你想要测试的图像adb push 到手机上即可。
A2替换 debug 下的测试图像为你想要测试的图像,使用 ADB 再次 push 到手机上即可。


@ -0,0 +1,259 @@
# Tutorial of PaddleClas Mobile Deployment
This tutorial will introduce how to use [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) to deploy PaddleClas models on mobile phones.
Paddle-Lite is a lightweight inference engine for PaddlePaddle. It provides efficient inference capabilities for mobile phones and IoT devices, and extensively integrates cross-platform hardware to provide lightweight deployment solutions for mobile-side inference.
If you only want to test speed, please refer to [The tutorial of Paddle-Lite mobile-side benchmark test](../../docs/zh_CN/extension/paddle_mobile_inference.md).
## 1. Preparation
- Computer (for compiling Paddle-Lite)
- Mobile phone (arm7 or arm8)
## 2. Build Paddle-Lite library
The cross-compilation environment is used to compile the C++ demos of Paddle-Lite and PaddleClas.
For the detailed compilation directions of different development environments, please refer to the corresponding documents.
1. [Docker](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html#docker)
2. [Linux](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html#linux)
3. [macOS](https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html#mac-os)
## 3. Download inference library for Android or iOS
|Platform|Inference Library Download Link|
|-|-|
|Android|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv7.gcc.c++_static.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv8.gcc.c++_static.with_extra.CV_ON.tar.gz)|
|iOS|[arm7](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios.armv7.with_extra.CV_ON.tar.gz) / [arm8](https://paddlelite-data.bj.bcebos.com/Release/2.6.1/iOS/inference_lite_lib.ios64.armv8.with_extra.CV_ON.tar.gz)|
**NOTE**:
1. If you download the inference library from the [Paddle-Lite official documentation](https://paddle-lite.readthedocs.io/zh/latest/user_guides/release_lib.html#android-toolchain-gcc), please choose the version with `with_extra=ON` and `with_cv=ON`.
2. It is recommended to build the inference library from the [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) develop branch if you want to deploy a [quantized](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/quantization/README_en.md) model to mobile phones. Please refer to this [link](https://paddle-lite.readthedocs.io/zh/latest/user_guides/Compile/Android.html#id2) for more details about compiling.
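For example, a minimal sketch of fetching and unpacking the prebuilt Android arm8 library linked in the table above (version 2.6.1):
```shell
# download the prebuilt Android armv8 inference library (v2.6.1, linked in the table above)
wget https://paddlelite-data.bj.bcebos.com/Release/2.6.1/Android/inference_lite_lib.android.armv8.gcc.c++_static.with_extra.CV_ON.tar.gz
# extract it; this yields the inference_lite_lib.android.armv8/ folder
tar -xf inference_lite_lib.android.armv8.gcc.c++_static.with_extra.CV_ON.tar.gz
```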
The structure of the inference library is as follows:
```
inference_lite_lib.android.armv8/
|-- cxx C++ inference library and header files
| |-- include C++ header files
| | |-- paddle_api.h
| | |-- paddle_image_preprocess.h
| | |-- paddle_lite_factory_helper.h
| | |-- paddle_place.h
| | |-- paddle_use_kernels.h
| | |-- paddle_use_ops.h
| | `-- paddle_use_passes.h
| `-- lib C++ inference library
| |-- libpaddle_api_light_bundled.a C++ static library
| `-- libpaddle_light_api_shared.so C++ dynamic library
|-- java Java inference library
| |-- jar
| | `-- PaddlePredictor.jar
| |-- so
| | `-- libpaddle_lite_jni.so
| `-- src
|-- demo C++ and java demos
| |-- cxx C++ demos
| `-- java Java demos
```
## 4. Inference Model Optimization
Paddle-Lite provides a variety of strategies to automatically optimize the original training model, including quantization, sub-graph fusion, hybrid scheduling, kernel optimization, and so on. In order to make the optimization process more convenient and easy to use, Paddle-Lite provides the `opt` tool to automatically complete the optimization steps and output a lightweight, optimal executable model.
**NOTE**: If you have already got the `.nb` file, you can skip this step.
<a name="4.1"></a>
### 4.1 [RECOMMENDED] Use `pip` to install Paddle-Lite and optimize the model
* Use pip to install Paddle-Lite (Python 3.7 is currently the highest supported version):
```shell
pip install paddlelite
```
* Use `paddle_lite_opt` to optimize the inference model. The parameters of `paddle_lite_opt` are as follows:
| Parameters | Explanation |
| ----------------------- | ------------------------------------------------------------ |
| --model_dir | Path to the PaddlePaddle model (non-combined) directory to be optimized. |
| --model_file | Path to the network structure file of the PaddlePaddle model (combined) to be optimized. |
| --param_file | Path to the network weights file of the PaddlePaddle model (combined) to be optimized. |
| --optimize_out_type | Type of the output model, `protobuf` by default. Supports `protobuf` and `naive_buffer`. Compared with `protobuf`, `naive_buffer` gives a more lightweight serialization/deserialization model. If you need to run inference on the mobile side, please set it to `naive_buffer`. |
| --optimize_out | Path of the output model; there is no need to add the `.nb` suffix. |
| --valid_targets | The backend on which the model will run, `arm` by default. Supports one or more of `x86`, `arm`, `opencl`, `npu`, `xpu`. If more than one target is set, separate the options with spaces, and the `opt` tool will choose the best one automatically. To support Huawei NPU (the DaVinci core in the Kirin 810/990 SoC), set it to `npu arm`. |
| --record_tailoring_info | Whether to enable tailoring the library files according to the model, `false` by default. If you need to record the kernel and OP information of the optimized model, set it to `true`. |
In addition, you can run `paddle_lite_opt` to get more detailed information about how to use it.
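As a hedged sketch, a non-combined model (a directory of separate parameter files, here a hypothetical `your_inference_model_dir/`) would be optimized with `--model_dir` instead of `--model_file`/`--param_file`:
```shell
# optimize a non-combined inference model (directory path is hypothetical)
paddle_lite_opt --model_dir=./your_inference_model_dir/ \
                --optimize_out_type=naive_buffer \
                --valid_targets=arm \
                --optimize_out=./your_model_opt
```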
### 4.2 Compile Paddle-Lite to generate the `opt` tool
Optimizing the model requires Paddle-Lite's `opt` executable file, which can be obtained by compiling Paddle-Lite from source. The steps are as follows:
```shell
# get the Paddle-Lite source code; skip this if it has already been cloned
git clone https://github.com/PaddlePaddle/Paddle-Lite.git
cd Paddle-Lite
git checkout develop
# compile
./lite/tools/build.sh build_optimize_tool
```
After the compilation is complete, the `opt` file is located under `build.opt/lite/api/`.
The `opt` tool is used in the same way as `paddle_lite_opt`; please refer to [4.1](#4.1).
<a name="4.3"></a>
### 4.3 Demo of getting the optimized model
Taking the `MobileNetV3_large_x1_0` model of PaddleClas as an example, we will introduce how to use `paddle_lite_opt` to complete the conversion from the pre-trained model to the inference model, and then to the Paddle-Lite optimized model.
```shell
# enter PaddleClas root directory
cd PaddleClas_root_path
export PYTHONPATH=$PWD
# download and uncompress the pre-trained model
wget https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x1_0_pretrained.tar
tar -xf MobileNetV3_large_x1_0_pretrained.tar
# export the pre-trained model as an inference model
python tools/export_model.py -m MobileNetV3_large_x1_0 -p ./MobileNetV3_large_x1_0_pretrained/ -o ./MobileNetV3_large_x1_0_inference/
# convert inference model to Paddle-Lite optimized model
paddle_lite_opt --model_file=./MobileNetV3_large_x1_0_inference/model --param_file=./MobileNetV3_large_x1_0_inference/params --optimize_out=./MobileNetV3_large_x1_0
```
When the above commands have completed, there will be a `MobileNetV3_large_x1_0.nb` file in the current directory, which is the converted model file.
## 5. Run optimized model on Phone
1. Prepare an Android phone with `arm8`. If the compiled inference library and `opt` file are `armv7`, you need an `arm7` phone and modify `ARM_ABI = arm7` in the Makefile.
2. Install the ADB tool on the computer.
* Install ADB for macOS
It is recommended to install it with Homebrew.
```shell
brew cask install android-platform-tools
```
* Install ADB for Linux
```shell
sudo apt update
sudo apt install -y wget adb
```
* Install ADB for Windows
To install ADB on Windows, you need to download it from Google's Android platform: [Download Link](https://developer.android.com/studio).
3. First, make sure the phone is connected to the computer, turn on the `USB debugging` option of the phone, and select the `file transfer` mode. Verify whether ADB is installed successfully as follows:
```shell
$ adb devices
List of devices attached
744be294 device
```
If there is `device` output like the above, it means the installation was successful.
4. Prepare the optimized model, the inference library files, the test image, and the dictionary file.
```shell
cd PaddleClas_root_path
cd deploy/lite/
# prepare.sh will put the inference library files, the test image and the dictionary files in demo/cxx/clas
sh prepare.sh /{lite inference library path}/inference_lite_lib.android.armv8
# enter the working directory of lite demo
cd /{lite inference library path}/inference_lite_lib.android.armv8/
cd demo/cxx/clas/
# copy the C++ inference dynamic library file (i.e. the .so file) to the debug folder
cp ../../../cxx/lib/libpaddle_light_api_shared.so ./debug/
```
`prepare.sh` takes `PaddleClas/deploy/lite/imgs/tabby_cat.jpg` as the test image and copies it to the `demo/cxx/clas/debug/` directory.
You should put the model optimized by `paddle_lite_opt` under the `demo/cxx/clas/debug/` directory. In this example, use the `MobileNetV3_large_x1_0.nb` model file generated in [4.3](#4.3).
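For instance, assuming the `MobileNetV3_large_x1_0.nb` file from section 4.3 is still in the PaddleClas root directory, copying it in might look like this (both paths are placeholders, as above):
```shell
# copy the optimized model into the demo's debug folder (both paths are placeholders)
cp /{PaddleClas root path}/MobileNetV3_large_x1_0.nb /{lite inference library path}/inference_lite_lib.android.armv8/demo/cxx/clas/debug/
```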
The structure of the clas demo is as follows after the above command is completed:
```
demo/cxx/clas/
|-- debug/
| |--MobileNetV3_large_x1_0.nb class model
| |--tabby_cat.jpg test image
| |--imagenet1k_label_list.txt dictionary file
| |--libpaddle_light_api_shared.so C++ .so file
| |--config.txt config file
|-- config.txt config file
|-- image_classfication.cpp source code
|-- Makefile compile file
```
**NOTE**:
* `imagenet1k_label_list.txt` is the category mapping file of the ImageNet1k dataset. If you use custom categories, you need to replace this category mapping file.
* `config.txt` contains the hyperparameters, as follows:
```shell
clas_model_file ./MobileNetV3_large_x1_0.nb # path of model file
label_path ./imagenet1k_label_list.txt # path of category mapping file
resize_short_size 256 # the short side length after resize
crop_size 224 # side length used for inference after cropping
visualize 0 # whether to visualize. If you set it to 1, an image file named 'clas_result.png' will be generated in the current directory.
```
5. Run Model on Phone
```shell
# run compile to get the executable file 'clas_system'
make -j
# move the compiled executable file to the debug folder
mv clas_system ./debug/
# push the debug folder to Phone
adb push debug /data/local/tmp/
adb shell
cd /data/local/tmp/debug
export LD_LIBRARY_PATH=/data/local/tmp/debug:$LD_LIBRARY_PATH
# the usage of clas_system is as follows:
# ./clas_system "path of config file" "path of test image"
./clas_system ./config.txt ./tabby_cat.jpg
```
**NOTE**: If you make changes to the code, you need to recompile and push the `debug` folder to the phone again.
The result is as follows:
<div align="center">
<img src="./imgs/lite_demo_result.png" width="600">
</div>
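If you set `visualize` to 1 in `config.txt`, the demo writes `clas_result.png` into its working directory on the device; a sketch of retrieving it after exiting the adb shell (assuming the `debug` folder was pushed to `/data/local/tmp/` as above):
```shell
# exit the adb shell first, then pull the visualization result back to the computer
adb pull /data/local/tmp/debug/clas_result.png ./
```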
## FAQ
Q1If I want to change the model, do I need to go through the all process again?
A1If you have completed the above steps, you only need to replace the `.nb` model file after replacing the model. At the same time, you may need to modify the path of `.nb` file in the config file and change the category mapping file to be compatible the model .
Q2How to change the test picture?
A2Replace the test image under debug folder with the image you want to testand then repush to the Phone again.

File diff suppressed because it is too large.