docs: fix invalid links

pull/1749/head
gaotingquan 2022-03-07 12:16:37 +00:00 committed by Tingquan Gao
parent f68c098a4a
commit 18b25e3e86
16 changed files with 34 additions and 34 deletions

View File

@@ -55,7 +55,7 @@ After the data is settled, the model often determines the upper limit of the fin
<a name="2.3"></a>
### 2.3 Train the Model
After preparing the data and model, you can start training the model and updating the parameters of the model. After many iterations, a trained model can finally be obtained for image classification tasks. The training process of image classification requires a lot of experience and involves the setting of many hyperparameters. PaddleClas provides a series of [training tuning methods](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/en/models/Tricks_en.md), which can help you quickly obtain a high-precision model.
After preparing the data and model, you can start training the model and updating the parameters of the model. After many iterations, a trained model can finally be obtained for image classification tasks. The training process of image classification requires a lot of experience and involves the setting of many hyperparameters. PaddleClas provides a series of [training tuning methods](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/en/models_training/train_strategy_en.md), which can help you quickly obtain a high-precision model.
<a name="2.4"></a>

View File

@@ -108,7 +108,7 @@ PaddleClas strictly follows the resolution used by the authors of the paper. Sin
**A**:
There are many ssld pre-training models available in PaddleClas, which obtain better pre-training weights by semi-supervised knowledge distillation, so the accuracy of transfer tasks or downstream vision tasks can be improved simply by switching to the higher-accuracy ssld pre-training weights, without changing the structure files. For example, in PaddleSeg, [HRNet](../models/HRNet_en.md), with the weights of the ssld pre-training model, achieves much better accuracy than other models of the same kind in the industry; in PaddleDetection, [PP-YOLO](https://github.com/PaddlePaddle/PaddleDetection/blob/release/0.4/configs/ppyolo/README_cn.md) with ssld pre-training weights further improves on an already high baseline. Classification transfer with ssld pre-training weights also yields impressive results, and the benefits of knowledge distillation for classification transfer tasks are detailed in [SSLD Distillation Strategy](../advanced_tutorials/knowledge_distillation_en.md)
There are many ssld pre-training models available in PaddleClas, which obtain better pre-training weights by semi-supervised knowledge distillation, so the accuracy of transfer tasks or downstream vision tasks can be improved simply by switching to the higher-accuracy ssld pre-training weights, without changing the structure files. For example, in PaddleSeg, [HRNet](../models/HRNet_en.md), with the weights of the ssld pre-training model, achieves much better accuracy than other models of the same kind in the industry; in PaddleDetection, [PP-YOLO](https://github.com/PaddlePaddle/PaddleDetection/blob/release/0.4/configs/ppyolo/README_cn.md) with ssld pre-training weights further improves on an already high baseline. Classification transfer with ssld pre-training weights also yields impressive results, and the benefits of knowledge distillation for classification transfer tasks are detailed in [SSLD Distillation Strategy](../advanced_tutorials/distillation/distillation_en.md)
<a name="3"></a>
@@ -143,7 +143,7 @@ When adopting multiple models for inference, it is recommended to first export t
**A**
- You can adopt auto-mixed precision training, which delivers a significantly faster speed with almost no precision loss. Taking ResNet50 as an example, the configuration file for auto-mixed precision training in PaddleClas can be found at [ResNet50_fp16.yml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_fp16.yaml). The main step is to add the following lines to the standard configuration file.
- You can adopt auto-mixed precision training, which delivers a significantly faster speed with almost no precision loss. Taking ResNet50 as an example, the configuration file for auto-mixed precision training in PaddleClas can be found at [ResNet50_amp_O1.yml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_amp_O1.yaml). The main step is to add the following lines to the standard configuration file.
```
# mixed precision training
@@ -351,7 +351,7 @@ At this stage, it has become a common practice in the image recognition field to
**A**: If the existing strategy cannot further improve the accuracy of the model, it means that the model has almost reached saturation with the existing dataset and strategy, and two methods are provided here.
- Mining relevant data: Use the model trained on the existing dataset to make predictions on the relevant data, label the samples with higher prediction confidence, and add them to the training set for further training. Repeat the steps above to further improve the accuracy of the model.
- Knowledge distillation: You can use a larger model to train a teacher model with higher accuracy on the dataset, and then adopt the teacher model to teach a student model, where the student model is the target model. PaddleClas provides Baidu's own SSLD knowledge distillation scheme, which can steadily improve accuracy by more than 3% even on such a challenging classification task as ImageNet-1k. For the chapter on SSLD knowledge distillation, please refer to [**SSLD Knowledge Distillation**](../advanced_tutorials/knowledge_distillation_en.md).
- Knowledge distillation: You can use a larger model to train a teacher model with higher accuracy on the dataset, and then adopt the teacher model to teach a student model, where the student model is the target model. PaddleClas provides Baidu's own SSLD knowledge distillation scheme, which can steadily improve accuracy by more than 3% even on such a challenging classification task as ImageNet-1k. For the chapter on SSLD knowledge distillation, please refer to [**SSLD Knowledge Distillation**](../advanced_tutorials/distillation/distillation_en.md).
<a name="6"></a>

View File

@@ -248,7 +248,7 @@ PaddleClas saves/updates the following three types of models during training.
#### Q2.4.2: How can recognition models be fine-tuned on the basis of pre-trained models?
**A**: The fine-tuning training of the recognition model is similar to that of the classification model. The recognition model can load a pre-trained product recognition model, and the training process is described in [recognition model training](../../models_training/recognition_en.md); we will continue to refine the documentation.
**A**: The fine-tuning training of the recognition model is similar to that of the classification model. The recognition model can load a pre-trained product recognition model, and the training process is described in [recognition model training](../models_training/recognition_en.md); we will continue to refine the documentation.
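As a rough illustration of such fine-tuning (not taken from the linked documentation), training can be pointed at a pre-trained weight file; the configuration path and the weight path below are assumptions, and the weight path is passed without the `.pdparams` suffix:

```shell
# Hypothetical sketch: fine-tune a recognition model from a pre-trained checkpoint.
# The weight path is a placeholder; PaddleClas appends the .pdparams suffix itself.
python3 tools/train.py \
    -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml \
    -o Global.pretrained_model=./pretrained/general_recognition_pretrained \
    -o Global.device=gpu
```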
#### Q2.4.3: Why does it fail to run all mini-batches in each epoch when training metric learning?
@@ -353,4 +353,4 @@ pip install paddle2onnx
- The `InputSpec()` function is used to describe the signature information of the model input, including the `shape`, `type` and `name` of the input data (can be omitted).
- The `paddle.onnx.export()` function needs to specify the model object `net`, the save path of the exported model `save_path`, and the description of the model's input data `input_spec`.
Note that `paddlepaddle` `2.0.0` or above is required. See [paddle.onnx.export](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/onnx/) for more details on the parameters of the `paddle.onnx.export()` function.
Note that `paddlepaddle` `2.0.0` or above is required. See [paddle.onnx.export](https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/onnx/export_en.html) for more details on the parameters of the `paddle.onnx.export()` function.
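Separately from the `paddle.onnx.export()` flow described above, the `paddle2onnx` package installed earlier also ships a command-line converter for an already-exported static inference model. A hedged sketch, where the model directory and file names are assumptions:

```shell
# Hypothetical sketch: convert an exported inference model (inference.pdmodel /
# inference.pdiparams) under ./inference to ONNX with the paddle2onnx CLI.
paddle2onnx --model_dir ./inference \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file ./inference/model.onnx \
    --opset_version 10
```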

View File

@@ -27,8 +27,8 @@
> >
- Q: How to choose the right model for training according to your own task? How to choose the right training model?
- A: If you want to deploy on the server side with high accuracy requirements but no strict requirements on model storage size or prediction speed, it is recommended to use server-side series models such as ResNet_vd, Res2Net_vd, DenseNet, and Xception. If you want to deploy on the mobile side, it is recommended to use MobileNetV3 and GhostNet. Meanwhile, we suggest referring to the speed-accuracy metrics chart in the [Model Library](../models/models_intro_en.md) when choosing models.
- Q: How to choose the right training model?
- A: If you want to deploy on the server side with high accuracy requirements but no strict requirements on model storage size or prediction speed, it is recommended to use server-side series models such as ResNet_vd, Res2Net_vd, DenseNet, and Xception. If you want to deploy on the mobile side, it is recommended to use MobileNetV3 and GhostNet. Meanwhile, we suggest referring to the speed-accuracy metrics chart in the [Model Library](../algorithm_introduction/ImageNet_models_en.md) when choosing models.
> >
@@ -280,7 +280,7 @@ Loss:
> >
- Q: How to use Automatic Mixed Precision (AMP) during training?
- A: You can refer to [ResNet50_fp16.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_fp16.yaml). Specifically, if you want your configuration file to support automatic mixed precision during model training, you can add the following information to the file.
- A: You can refer to [ResNet50_amp_O1.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_amp_O1.yaml). Specifically, if you want your configuration file to support automatic mixed precision during model training, you can add the following information to the file.
```
# mixed precision training

View File

@@ -48,7 +48,7 @@ Before installing the service module, you need to prepare the inference model an
**Notice**:
* The model file path can be viewed and modified in `PaddleClas/deploy/hubserving/clas/params.py`.
* It should be noted that the prefixes of the model structure file and the model parameters file must be `inference`.
* More models provided by PaddleClas can be obtained from the [model library](../models/models_intro_en.md). You can also use your own trained models.
* More models provided by PaddleClas can be obtained from the [model library](../algorithm_introduction/ImageNet_models_en.md). You can also use your own trained models.
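For illustration only (the directory name `./inference/` is a placeholder), the layout the notes above describe looks like the following; both files share the required `inference` prefix:

```shell
# Hypothetical example: an inference model directory whose file names use the required
# `inference` prefix; point deploy/hubserving/clas/params.py at these files.
ls ./inference/
# inference.pdmodel    # model structure file
# inference.pdiparams  # model parameters file
```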
<a name="4"></a>
## 4. Install Service Module

View File

@@ -4,7 +4,7 @@ This tutorial will introduce how to use [Paddle-Lite](https://github.com/PaddleP
Paddle-Lite is a lightweight inference engine for PaddlePaddle. It provides efficient inference capabilities for mobile phones and IoT devices, and extensively integrates cross-platform hardware to provide a lightweight deployment solution for mobile-side deployment.
If you only want to test speed, please refer to [The tutorial of Paddle-Lite mobile-side benchmark test](../extension/paddle_mobile_inference_en.md).
If you only want to test speed, please refer to [The tutorial of Paddle-Lite mobile-side benchmark test](../others/paddle_mobile_inference_en.md).
---
@@ -47,7 +47,7 @@ For the detailed compilation directions of different development environments, p
1. If you download the inference library from [Paddle-Lite official document](https://paddle-lite.readthedocs.io/zh/latest/quick_start/release_lib.html#android-toolchain-gcc), please choose `with_extra=ON`, `with_cv=ON`.
2. It is recommended to build the inference library from the [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) develop branch if you want to deploy the [quantized](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/quantization/README_en.md) model to mobile phones. Please refer to the [link](https://paddle-lite.readthedocs.io/zh/latest/user_guides/Compile/Android.html#id2) for more detailed information about compiling.
2. It is recommended to build the inference library from the [Paddle-Lite](https://github.com/PaddlePaddle/Paddle-Lite) develop branch if you want to deploy the [quantized](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/quantization/README_en.md) model to mobile phones. Please refer to the [link](https://paddle-lite.readthedocs.io/) for more detailed information about compiling.
The structure of the inference library is as follows:

View File

@@ -107,7 +107,7 @@ For image classification, ImageNet dataset is adopted. Compared with the current
| PPLCNet_x1_0_ssld | 3.0 | 161 | 74.39 | 92.09 | 2.46 |
| PPLCNet_x2_5_ssld | 9.0 | 906 | 80.82 | 95.33 | 5.39 |
where `_ssld` represents the model after using `SSLD distillation`. For details about `SSLD distillation`, see [SSLD distillation](../advanced_tutorials/knowledge_distillation_en.md).
where `_ssld` represents the model after using `SSLD distillation`. For details about `SSLD distillation`, see [SSLD distillation](../advanced_tutorials/distillation/distillation_en.md).
Performance comparison with other lightweight networks:

View File

@@ -217,7 +217,7 @@ Some of the configurable evaluation parameters are described as follows:
**Note**: When loading the model to be evaluated, you only need to specify the path of the model file, without the suffix. PaddleClas will automatically add the `.pdparams` suffix, as in [3.1.3 Resume Training](#3.1.3).
When loading the model to be evaluated, you only need to specify the path of the model file, without the suffix. PaddleClas will automatically add the `.pdparams` suffix, as in [3.1.3 Resume Training](https://github.com/PaddlePaddle/PaddleClas/blob/ develop/docs/zh_CN/models_training/classification.md#3.1.3).
When loading the model to be evaluated, you only need to specify the path of the model file, without the suffix. PaddleClas will automatically add the `.pdparams` suffix, as in [3.1.3 Resume Training](../models_training/classification_en.md#3.1.3).
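As a rough illustration, a hedged evaluation command; the config path and output directory are assumptions, and the weight path is passed without the `.pdparams` suffix:

```shell
# Hypothetical sketch: evaluate a trained model; best_model is passed without the
# .pdparams suffix, which PaddleClas appends automatically.
python3 tools/eval.py \
    -c ppcls/configs/ImageNet/ResNet/ResNet50.yaml \
    -o Global.pretrained_model=./output/ResNet50/best_model
```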
<a name="3.2"></a>

View File

@@ -27,7 +27,7 @@ The first step is to select the model to be studied, here we choose ResNet50. Co
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_pretrained.pdparams
```
For the network structure code and pre-trained weights of other models, please refer to the [model library](../../../ppcls/arch/backbone/) and [pre-trained models](../models/models_intro_en.md).
For the network structure code and pre-trained weights of other models, please refer to the [model library](../../../ppcls/arch/backbone/) and [pre-trained models](../algorithm_introduction/ImageNet_models_en.md).
<a name='3'></a>

View File

@@ -1,6 +1,6 @@
# Quick Start of Multi-label Classification
Experience the training, evaluation, and prediction of multi-label classification based on the [NUS-WIDE-SCENE](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html) dataset, which is a subset of the NUS-WIDE dataset. Please first install PaddlePaddle and PaddleClas; see [Paddle Installation](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/installation) and [PaddleClas installation](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/installation/install_ paddleclas.md) for more details.
Experience the training, evaluation, and prediction of multi-label classification based on the [NUS-WIDE-SCENE](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html) dataset, which is a subset of the NUS-WIDE dataset. Please first install PaddlePaddle and PaddleClas; see [Paddle Installation](../installation/install_paddle_en.md) and [PaddleClas installation](../installation/install_paddleclas_en.md) for more details.
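A minimal installation sketch, assuming a GPU machine and a pip-based setup; the linked installation guides remain the authoritative reference:

```shell
# Hypothetical sketch: install PaddlePaddle, then clone PaddleClas and install its requirements.
pip3 install paddlepaddle-gpu --upgrade
git clone https://github.com/PaddlePaddle/PaddleClas.git
cd PaddleClas
pip3 install -r requirements.txt
```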
## Catalogue

View File

@@ -80,7 +80,7 @@
Since the model needs to be trained, collect your own dataset. For data preparation and the corresponding format, please refer to the `4.1 Data Preparation` section of the [feature extraction document](../image_recognition_pipeline/feature_extraction.md) and the [recognition dataset description](../data_preparation/recognition_dataset.md). Note that this step requires a large amount of data to guarantee the effect of the recognition model. For the training configuration file, refer to the [general recognition model configuration file](../../../ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml), and for the training method, refer to [recognition model training](../models_training/recognition.md) (a hedged launch sketch is given after the list below).
- Data augmentation: choose different data augmentation methods according to the actual situation. For example, if occlusion is severe in the real application, it is recommended to add the `RandomErasing` augmentation method. See the [data augmentation document](./DataAugmentation.md) for details.
- Switch to a different `backbone`. Generally speaking, larger models have stronger feature extraction capability. See the [model introduction](../models/models_intro.md) for the available `backbone` options.
- Switch to a different `backbone`. Generally speaking, larger models have stronger feature extraction capability. See the [model introduction](../algorithm_introduction/ImageNet_models.md) for the available `backbone` options.
- Choose a different `Metric Learning` method. Different `Metric Learning` methods may behave differently on different datasets; it is recommended to try other `Loss` options. See [Metric Learning](../algorithm_introduction/metric_learning.md) for details.
- Use distillation to improve the capability of small models. See [model distillation](../algorithm_introduction/knowledge_distillation.md) for details.
- Expand the dataset: add badcase data for the samples that are recognized incorrectly.
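For orientation only, a hedged sketch of launching training with the general recognition configuration file mentioned above; the GPU ids and the repository-root working directory are assumptions:

```shell
# Hypothetical sketch: multi-GPU training with the general recognition config referenced above.
python3 -m paddle.distributed.launch --gpus="0,1,2,3" tools/train.py \
    -c ppcls/configs/GeneralRecognition/GeneralRecognition_PPLCNet_x2_5.yaml
```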

View File

@@ -122,14 +122,15 @@ Among the ResNet series models, compared with other models, the prediction speed of the ResNet_vd model
**A**
* You can use automatic mixed precision for training, which brings a noticeable speedup with almost no loss of accuracy. Taking ResNet50 as an example, the configuration file for automatic mixed precision training in PaddleClas can be found at [ResNet50_fp16.yml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_fp16.yaml); the main step is to add the following lines to the standard configuration file:
* You can use automatic mixed precision for training, which brings a noticeable speedup with almost no loss of accuracy. Taking ResNet50 as an example, the configuration file for automatic mixed precision training in PaddleClas can be found at [ResNet50_amp_O1.yml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_amp_O1.yaml); the main step is to add the following lines to the standard configuration file:
```
```yaml
# mixed precision training
AMP:
scale_loss: 128.0
use_dynamic_loss_scaling: True
use_pure_fp16: &use_pure_fp16 True
# O1: mixed fp16
level: O1
```
* You can enable DALI to run data preprocessing on the GPU (when the model is relatively small, the reader accounts for a larger share of the time cost), which brings a noticeable training speedup. Add `-o Global.use_dali=True` when training to use DALI. For more on installing and using DALI, refer to the [DALI installation tutorial](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/installation.html#nightly-builds).
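Combining the two tips, a hedged launch sketch that uses the AMP configuration file referenced above together with DALI; the GPU id is an assumption:

```shell
# Hypothetical sketch: train with the AMP config referenced above and DALI enabled.
python3 -m paddle.distributed.launch --gpus="0" tools/train.py \
    -c ppcls/configs/ImageNet/ResNet/ResNet50_amp_O1.yaml \
    -o Global.use_dali=True
```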

View File

@@ -31,7 +31,8 @@
>>
* Q: How to choose the right model for training according to your own task?
* A: If you want to deploy on the server side, or want the accuracy to be as high as possible, and do not have strict requirements on model storage size or prediction speed, it is recommended to use server-side series models such as ResNet_vd, Res2Net_vd, DenseNet, and Xception; if you want to deploy on the mobile side, it is recommended to use mobile-side series models such as MobileNetV3 and GhostNet. Meanwhile, we recommend referring to the speed-accuracy chart in the [model library](../models/models_intro.md) when choosing a model.
* A: If you want to deploy on the server side, or want the accuracy to be as high as possible, and do not have strict requirements on model storage size or prediction speed, it is recommended to use server-side series models such as ResNet_vd, Res2Net_vd, DenseNet, and Xception; if you want to deploy on the mobile side, it is recommended to use MobileNetV3, GhostNet,
and other series models suitable for the mobile side. Meanwhile, we recommend referring to the speed-accuracy chart in the [model library](../algorithm_introduction/ImageNet_models.md) when choosing a model.
>>
* Q: How should parameters be initialized, and what kind of initialization can speed up model convergence?
@ -232,11 +233,13 @@ Loss:
* A: If you want to use TensorRT for model inference, you need to install or build PaddlePaddle with TensorRT support. Users on Linux, Windows, and macOS can refer to [Download the Inference Library](https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html) for downloading and installation; if no available version meets your needs, you need to compile and install it locally, and the compilation method can be found in [Compile from Source](https://paddleinference.paddlepaddle.org.cn/user_guides/source_compile.html).
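For reference only, a hedged sketch of enabling TensorRT in the PaddleClas Python inference scripts; the `deploy/` script path, config path, and the `use_tensorrt` option are assumptions here:

```shell
# Hypothetical sketch: run classification inference with TensorRT enabled
# (requires a PaddlePaddle build with TensorRT support).
cd deploy
python3 python/predict_cls.py \
    -c configs/inference_cls.yaml \
    -o Global.use_tensorrt=True
```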
>>
* Q: How to use automatic mixed precision (AMP) during training?
* A: You can refer to the configuration file [ResNet50_fp16.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_fp16.yaml); specifically, if you want your own configuration file to support automatic mixed precision during model training, you can add the following configuration to it.
```
* A: You can refer to the configuration file [ResNet50_amp_O1.yaml](../../../ppcls/configs/ImageNet/ResNet/ResNet50_amp_O1.yaml); specifically, if you want your own configuration file to support automatic mixed precision during model training, you can add the following configuration to it.
```yaml
# mixed precision training
AMP:
scale_loss: 128.0
use_dynamic_loss_scaling: True
use_pure_fp16: &use_pure_fp16 True
# O1: mixed fp16
level: O1
```

View File

@@ -55,7 +55,7 @@ pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/sim
```
Note that:
* The model file names (including `.pdmodel` and `.pdiparams`) must be `inference`.
* We also provide a large number of pre-trained models based on the ImageNet-1k dataset. See the [model library overview](../models/models_intro.md) for the model list and download addresses; you can also use models you have trained and converted yourself.
* We also provide a large number of pre-trained models based on the ImageNet-1k dataset. See the [model library overview](../algorithm_introduction/ImageNet_models.md) for the model list and download addresses; you can also use models you have trained and converted yourself.
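For orientation only, a hedged sketch of installing the service module with PaddleHub and starting it once the `inference` files are in place; the module directory and the module name `clas_system` are assumptions:

```shell
# Hypothetical sketch: install the PaddleClas classification serving module and start it.
hub install deploy/hubserving/clas/
hub serving start -m clas_system
```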
<a name="4"></a>

View File

@@ -23,7 +23,7 @@
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_pretrained.pdparams
```
For the network structure code and pre-trained weights of other models, please download them yourself: [model library](../../../ppcls/arch/backbone/), [pre-trained models](../models/models_intro.md).
For the network structure code and pre-trained weights of other models, please download them yourself: [model library](../../../ppcls/arch/backbone/), [pre-trained models](../algorithm_introduction/ImageNet_models.md).
<a name='3'></a>

View File

@@ -4,11 +4,7 @@
- 2021.10.23 Released the lightweight image recognition system PP-ShiTu, which completes recognition against a gallery of 100,000+ images in 0.2 s on a CPU. [Click here](../quick_start/quick_start_recognition.md) to try it out now.
- 2021.09.17 Released the PP-LCNet series of ultra-lightweight backbone models, with a prediction time of about 5 ms per image on an Intel CPU and a Top-1 accuracy of 80.82% on the ImageNet-1K dataset, surpassing ResNet152. For an introduction to PP-LCNet, see the [paper](https://arxiv.org/pdf/2109.15099.pdf) or the [PP-LCNet model introduction](../models/PP-LCNet.md); the related metrics and pre-trained weights can be downloaded from [here](../algorithm_introduction/ImageNet_models.md).
- 2021.08.11 Updated 7 [FAQs](../faq_series/faq_2021_s2.md).
- 2021.06.29 Added the Swin-Transformer series models, with a best Top-1 accuracy of 87.2% on the ImageNet-1k dataset; training, prediction, and evaluation are supported, as well as whl package deployment. The pre-trained models can be downloaded from [here](../models/models_intro.md).
- 2021.06.22,23,24 The PaddleClas R&D team gave a three-day live course with in-depth technical explanations. Course replay: [https://aistudio.baidu.com/aistudio/course/introduce/24519](https://aistudio.baidu.com/aistudio/course/introduce/24519)
- 2021.06.16 PaddleClas was upgraded to v2.2, integrating components such as metric learning and vector search. Added 4 image recognition applications: product recognition, cartoon character recognition, vehicle recognition, and logo recognition. Added 30 pre-trained models in the LeViT, Twins, TNT, DLA, HarDNet, and RedNet series.
- 2021.08.11 Updated 7 [FAQs](../faq_series/faq_2021_s2.md).
- 2021.06.29 Added the Swin-Transformer series models, with a best Top-1 accuracy of 87.2% on the ImageNet-1k dataset; training, prediction, and evaluation are supported, as well as whl package deployment. The pre-trained models can be downloaded from [here](../models/models_intro.md).
- 2021.06.29 Added the Swin-Transformer series models, with a best Top-1 accuracy of 87.2% on the ImageNet-1k dataset; training, prediction, and evaluation are supported, as well as whl package deployment. The pre-trained models can be downloaded from [here](../algorithm_introduction/ImageNet_models.md).
- 2021.06.22,23,24 The PaddleClas R&D team gave a three-day live course with in-depth technical explanations. Course replay: [https://aistudio.baidu.com/aistudio/course/introduce/24519](https://aistudio.baidu.com/aistudio/course/introduce/24519)
- 2021.06.16 PaddleClas was upgraded to v2.2, integrating components such as metric learning and vector search. Added 4 image recognition applications: product recognition, cartoon character recognition, vehicle recognition, and logo recognition. Added 30 pre-trained models in the LeViT, Twins, TNT, DLA, HarDNet, and RedNet series.
- 2021.04.15