add docs, config, model for DSNet

pull/2562/head
LittleMoon 2022-12-14 14:35:19 +08:00 committed by cuicheng01
parent 830852e745
commit 28e094e097
10 changed files with 1545 additions and 0 deletions


@@ -0,0 +1,168 @@
# DSNet
---
## Contents
- [1. Introduction](#1)
    - [1.1 Overview](#1.1)
    - [1.2 Details](#1.2)
      - [1.2.1 Intra-scale Propagation Module](#1.2.1)
      - [1.2.2 Inter-scale Alignment Module](#1.2.2)
    - [1.3 Results](#1.3)
- [2. Quick Start](#2)
- [3. Training, Evaluation and Prediction](#3)
- [4. Inference and Deployment](#4)
    - [4.1 Preparing the Inference Model](#4.1)
    - [4.2 Inference with the Python Engine](#4.2)
    - [4.3 Inference with the C++ Engine](#4.3)
    - [4.4 Serving Deployment](#4.4)
    - [4.5 On-device Deployment](#4.5)
    - [4.6 Paddle2ONNX Conversion and Inference](#4.6)
- [5. Citation](#5)
<a name="1"></a>
## 1. Introduction
### 1.1 Overview
Transformers, with their superior global representation ability, have achieved competitive results on vision tasks. However, these Transformer-based models fail to account for the finer-grained local information in the input image. Existing works such as ContNet, CrossViT, CvT, and PVT try to introduce convolution into the Transformer in different ways so as to combine local features with global image information. However, these methods either perform convolution and attention sequentially, or merely replace the linear projections in the attention mechanism with convolutional projections; the convolution, which focuses on local patterns, and the attention mechanism, which focuses on global patterns, may then conflict with each other during training, preventing the strengths of the two from being combined.
This paper proposes a general-purpose Dual-Stream Network (DSNet) that fully exploits the representation ability of both local-pattern and global-pattern features for image classification. DSNet computes fine-grained local features and integrated global features in parallel and fuses the two effectively. Specifically, an intra-scale propagation module is proposed to process the local and global features at two different resolutions within each DS-Block, and an inter-scale alignment module is proposed to perform cross-feature information interaction over the two scales and fuse the local features with the global ones. The proposed DSNet outperforms DeiT-Small by 2.4% in top-1 accuracy on ImageNet-1k and achieves state-of-the-art performance compared with other Vision Transformers and ResNets. On object detection and instance segmentation, models using DSNet-Small as the backbone surpass models using ResNet-50 as the backbone by 6.4% and 5.5% mAP on MSCOCO 2017, respectively, exceeding the previous SOTA and demonstrating its potential as a general-purpose backbone for vision tasks.
Paper: https://arxiv.org/abs/2105.14734v4
<a name="1.2"></a>
### 1.2 Details
The overall architecture of the network is shown in the figure below.
<div align="center">
<img width="700" alt="DSNet" src="https://user-images.githubusercontent.com/71830213/207497958-f9802c03-3eec-4ba5-812f-c6a9158856c1.png">
</div>
Inspired by networks such as ResNet that organize the architecture into stages, DSNet is likewise built with 4 stages, whose feature maps are downsampled by factors of 4, 8, 16, and 32 with respect to the input image. Each stage stacks several DS-Blocks to generate and combine dual-scale feature representations from convolution and attention. The key idea is to represent local features at a relatively high resolution to preserve details, while representing global features at a lower resolution ($\frac{1}{32}$ of the image size)[^1] to preserve global patterns. Specifically, in each DS-Block the input feature map is split into two parts along the channel dimension: one part is used to extract local features ${f}_ {l}$, and the other to summarize global features ${f}_ {g}$. The size of ${f}_ {g}$ is kept unchanged across all DS-Blocks within a stage. The intra-scale propagation module and the inter-scale alignment module of the DS-Block are introduced below.
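The defaults below are taken from the `MixVisionTransformer` implementation added later in this PR (they correspond to DSNet-tiny); the short sketch only illustrates how each DS-Block splits its channels between the two streams and how much extra downsampling the global branch receives per stage, and is not part of the model code.
```python
# Per-stage defaults of DSNet-tiny, copied from the MixVisionTransformer
# definition in this PR (illustrative only).
embed_dim = [64, 128, 320, 512]   # channels of stages 1-4 (local stream at 1/4, 1/8, 1/16, 1/32)
depth = [2, 2, 4, 1]              # number of DS-Blocks per stage
downsamples = [8, 4, 2, 2]        # extra downsampling of the global stream inside each DS-Block

for stage, dim in enumerate(embed_dim):
    dim_conv = int(dim * 0.5)     # channels kept at high resolution (local features f_l)
    dim_sa = dim - dim_conv       # channels routed to the low-resolution global stream f_g
    print(f"stage {stage + 1}: local {dim_conv} ch, global {dim_sa} ch, "
          f"global branch downsampled by {downsamples[stage]}x")
```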
<a name="1.2.1"></a>
#### 1.2.1 Intra-scale Propagation Module
For the high-resolution ${f}_ {l}$, a ${3} \times {3}$ depth-wise convolution is applied to extract local features, producing the feature map ${f}_ {L}$:
$$
{f}_ {L} = \sum^{M, N}_ {m, n} W \left( m, n \right) \odot {f}_ {l} \left( i+m, j+n \right),
$$
where $W ( m, n ), ( m, n ) \in \{ -1, 0, 1 \}$ denotes the convolution kernel and $\odot$ denotes element-wise multiplication. For the low-resolution ${f}_ {g}$, it is first flattened into a sequence of length ${l}_ {g}$, each vector in the sequence being treated as a visual token, and the feature map is then obtained through self-attention:
$$
{f}_ {G} = \text{softmax} \left( \frac{{f}_ {Q} {f}_ {K}^{T}}{\sqrt{d}} \right) {f}_ {V},
$$
where ${f}_ {Q} = {W}_ {Q} {f}_ {g}, {f}_ {K} = {W}_ {K} {f}_ {g}, {f}_ {V} = {W}_ {V} {f}_ {g}$. This dual-stream architecture decouples fine-grained and global features into two separate paths, largely removing the conflict between them during training and making the most of both local and global features.
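As a concrete illustration, the following is a minimal Paddle sketch of the two streams described above (single head, no dropout, illustrative channel sizes); it mirrors the equations rather than the full DS-Block implementation added in this PR.
```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

dim_l = dim_g = 32                              # illustrative channel split of one DS-Block
down = 8                                        # downsample factor of the global stream (stage 1)
dwconv = nn.Conv2D(dim_l, dim_l, 3, padding=1, groups=dim_l)   # 3x3 depth-wise convolution
to_qkv = nn.Linear(dim_g, 3 * dim_g)            # q/k/v projection for the global tokens

f = paddle.randn([1, dim_l + dim_g, 56, 56])    # input feature map of the block
f_l, f_g = f[:, :dim_l], f[:, dim_l:]           # split along the channel dimension

f_L = dwconv(f_l)                               # local stream: depth-wise convolution

g = F.interpolate(f_g, scale_factor=1 / down, mode='bilinear')  # low-resolution global stream
B, C, H, W = g.shape
tokens = g.flatten(2).transpose([0, 2, 1])      # (B, l_g, C) visual tokens, l_g = H * W
q, k, v = paddle.chunk(to_qkv(tokens), 3, axis=-1)
attn = F.softmax(q @ k.transpose([0, 2, 1]) * C ** -0.5, axis=-1)
f_G = attn @ v                                  # global stream: self-attention output
```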
<a name="1.2.2"></a>
#### 1.2.2 Inter-scale Alignment Module
Properly fusing the dual-scale representations is crucial to the success of DSNet, since they capture two different views of an image. To this end, a novel co-attention based inter-scale alignment module is proposed. It captures the mutual correlation between every local-global token pair and propagates information bidirectionally in a learnable and dynamic way, prompting local features to adaptively explore their relationship with the global information and thus become more representative and informative, and vice versa. Specifically, for the ${f}_ {L}, {f}_ {G}$ produced by the intra-scale propagation module, their queries, keys, and values are computed as:
$$
{Q}_ {L} = {f}_ {L} {W}_ {Q}^{l}, {K}_ {L} = {f}_ {L} {W}_ {K}^{l}, {V}_ {L} = {f}_ {L} {W}_ {V}^{l},\\
{Q}_ {G} = {f}_ {G} {W}_ {Q}^{g}, {K}_ {G} = {f}_ {G} {W}_ {K}^{g}, {V}_ {G} = {f}_ {G} {W}_ {V}^{g},
$$
The attention weights from global to local features and from local to global features are then computed as:
$$
{W}_ { G \rightarrow L} = \text{softmax} \left( \frac{ {Q}_ {L} {K}_ {G}^{T} }{ \sqrt{d} } \right), {W}_ { L \rightarrow G } = \text{softmax} \left( \frac{ {Q}_ {G} {K}_ {L}^{T} }{ \sqrt{d} } \right)
$$
which yields the mixed features:
$$
{h}_ {L} = {W}_ { G \rightarrow L } {V}_ {G}, {h}_ {G} = {W}_ { L \rightarrow G } {V}_ {L},
$$
This bidirectional information flow identifies the cross-scale relationships between local and global tokens, through which the dual-scale features become highly aligned and mutually coupled. Afterwards, the low-resolution representation ${h}_ {G}$ is upsampled, concatenated with the high-resolution ${h}_ {L}$, and passed through a ${1} \times {1}$ convolution to achieve channel-wise dual-scale information fusion.
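Continuing the sketch above, a minimal co-attention step under the same assumptions (a single head and equal channel counts in the two streams) might look as follows; the projection layers are illustrative stand-ins, not the exact layers of the implementation in this PR.
```python
w_l = nn.Linear(dim_l, 3 * dim_l)               # q/k/v projection for local tokens
w_g = nn.Linear(dim_g, 3 * dim_g)               # q/k/v projection for global tokens
fuse = nn.Conv2D(dim_l + dim_g, dim_l + dim_g, 1)  # 1x1 conv for channel-wise fusion

local_tokens = f_L.flatten(2).transpose([0, 2, 1])   # (B, H_l*W_l, dim_l)
global_tokens = f_G                                  # (B, l_g, dim_g)

q_l, k_l, v_l = paddle.chunk(w_l(local_tokens), 3, axis=-1)
q_g, k_g, v_g = paddle.chunk(w_g(global_tokens), 3, axis=-1)

w_g2l = F.softmax(q_l @ k_g.transpose([0, 2, 1]) * dim_g ** -0.5, axis=-1)  # W_{G->L}
w_l2g = F.softmax(q_g @ k_l.transpose([0, 2, 1]) * dim_l ** -0.5, axis=-1)  # W_{L->G}
h_l = w_g2l @ v_g                               # mixed local features h_L
h_g = w_l2g @ v_l                               # mixed global features h_G

# Upsample h_G to the local resolution, concatenate with h_L, then fuse with a 1x1 conv.
_, _, H_l, W_l = f_L.shape
h_g_map = F.interpolate(
    h_g.transpose([0, 2, 1]).reshape([B, dim_g, H, W]), size=(H_l, W_l), mode='bilinear')
h_l_map = h_l.transpose([0, 2, 1]).reshape([B, dim_l, H_l, W_l])
fused = fuse(paddle.concat([h_l_map, h_g_map], axis=1))
```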
<a name="1.3"></a>
### 1.3 Results
| Models | Top1 | Top5 | Reference<br>top1 | Reference<br>top5 | FLOPs<br>(G) | Params<br>(M) |
| :---------: | :---: | :---: | :---------------: | :---------------: | :----------: | :-----------: |
| DSNet-tiny | 0.792 | 0.948 | 0.790 | - | 1.8 | 10.5 |
| DSNet-small | 0.814 | 0.954 | 0.823 | - | 3.5 | 23.0 |
| DSNet-base | 0.818 | 0.952 | 0.831 | - | 8.4 | 49.3 |
<a name="2"></a>
## 2. Quick Start
Install paddlepaddle and paddleclas, and you can quickly run prediction on images. For details, please refer to [Quick Start of ResNet50](./ResNet.md#2-模型快速体验).
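For instance, once PaddleClas is installed from source, the backbone added in this PR can be instantiated directly and run on a dummy input (a minimal sketch; set `pretrained=True` only if the pretrained weights listed above are available for download):
```python
import paddle
from ppcls.arch.backbone.model_zoo.dsnet import DSNet_tiny_patch16_224

model = DSNet_tiny_patch16_224(pretrained=False)
model.eval()
x = paddle.randn([1, 3, 224, 224])      # DSNet expects 224 x 224 inputs
with paddle.no_grad():
    logits = model(x)
print(logits.shape)                      # [1, 1000]
```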
<a name="3"></a>
## 3. Training, Evaluation and Prediction
This section covers setting up the training environment, preparing the ImageNet data, and training, evaluating, and predicting with the model on ImageNet. The training configurations of the model are provided under `ppcls/configs/ImageNet/DSNet/`; for how to launch training, please refer to [ResNet50 Training, Evaluation and Prediction](./ResNet.md#3-模型训练评估和预测).
**Note:** The DSNet series models use 8 GPUs by default, so 8 GPUs need to be specified at training time, e.g. `python3 -m paddle.distributed.launch --gpus="0,1,2,3,4,5,6,7" tools/train.py -c xxx.yaml`. If training with 4 GPUs, the default learning rate should be halved, and accuracy may degrade.
<a name="4"></a>
## 4. Inference and Deployment
<a name="4.1"></a>
### 4.1 Preparing the Inference Model
Paddle Inference is the native inference library of PaddlePaddle, providing high-performance inference on servers and in the cloud. Compared with predicting directly from the pretrained model, Paddle Inference can use MKLDNN, CUDNN, and TensorRT to accelerate prediction and achieve better inference performance. For more details, please refer to the [Paddle Inference tutorial](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/infer/inference/inference_cn.html).
For how to obtain the inference model, please refer to [Preparing the ResNet50 Inference Model](./ResNet.md#41-推理模型准备).
<a name="4.2"></a>
### 4.2 Inference with the Python Engine
PaddleClas provides an example of inference with the Python prediction engine. You can refer to [ResNet50 Inference with the Python Engine](./ResNet.md#42-基于-python-预测引擎推理).
<a name="4.3"></a>
### 4.3 Inference with the C++ Engine
PaddleClas provides an example of inference with the C++ prediction engine. You can refer to [Server-side C++ Inference](../../deployment/image_classification/cpp/linux.md) to complete the deployment. If you are on Windows, you can refer to the [Visual Studio 2019 Community CMake Compilation Guide](../../deployment/image_classification/cpp/windows.md) to compile the prediction library and run the model.
<a name="4.4"></a>
### 4.4 Serving Deployment
Paddle Serving provides high-performance, flexible, and easy-to-use industrial-grade online inference services. It supports multiple protocols such as RESTful, gRPC, and bRPC, and offers inference solutions for a variety of heterogeneous hardware and operating systems. For more details, please refer to the [Paddle Serving repository](https://github.com/PaddlePaddle/Serving).
PaddleClas provides an example of serving deployment based on Paddle Serving. You can refer to [Model Serving Deployment](../../deployment/image_classification/paddle_serving.md) to complete the deployment.
<a name="4.5"></a>
### 4.5 On-device Deployment
Paddle Lite is a high-performance, lightweight, flexible, and easily extensible deep learning inference framework, designed to support multiple hardware platforms including mobile, embedded, and server devices. For more details, please refer to the [Paddle Lite repository](https://github.com/PaddlePaddle/Paddle-Lite).
PaddleClas provides an example of on-device deployment based on Paddle Lite. You can refer to [On-device Deployment](../../deployment/image_classification/paddle_lite.md) to complete the deployment.
<a name="4.6"></a>
### 4.6 Paddle2ONNX Conversion and Inference
Paddle2ONNX converts PaddlePaddle models to the ONNX format. With ONNX, Paddle models can be deployed to a variety of inference engines, including TensorRT, OpenVINO, MNN, TNN, NCNN, and other engines or hardware that support the open ONNX format. For more details, please refer to the [Paddle2ONNX repository](https://github.com/PaddlePaddle/Paddle2ONNX).
PaddleClas provides an example of converting an inference model to ONNX with Paddle2ONNX and running prediction. You can refer to [Paddle2ONNX Conversion and Inference](../../deployment/image_classification/paddle2onnx.md) to complete the deployment.
<a name="5"></a>
## 5. Citation
If you use DSNet in your research, please cite:
```
@article{mao2021dual,
title={Dual-stream network for visual recognition},
author={Mao, Mingyuan and Zhang, Renrui and Zheng, Honghui and Ma, Teli and Peng, Yan and Ding, Errui and Zhang, Baochang and Han, Shumin and others},
journal={Advances in Neural Information Processing Systems},
volume={34},
pages={25346--25358},
year={2021}
}
```
[^1]: If the formulas do not render correctly, open this page in Google Chrome and install the MathJax plugin from [this link](https://chrome.google.com/webstore/detail/mathjax-plugin-for-github/ioemnmodlmafdkllaclgeombjnmnbima/related), then reopen and refresh the page.


@@ -51,6 +51,7 @@
- [TNT Series](#TNT)
- [NextViT Series](#NextViT)
- [UniFormer Series](#UniFormer)
- [DSNet Series](#DSNet)
- [4.2 Lightweight Models](#Transformer_lite)
- [MobileViT Series](#MobileViT)
- [5. References](#reference)
@@ -762,6 +763,17 @@ Accuracy and speed metrics of the DeiT (Data-efficient Image Transformers) series models
| UniFormer_base | 0.8376 | 0.9672 | - | - | - | 7.77 | 49.78 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/UniFormer_base_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/UniFormer_base_infer.tar) |
| UniFormer_base_ls | 0.8398 | 0.9675 | - | - | - | 7.77 | 49.78 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/UniFormer_base_ls_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/UniFormer_base_ls_infer.tar) |
<a name="DSNet"></a>
## DSNet Series <sup>[[49](#ref49)]</sup>
The accuracy and speed metrics of the DSNet series models are shown in the table below. For more details, please refer to the [DSNet series model documentation](DSNet.md).
| Model | Top-1 Acc | Top-5 Acc | time(ms)<br>bs=1 | time(ms)<br>bs=4 | time(ms)<br/>bs=8 | FLOPs(G) | Params(M) | Pretrained model download link | Inference model download link |
| ----------- | --------- | --------- | ---------------- | ---------------- | ----------------- | -------- | --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| DSNet_tiny | 0.7919 | 0.9476 | - | - | - | 1.8 | 10.5 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_tiny_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/DSNet_tiny_infer.tar) |
| DSNet_small | 0.8137 | 0.9544 | - | - | - | 3.5 | 23.0 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_small_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/DSNet_small_infer.tar) |
| DSNet_base | 0.8175 | 0.9522 | - | - | - | 8.4 | 49.3 | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_base_pretrained.pdparams) | [Download link](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/DSNet_base_infer.tar) |
<a name="Transformer_lite"></a> <a name="Transformer_lite"></a>
@@ -879,3 +891,5 @@ TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE.
<a name="ref47">[47]</a>Jiashi Li, Xin Xia, Wei Li, Huixia Li, Xing Wang, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan. Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios. <a name="ref47">[47]</a>Jiashi Li, Xin Xia, Wei Li, Huixia Li, Xing Wang, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan. Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios.
<a name="ref48">[48]</a>Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao. UniFormer: Unifying Convolution and Self-attention for Visual Recognition <a name="ref48">[48]</a>Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao. UniFormer: Unifying Convolution and Self-attention for Visual Recognition
<a name="ref49">[49]</a>Mingyuan Mao, Renrui Zhang, Honghui Zheng, Peng Gao, Teli Ma, Yan Peng, Errui Ding, Baochang Zhang, Shumin Han. Dual-stream Network for Visual Recognition.


@@ -35,6 +35,7 @@ from .model_zoo.se_resnet_vd import SE_ResNet18_vd, SE_ResNet34_vd, SE_ResNet50_
from .model_zoo.se_resnext_vd import SE_ResNeXt50_vd_32x4d, SE_ResNeXt50_vd_32x4d, SENet154_vd
from .model_zoo.se_resnext import SE_ResNeXt50_32x4d, SE_ResNeXt101_32x4d, SE_ResNeXt152_64x4d
from .model_zoo.dpn import DPN68, DPN92, DPN98, DPN107, DPN131
from .model_zoo.dsnet import DSNet_tiny_patch16_224, DSNet_small_patch16_224, DSNet_base_patch16_224
from .model_zoo.densenet import DenseNet121, DenseNet161, DenseNet169, DenseNet201, DenseNet264
from .model_zoo.efficientnet import EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3, EfficientNetB4, EfficientNetB5, EfficientNetB6, EfficientNetB7, EfficientNetB0_small
from .model_zoo.resnest import ResNeSt50_fast_1s1x64d, ResNeSt50, ResNeSt101, ResNeSt200, ResNeSt269


@@ -0,0 +1,710 @@
# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# reference: https://arxiv.org/abs/2105.14734v4
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from .vision_transformer import to_2tuple, zeros_, ones_, VisionTransformer, Identity
from functools import partial
from paddle.nn.initializer import TruncatedNormal, Constant, Normal
from ....utils.save_load import load_dygraph_pretrain, load_dygraph_pretrain_from_url
MODEL_URLS = {
"DSNet_tiny_patch16_224":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_tiny_patch16_224_pretrained.pdparams",
"DSNet_small_patch16_224":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_small_patch16_224_pretrained.pdparams",
"DSNet_base_patch16_224":
"https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/DSNet_base_patch16_224_pretrained.pdparams",
}
__all__ = list(MODEL_URLS.keys())
class Mlp(nn.Layer):
def __init__(self,
in_features,
hidden_features=None,
out_features=None,
act_layer=nn.GELU,
drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Conv2D(in_features, hidden_features, 1)
self.act = act_layer()
self.fc2 = nn.Conv2D(hidden_features, out_features, 1)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
class DWConv(nn.Layer):
def __init__(self, dim=768):
super(DWConv, self).__init__()
        self.dwconv = nn.Conv2D(dim, dim, 3, 1, 1, bias_attr=True, groups=dim)
def forward(self, x):
x = self.dwconv(x)
return x
class DWConvMlp(nn.Layer):
def __init__(self,
in_features,
hidden_features=None,
out_features=None,
act_layer=nn.GELU,
drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Conv2D(in_features, hidden_features, 1)
self.dwconv = DWConv(hidden_features)
self.act = act_layer()
self.fc2 = nn.Conv2D(hidden_features, out_features, 1)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.dwconv(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
def drop_path(x, drop_prob=0., training=False):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ...
"""
if drop_prob == 0. or not training:
return x
keep_prob = paddle.to_tensor(1 - drop_prob)
shape = (paddle.shape(x)[0], ) + (1, ) * (x.ndim - 1)
random_tensor = keep_prob + paddle.rand(shape, dtype=x.dtype)
random_tensor = paddle.floor(random_tensor) # binarize
output = x.divide(keep_prob) * random_tensor
return output
class DropPath(nn.Layer):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
"""
def __init__(self, drop_prob=None):
super(DropPath, self).__init__()
self.drop_prob = drop_prob
def forward(self, x):
return drop_path(x, self.drop_prob, self.training)
class Attention(nn.Layer):
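    """Multi-head self-attention over pre-projected tokens.

    Note: the input is expected to already contain the concatenated q, k, v
    along the channel axis (produced by ``channel_up`` in ``MixBlock``), so no
    qkv projection layer is defined here.
    """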
def __init__(self,
dim,
num_heads=8,
qkv_bias=False,
qk_scale=None,
attn_drop=0.,
proj_drop=0.):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim**-0.5
self.attn_drop = nn.Dropout(attn_drop)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x):
B, N, C = x.shape
C = int(C // 3)
qkv = x.reshape(
(B, N, 3, self.num_heads, C // self.num_heads)).transpose(
(2, 0, 3, 1, 4))
q, k, v = qkv[0], qkv[1], qkv[2]
attn = (q.matmul(k.transpose((0, 1, 3, 2)))) * self.scale
attn = F.softmax(attn, axis=-1)
attn = self.attn_drop(attn)
x = (attn.matmul(v)).transpose((0, 2, 1, 3)).reshape((B, N, C))
x = self.proj_drop(x)
return x
class Cross_Attention(nn.Layer):
def __init__(self,
dim,
num_heads=8,
qkv_bias=False,
qk_scale=None,
attn_drop=0.,
proj_drop=0.):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim**-0.5
self.attn_drop = nn.Dropout(attn_drop)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, tokens_q, memory_k, memory_v, shape=None):
assert shape is not None
attn = (tokens_q.matmul(memory_k.transpose((0, 1, 3, 2)))) * self.scale
attn = F.softmax(attn, axis=-1)
attn = self.attn_drop(attn)
x = (attn.matmul(memory_v)).transpose((0, 2, 1, 3)).reshape(
(shape[0], shape[1], shape[2]))
x = self.proj_drop(x)
return x
class MixBlock(nn.Layer):
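    """DS-Block: the basic building block of DSNet.

    The input channels are split into a high-resolution convolutional (local)
    stream and a downsampled self-attention (global) stream (intra-scale
    propagation), which are then fused bidirectionally with co-attention,
    upsampling, and a 1x1 convolution (inter-scale alignment).
    """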
def __init__(self,
dim,
num_heads,
mlp_ratio=4.,
qkv_bias=False,
qk_scale=None,
drop=0.,
attn_drop=0.,
drop_path=0.,
act_layer=nn.GELU,
norm_layer=nn.LayerNorm,
downsample=2,
conv_ffn=False):
super().__init__()
self.pos_embed = nn.Conv2D(dim, dim, 3, padding=1, groups=dim)
self.dim = dim
self.norm1 = nn.BatchNorm2D(dim)
self.conv1 = nn.Conv2D(dim, dim, 1)
self.conv2 = nn.Conv2D(dim, dim, 1)
self.dim_conv = int(dim * 0.5)
self.dim_sa = dim - self.dim_conv
self.norm_conv1 = nn.BatchNorm2D(self.dim_conv)
self.norm_sa1 = nn.LayerNorm(self.dim_sa)
self.conv = nn.Conv2D(
self.dim_conv, self.dim_conv, 3, padding=1, groups=self.dim_conv)
self.channel_up = nn.Linear(self.dim_sa, 3 * self.dim_sa)
self.cross_channel_up_conv = nn.Conv2D(self.dim_conv,
3 * self.dim_conv, 1)
self.cross_channel_up_sa = nn.Linear(self.dim_sa, 3 * self.dim_sa)
self.fuse_channel_conv = nn.Linear(self.dim_conv, self.dim_conv)
self.fuse_channel_sa = nn.Linear(self.dim_sa, self.dim_sa)
self.num_heads = num_heads
self.attn = Attention(
self.dim_sa,
num_heads=self.num_heads,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
attn_drop=0.1,
proj_drop=drop)
self.cross_attn = Cross_Attention(
self.dim_sa,
num_heads=num_heads,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
attn_drop=0.1,
proj_drop=drop)
self.norm_conv2 = nn.BatchNorm2D(self.dim_conv)
self.norm_sa2 = nn.LayerNorm(self.dim_sa)
self.drop_path = DropPath(drop_path) if drop_path > 0. else Identity()
self.norm2 = nn.BatchNorm2D(dim)
self.downsample = downsample
mlp_hidden_dim = int(dim * mlp_ratio)
if conv_ffn:
self.mlp = DWConvMlp(
in_features=dim,
hidden_features=mlp_hidden_dim,
act_layer=act_layer,
drop=drop)
else:
self.mlp = Mlp(in_features=dim,
hidden_features=mlp_hidden_dim,
act_layer=act_layer,
drop=drop)
def forward(self, x):
x = x + self.pos_embed(x)
_, _, H, W = x.shape
residual = x
x = self.norm1(x)
x = self.conv1(x)
qkv = x[:, :self.dim_sa, :]
conv = x[:, self.dim_sa:, :, :]
residual_conv = conv
conv = residual_conv + self.conv(self.norm_conv1(conv))
sa = F.interpolate(
qkv,
size=(H // self.downsample, W // self.downsample),
mode='bilinear')
B, _, H_down, W_down = sa.shape
sa = sa.flatten(2).transpose([0, 2, 1])
residual_sa = sa
sa = self.norm_sa1(sa)
sa = self.channel_up(sa)
sa = residual_sa + self.attn(sa)
# cross attention
residual_conv_co = conv
residual_sa_co = sa
conv_qkv = self.cross_channel_up_conv(self.norm_conv2(conv))
conv_qkv = conv_qkv.flatten(2).transpose([0, 2, 1])
sa_qkv = self.cross_channel_up_sa(self.norm_sa2(sa))
B_conv, N_conv, C_conv = conv_qkv.shape
C_conv = int(C_conv // 3)
conv_qkv = conv_qkv.reshape((B_conv, N_conv, 3, self.num_heads,
C_conv // self.num_heads)).transpose(
(2, 0, 3, 1, 4))
conv_q, conv_k, conv_v = conv_qkv[0], conv_qkv[1], conv_qkv[2]
B_sa, N_sa, C_sa = sa_qkv.shape
C_sa = int(C_sa // 3)
sa_qkv = sa_qkv.reshape(
(B_sa, N_sa, 3, self.num_heads, C_sa // self.num_heads)).transpose(
(2, 0, 3, 1, 4))
sa_q, sa_k, sa_v = sa_qkv[0], sa_qkv[1], sa_qkv[2]
# sa -> conv
conv = self.cross_attn(
conv_q, sa_k, sa_v, shape=(B_conv, N_conv, C_conv))
conv = self.fuse_channel_conv(conv)
conv = conv.reshape((B, H, W, C_conv)).transpose((0, 3, 1, 2))
conv = residual_conv_co + conv
# conv -> sa
sa = self.cross_attn(sa_q, conv_k, conv_v, shape=(B_sa, N_sa, C_sa))
sa = residual_sa_co + self.fuse_channel_sa(sa)
sa = sa.reshape((B, H_down, W_down, C_sa)).transpose((0, 3, 1, 2))
sa = F.interpolate(sa, size=(H, W), mode='bilinear')
x = paddle.concat([conv, sa], axis=1)
x = residual + self.drop_path(self.conv2(x))
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
class Block(nn.Layer):
def __init__(self,
dim,
num_heads,
mlp_ratio=4.,
qkv_bias=False,
qk_scale=None,
drop=0.,
attn_drop=0.,
drop_path=0.,
act_layer=nn.GELU,
norm_layer=nn.LayerNorm):
super().__init__()
self.norm1 = norm_layer(dim)
self.attn = Attention(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
attn_drop=attn_drop,
proj_drop=drop)
self.drop_path = DropPath(drop_path) if drop_path > 0. else Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim,
hidden_features=mlp_hidden_dim,
act_layer=act_layer,
drop=drop)
def forward(self, x):
x = x + self.drop_path(self.attn(self.norm1(x)))
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
class PatchEmbed(nn.Layer):
""" Image to Patch Embedding
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
num_patches = (img_size[1] // patch_size[1]) * (img_size[0] //
patch_size[0])
self.img_size = img_size
self.patch_size = patch_size
self.num_patches = num_patches
self.proj = nn.Conv2D(
in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
def forward(self, x):
B, C, H, W = x.shape
assert H == self.img_size[0] and W == self.img_size[1], \
f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
x = self.proj(x)
return x
class OverlapPatchEmbed(nn.Layer):
""" Image to Overlapping Patch Embedding
"""
def __init__(self,
img_size=224,
patch_size=7,
stride=4,
in_chans=3,
embed_dim=768):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
self.img_size = img_size
self.patch_size = patch_size
self.H, self.W = img_size[0] // patch_size[0], img_size[
1] // patch_size[1]
self.num_patches = self.H * self.W
self.proj = nn.Conv2D(
in_chans,
embed_dim,
kernel_size=patch_size,
stride=stride,
padding=(patch_size[0] // 2, patch_size[1] // 2))
def forward(self, x):
B, C, H, W = x.shape
assert H == self.img_size[0] and W == self.img_size[1], \
f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
x = self.proj(x)
return x
class MixVisionTransformer(nn.Layer):
""" Mixed Vision Transformer for DSNet
A PaddlePaddle impl of : `Dual-stream Network for Visual Recognition` - https://arxiv.org/abs/2105.14734v4
"""
def __init__(self,
img_size=224,
patch_size=16,
in_chans=3,
class_num=1000,
embed_dim=[64, 128, 320, 512],
depth=[2, 2, 4, 1],
num_heads=[1, 2, 5, 8],
mlp_ratio=4.,
qkv_bias=True,
qk_scale=None,
representation_size=None,
drop_rate=0.,
attn_drop_rate=0.,
drop_path_rate=0.1,
norm_layer=None,
overlap_embed=False,
conv_ffn=False):
"""
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
class_num (int): number of classes for classification head
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
qk_scale (float): override default qk scale of head_dim ** -0.5 if set
representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
drop_rate (float): dropout rate
attn_drop_rate (float): attention dropout rate
drop_path_rate (float): stochastic depth rate
norm_layer: (nn.Layer): normalization layer
overlap_embed (bool): enable overlapped patch embedding if True
conv_ffn (bool): enable depthwise convolution for mlp if True
"""
super().__init__()
self.class_num = class_num
self.num_features = self.embed_dim = embed_dim
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
downsamples = [8, 4, 2, 2]
if overlap_embed:
self.patch_embed1 = OverlapPatchEmbed(
img_size=img_size,
patch_size=7,
stride=4,
in_chans=in_chans,
embed_dim=embed_dim[0])
self.patch_embed2 = OverlapPatchEmbed(
img_size=img_size // 4,
patch_size=3,
stride=2,
in_chans=embed_dim[0],
embed_dim=embed_dim[1])
self.patch_embed3 = OverlapPatchEmbed(
img_size=img_size // 8,
patch_size=3,
stride=2,
in_chans=embed_dim[1],
embed_dim=embed_dim[2])
self.patch_embed4 = OverlapPatchEmbed(
img_size=img_size // 16,
patch_size=3,
stride=2,
in_chans=embed_dim[2],
embed_dim=embed_dim[3])
else:
self.patch_embed1 = PatchEmbed(
img_size=img_size,
patch_size=4,
in_chans=in_chans,
embed_dim=embed_dim[0])
self.patch_embed2 = PatchEmbed(
img_size=img_size // 4,
patch_size=2,
in_chans=embed_dim[0],
embed_dim=embed_dim[1])
self.patch_embed3 = PatchEmbed(
img_size=img_size // 8,
patch_size=2,
in_chans=embed_dim[1],
embed_dim=embed_dim[2])
self.patch_embed4 = PatchEmbed(
img_size=img_size // 16,
patch_size=2,
in_chans=embed_dim[2],
embed_dim=embed_dim[3])
self.pos_drop = nn.Dropout(p=drop_rate)
self.mixture = False
dpr = [
x.item() for x in paddle.linspace(0, drop_path_rate, sum(depth))
]
self.blocks1 = nn.LayerList([
MixBlock(
dim=embed_dim[0],
num_heads=num_heads[0],
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
downsample=downsamples[0],
conv_ffn=conv_ffn) for i in range(depth[0])
])
self.blocks2 = nn.LayerList([
MixBlock(
dim=embed_dim[1],
num_heads=num_heads[1],
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
downsample=downsamples[1],
conv_ffn=conv_ffn) for i in range(depth[1])
])
self.blocks3 = nn.LayerList([
MixBlock(
dim=embed_dim[2],
num_heads=num_heads[2],
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
downsample=downsamples[2],
conv_ffn=conv_ffn) for i in range(depth[2])
])
if self.mixture:
self.blocks4 = nn.LayerList([
Block(
dim=embed_dim[3],
num_heads=16,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
                    norm_layer=norm_layer) for i in range(depth[3])
])
self.norm = norm_layer(embed_dim[-1])
else:
self.blocks4 = nn.LayerList([
MixBlock(
dim=embed_dim[3],
num_heads=num_heads[3],
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
downsample=downsamples[3],
conv_ffn=conv_ffn) for i in range(depth[3])
])
self.norm = nn.BatchNorm2D(embed_dim[-1])
# Representation layer
if representation_size:
self.num_features = representation_size
            self.pre_logits = nn.Sequential(
                nn.Linear(embed_dim[-1], representation_size), nn.Tanh())
else:
self.pre_logits = Identity()
# Classifier head
self.head = nn.Linear(embed_dim[-1],
class_num) if class_num > 0 else Identity()
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
            TruncatedNormal(std=.02)(m.weight)
if isinstance(m, nn.Linear) and m.bias is not None:
zeros_(m.bias)
elif isinstance(m, nn.LayerNorm):
zeros_(m.bias)
ones_(m.weight)
def get_classifier(self):
return self.head
def reset_classifier(self, class_num, global_pool=''):
self.class_num = class_num
        self.head = nn.Linear(self.embed_dim[-1],
                              class_num) if class_num > 0 else Identity()
def forward_features(self, x):
B = x.shape[0]
x = self.patch_embed1(x)
x = self.pos_drop(x)
for blk in self.blocks1:
x = blk(x)
x = self.patch_embed2(x)
for blk in self.blocks2:
x = blk(x)
x = self.patch_embed3(x)
for blk in self.blocks3:
x = blk(x)
x = self.patch_embed4(x)
if self.mixture:
x = x.flatten(2).transpose([0, 2, 1])
for blk in self.blocks4:
x = blk(x)
x = self.norm(x)
x = self.pre_logits(x)
return x
def forward(self, x):
x = self.forward_features(x)
if self.mixture:
x = x.mean(1)
else:
x = x.flatten(2).mean(-1)
x = self.head(x)
return x
def _load_pretrained(pretrained, model, model_url, use_ssld=False):
if pretrained is False:
pass
elif pretrained is True:
load_dygraph_pretrain_from_url(model, model_url, use_ssld=use_ssld)
elif isinstance(pretrained, str):
load_dygraph_pretrain(model, pretrained)
else:
raise RuntimeError(
"pretrained type is not available. Please use `string` or `boolean` type."
)
def DSNet_tiny_patch16_224(pretrained=False, use_ssld=False, **kwargs):
model = MixVisionTransformer(
patch_size=16,
depth=[2, 2, 4, 1],
mlp_ratio=4,
qkv_bias=True,
norm_layer=partial(
nn.LayerNorm, eps=1e-6),
**kwargs)
_load_pretrained(
pretrained,
model,
MODEL_URLS["DSNet_tiny_patch16_224"],
use_ssld=use_ssld)
return model
def DSNet_small_patch16_224(pretrained=False, use_ssld=False, **kwargs):
model = MixVisionTransformer(
patch_size=16,
depth=[3, 4, 8, 3],
mlp_ratio=4,
qkv_bias=True,
norm_layer=partial(
nn.LayerNorm, eps=1e-6),
**kwargs)
_load_pretrained(
pretrained,
model,
MODEL_URLS["DSNet_small_patch16_224"],
use_ssld=use_ssld)
return model
def DSNet_base_patch16_224(pretrained=False, use_ssld=False, **kwargs):
model = MixVisionTransformer(
patch_size=16,
depth=[3, 4, 28, 3],
mlp_ratio=4,
qkv_bias=True,
norm_layer=partial(
nn.LayerNorm, eps=1e-6),
**kwargs)
_load_pretrained(
pretrained,
model,
MODEL_URLS["DSNet_base_patch16_224"],
use_ssld=use_ssld)
return model


@@ -0,0 +1,157 @@
# global configs
Global:
checkpoints: null
pretrained_model: null
output_dir: ./output/
device: gpu
save_interval: 1
eval_during_train: True
eval_interval: 1
epochs: 300
print_batch_step: 10
use_visualdl: False
# used for static mode and model export
image_shape: [3, 224, 224]
save_inference_dir: ./inference
# training model under @to_static
to_static: False
# model architecture
Arch:
name: DSNet_base_patch16_224
class_num: 1000
# loss function config for training/eval process
Loss:
Train:
- CELoss:
weight: 1.0
epsilon: 0.1
Eval:
- CELoss:
weight: 1.0
Optimizer:
name: AdamW
beta1: 0.9
beta2: 0.999
epsilon: 1e-8
weight_decay: 0.05
no_weight_decay_name: norm cls_token pos_embed dist_token
one_dim_param_no_weight_decay: True
lr:
name: Cosine
learning_rate: 1e-3
eta_min: 1e-5
warmup_epoch: 5
warmup_start_lr: 1e-6
# data loader for train and eval
DataLoader:
Train:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/train_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
interpolation: bicubic
backend: pil
- RandFlipImage:
flip_code: 1
- TimmAutoAugment:
config_str: rand-m9-mstd0.5-inc1
interpolation: bicubic
img_size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- RandomErasing:
EPSILON: 0.25
sl: 0.02
sh: 1.0/3.0
r1: 0.3
attempt: 10
use_log_aspect: True
mode: pixel
batch_transform_ops:
- OpSampler:
MixupOperator:
alpha: 0.8
prob: 0.5
CutmixOperator:
alpha: 1.0
prob: 0.5
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: True
loader:
num_workers: 4
use_shared_memory: True
Eval:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/val_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: False
loader:
num_workers: 4
use_shared_memory: True
Infer:
infer_imgs: docs/images/inference_deployment/whl_demo.jpg
batch_size: 10
transforms:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- ToCHWImage:
PostProcess:
name: Topk
topk: 5
class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
Metric:
Eval:
- TopkAcc:
topk: [1, 5]


@@ -0,0 +1,158 @@
# global configs
Global:
checkpoints: null
pretrained_model: null
output_dir: ./output/
device: gpu
save_interval: 1
eval_during_train: True
eval_interval: 1
epochs: 300
print_batch_step: 10
use_visualdl: False
# used for static mode and model export
image_shape: [3, 224, 224]
save_inference_dir: ./inference
# training model under @to_static
to_static: False
# model architecture
Arch:
name: DSNet_small_patch16_224
class_num: 1000
# loss function config for training/eval process
Loss:
Train:
- CELoss:
weight: 1.0
epsilon: 0.1
Eval:
- CELoss:
weight: 1.0
Optimizer:
name: AdamW
beta1: 0.9
beta2: 0.999
epsilon: 1e-8
weight_decay: 0.05
no_weight_decay_name: norm cls_token pos_embed dist_token
one_dim_param_no_weight_decay: True
lr:
name: Cosine
learning_rate: 1e-3
eta_min: 1e-5
warmup_epoch: 5
warmup_start_lr: 1e-6
# data loader for train and eval
DataLoader:
Train:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/train_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
interpolation: bicubic
backend: pil
- RandFlipImage:
flip_code: 1
- TimmAutoAugment:
config_str: rand-m9-mstd0.5-inc1
interpolation: bicubic
img_size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- RandomErasing:
EPSILON: 0.25
sl: 0.02
sh: 1.0/3.0
r1: 0.3
attempt: 10
use_log_aspect: True
mode: pixel
batch_transform_ops:
- OpSampler:
MixupOperator:
alpha: 0.8
prob: 0.5
CutmixOperator:
alpha: 1.0
prob: 0.5
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: True
loader:
num_workers: 4
use_shared_memory: True
Eval:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/val_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: False
loader:
num_workers: 4
use_shared_memory: True
Infer:
infer_imgs: docs/images/inference_deployment/whl_demo.jpg
batch_size: 10
transforms:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- ToCHWImage:
PostProcess:
name: Topk
topk: 5
class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
Metric:
Eval:
- TopkAcc:
topk: [1, 5]


@@ -0,0 +1,157 @@
# global configs
Global:
checkpoints: null
pretrained_model: null
output_dir: ./output/
device: gpu
save_interval: 1
eval_during_train: True
eval_interval: 1
epochs: 300
print_batch_step: 10
use_visualdl: False
# used for static mode and model export
image_shape: [3, 224, 224]
save_inference_dir: ./inference
# training model under @to_static
to_static: False
# model architecture
Arch:
name: DSNet_tiny_patch16_224
class_num: 1000
# loss function config for training/eval process
Loss:
Train:
- CELoss:
weight: 1.0
epsilon: 0.1
Eval:
- CELoss:
weight: 1.0
Optimizer:
name: AdamW
beta1: 0.9
beta2: 0.999
epsilon: 1e-8
weight_decay: 0.05
no_weight_decay_name: norm cls_token pos_embed dist_token
one_dim_param_no_weight_decay: True
lr:
name: Cosine
learning_rate: 1e-3
eta_min: 1e-5
warmup_epoch: 5
warmup_start_lr: 1e-6
# data loader for train and eval
DataLoader:
Train:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/train_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- RandCropImage:
size: 224
interpolation: bicubic
backend: pil
- RandFlipImage:
flip_code: 1
- TimmAutoAugment:
config_str: rand-m9-mstd0.5-inc1
interpolation: bicubic
img_size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- RandomErasing:
EPSILON: 0.25
sl: 0.02
sh: 1.0/3.0
r1: 0.3
attempt: 10
use_log_aspect: True
mode: pixel
batch_transform_ops:
- OpSampler:
MixupOperator:
alpha: 0.8
prob: 0.5
CutmixOperator:
alpha: 1.0
prob: 0.5
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: True
loader:
num_workers: 8
use_shared_memory: True
Eval:
dataset:
name: ImageNetDataset
image_root: ./dataset/ILSVRC2012/
cls_label_path: ./dataset/ILSVRC2012/val_list.txt
transform_ops:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
sampler:
name: DistributedBatchSampler
batch_size: 128
drop_last: False
shuffle: False
loader:
num_workers: 4
use_shared_memory: True
Infer:
infer_imgs: docs/images/inference_deployment/whl_demo.jpg
batch_size: 10
transforms:
- DecodeImage:
to_rgb: True
channel_first: False
- ResizeImage:
resize_short: 248
interpolation: bicubic
backend: pil
- CropImage:
size: 224
- NormalizeImage:
scale: 1.0/255.0
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: ''
- ToCHWImage:
PostProcess:
name: Topk
topk: 5
class_id_map_file: ppcls/utils/imagenet1k_label_list.txt
Metric:
Eval:
- TopkAcc:
topk: [1, 5]


@@ -0,0 +1,60 @@
===========================train_params===========================
model_name:DSNet_base_patch16_224
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
-o Global.auto_cast:null
-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120
-o Global.output_dir:./output/
-o DataLoader.Train.sampler.batch_size:8
-o Global.pretrained_model:null
train_model_name:latest
train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
norm_train:tools/train.py -c ppcls/configs/ImageNet/DSNet/DSNet_base_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 -o Arch.pretrained=False
pact_train:null
fpgm_train:null
distill_train:null
to_static_train:-o Global.to_static=True
null:null
##
===========================eval_params===========================
eval:tools/eval.py -c ppcls/configs/ImageNet/DSNet/DSNet_base_patch16_224.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
norm_export:tools/export_model.py -c ppcls/configs/ImageNet/DSNet/DSNet_base_patch16_224.yaml
quant_export:null
fpgm_export:null
distill_export:null
kl_quant:null
export2:null
pretrained_model_url:null
infer_model:../inference/
infer_export:True
infer_quant:False
inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=248
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:False
-o Global.cpu_num_threads:1
-o Global.batch_size:1
-o Global.use_tensorrt:False
-o Global.use_fp16:False
-o Global.inference_model_dir:../inference
-o Global.infer_imgs:../dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
-o Global.save_log_path:null
-o Global.benchmark:False
null:null
null:null
===========================train_benchmark_params==========================
batch_size:128
fp_items:fp32
epoch:1
--profiler_options:batch_range=[10,20];state=GPU;tracer_option=Default;profile_path=model.profile
flags:FLAGS_eager_delete_tensor_gb=0.0;FLAGS_fraction_of_gpu_memory_to_use=0.98;FLAGS_conv_workspace_size_limit=4096
===========================infer_benchmark_params==========================
random_infer_input:[{float32,[3,224,224]}]


@@ -0,0 +1,60 @@
===========================train_params===========================
model_name:DSNet_small_patch16_224
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
-o Global.auto_cast:null
-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120
-o Global.output_dir:./output/
-o DataLoader.Train.sampler.batch_size:8
-o Global.pretrained_model:null
train_model_name:latest
train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
norm_train:tools/train.py -c ppcls/configs/ImageNet/DSNet/DSNet_small_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 -o Arch.pretrained=False
pact_train:null
fpgm_train:null
distill_train:null
to_static_train:-o Global.to_static=True
null:null
##
===========================eval_params===========================
eval:tools/eval.py -c ppcls/configs/ImageNet/DSNet/DSNet_small_patch16_224.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
norm_export:tools/export_model.py -c ppcls/configs/ImageNet/DSNet/DSNet_small_patch16_224.yaml
quant_export:null
fpgm_export:null
distill_export:null
kl_quant:null
export2:null
pretrained_model_url:null
infer_model:../inference/
infer_export:True
infer_quant:False
inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=248
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:False
-o Global.cpu_num_threads:1
-o Global.batch_size:1
-o Global.use_tensorrt:False
-o Global.use_fp16:False
-o Global.inference_model_dir:../inference
-o Global.infer_imgs:../dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
-o Global.save_log_path:null
-o Global.benchmark:False
null:null
null:null
===========================train_benchmark_params==========================
batch_size:128
fp_items:fp32
epoch:1
--profiler_options:batch_range=[10,20];state=GPU;tracer_option=Default;profile_path=model.profile
flags:FLAGS_eager_delete_tensor_gb=0.0;FLAGS_fraction_of_gpu_memory_to_use=0.98;FLAGS_conv_workspace_size_limit=4096
===========================infer_benchmark_params==========================
random_infer_input:[{float32,[3,224,224]}]


@@ -0,0 +1,60 @@
===========================train_params===========================
model_name:DSNet_tiny_patch16_224
python:python3.7
gpu_list:0|0,1
-o Global.device:gpu
-o Global.auto_cast:null
-o Global.epochs:lite_train_lite_infer=2|whole_train_whole_infer=120
-o Global.output_dir:./output/
-o DataLoader.Train.sampler.batch_size:8
-o Global.pretrained_model:null
train_model_name:latest
train_infer_img_dir:./dataset/ILSVRC2012/val
null:null
##
trainer:norm_train
norm_train:tools/train.py -c ppcls/configs/ImageNet/DSNet/DSNet_tiny_patch16_224.yaml -o Global.seed=1234 -o DataLoader.Train.sampler.shuffle=False -o DataLoader.Train.loader.num_workers=0 -o DataLoader.Train.loader.use_shared_memory=False -o Global.print_batch_step=1 -o Global.eval_during_train=False -o Global.save_interval=2 -o Arch.pretrained=False
pact_train:null
fpgm_train:null
distill_train:null
to_static_train:-o Global.to_static=True
null:null
##
===========================eval_params===========================
eval:tools/eval.py -c ppcls/configs/ImageNet/DSNet/DSNet_tiny_patch16_224.yaml
null:null
##
===========================infer_params==========================
-o Global.save_inference_dir:./inference
-o Global.pretrained_model:
norm_export:tools/export_model.py -c ppcls/configs/ImageNet/DSNet/DSNet_tiny_patch16_224.yaml
quant_export:null
fpgm_export:null
distill_export:null
kl_quant:null
export2:null
pretrained_model_url:null
infer_model:../inference/
infer_export:True
infer_quant:False
inference:python/predict_cls.py -c configs/inference_cls.yaml -o PreProcess.transform_ops.0.ResizeImage.resize_short=248
-o Global.use_gpu:True|False
-o Global.enable_mkldnn:False
-o Global.cpu_num_threads:1
-o Global.batch_size:1
-o Global.use_tensorrt:False
-o Global.use_fp16:False
-o Global.inference_model_dir:../inference
-o Global.infer_imgs:../dataset/ILSVRC2012/val/ILSVRC2012_val_00000001.JPEG
-o Global.save_log_path:null
-o Global.benchmark:False
null:null
null:null
===========================train_benchmark_params==========================
batch_size:128
fp_items:fp32
epoch:1
--profiler_options:batch_range=[10,20];state=GPU;tracer_option=Default;profile_path=model.profile
flags:FLAGS_eager_delete_tensor_gb=0.0;FLAGS_fraction_of_gpu_memory_to_use=0.98;FLAGS_conv_workspace_size_limit=4096
===========================infer_benchmark_params==========================
random_infer_input:[{float32,[3,224,224]}]