Bump version to v0.23.0 (#809)

pull/827/head v0.23.0
Ma Zerun 2022-05-01 21:58:33 +08:00 committed by GitHub
parent 1d6fbe0efe
commit 7c5ddb1e5b
11 changed files with 57 additions and 38 deletions

View File

@@ -59,6 +59,12 @@ The master branch works with **PyTorch 1.5+**.
## What's new
v0.23.0 was released on 1/5/2022.
Highlights of the new version:
- Support **DenseNet**, **VAN** and **PoolFormer**, and provide pre-trained models.
- Support training on IPU.
- New-style API docs that are easier to browse; welcome to [view them](https://mmclassification.readthedocs.io/en/master/api/models.html).
v0.22.0 was released on 30/3/2022.
Highlights of the new version:
@@ -66,13 +72,6 @@ Highlights of the new version:
- A new `CustomDataset` class to help you **build your own dataset**!
- Support new backbones, **ConvMixer** and **RepMLP**, and a new dataset, the **CUB dataset**.
v0.21.0 was released on 4/3/2022.
Highlights of the new version:
- Support **ResNetV1c** and **Wide-ResNet**, and provide pre-trained models.
- Support **dynamic input shape** for ViT-based algorithms. Now our ViT, DeiT, Swin-Transformer and T2T-ViT support forwarding with any input shape.
- Reproduce the training results of DeiT. Our DeiT-T and DeiT-S have **higher accuracy** compared with the official weights.
Please refer to [changelog.md](docs/en/changelog.md) for more details and the full release history.
## Installation

View File

@@ -57,6 +57,13 @@ MMClassification is an open-source image classification toolbox based on PyTorch, and a part of the [O
## What's new
v0.23.0 was released on 2022/5/1.
Highlights of the new version:
- Support **DenseNet**, **VAN** and **PoolFormer**, and provide pre-trained models.
- Support training on IPU.
- New-style API docs that are easier to browse; welcome to [view them](https://mmclassification.readthedocs.io/en/master/api/models.html).
v0.22.0 was released on 2022/3/30.
Highlights of the new version:
@@ -64,13 +71,6 @@ MMClassification is an open-source image classification toolbox based on PyTorch, and a part of the [O
- A new `CustomDataset` class that will help you easily use **your own dataset**!
- Support new backbones **ConvMixer** and **RepMLP**, and a new dataset, the **CUB dataset**.
v0.21.0 was released on 2022/3/4.
Highlights of the new version:
- Support two ResNet variants, **ResNetV1c** and **Wide-ResNet**, and provide pre-trained models.
- Support **dynamic input shape** for ViT-based models. Now our ViT, DeiT, Swin-Transformer and T2T-ViT accept inputs of any shape.
- Reproduce the training results of DeiT, and our DeiT-T and DeiT-S achieve **higher accuracy** than the official weights.
Please refer to the [changelog](docs/en/changelog.md) for the release history and update details.
## Installation

View File

@@ -18,10 +18,10 @@ While originally designed for natural language processing (NLP) tasks, the self-
| Model | Pretrain | resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:---------:|:------------:|:-----------:|:---------:|:---------:|:---------:|:---------:|:------:|:--------:|
| VAN-T\* | From scratch | 224x224 | 4.11 | 0.88 | 75.41 | 93.02 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-tiny_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-tiny_8xb128_in1k_20220427-8ac0feec.pth) |
| VAN-S\* | From scratch | 224x224 | 13.86 | 2.52 | 81.01 | 95.63 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-small_8xb128_in1k_20220427-bd6a9edd.pth) |
| VAN-B\* | From scratch | 224x224 | 26.58 | 5.03 | 82.80 | 96.21 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-base_8xb128_in1k_20220427-5275471d.pth) |
| VAN-L\* | From scratch | 224x224 | 44.77 | 8.99 | 83.86 | 96.73 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-large_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-large_8xb128_in1k_20220427-56159105.pth) |
| VAN-T\* | From scratch | 224x224 | 4.11 | 0.88 | 75.41 | 93.02 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-tiny_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-tiny_8xb128_in1k_20220501-385941af.pth) |
| VAN-S\* | From scratch | 224x224 | 13.86 | 2.52 | 81.01 | 95.63 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-small_8xb128_in1k_20220501-17bc91aa.pth) |
| VAN-B\* | From scratch | 224x224 | 26.58 | 5.03 | 82.80 | 96.21 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-base_8xb128_in1k_20220501-6a4cc31b.pth) |
| VAN-L\* | From scratch | 224x224 | 44.77 | 8.99 | 83.86 | 96.73 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-large_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-large_8xb128_in1k_20220501-f212ba21.pth) |
*Models with \* are converted from [the official repo](https://github.com/Visual-Attention-Network/VAN-Classification). The config files of these models are only for validation. We don't guarantee the training accuracy of these config files, and you are welcome to contribute your reproduction results.*
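If you want to try one of the checkpoints above, the following minimal sketch shows the usual mmcls inference flow. It assumes mmcls v0.23.0 and a compatible mmcv-full are installed and that the script is run from the repository root; the config path and demo image are the ones shipped in this repo, and the checkpoint URL is the VAN-T link from the table.

```python
# Minimal inference sketch (assumptions noted above).
from mmcls.apis import inference_model, init_model

config = 'configs/van/van-tiny_8xb128_in1k.py'
checkpoint = ('https://download.openmmlab.com/mmclassification/v0/van/'
              'van-tiny_8xb128_in1k_20220501-385941af.pth')

model = init_model(config, checkpoint, device='cpu')  # downloads the weights on first use
result = inference_model(model, 'demo/demo.JPEG')     # any test image works here
print(result['pred_class'], result['pred_score'])
```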

View File

@@ -27,7 +27,7 @@ Models:
Top 1 Accuracy: 75.41
Top 5 Accuracy: 93.02
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-tiny_8xb128_in1k_20220427-8ac0feec.pth
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-tiny_8xb128_in1k_20220501-385941af.pth
Config: configs/van/van-tiny_8xb128_in1k.py
- Name: van-small_8xb128_in1k
Metadata:
@@ -40,7 +40,7 @@ Models:
Top 1 Accuracy: 81.01
Top 5 Accuracy: 95.63
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-small_8xb128_in1k_20220427-bd6a9edd.pth
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-small_8xb128_in1k_20220501-17bc91aa.pth
Config: configs/van/van-small_8xb128_in1k.py
- Name: van-base_8xb128_in1k
Metadata:
@@ -53,7 +53,7 @@ Models:
Top 1 Accuracy: 82.80
Top 5 Accuracy: 96.21
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-base_8xb128_in1k_20220427-5275471d.pth
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-base_8xb128_in1k_20220501-6a4cc31b.pth
Config: configs/van/van-base_8xb128_in1k.py
- Name: van-large_8xb128_in1k
Metadata:
@@ -66,5 +66,5 @@ Models:
Top 1 Accuracy: 83.86
Top 5 Accuracy: 96.73
Task: Image Classification
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-large_8xb128_in1k_20220427-56159105.pth
Weights: https://download.openmmlab.com/mmclassification/v0/van/van-large_8xb128_in1k_20220501-f212ba21.pth
Config: configs/van/van-large_8xb128_in1k.py

View File

@@ -4,7 +4,7 @@ ARG CUDNN="7"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
ARG MMCV="1.4.2"
ARG MMCLS="0.22.1"
ARG MMCLS="0.23.0"
ENV PYTHONUNBUFFERED TRUE

View File

@@ -1,5 +1,20 @@
# Changelog
## v0.23.0(1/5/2022)
### New Features
- Support DenseNet. ([#750](https://github.com/open-mmlab/mmclassification/pull/750))
- Support VAN. ([#739](https://github.com/open-mmlab/mmclassification/pull/739))
### Improvements
- Support training on IPU and add fine-tuning configs of ViT. ([#723](https://github.com/open-mmlab/mmclassification/pull/723))
### Docs Update
- New-style API reference that is easier to use! Welcome to [view it](https://mmclassification.readthedocs.io/en/master/api/models.html). ([#774](https://github.com/open-mmlab/mmclassification/pull/774))
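As a quick, hedged smoke test of the backbones added in this release, the sketch below builds one through the model registry and runs a dummy forward pass. The `arch` keyword follows the repo's DenseNet configs but is repeated here only for illustration; check `configs/densenet/` if it differs in your version.

```python
import torch
from mmcls.models import build_backbone

# Build the newly added DenseNet backbone from a plain config dict.
backbone = build_backbone(dict(type='DenseNet', arch='121'))
backbone.eval()

with torch.no_grad():
    # mmcls backbones return a tuple of feature maps.
    outs = backbone(torch.rand(1, 3, 224, 224))
print([tuple(o.shape) for o in outs])
```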
## v0.22.1(15/4/2022)
### New Features

View File

@@ -10,8 +10,9 @@ The compatible MMClassification and MMCV versions are as below. Please install t
| MMClassification version | MMCV version |
|:------------------------:|:---------------------:|
| dev | mmcv>=1.4.8, <1.6.0 |
| 0.22.1 (master) | mmcv>=1.4.2, <1.6.0 |
| dev | mmcv>=1.5.0, <1.6.0 |
| 0.23.0 (master) | mmcv>=1.4.2, <1.6.0 |
| 0.22.1 | mmcv>=1.4.2, <1.6.0 |
| 0.21.0 | mmcv>=1.4.2, <=1.5.0 |
| 0.20.1 | mmcv>=1.4.2, <=1.5.0 |
| 0.19.0 | mmcv>=1.3.16, <=1.5.0 |
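At runtime, the pairing can be verified with a check of the kind mmcls performs on import; the sketch below hard-codes the bounds from the 0.23.0 (master) row above.

```python
# Sketch of a runtime compatibility check for mmcls 0.23.0 (mmcv>=1.4.2, <1.6.0).
import mmcv
from mmcv.utils import digit_version

minimum, maximum = digit_version('1.4.2'), digit_version('1.6.0')
installed = digit_version(mmcv.__version__)
assert minimum <= installed < maximum, (
    f'mmcv=={mmcv.__version__} is incompatible with mmcls 0.23.0; '
    'please install mmcv>=1.4.2, <1.6.0.')
```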

View File

@@ -137,10 +137,10 @@ The ResNet family models below are trained by standard data augmentations, i.e.,
| DenseNet169\* | 14.15 | 3.42 | 76.08 | 93.11 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/densenet/densenet169_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/densenet/densenet169_4xb256_in1k_20220426-a2889902.pth) |
| DenseNet201\* | 20.01 | 4.37 | 77.32 | 93.64 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/densenet/densenet201_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/densenet/densenet201_4xb256_in1k_20220426-05cae4ef.pth) |
| DenseNet161\* | 28.68 | 7.82 | 77.61 | 93.83 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/densenet/densenet161_4xb256_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/densenet/densenet161_4xb256_in1k_20220426-ee6a80a9.pth) |
| VAN-T\* | 4.11 | 0.88 | 75.41 | 93.02 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-tiny_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-tiny_8xb128_in1k_20220427-8ac0feec.pth) |
| VAN-S\* | 13.86 | 2.52 | 81.01 | 95.63 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-small_8xb128_in1k_20220427-bd6a9edd.pth) |
| VAN-B\* | 26.58 | 5.03 | 82.80 | 96.21 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-base_8xb128_in1k_20220427-5275471d.pth) |
| VAN-L\* | 44.77 | 8.99 | 83.86 | 96.73 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-large_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-large_8xb128_in1k_20220427-56159105.pth) |
| VAN-T\* | 4.11 | 0.88 | 75.41 | 93.02 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-tiny_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-tiny_8xb128_in1k_20220501-385941af.pth) |
| VAN-S\* | 13.86 | 2.52 | 81.01 | 95.63 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-small_8xb128_in1k_20220501-17bc91aa.pth) |
| VAN-B\* | 26.58 | 5.03 | 82.80 | 96.21 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-base_8xb128_in1k_20220501-6a4cc31b.pth) |
| VAN-L\* | 44.77 | 8.99 | 83.86 | 96.73 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/van/van-large_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/van/van-large_8xb128_in1k_20220501-f212ba21.pth) |
*Models with \* are converted from other repos, while the others are trained by ourselves.*

View File

@@ -11,7 +11,8 @@ The compatible versions of MMClassification and MMCV are listed below. Please install the correct version of MMCV
| MMClassification version | MMCV version |
|:------------------------:|:---------------------:|
| dev | mmcv>=1.4.8, <1.6.0 |
| 0.22.1 (master) | mmcv>=1.4.2, <1.6.0 |
| 0.23.0 (master) | mmcv>=1.4.2, <1.6.0 |
| 0.22.1 | mmcv>=1.4.2, <1.6.0 |
| 0.21.0 | mmcv>=1.4.2, <=1.5.0 |
| 0.20.1 | mmcv>=1.4.2, <=1.5.0 |
| 0.19.0 | mmcv>=1.3.16, <=1.5.0 |

View File

@@ -506,11 +506,14 @@ class SwinTransformer(BaseBackbone):
def _prepare_relative_position_bias_table(self, state_dict, prefix, *args,
**kwargs):
all_keys = list(state_dict.keys())
state_dict_model = self.state_dict()
all_keys = list(state_dict_model.keys())
for key in all_keys:
if 'relative_position_bias_table' in key:
relative_position_bias_table_pretrained = state_dict[key]
ckpt_key = prefix + key
if ckpt_key not in state_dict:
continue
relative_position_bias_table_pretrained = state_dict[ckpt_key]
relative_position_bias_table_current = state_dict_model[key]
L1, nH1 = relative_position_bias_table_pretrained.size()
L2, nH2 = relative_position_bias_table_current.size()
@@ -522,11 +525,11 @@ class SwinTransformer(BaseBackbone):
relative_position_bias_table_pretrained, nH1)
from mmcls.utils import get_root_logger
logger = get_root_logger()
logger.info(
f'Resize the relative_position_bias_table from \
{state_dict[key].shape} to {new_rel_pos_bias.shape}')
state_dict[key] = new_rel_pos_bias
logger.info('Resize the relative_position_bias_table from '
f'{state_dict[ckpt_key].shape} to '
f'{new_rel_pos_bias.shape}')
state_dict[ckpt_key] = new_rel_pos_bias
# The index buffer needs to be re-generated.
index_buffer = key.replace('bias_table', 'index')
index_buffer = ckpt_key.replace('bias_table', 'index')
del state_dict[index_buffer]
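For readability, here is a consolidated, hedged sketch of what the fixed hook does: iterate over the model's own `state_dict()` keys, index the checkpoint with `prefix + key` (so wrapped checkpoints such as `backbone.*` also work), skip keys missing from the checkpoint, resize the pretrained bias table to the current shape, and drop the stale `relative_position_index` buffer. It is not the verbatim repo code; in particular, the repo resizes the table with its own helper, while this sketch substitutes plain bicubic interpolation.

```python
import torch.nn.functional as F


def resize_relative_position_bias_tables(model, state_dict, prefix=''):
    """Resize each relative_position_bias_table in `state_dict` to the shape
    expected by `model`, e.g. when the checkpoint used a different window size."""
    for key, current_table in model.state_dict().items():
        if 'relative_position_bias_table' not in key:
            continue
        ckpt_key = prefix + key
        if ckpt_key not in state_dict:  # key absent from the checkpoint
            continue
        pretrained_table = state_dict[ckpt_key]
        L1, nH1 = pretrained_table.size()
        L2, nH2 = current_table.size()
        if nH1 != nH2 or L1 == L2:
            continue
        # Treat the flattened (2*Wh-1)*(2*Ww-1) table as a square 2-D map and
        # resize it (bicubic here, as a stand-in for the repo's own helper).
        src, dst = int(L1 ** 0.5), int(L2 ** 0.5)
        resized = F.interpolate(
            pretrained_table.permute(1, 0).reshape(1, nH1, src, src),
            size=(dst, dst), mode='bicubic', align_corners=False)
        state_dict[ckpt_key] = resized.reshape(nH1, dst * dst).permute(1, 0)
        # The cached relative_position_index no longer matches and will be
        # re-generated by the model, so drop it from the checkpoint if present.
        state_dict.pop(ckpt_key.replace('bias_table', 'index'), None)
```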

View File

@@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved
__version__ = '0.22.1'
__version__ = '0.23.0'
def parse_version_info(version_str):