Bump version to v1.0.0rc3. (#1211)

* Bump version to v1.0.0rc3
* Update pre-commit hook

Branch: pull/1217/head, tag v1.0.0rc3
parent b0007812d6
commit 13ff394985
@@ -15,12 +15,12 @@ assignees: ''

### 描述你遇到的问题

\[填写这里\]
[填写这里]

### 相关信息

1. `pip list | grep "mmcv\|mmcls\|^torch"` 命令的输出

\[填写这里\]
[填写这里]

2. 如果你修改了,或者使用了新的配置文件,请在这里写明

```python

@@ -28,6 +28,6 @@ assignees: ''
```

3. 如果你是在训练过程中遇到的问题,请填写完整的训练日志和报错信息

\[填写这里\]
[填写这里]

4. 如果你对 `mmcls` 文件夹下的代码做了其他相关的修改,请在这里写明

\[填写这里\]
[填写这里]
@@ -10,7 +10,7 @@ assignees: ''

### 描述这个功能

\[填写这里\]
[填写这里]

### 动机

@@ -18,17 +18,17 @@ assignees: ''
例 1. 现在进行 xxx 的时候不方便
例 2. 最近的论文中提出了有一个很有帮助的 xx

\[填写这里\]
[填写这里]

### 相关资源

是否有相关的官方实现或者第三方实现?这些会很有参考意义。

\[填写这里\]
[填写这里]

### 其他相关信息

其他和这个功能相关的信息或者截图,请放在这里。
另外如果你愿意参与实现这个功能并提交 PR,请在这里说明,我们将非常欢迎。

\[填写这里\]
[填写这里]
@@ -12,7 +12,7 @@ assignees: ''

简单地描述一下遇到了什么 bug

\[填写这里\]
[填写这里]

### 复现流程

@@ -25,7 +25,7 @@ assignees: ''
### 相关信息

1. `pip list | grep "mmcv\|mmcls\|^torch"` 命令的输出

\[填写这里\]
[填写这里]

2. 如果你修改了,或者使用了新的配置文件,请在这里写明

```python

@@ -33,12 +33,12 @@ assignees: ''
```

3. 如果你是在训练过程中遇到的问题,请填写完整的训练日志和报错信息

\[填写这里\]
[填写这里]

4. 如果你对 `mmcls` 文件夹下的代码做了其他相关的修改,请在这里写明

\[填写这里\]
[填写这里]

### 附加内容

任何其他有关该 bug 的信息、截图等

\[填写这里\]
[填写这里]
@@ -10,7 +10,7 @@ assignees: ''

A clear and concise description of what the bug is.

\[here\]
[here]

### To Reproduce

@@ -23,7 +23,7 @@ The command you executed.
### Post related information

1. The output of `pip list | grep "mmcv\|mmcls\|^torch"`

\[here\]
[here]

2. Your config file if you modified it or created a new one.

```python

@@ -31,12 +31,12 @@ The command you executed.
```

3. Your train log file if you meet the problem during training.

\[here\]
[here]

4. Other code you modified in the `mmcls` folder.

\[here\]
[here]

### Additional context

Add any other context about the problem here.

\[here\]
[here]
@@ -8,25 +8,25 @@ assignees: ''

### Describe the feature

\[here\]
[here]

### Motivation

A clear and concise description of the motivation of the feature.
Ex1. It is inconvenient when \[....\].
Ex2. There is a recent paper \[....\], which is very helpful for \[....\].
Ex1. It is inconvenient when [....].
Ex2. There is a recent paper [....], which is very helpful for [....].

\[here\]
[here]

### Related resources

If there is an official code release or third-party implementation, please also provide the information here, which would be very helpful.

\[here\]
[here]

### Additional context

Add any other context or screenshots about the feature request here.
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.

\[here\]
[here]
@@ -13,12 +13,12 @@ assignees: ''

### Describe the question you meet

\[here\]
[here]

### Post related information

1. The output of `pip list | grep "mmcv\|mmcls\|^torch"`

\[here\]
[here]

2. Your config file if you modified it or created a new one.

```python

@@ -26,6 +26,6 @@ assignees: ''
```

3. Your train log file if you meet the problem during training.

\[here\]
[here]

4. Other code you modified in the `mmcls` folder.

\[here\]
[here]
@@ -29,7 +29,7 @@ repos:
    rev: 0.7.9
    hooks:
      - id: mdformat
        args: ["--number", "--table-width", "200", '--disable-escape', 'backslash']
        args: ["--number", "--table-width", "200", '--disable-escape', 'backslash', '--disable-escape', 'link-enclosure']
        additional_dependencies:
          - "mdformat-openmmlab>=0.0.4"
          - mdformat_frontmatter
README.md

@@ -58,18 +58,18 @@ The `1.x` branch works with **PyTorch 1.6+**.

## What's new

v1.0.0rc3 was released in 21/11/2022.

- Add **Switch Recipe** Hook, Now we can modify training pipeline, mixup and loss settings during training, see [#1101](https://github.com/open-mmlab/mmclassification/pull/1101).
- Add **TIMM and HuggingFace** wrappers. Now you can train/use models in TIMM/HuggingFace directly, see [#1102](https://github.com/open-mmlab/mmclassification/pull/1102).
- Support **retrieval tasks**, see [#1055](https://github.com/open-mmlab/mmclassification/pull/1055).
- Reproduce **mobileone** training accuracy. See [#1191](https://github.com/open-mmlab/mmclassification/pull/1191)

v1.0.0rc2 was released in 12/10/2022.

- Support Deit-3 backbone.
- Fix MMEngine version requirements.

v1.0.0rc1 was released in 30/9/2022.

- Support MViT, EdgeNeXt, Swin-Transformer V2, EfficientFormer and MobileOne.
- Support BEiT type transformer layer.

v1.0.0rc0 was released in 31/8/2022.

This release introduced a brand new and flexible training & test engine, but it's still in progress. Welcome
to try according to [the documentation](https://mmclassification.readthedocs.io/en/1.x/).
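The TIMM and HuggingFace wrappers announced above can be used straight from a config file. Below is a minimal sketch of what such a config might look like; the field names (`TimmClassifier`, `model_name`, `pretrained`, `loss`) are assumptions based on the wrapper introduced in #1102 and are not copied from this diff.

```python
# Hypothetical config snippet: build a classifier around a timm backbone.
model = dict(
    type='TimmClassifier',        # wrapper class assumed from PR #1102
    model_name='resnet50',        # any model name known to timm
    pretrained=True,              # load timm's pretrained weights
    loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
)
```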
@@ -57,6 +57,13 @@ MMClassification 是一款基于 PyTorch 的开源图像分类工具箱,是 [O

## 更新日志

2022/11/21 发布了 v1.0.0rc3 版本

- 添加了 **Switch Recipe Hook**,现在我们可以在训练过程中修改数据增强、Mixup设置、loss设置等
- 添加了 **TIMM 和 HuggingFace** 包装器,现在我们可以直接训练、使用 TIMM 和 HuggingFace 中的模型
- 支持了检索任务
- 复现了 **MobileOne** 训练精度

2022/10/12 发布了 v1.0.0rc2 版本

- 支持了 Deit-3 主干网络
@@ -27,7 +27,6 @@ train_dataloader = dict(
        test_mode=False,
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),
    persistent_workers=True,
)

val_dataloader = dict(
@@ -39,7 +38,6 @@ val_dataloader = dict(
        test_mode=True,
        pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
)
val_evaluator = dict(type='Accuracy', topk=(1, ))
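For context, the lines touched in these dataset config diffs sit inside dataloader definitions of roughly the following shape. This is a minimal sketch only: the dataset type, data root and batch size are illustrative and not taken from the diff.

```python
# Illustrative dataloader config; only the fields shown in the diff above are verbatim.
train_dataloader = dict(
    batch_size=32,
    num_workers=5,
    dataset=dict(
        type='CustomDataset',        # placeholder dataset class
        data_root='data/my_dataset', # placeholder path
        test_mode=False,
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),
    persistent_workers=True,
)
```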
@ -27,7 +27,6 @@ train_dataloader = dict(
|
|||
test_mode=False,
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -39,7 +38,6 @@ val_dataloader = dict(
|
|||
test_mode=True,
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, ))
|
||||
|
||||
|
|
|
@ -33,7 +33,6 @@ train_dataloader = dict(
|
|||
test_mode=False,
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -45,7 +44,6 @@ val_dataloader = dict(
|
|||
test_mode=True,
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, ))
|
||||
|
||||
|
|
|
@ -32,7 +32,6 @@ train_dataloader = dict(
|
|||
test_mode=False,
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -44,7 +43,6 @@ val_dataloader = dict(
|
|||
test_mode=True,
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, ))
|
||||
|
||||
|
|
|
@ -33,7 +33,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -46,7 +45,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -48,7 +48,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -61,7 +60,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -35,7 +35,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -48,7 +47,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -54,7 +54,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -67,7 +66,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -54,7 +54,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -67,7 +66,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -33,7 +33,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -46,7 +45,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -42,7 +42,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -55,7 +54,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -33,7 +33,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -46,7 +45,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -33,7 +33,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -46,7 +45,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -41,7 +41,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -54,7 +53,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -42,7 +42,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -55,7 +54,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -34,7 +34,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -47,7 +46,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -33,7 +33,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -46,7 +45,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -50,7 +50,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -63,7 +62,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -36,7 +36,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -49,7 +48,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -62,7 +62,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -75,7 +74,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@ -41,7 +41,6 @@ train_dataloader = dict(
|
|||
data_prefix='train',
|
||||
pipeline=train_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=True),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
val_dataloader = dict(
|
||||
|
@ -54,7 +53,6 @@ val_dataloader = dict(
|
|||
data_prefix='val',
|
||||
pipeline=test_pipeline),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
persistent_workers=True,
|
||||
)
|
||||
val_evaluator = dict(type='Accuracy', topk=(1, 5))
|
||||
|
||||
|
|
|
@@ -34,7 +34,6 @@ train_dataloader = dict(
        image_set_path='ImageSets/Layout/val.txt',
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),
    persistent_workers=True,
)

val_dataloader = dict(
@@ -46,7 +45,6 @@ val_dataloader = dict(
        image_set_path='ImageSets/Layout/val.txt',
        pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
)

test_dataloader = dict(
@@ -58,7 +56,6 @@ test_dataloader = dict(
        image_set_path='ImageSets/Layout/val.txt',
        pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
)

# calculate precision_recall_f1 and mAP
@@ -66,8 +66,8 @@ Note: In MMClassification, we support training with AutoAugment, don't support A
| EfficientNet-B7 (AA + AdvProp)\* | 66.35 | 39.3 | 85.14 | 97.23 | [config](./efficientnet-b7_8xb32-01norm_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b7_3rdparty_8xb32-aa-advprop_in1k_20220119-c6dbff10.pth) |
| EfficientNet-B7 (RA + NoisyStudent)\* | 66.35 | 65.0 | 86.83 | 98.08 | [config](./efficientnet-b7_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b7_3rdparty-ra-noisystudent_in1k_20221103-a82894bc.pth) |
| EfficientNet-B8 (AA + AdvProp)\* | 87.41 | 65.0 | 85.38 | 97.28 | [config](./efficientnet-b8_8xb32-01norm_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b8_3rdparty_8xb32-aa-advprop_in1k_20220119-297ce1b7.pth) |
| EfficientNet-L2-475 (RA + NoisyStudent)\* | 480.30 | 174.20 | 88.18 | 98.55 | [config](./efficientnet-l2-475_8xb8_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k-475px_20221103-5a0d8058.pth) |
| EfficientNet-L2 (RA + NoisyStudent)\* | 480.30 | 484.98 | 88.33 | 98.65 | [config](./efficientnet-l2_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k_20221103-be73be13.pth) |
| EfficientNet-L2-475 (RA + NoisyStudent)\* | 480.30 | 174.20 | 88.18 | 98.55 | [config](./efficientnet-l2_8xb32_in1k-475px.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k-475px_20221103-5a0d8058.pth) |
| EfficientNet-L2 (RA + NoisyStudent)\* | 480.30 | 484.98 | 88.33 | 98.65 | [config](./efficientnet-l2_8xb8_in1k-800px.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k_20221103-be73be13.pth) |

*Models with * are converted from the [official repo](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). The config files of these models
are only for inference. We don't ensure these config files' training accuracy and welcome you to contribute your reproduction results.*
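Any of the converted checkpoints in the table above can be tried with the high-level inference API. A minimal sketch follows; the config path and image path are illustrative, while the checkpoint URL is the one from the EfficientNet-B7 (RA + NoisyStudent) row.

```python
from mmcls.apis import inference_model, init_model

config = 'configs/efficientnet/efficientnet-b7_8xb32_in1k.py'  # assumed repo-relative path
checkpoint = ('https://download.openmmlab.com/mmclassification/v0/efficientnet/'
              'efficientnet-b7_3rdparty-ra-noisystudent_in1k_20221103-a82894bc.pth')

model = init_model(config, checkpoint, device='cpu')
result = inference_model(model, 'demo/demo.JPEG')  # any test image path
print(result)
```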
@@ -517,7 +517,7 @@ Models:
    Converted From:
      Weights: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/advprop/efficientnet-b8.tar.gz
      Code: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
  - Name: efficientnet-l2_3rdparty-ra-noisystudent_in1k
  - Name: efficientnet-l2_3rdparty-ra-noisystudent_in1k-800px
    Metadata:
      FLOPs: 174203533416
      Parameters: 480309308
@@ -529,7 +529,7 @@ Models:
        Top 5 Accuracy: 98.65
    Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-l2_3rdparty-ra-noisystudent_in1k_20221103-be73be13.pth
    Config: configs/efficientnet/efficientnet-l2_8xb8_in1k.py
    Config: configs/efficientnet/efficientnet-l2_8xb8_in1k-800px.py
    Converted From:
      Weights: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/noisystudent/noisy_student_efficientnet-l2.tar.gz
      Code: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
@@ -22,7 +22,6 @@ train_dataloader = dict(
    num_workers=2,
    dataset=dict(**common_data_cfg, test_mode=False),
    sampler=dict(type='DefaultSampler', shuffle=True),
    persistent_workers=True,
)

val_dataloader = dict(
@@ -30,7 +29,6 @@ val_dataloader = dict(
    num_workers=2,
    dataset=dict(**common_data_cfg, test_mode=True),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
)
val_evaluator = dict(type='Accuracy', topk=(1, ))
@@ -7,9 +7,9 @@ FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub

ARG MMENGINE="0.2.0"
ARG MMENGINE="0.3.1"
ARG MMCV="2.0.0rc1"
ARG MMCLS="1.0.0rc2"
ARG MMCLS="1.0.0rc3"

ENV PYTHONUNBUFFERED TRUE
@@ -33,7 +33,7 @@ Here are some usual arguments, and all available arguments can be found in the [

- **`by_epoch`** (bool): Whether the **`interval`** is by epoch or by iteration. Defaults to `True`.
- **`out_dir`** (str): The root directory to save checkpoints. If not specified, the checkpoints will be saved in the work directory. If specified, the checkpoints will be saved in the sub-folder of the **`out_dir`**.
- **`max_keep_ckpts`** (int): The maximum checkpoints to keep. In some cases, we want only the latest few checkpoints and would like to delete old ones to save disk space. Defaults to -1, which means unlimited.
- **`save_best`** (str, List\[str\]): If specified, it will save the checkpoint with the best evaluation result.
- **`save_best`** (str, List[str]): If specified, it will save the checkpoint with the best evaluation result.
  Usually, you can simply use `save_best="auto"` to automatically select the evaluation metric. And if you
  want more advanced configuration, please refer to the [CheckpointHook docs](mmengine.hooks.CheckpointHook).
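Putting the arguments listed above together, a typical checkpoint hook configuration looks like the following sketch; the `out_dir` value is a placeholder.

```python
default_hooks = dict(
    checkpoint=dict(
        type='CheckpointHook',
        interval=1,                   # save every epoch (since by_epoch=True)
        by_epoch=True,
        max_keep_ckpts=3,             # keep only the latest 3 checkpoints
        save_best='auto',             # also keep the best checkpoint by the main metric
        out_dir='./my_checkpoints',   # placeholder root directory
    ))
```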
@@ -223,7 +223,7 @@ names of learning rate schedulers end with `LR`.
]
```

Notice that, we use `begin` and `end` arguments here to assign the valid range, which is \[`begin`, `end`) for this schedule. And the range unit is defined by `by_epoch` argument. If not specified, the `begin` is 0 and the `end` is the max epochs or iterations.
Notice that, we use `begin` and `end` arguments here to assign the valid range, which is [`begin`, `end`) for this schedule. And the range unit is defined by `by_epoch` argument. If not specified, the `begin` is 0 and the `end` is the max epochs or iterations.

If the ranges for all schedules are not continuous, the learning rate will stay constant in ignored range, otherwise all valid schedulers will be executed in order in a specific stage, which behaves the same as PyTorch [`ChainedScheduler`](torch.optim.lr_scheduler.ChainedScheduler).
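As a concrete illustration of the `begin`/`end` ranges described above, here is a small sketch that chains a warm-up scheduler with a cosine decay; the scheduler types are the standard MMEngine ones and the epoch numbers are arbitrary.

```python
param_scheduler = [
    # valid in epochs [0, 5): linear warm-up
    dict(type='LinearLR', start_factor=0.01, by_epoch=True, begin=0, end=5),
    # valid in epochs [5, 100): cosine annealing
    dict(type='CosineAnnealingLR', by_epoch=True, begin=5, end=100),
]
```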
@@ -142,7 +142,7 @@ Formatting

MMCV transforms
^^^^^^^^^^^^^^^

We also provides many transforms in MMCV. You can use them directly in the config files. Here are some frequently used transforms, and the whole transforms list can be found in :external:mod:`mmcv.transforms`.
We also provides many transforms in MMCV. You can use them directly in the config files. Here are some frequently used transforms, and the whole transforms list can be found in :external+mmcv:doc:`api/transforms`.

.. list-table::
   :widths: 50 50
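For example, MMCV transforms can be mixed into a data pipeline like any other transform. The sketch below uses commonly seen transform names and arguments; they are illustrative assumptions, not taken from this page.

```python
train_pipeline = [
    dict(type='LoadImageFromFile'),                           # MMCV transform
    dict(type='RandomResizedCrop', scale=224),                # crop to 224x224
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),  # MMCV transform
    dict(type='PackClsInputs'),                               # format the sample for the model
]
```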
@@ -1,23 +1,81 @@
# Changelog

## v1.0.0rc3(21/11/2022)

### Highlights

- Add **Switch Recipe** Hook, Now we can modify training pipeline, mixup and loss settings during training, see [#1101](https://github.com/open-mmlab/mmclassification/pull/1101).
- Add **TIMM and HuggingFace** wrappers. Now you can train/use models in TIMM/HuggingFace directly, see [#1102](https://github.com/open-mmlab/mmclassification/pull/1102).
- Support **retrieval tasks**, see [#1055](https://github.com/open-mmlab/mmclassification/pull/1055).
- Reproduce **mobileone** training accuracy. See [#1191](https://github.com/open-mmlab/mmclassification/pull/1191)

### New Features

- Add checkpoints from EfficientNets NoisyStudent & L2. ([#1122](https://github.com/open-mmlab/mmclassification/pull/1122))
- Migrate CSRA head to 1.x. ([#1177](https://github.com/open-mmlab/mmclassification/pull/1177))
- Support RepLKnet backbone. ([#1129](https://github.com/open-mmlab/mmclassification/pull/1129))
- Add Switch Recipe Hook. ([#1101](https://github.com/open-mmlab/mmclassification/pull/1101))
- Add adan optimizer. ([#1180](https://github.com/open-mmlab/mmclassification/pull/1180))
- Support DaViT. ([#1105](https://github.com/open-mmlab/mmclassification/pull/1105))
- Support Activation Checkpointing for ConvNeXt. ([#1153](https://github.com/open-mmlab/mmclassification/pull/1153))
- Add TIMM and HuggingFace wrappers to build classifiers from them directly. ([#1102](https://github.com/open-mmlab/mmclassification/pull/1102))
- Add reduction for neck ([#978](https://github.com/open-mmlab/mmclassification/pull/978))
- Support HorNet Backbone for dev1.x. ([#1094](https://github.com/open-mmlab/mmclassification/pull/1094))
- Add arcface head. ([#926](https://github.com/open-mmlab/mmclassification/pull/926))
- Add Base Retriever and Image2Image Retriever for retrieval tasks. ([#1055](https://github.com/open-mmlab/mmclassification/pull/1055))
- Support MobileViT backbone. ([#1068](https://github.com/open-mmlab/mmclassification/pull/1068))

### Improvements

- [Enhance] Enhance ArcFaceClsHead. ([#1181](https://github.com/open-mmlab/mmclassification/pull/1181))
- [Refactor] Refactor to use new fileio API in MMEngine. ([#1176](https://github.com/open-mmlab/mmclassification/pull/1176))
- [Enhance] Reproduce mobileone training accuracy. ([#1191](https://github.com/open-mmlab/mmclassification/pull/1191))
- [Enhance] add deleting params info in swinv2. ([#1142](https://github.com/open-mmlab/mmclassification/pull/1142))
- [Enhance] Add more mobilenetv3 pretrains. ([#1154](https://github.com/open-mmlab/mmclassification/pull/1154))
- [Enhancement] RepVGG for YOLOX-PAI for dev-1.x. ([#1126](https://github.com/open-mmlab/mmclassification/pull/1126))
- [Improve] Speed up data preprocessor. ([#1064](https://github.com/open-mmlab/mmclassification/pull/1064))

### Bug Fixes

- Fix the torchserve. ([#1143](https://github.com/open-mmlab/mmclassification/pull/1143))
- Fix configs due to api refactor of `num_classes`. ([#1184](https://github.com/open-mmlab/mmclassification/pull/1184))
- Update mmcls2torchserve. ([#1189](https://github.com/open-mmlab/mmclassification/pull/1189))
- Fix for `inference_model` cannot get classes information in checkpoint. ([#1093](https://github.com/open-mmlab/mmclassification/pull/1093))

### Docs Update

- Add not-found page extension. ([#1207](https://github.com/open-mmlab/mmclassification/pull/1207))
- update visualization doc. ([#1160](https://github.com/open-mmlab/mmclassification/pull/1160))
- Support sort and search the Model Summary table. ([#1100](https://github.com/open-mmlab/mmclassification/pull/1100))
- Improve the ResNet model page. ([#1118](https://github.com/open-mmlab/mmclassification/pull/1118))
- update the readme of convnext. ([#1156](https://github.com/open-mmlab/mmclassification/pull/1156))
- Fix the installation docs link in README. ([#1164](https://github.com/open-mmlab/mmclassification/pull/1164))
- Improve ViT and MobileViT model pages. ([#1155](https://github.com/open-mmlab/mmclassification/pull/1155))
- Improve Swin Doc and Add Tabs enxtation. ([#1145](https://github.com/open-mmlab/mmclassification/pull/1145))
- Add MMEval projects link in README. ([#1162](https://github.com/open-mmlab/mmclassification/pull/1162))
- Add runtime configuration docs. ([#1128](https://github.com/open-mmlab/mmclassification/pull/1128))
- Add custom evaluation docs ([#1130](https://github.com/open-mmlab/mmclassification/pull/1130))
- Add custom pipeline docs. ([#1124](https://github.com/open-mmlab/mmclassification/pull/1124))
- Add MMYOLO projects link in MMCLS1.x. ([#1117](https://github.com/open-mmlab/mmclassification/pull/1117))

## v1.0.0rc2(12/10/2022)

### New Features

- \[Feature\] Support DeiT3. ([#1065](https://github.com/open-mmlab/mmclassification/pull/1065))
- [Feature] Support DeiT3. ([#1065](https://github.com/open-mmlab/mmclassification/pull/1065))

### Improvements

- \[Enhance\] Update `analyze_results.py` for dev-1.x. ([#1071](https://github.com/open-mmlab/mmclassification/pull/1071))
- \[Enhance\] Get scores from inference api. ([#1070](https://github.com/open-mmlab/mmclassification/pull/1070))
- [Enhance] Update `analyze_results.py` for dev-1.x. ([#1071](https://github.com/open-mmlab/mmclassification/pull/1071))
- [Enhance] Get scores from inference api. ([#1070](https://github.com/open-mmlab/mmclassification/pull/1070))

### Bug Fixes

- \[Fix\] Update requirements. ([#1083](https://github.com/open-mmlab/mmclassification/pull/1083))
- [Fix] Update requirements. ([#1083](https://github.com/open-mmlab/mmclassification/pull/1083))

### Docs Update

- \[Docs\] Add 1x docs schedule. ([#1015](https://github.com/open-mmlab/mmclassification/pull/1015))
- [Docs] Add 1x docs schedule. ([#1015](https://github.com/open-mmlab/mmclassification/pull/1015))

## v1.0.0rc1(30/9/2022)
@ -33,10 +91,10 @@
|
|||
|
||||
### Improvements
|
||||
|
||||
- \[Refactor\] Fix visualization tools. ([#1045](https://github.com/open-mmlab/mmclassification/pull/1045))
|
||||
- \[Improve\] Update benchmark scripts ([#1028](https://github.com/open-mmlab/mmclassification/pull/1028))
|
||||
- \[Improve\] Update tools to enable `pin_memory` and `persistent_workers` by default. ([#1024](https://github.com/open-mmlab/mmclassification/pull/1024))
|
||||
- \[CI\] Update circle-ci and github workflow. ([#1018](https://github.com/open-mmlab/mmclassification/pull/1018))
|
||||
- [Refactor] Fix visualization tools. ([#1045](https://github.com/open-mmlab/mmclassification/pull/1045))
|
||||
- [Improve] Update benchmark scripts ([#1028](https://github.com/open-mmlab/mmclassification/pull/1028))
|
||||
- [Improve] Update tools to enable `pin_memory` and `persistent_workers` by default. ([#1024](https://github.com/open-mmlab/mmclassification/pull/1024))
|
||||
- [CI] Update circle-ci and github workflow. ([#1018](https://github.com/open-mmlab/mmclassification/pull/1018))
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
|
@ -95,13 +153,13 @@ And there are some BC-breaking changes. Please check [the migration tutorial](ht
|
|||
|
||||
### New Features
|
||||
|
||||
- \[Feature\] Support resize relative position embedding in `SwinTransformer`. ([#749](https://github.com/open-mmlab/mmclassification/pull/749))
|
||||
- \[Feature\] Add PoolFormer backbone and checkpoints. ([#746](https://github.com/open-mmlab/mmclassification/pull/746))
|
||||
- [Feature] Support resize relative position embedding in `SwinTransformer`. ([#749](https://github.com/open-mmlab/mmclassification/pull/749))
|
||||
- [Feature] Add PoolFormer backbone and checkpoints. ([#746](https://github.com/open-mmlab/mmclassification/pull/746))
|
||||
|
||||
### Improvements
|
||||
|
||||
- \[Enhance\] Improve CPE performance by reduce memory copy. ([#762](https://github.com/open-mmlab/mmclassification/pull/762))
|
||||
- \[Enhance\] Add extra dataloader settings in configs. ([#752](https://github.com/open-mmlab/mmclassification/pull/752))
|
||||
- [Enhance] Improve CPE performance by reduce memory copy. ([#762](https://github.com/open-mmlab/mmclassification/pull/762))
|
||||
- [Enhance] Add extra dataloader settings in configs. ([#752](https://github.com/open-mmlab/mmclassification/pull/752))
|
||||
|
||||
## v0.22.0(30/3/2022)
|
||||
|
||||
|
@ -113,29 +171,29 @@ And there are some BC-breaking changes. Please check [the migration tutorial](ht
|
|||
|
||||
### New Features
|
||||
|
||||
- \[Feature\] Add CSPNet and backbone and checkpoints ([#735](https://github.com/open-mmlab/mmclassification/pull/735))
|
||||
- \[Feature\] Add `CustomDataset`. ([#738](https://github.com/open-mmlab/mmclassification/pull/738))
|
||||
- \[Feature\] Add diff seeds to diff ranks. ([#744](https://github.com/open-mmlab/mmclassification/pull/744))
|
||||
- \[Feature\] Support ConvMixer. ([#716](https://github.com/open-mmlab/mmclassification/pull/716))
|
||||
- \[Feature\] Our `dist_train` & `dist_test` tools support distributed training on multiple machines. ([#734](https://github.com/open-mmlab/mmclassification/pull/734))
|
||||
- \[Feature\] Add RepMLP backbone and checkpoints. ([#709](https://github.com/open-mmlab/mmclassification/pull/709))
|
||||
- \[Feature\] Support CUB dataset. ([#703](https://github.com/open-mmlab/mmclassification/pull/703))
|
||||
- \[Feature\] Support ResizeMix. ([#676](https://github.com/open-mmlab/mmclassification/pull/676))
|
||||
- [Feature] Add CSPNet and backbone and checkpoints ([#735](https://github.com/open-mmlab/mmclassification/pull/735))
|
||||
- [Feature] Add `CustomDataset`. ([#738](https://github.com/open-mmlab/mmclassification/pull/738))
|
||||
- [Feature] Add diff seeds to diff ranks. ([#744](https://github.com/open-mmlab/mmclassification/pull/744))
|
||||
- [Feature] Support ConvMixer. ([#716](https://github.com/open-mmlab/mmclassification/pull/716))
|
||||
- [Feature] Our `dist_train` & `dist_test` tools support distributed training on multiple machines. ([#734](https://github.com/open-mmlab/mmclassification/pull/734))
|
||||
- [Feature] Add RepMLP backbone and checkpoints. ([#709](https://github.com/open-mmlab/mmclassification/pull/709))
|
||||
- [Feature] Support CUB dataset. ([#703](https://github.com/open-mmlab/mmclassification/pull/703))
|
||||
- [Feature] Support ResizeMix. ([#676](https://github.com/open-mmlab/mmclassification/pull/676))
|
||||
|
||||
### Improvements
|
||||
|
||||
- \[Enhance\] Use `--a-b` instead of `--a_b` in arguments. ([#754](https://github.com/open-mmlab/mmclassification/pull/754))
|
||||
- \[Enhance\] Add `get_cat_ids` and `get_gt_labels` to KFoldDataset. ([#721](https://github.com/open-mmlab/mmclassification/pull/721))
|
||||
- \[Enhance\] Set torch seed in `worker_init_fn`. ([#733](https://github.com/open-mmlab/mmclassification/pull/733))
|
||||
- [Enhance] Use `--a-b` instead of `--a_b` in arguments. ([#754](https://github.com/open-mmlab/mmclassification/pull/754))
|
||||
- [Enhance] Add `get_cat_ids` and `get_gt_labels` to KFoldDataset. ([#721](https://github.com/open-mmlab/mmclassification/pull/721))
|
||||
- [Enhance] Set torch seed in `worker_init_fn`. ([#733](https://github.com/open-mmlab/mmclassification/pull/733))
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- \[Fix\] Fix the discontiguous output feature map of ConvNeXt. ([#743](https://github.com/open-mmlab/mmclassification/pull/743))
|
||||
- [Fix] Fix the discontiguous output feature map of ConvNeXt. ([#743](https://github.com/open-mmlab/mmclassification/pull/743))
|
||||
|
||||
### Docs Update
|
||||
|
||||
- \[Docs\] Add brief installation steps in README for copy&paste. ([#755](https://github.com/open-mmlab/mmclassification/pull/755))
|
||||
- \[Docs\] fix logo url link from mmocr to mmcls. ([#732](https://github.com/open-mmlab/mmclassification/pull/732))
|
||||
- [Docs] Add brief installation steps in README for copy&paste. ([#755](https://github.com/open-mmlab/mmclassification/pull/755))
|
||||
- [Docs] fix logo url link from mmocr to mmcls. ([#732](https://github.com/open-mmlab/mmclassification/pull/732))
|
||||
|
||||
## v0.21.0(04/03/2022)
|
||||
|
||||
|
@ -238,18 +296,18 @@ And there are some BC-breaking changes. Please check [the migration tutorial](ht
|
|||
|
||||
### Improvements
|
||||
|
||||
- \[Reproduction\] Reproduce RegNetX training accuracy. ([#587](https://github.com/open-mmlab/mmclassification/pull/587))
|
||||
- \[Reproduction\] Reproduce training results of T2T-ViT. ([#610](https://github.com/open-mmlab/mmclassification/pull/610))
|
||||
- \[Enhance\] Provide high-acc training settings of ResNet. ([#572](https://github.com/open-mmlab/mmclassification/pull/572))
|
||||
- \[Enhance\] Set a random seed when the user does not set a seed. ([#554](https://github.com/open-mmlab/mmclassification/pull/554))
|
||||
- \[Enhance\] Added `NumClassCheckHook` and unit tests. ([#559](https://github.com/open-mmlab/mmclassification/pull/559))
|
||||
- \[Enhance\] Enhance feature extraction function. ([#593](https://github.com/open-mmlab/mmclassification/pull/593))
|
||||
- \[Enhance\] Improve efficiency of precision, recall, f1_score and support. ([#595](https://github.com/open-mmlab/mmclassification/pull/595))
|
||||
- \[Enhance\] Improve accuracy calculation performance. ([#592](https://github.com/open-mmlab/mmclassification/pull/592))
|
||||
- \[Refactor\] Refactor `analysis_log.py`. ([#529](https://github.com/open-mmlab/mmclassification/pull/529))
|
||||
- \[Refactor\] Use new API of matplotlib to handle blocking input in visualization. ([#568](https://github.com/open-mmlab/mmclassification/pull/568))
|
||||
- \[CI\] Cancel previous runs that are not completed. ([#583](https://github.com/open-mmlab/mmclassification/pull/583))
|
||||
- \[CI\] Skip build CI if only configs or docs modification. ([#575](https://github.com/open-mmlab/mmclassification/pull/575))
|
||||
- [Reproduction] Reproduce RegNetX training accuracy. ([#587](https://github.com/open-mmlab/mmclassification/pull/587))
|
||||
- [Reproduction] Reproduce training results of T2T-ViT. ([#610](https://github.com/open-mmlab/mmclassification/pull/610))
|
||||
- [Enhance] Provide high-acc training settings of ResNet. ([#572](https://github.com/open-mmlab/mmclassification/pull/572))
|
||||
- [Enhance] Set a random seed when the user does not set a seed. ([#554](https://github.com/open-mmlab/mmclassification/pull/554))
|
||||
- [Enhance] Added `NumClassCheckHook` and unit tests. ([#559](https://github.com/open-mmlab/mmclassification/pull/559))
|
||||
- [Enhance] Enhance feature extraction function. ([#593](https://github.com/open-mmlab/mmclassification/pull/593))
|
||||
- [Enhance] Improve efficiency of precision, recall, f1_score and support. ([#595](https://github.com/open-mmlab/mmclassification/pull/595))
|
||||
- [Enhance] Improve accuracy calculation performance. ([#592](https://github.com/open-mmlab/mmclassification/pull/592))
|
||||
- [Refactor] Refactor `analysis_log.py`. ([#529](https://github.com/open-mmlab/mmclassification/pull/529))
|
||||
- [Refactor] Use new API of matplotlib to handle blocking input in visualization. ([#568](https://github.com/open-mmlab/mmclassification/pull/568))
|
||||
- [CI] Cancel previous runs that are not completed. ([#583](https://github.com/open-mmlab/mmclassification/pull/583))
|
||||
- [CI] Skip build CI if only configs or docs modification. ([#575](https://github.com/open-mmlab/mmclassification/pull/575))
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
|
|
|
@@ -17,7 +17,7 @@ and make sure you fill in all required information in the template.

| MMClassification version | MMCV version |
| :----------------------: | :--------------------: |
| 1.0.0rc2 (1.x) | mmcv>=2.0.0rc1 |
| 1.0.0rc3 (1.x) | mmcv>=2.0.0rc1 |
| 0.24.0 (master) | mmcv>=1.4.2, \<1.7.0 |
| 0.23.1 | mmcv>=1.4.2, \<1.6.0 |
| 0.22.1 | mmcv>=1.4.2, \<1.6.0 |
@@ -17,5 +17,5 @@ Some of the papers are published in top-tier conferences (CVPR, ICCV, and ECCV),
To make this list also a reference for the community to develop and compare new image classification algorithms, we list them following the time order of top-tier conferences.
Methods already supported and maintained by MMClassification are not listed.

- Involution: Inverting the Inherence of Convolution for Visual Recognition, CVPR21. [\[paper\]](https://arxiv.org/abs/2103.06255)[\[github\]](https://github.com/d-li14/involution)
- Convolution of Convolution: Let Kernels Spatially Collaborate, CVPR22. [\[paper\]](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_Convolution_of_Convolution_Let_Kernels_Spatially_Collaborate_CVPR_2022_paper.pdf)[\[github\]](https://github.com/Genera1Z/ConvolutionOfConvolution)
- Involution: Inverting the Inherence of Convolution for Visual Recognition, CVPR21. [[paper]](https://arxiv.org/abs/2103.06255)[[github]](https://github.com/d-li14/involution)
- Convolution of Convolution: Let Kernels Spatially Collaborate, CVPR22. [[paper]](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_Convolution_of_Convolution_Let_Kernels_Spatially_Collaborate_CVPR_2022_paper.pdf)[[github]](https://github.com/Genera1Z/ConvolutionOfConvolution)
@@ -223,7 +223,7 @@ optim_wrapper = dict(
]
```

注意这里增加了 `begin` 和 `end` 参数,这两个参数指定了调度器的**生效区间**。生效区间通常只在多个调度器组合时才需要去设置,使用单个调度器时可以忽略。当指定了 `begin` 和 `end` 参数时,表示该调度器只在 \[begin, end) 区间内生效,其单位是由 `by_epoch` 参数决定。在组合不同调度器时,各调度器的 `by_epoch` 参数不必相同。如果没有指定的情况下,`begin` 为 0, `end` 为最大迭代轮次或者最大迭代次数。
注意这里增加了 `begin` 和 `end` 参数,这两个参数指定了调度器的**生效区间**。生效区间通常只在多个调度器组合时才需要去设置,使用单个调度器时可以忽略。当指定了 `begin` 和 `end` 参数时,表示该调度器只在 [begin, end) 区间内生效,其单位是由 `by_epoch` 参数决定。在组合不同调度器时,各调度器的 `by_epoch` 参数不必相同。如果没有指定的情况下,`begin` 为 0, `end` 为最大迭代轮次或者最大迭代次数。

如果相邻两个调度器的生效区间没有紧邻,而是有一段区间没有被覆盖,那么这段区间的学习率维持不变。而如果两个调度器的生效区间发生了重叠,则对多组调度器叠加使用,学习率的调整会按照调度器配置文件中的顺序触发(行为与 PyTorch 中 [`ChainedScheduler`](torch.optim.lr_scheduler.ChainedScheduler) 一致)。
@@ -15,7 +15,7 @@

| MMClassification version | MMCV version |
| :----------------------: | :--------------------: |
| 1.0.0rc2 (1.x) | mmcv>=2.0.0rc1 |
| 1.0.0rc3 (1.x) | mmcv>=2.0.0rc1 |
| 0.24.0 (master) | mmcv>=1.4.2, \<1.7.0 |
| 0.23.1 | mmcv>=1.4.2, \<1.6.0 |
| 0.22.1 | mmcv>=1.4.2, \<1.6.0 |
@@ -28,7 +28,7 @@ CUDA_VISIBLE_DEVICES=-1 python tools/train.py ${CONFIG_FILE} [ARGS]
| `--amp` | 启用混合精度训练。 |
| `--no-validate` | **不建议** 在训练过程中不进行验证集上的精度验证。 |
| `--auto-scale-lr` | 自动根据实际的批次大小(batch size)和预设的批次大小对学习率进行缩放。 |
| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 \`key="\[(a,b),(c,d)\]",这里的引号是不可省略的。另外每个重载项内部不可出现空格。 |
| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 \`key="[(a,b),(c,d)]",这里的引号是不可省略的。另外每个重载项内部不可出现空格。 |
| `--launcher {none,pytorch,slurm,mpi}` | 启动器,默认为 "none"。 |

### 单机多卡训练

@@ -141,7 +141,7 @@ CUDA_VISIBLE_DEVICES=-1 python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [
| `--work-dir WORK_DIR` | 用来保存测试指标结果的文件夹。 |
| `--out OUT` | 用来保存测试指标结果的文件。 |
| `--dump DUMP` | 用来保存所有模型输出的文件,这些数据可以用于离线测评。 |
| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 \`key="\[(a,b),(c,d)\]",这里的引号是不可省略的。另外每个重载项内部不可出现空格。 |
| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 \`key="[(a,b),(c,d)]",这里的引号是不可省略的。另外每个重载项内部不可出现空格。 |
| `--show-dir SHOW_DIR` | 用于保存可视化预测结果图像的文件夹。 |
| `--show` | 在窗口中显示预测结果图像。 |
| `--interval INTERVAL` | 每隔多少样本进行一次预测结果可视化。 |
@@ -250,7 +250,7 @@ class HorNetBlock(nn.Module):

@MODELS.register_module()
class HorNet(BaseBackbone):
    """HorNet.
    """HorNet backbone.

    A PyTorch implementation of paper `HorNet: Efficient High-Order Spatial
    Interactions with Recursive Gated Convolutions
@@ -262,6 +262,7 @@ class HorNet(BaseBackbone):
            If use string, choose from 'tiny', 'small', 'base' and 'large'.
            If use dict, it should have below keys:

            - **base_dim** (int): The base dimensions of embedding.
            - **depths** (List[int]): The number of blocks in each stage.
            - **orders** (List[int]): The number of order of gnConv in each
@@ -273,7 +274,7 @@ class HorNet(BaseBackbone):
        drop_path_rate (float): Stochastic depth rate. Defaults to 0.
        scale (float): Scaling parameter of gflayer outputs. Defaults to 1/3.
        use_layer_scale (bool): Whether to use use_layer_scale in HorNet
            block. Defaults to True.
            block. Defaults to True.
        out_indices (Sequence[int]): Output from which stages.
            Default: ``(3, )``.
        frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
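Based on the docstring above, a HorNet backbone can be selected in a model config roughly as follows. This is a minimal sketch; the argument values are illustrative, only the argument names come from the docstring.

```python
# Illustrative backbone config for the HorNet backbone described above.
backbone = dict(
    type='HorNet',
    arch='tiny',           # or 'small', 'base', 'large', or a dict with
                           # base_dim / depths / orders keys
    drop_path_rate=0.2,    # stochastic depth rate
    out_indices=(3, ),     # output the last stage only
)
```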
@ -309,43 +309,48 @@ class RepVGG(BaseBackbone):
|
|||
<https://arxiv.org/abs/2101.03697>`_
|
||||
|
||||
Args:
|
||||
arch (str | dict): RepVGG architecture. If use string,
|
||||
choose from 'A0', 'A1`', 'A2', 'B0', 'B1', 'B1g2', 'B1g4', 'B2'
|
||||
, 'B2g2', 'B2g4', 'B3', 'B3g2', 'B3g4' or 'D2se'. If use dict,
|
||||
it should have below keys:
|
||||
- num_blocks (Sequence[int]): Number of blocks in each stage.
|
||||
- width_factor (Sequence[float]): Width deflator in each stage.
|
||||
- group_layer_map (dict | None): RepVGG Block that declares
|
||||
arch (str | dict): RepVGG architecture. If use string, choose from
|
||||
'A0', 'A1`', 'A2', 'B0', 'B1', 'B1g2', 'B1g4', 'B2', 'B2g2',
|
||||
'B2g4', 'B3', 'B3g2', 'B3g4' or 'D2se'. If use dict, it should
|
||||
have below keys:
|
||||
|
||||
- **num_blocks** (Sequence[int]): Number of blocks in each stage.
|
||||
- **width_factor** (Sequence[float]): Width deflator in each stage.
|
||||
- **group_layer_map** (dict | None): RepVGG Block that declares
|
||||
the need to apply group convolution.
|
||||
- se_cfg (dict | None): Se Layer config.
|
||||
- stem_channels (int, optional): The stem channels, the final
|
||||
stem channels will be
|
||||
``min(stem_channels, base_channels*width_factor[0])``.
|
||||
If not set here, 64 is used by default in the code.
|
||||
in_channels (int): Number of input image channels. Default: 3.
|
||||
- **se_cfg** (dict | None): SE Layer config.
|
||||
- **stem_channels** (int, optional): The stem channels, the final
|
||||
stem channels will be
|
||||
``min(stem_channels, base_channels*width_factor[0])``.
|
||||
If not set here, 64 is used by default in the code.
|
||||
|
||||
in_channels (int): Number of input image channels. Defaults to 3.
|
||||
base_channels (int): Base channels of RepVGG backbone, work with
|
||||
width_factor together. Defaults to 64.
|
||||
out_indices (Sequence[int]): Output from which stages. Default: (3, ).
|
||||
out_indices (Sequence[int]): Output from which stages.
|
||||
Defaults to ``(3, )``.
|
||||
strides (Sequence[int]): Strides of the first block of each stage.
|
||||
Default: (2, 2, 2, 2).
|
||||
Defaults to ``(2, 2, 2, 2)``.
|
||||
dilations (Sequence[int]): Dilation of each stage.
|
||||
Default: (1, 1, 1, 1).
|
||||
Defaults to ``(1, 1, 1, 1)``.
|
||||
frozen_stages (int): Stages to be frozen (all param fixed). -1 means
|
||||
not freezing any parameters. Default: -1.
|
||||
conv_cfg (dict | None): The config dict for conv layers. Default: None.
|
||||
not freezing any parameters. Defaults to -1.
|
||||
conv_cfg (dict | None): The config dict for conv layers.
|
||||
Defaults to None.
|
||||
norm_cfg (dict): The config dict for norm layers.
|
||||
Default: dict(type='BN').
|
||||
Defaults to ``dict(type='BN')``.
|
||||
act_cfg (dict): Config dict for activation layer.
|
||||
Default: dict(type='ReLU').
|
||||
Defaults to ``dict(type='ReLU')``.
|
||||
with_cp (bool): Use checkpoint or not. Using checkpoint will save some
|
||||
memory while slowing down the training speed. Default: False.
|
||||
memory while slowing down the training speed. Defaults to False.
|
||||
deploy (bool): Whether to switch the model structure to deployment
|
||||
mode. Default: False.
|
||||
mode. Defaults to False.
|
||||
norm_eval (bool): Whether to set norm layers to eval mode, namely,
|
||||
freeze running stats (mean and var). Note: Effect on Batch Norm
|
||||
and its variants only. Default: False.
|
||||
add_ppf (bool): Whether to use the MTSPPF block. Default: False.
|
||||
and its variants only. Defaults to False.
|
||||
add_ppf (bool): Whether to use the MTSPPF block. Defaults to False.
|
||||
init_cfg (dict or list[dict], optional): Initialization config dict.
|
||||
Defaults to None.
|
||||
"""
|
||||
|
||||
groupwise_layers = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26]
|
||||
|
|
|
@@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved

__version__ = '1.0.0rc2'
__version__ = '1.0.0rc3'


def parse_version_info(version_str):
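For reference, `parse_version_info` turns the version string above into a comparable tuple. The body below is a sketch of the usual OpenMMLab implementation, not copied from this diff.

```python
def parse_version_info(version_str):
    """Parse a version string such as '1.0.0rc3' into (1, 0, 0, 'rc3')."""
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            version_info.append(int(patch_version[0]))
            version_info.append(f'rc{patch_version[1]}')
    return tuple(version_info)


version_info = parse_version_info(__version__)  # e.g. (1, 0, 0, 'rc3')
```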