[Docs] Update readme (#1449)

* update readme

* update

* refine

* refine

* update cn version

* update installation

* update modelzoo table

* fix lint

* update

* update

* update

* update

* fix lint

* update

* update

* update changelog

* remove gif

* fix typo

* update announcement

* update

* fix typo

* update
pull/1464/head
Yixiao Fang 2023-04-06 17:17:56 +08:00 committed by GitHub
parent 75dceaa78f
commit 3069e43f77
6 changed files with 291 additions and 193 deletions

README.md

@@ -19,7 +19,7 @@
</div>
<div>&nbsp;</div>
[![PyPI](https://img.shields.io/pypi/v/mmcls)](https://pypi.org/project/mmcls)
[![PyPI](https://img.shields.io/pypi/v/mmpretrain)](https://pypi.org/project/mmpretrain)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://mmclassification.readthedocs.io/en/1.x/)
[![Build Status](https://github.com/open-mmlab/mmclassification/workflows/build/badge.svg)](https://github.com/open-mmlab/mmclassification/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmclassification/branch/1.x/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmclassification)
@@ -33,6 +33,10 @@
[🆕 Update News](https://mmclassification.readthedocs.io/en/1.x/notes/changelog.html) |
[🤔 Reporting Issues](https://github.com/open-mmlab/mmclassification/issues/new/choose)
<img src="https://user-images.githubusercontent.com/36138628/230307505-4727ad0a-7d71-4069-939d-b499c7e272b7.png" width="400"/>
English | [简体中文](/README_zh-CN.md)
</div>
</div>
@@ -59,55 +63,43 @@
## Introduction
English | [简体中文](/README_zh-CN.md)
MMPreTrain is an open source pre-training toolbox based on PyTorch. It is a part of the [OpenMMLab](https://openmmlab.com/) project.
MMClassification is an open source image classification toolbox based on PyTorch. It is
a part of the [OpenMMLab](https://openmmlab.com/) project.
The `1.x` branch works with **PyTorch 1.6+**.
<div align="center">
<img src="https://user-images.githubusercontent.com/9102141/87268895-3e0d0780-c4fe-11ea-849e-6140b7e0d4de.gif" width="70%"/>
</div>
The `main` branch works with **PyTorch 1.8+**.
### Major features
- Various backbones and pretrained models
- Rich training strategies (supervised learning, self-supervised learning, etc.)
- Bag of training tricks
- Large-scale training configs
- High efficiency and extensibility
- Powerful toolkits
- Powerful toolkits for model analysis and experiments
## What's new
v1.0.0rc5 was released on 30/12/2022.
🌟 v1.0.0rc6 was released on 06/04/2023.
- Support **EVA**, **RevViT**, **EfficientnetV2**, **CLIP**, **TinyViT** and **MixMIM** backbones.
- Integrate self-supervised learning algorithms from **MMSelfSup**, such as `MAE`, `BEiT`, `MILAN`, etc.
- Add t-SNE visualization.
- Refactor dataset pipeline visualization.
Previous version update
- Support **LeViT**, **XCiT**, **ViG**, **ConvNeXt-V2**, **EVA**, **RevViT**, **EfficientnetV2**, **CLIP**, **TinyViT** and **MixMIM** backbones.
- Reproduce the training accuracy of **ConvNeXt** and **RepVGG**.
- Support **multi-task** training and testing. See [#1229](https://github.com/open-mmlab/mmclassification/pull/1229) for more details.
- Support Test-time Augmentation. See [#1161](https://github.com/open-mmlab/mmclassification/pull/1161) for
more details.
v1.0.0rc4 was released on 06/12/2022.
- Upgrade API to get pre-defined models of MMClassification. See [#1236](https://github.com/open-mmlab/mmclassification/pull/1236) for more details.
- Refactor BEiT backbone and support v1/v2 inference. See [#1144](https://github.com/open-mmlab/mmclassification/pull/1144).
v1.0.0rc3 was released on 21/11/2022.
- Add the **Switch Recipe** Hook. Now we can modify the training pipeline, mixup and loss settings during training; see [#1101](https://github.com/open-mmlab/mmclassification/pull/1101).
- Add **TIMM and HuggingFace** wrappers. Now you can train/use models from TIMM/HuggingFace directly; see [#1102](https://github.com/open-mmlab/mmclassification/pull/1102).
- Support **retrieval tasks**; see [#1055](https://github.com/open-mmlab/mmclassification/pull/1055).
- Reproduce **MobileOne** training accuracy; see [#1191](https://github.com/open-mmlab/mmclassification/pull/1191).
- Support confusion matrix calculation and plotting.
- Support **multi-task** training and testing.
- Support Test-time Augmentation.
- Upgrade API to get pre-defined models of MMClassification.
- Refactor BEiT backbone and support v1/v2 inference.
This release introduced a brand-new and flexible training & test engine, which is still in progress. You are welcome
to try it following [the documentation](https://mmclassification.readthedocs.io/en/1.x/).
There are also some BC-breaking changes. Please check [the migration tutorial](https://mmclassification.readthedocs.io/en/1.x/migration.html).
The release candidate phase will last until the end of 2022, and during this period we will develop on the `1.x` branch. We will keep maintaining the 0.x version until at least the end of 2023.
Please refer to [changelog.md](https://mmclassification.readthedocs.io/en/1.x/notes/changelog.html) for more details and other release history.
Please refer to [changelog](https://mmclassification.readthedocs.io/en/1.x/notes/changelog.html) for more details and other release history.
## Installation
@@ -117,87 +109,145 @@ Below are quick steps for installation:
conda create -n open-mmlab python=3.8 pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch -y
conda activate open-mmlab
pip install openmim
git clone -b 1.x https://github.com/open-mmlab/mmclassification.git
cd mmclassification
git clone https://github.com/open-mmlab/mmpretrain.git
cd mmpretrain
mim install -e .
```
Please refer to [install.md](https://mmclassification.readthedocs.io/en/1.x/get_started.html) for more detailed installation and dataset preparation.
Please refer to [installation documentation](https://mmclassification.readthedocs.io/en/1.x/get_started.html) for more detailed installation and dataset preparation.
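
As a quick sanity check after installation (not part of this PR), the snippet below is a minimal sketch; it assumes the high-level model APIs mentioned in the changelog above are exported by your installed version:

```python
# Minimal post-install smoke test (a sketch; adjust the import if your version
# only exposes these helpers under the `.apis` subpackage).
import mmpretrain
from mmpretrain import list_models

print(mmpretrain.__version__)       # the installed version string
print(list_models('*resnet*')[:5])  # a few ResNet entries registered in the model zoo
```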
## User Guides
We provide a series of tutorials about the basic usage of MMClassification for new users:
We provide a series of tutorials about the basic usage of MMPreTrain for new users:
- [Inference with existing models](https://mmclassification.readthedocs.io/en/1.x/user_guides/inference.html)
- [Prepare Dataset](https://mmclassification.readthedocs.io/en/1.x/user_guides/dataset_prepare.html)
- [Training and Test](https://mmclassification.readthedocs.io/en/1.x/user_guides/train_test.html)
- [Learn about Configs](https://mmclassification.readthedocs.io/en/1.x/user_guides/config.html)
- [Fine-tune Models](https://mmclassification.readthedocs.io/en/1.x/user_guides/finetune.html)
- [Analysis Tools](https://mmclassification.readthedocs.io/en/1.x/user_guides/analysis.html)
- [Visualization Tools](https://mmclassification.readthedocs.io/en/1.x/user_guides/visualization.html)
- [Other Useful Tools](https://mmclassification.readthedocs.io/en/1.x/user_guides/useful_tools.html)
- [Prepare Dataset](https://mmclassification.readthedocs.io/en/1.x/user_guides/dataset_prepare.html)
- [Inference with existing models](https://mmclassification.readthedocs.io/en/1.x/user_guides/inference.html)
- [Train](https://mmclassification.readthedocs.io/en/pretrain/user_guides/train.html)
- [Test](https://mmclassification.readthedocs.io/en/pretrain/user_guides/test.html)
- [Downstream tasks](https://mmclassification.readthedocs.io/en/pretrain/user_guides/downstream.html)
For more information, please refer to [our documentation](https://mmclassification.readthedocs.io/en/pretrain/).
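
For a concrete taste of the inference workflow described in the guides above, here is a minimal sketch (illustrative only; it assumes the `inference_model` helper, a model name from the model zoo, and the demo image shipped in the repository):

```python
# Classify a single image with a pretrained model looked up by name.
# 'demo/demo.JPEG' is assumed to exist in a repository checkout; any image path works.
from mmpretrain import inference_model

result = inference_model('resnet50_8xb32_in1k', 'demo/demo.JPEG')
# Field names are assumptions based on the classification inferencer output.
print(result.get('pred_class'), result.get('pred_score'))
```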
## Model zoo
Results and models are available in the [model zoo](https://mmclassification.readthedocs.io/en/1.x/modelzoo_statistics.html).
<details open>
<summary>Supported backbones</summary>
- [x] [VGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vgg)
- [x] [ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnet)
- [x] [ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnext)
- [x] [SE-ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [SE-ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [RegNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/regnet)
- [x] [ShuffleNetV1](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v1)
- [x] [ShuffleNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v2)
- [x] [MobileNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v2)
- [x] [MobileNetV3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v3)
- [x] [Swin-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/swin_transformer)
- [x] [Swin-Transformer V2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/swin_transformer_v2)
- [x] [RepVGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/repvgg)
- [x] [Vision-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vision_transformer)
- [x] [Transformer-in-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/tnt)
- [x] [Res2Net](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/res2net)
- [x] [MLP-Mixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mlp_mixer)
- [x] [DeiT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/deit)
- [x] [DeiT-3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/deit3)
- [x] [Conformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/conformer)
- [x] [T2T-ViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/t2t_vit)
- [x] [Twins](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/twins)
- [x] [EfficientNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/efficientnet)
- [x] [EdgeNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/edgenext)
- [x] [ConvNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convnext)
- [x] [HRNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/hrnet)
- [x] [VAN](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/van)
- [x] [ConvMixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convmixer)
- [x] [CSPNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/cspnet)
- [x] [PoolFormer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/poolformer)
- [x] [Inception V3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/inception_v3)
- [x] [MobileOne](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobileone)
- [x] [EfficientFormer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/efficientformer)
- [x] [MViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mvit)
- [x] [HorNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/hornet)
- [x] [MobileViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilevit)
- [x] [DaViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/davit)
- [x] [RepLKNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/replknet)
- [x] [BEiT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/beit) / [BEiT v2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/beitv2)
- [x] [EVA](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/eva)
- [x] [MixMIM](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mixmim)
- [x] [EfficientNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/efficientnet_v2)
</details>
<div align="center">
<b>Overview</b>
</div>
<table align="center">
<tbody>
<tr align="center" valign="bottom">
<td>
<b>Supported Backbones</b>
</td>
<td>
<b>Self-supervised Learning</b>
</td>
<td>
<b>Others</b>
</td>
</tr>
<tr valign="top">
<td>
<ul>
<li><a href="configs/vgg">VGG</a></li>
<li><a href="configs/resnet">ResNet</a></li>
<li><a href="configs/resnext">ResNeXt</a></li>
<li><a href="configs/seresnet">SE-ResNet</a></li>
<li><a href="configs/seresnet">SE-ResNeXt</a></li>
<li><a href="configs/regnet">RegNet</a></li>
<li><a href="configs/shufflenet_v1">ShuffleNet V1</a></li>
<li><a href="configs/shufflenet_v2">ShuffleNet V2</a></li>
<li><a href="configs/mobilenet_v2">MobileNet V2</a></li>
<li><a href="configs/mobilenet_v3">MobileNet V3</a></li>
<li><a href="configs/swin_transformer">Swin-Transformer</a></li>
<li><a href="configs/swin_transformer_v2">Swin-Transformer V2</a></li>
<li><a href="configs/repvgg">RepVGG</a></li>
<li><a href="configs/vision_transformer">Vision-Transformer</a></li>
<li><a href="configs/tnt">Transformer-in-Transformer</a></li>
<li><a href="configs/res2net">Res2Net</a></li>
<li><a href="configs/mlp_mixer">MLP-Mixer</a></li>
<li><a href="configs/deit">DeiT</a></li>
<li><a href="configs/deit3">DeiT-3</a></li>
<li><a href="configs/conformer">Conformer</a></li>
<li><a href="configs/t2t_vit">T2T-ViT</a></li>
<li><a href="configs/twins">Twins</a></li>
<li><a href="configs/efficientnet">EfficientNet</a></li>
<li><a href="configs/edgenext">EdgeNeXt</a></li>
<li><a href="configs/convnext">ConvNeXt</a></li>
<li><a href="configs/hrnet">HRNet</a></li>
<li><a href="configs/van">VAN</a></li>
<li><a href="configs/convmixer">ConvMixer</a></li>
<li><a href="configs/cspnet">CSPNet</a></li>
<li><a href="configs/poolformer">PoolFormer</a></li>
<li><a href="configs/inception_v3">Inception V3</a></li>
<li><a href="configs/mobileone">MobileOne</a></li>
<li><a href="configs/efficientformer">EfficientFormer</a></li>
<li><a href="configs/mvit">MViT</a></li>
<li><a href="configs/hornet">HorNet</a></li>
<li><a href="configs/mobilevit">MobileViT</a></li>
<li><a href="configs/davit">DaViT</a></li>
<li><a href="configs/replknet">RepLKNet</a></li>
<li><a href="configs/beit">BEiT</a></li>
<li><a href="configs/mixmim">MixMIM</a></li>
<li><a href="configs/efficientnet_v2">EfficientNet V2</a></li>
<li><a href="configs/revvit">RevViT</a></li>
<li><a href="configs/convnext_v2">ConvNeXt V2</a></li>
<li><a href="configs/vig">ViG</a></li>
<li><a href="configs/xcit">XCiT</a></li>
<li><a href="configs/levit">LeViT</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/mocov2">MoCo V1 (CVPR'2020)</a></li>
<li><a href="configs/simclr">SimCLR (ICML'2020)</a></li>
<li><a href="configs/mocov2">MoCo V2 (arXiv'2020)</a></li>
<li><a href="configs/byol">BYOL (NeurIPS'2020)</a></li>
<li><a href="configs/swav">SwAV (NeurIPS'2020)</a></li>
<li><a href="configs/densecl">DenseCL (CVPR'2021)</a></li>
<li><a href="configs/simsiam">SimSiam (CVPR'2021)</a></li>
<li><a href="configs/barlowtwins">Barlow Twins (ICML'2021)</a></li>
<li><a href="configs/mocov3">MoCo V3 (ICCV'2021)</a></li>
<li><a href="configs/beit">BEiT (ICLR'2022)</a></li>
<li><a href="configs/mae">MAE (CVPR'2022)</a></li>
<li><a href="configs/simmim">SimMIM (CVPR'2022)</a></li>
<li><a href="configs/maskfeat">MaskFeat (CVPR'2022)</a></li>
<li><a href="configs/cae">CAE (arXiv'2022)</a></li>
<li><a href="configs/milan">MILAN (arXiv'2022)</a></li>
<li><a href="configs/beitv2">BEiT V2 (arXiv'2022)</a></li>
<li><a href="configs/eva">EVA (CVPR'2023)</a></li>
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
</ul>
</td>
<td>
Image Retrieval Task:
<ul>
<li><a href="configs/arcface">ArcFace (CVPR'2019)</a></li>
</ul>
Training & Test Tips:
<ul>
<li><a href="https://arxiv.org/abs/1909.13719">RandAug</a></li>
<li><a href="https://arxiv.org/abs/1805.09501">AutoAug</a></li>
<li><a href="mmpretrain/datasets/samplers/repeat_aug.py">RepeatAugSampler</a></li>
<li><a href="mmpretrain/models/tta/score_tta.py">TTA</a></li>
<li>...</li>
</ul>
</td>
</tbody>
</table>
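
Any architecture listed in the table above can, in principle, also be built programmatically. The sketch below is an assumption-laden example (the model name and `extract_feat` usage are illustrative, not part of this PR) of using a zoo entry as a feature extractor:

```python
# Build a model from the model zoo by name and extract features from a dummy input.
import torch
from mmpretrain import get_model

model = get_model('convnext-tiny_32xb128_in1k', pretrained=False)  # True downloads weights
model.eval()
with torch.no_grad():
    feats = model.extract_feat(torch.rand(1, 3, 224, 224))  # tuple of feature tensors
print([tuple(f.shape) for f in feats])
```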
## Contributing
We appreciate all contributions to improve MMClassification.
Please refer to [CONTRIBUTING.md](https://mmclassification.readthedocs.io/en/1.x/notes/contribution_guide.html) for the contributing guideline.
We appreciate all contributions to improve MMPreTrain.
Please refer to [CONTRIBUTING](https://mmclassification.readthedocs.io/en/1.x/notes/contribution_guide.html) for the contributing guideline.
## Acknowledgement
MMClassification is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new classifiers.
MMPreTrain is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and support their own academic research.
## Citation
@@ -222,7 +272,7 @@ This project is released under the [Apache 2.0 license](LICENSE).
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.

README_zh-CN.md

@@ -19,7 +19,7 @@
</div>
<div>&nbsp;</div>
[![PyPI](https://img.shields.io/pypi/v/mmcls)](https://pypi.org/project/mmcls)
[![PyPI](https://img.shields.io/pypi/v/mmpretrain)](https://pypi.org/project/mmpretrain)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://mmclassification.readthedocs.io/zh_CN/1.x/)
[![Build Status](https://github.com/open-mmlab/mmclassification/workflows/build/badge.svg)](https://github.com/open-mmlab/mmclassification/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmclassification/branch/1.x/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmclassification)
@@ -30,9 +30,13 @@
[📘 Documentation](https://mmclassification.readthedocs.io/zh_CN/1.x/) |
[🛠️ Installation](https://mmclassification.readthedocs.io/zh_CN/1.x/get_started.html) |
[👀 Model Zoo](https://mmclassification.readthedocs.io/zh_CN/1.x/modelzoo_statistics.html) |
[🆕 Update News](https://mmclassification.readthedocs.io/en/1.x/notes/changelog.html) |
[🆕 Update News](https://mmclassification.readthedocs.io/zh_CN/1.x/notes/changelog.html) |
[🤔 Reporting Issues](https://github.com/open-mmlab/mmclassification/issues/new/choose)
<img src="https://user-images.githubusercontent.com/36138628/230307505-4727ad0a-7d71-4069-939d-b499c7e272b7.png" width="400"/>
[English](/README.md) | Simplified Chinese
</div>
<div align="center">
@@ -57,55 +61,42 @@
## Introduction
[English](/README.md) | Simplified Chinese
MMPreTrain is an open source deep learning pre-training toolbox based on PyTorch, and is a part of the [OpenMMLab](https://openmmlab.com/) project.
MMClassification is an open source image classification toolbox based on PyTorch, and is a part of the [OpenMMLab](https://openmmlab.com/) project.
The main branch currently supports PyTorch 1.5 and above.
<div align="center">
<img src="https://user-images.githubusercontent.com/9102141/87268895-3e0d0780-c4fe-11ea-849e-6140b7e0d4de.gif" width="70%"/>
</div>
The `main` branch currently supports PyTorch 1.8 and above.
### Major features
- Various backbones and pretrained models
- Support for configuring various training tricks
- Various training strategies (supervised learning, unsupervised learning, etc.)
- A rich set of training tricks
- A large number of training configs
- High efficiency and extensibility
- Powerful toolkits
- Powerful toolkits for model analysis and experiments
## What's new
v1.0.0rc5 was released on 30/12/2022.
🌟 v1.0.0rc6 was released on 06/04/2023.
- Support backbone networks such as **EVA**, **RevViT**, **EfficientnetV2**, **CLIP**, **TinyViT** and **MixMIM**.
- Integrate self-supervised learning algorithms from MMSelfSup, such as `MAE`, `BEiT` and `MILAN`.
- Support t-SNE visualization.
- Refactor dataset pipeline visualization.
Updates in previous versions
- Support backbone networks such as **LeViT**, **XCiT**, **ViG**, **ConvNeXt-V2**, **EVA**, **RevViT**, **EfficientnetV2**, **CLIP**, **TinyViT** and **MixMIM**.
- Reproduce the training accuracy of ConvNeXt and RepVGG.
- Support **multi-task** training and testing. See [#1229](https://github.com/open-mmlab/mmclassification/pull/1229) for details.
- Support test-time augmentation (TTA). See [#1161](https://github.com/open-mmlab/mmclassification/pull/1161) for details.
v1.0.0rc4 was released on 06/12/2022.
- Upgrade the main API to conveniently get the pre-defined models in MMClassification. See [#1236](https://github.com/open-mmlab/mmclassification/pull/1236) for details.
- Support confusion matrix calculation and plotting.
- Support **multi-task** training and testing.
- Support test-time augmentation (TTA).
- Upgrade the main API to conveniently get the pre-defined models in MMClassification.
- Refactor the BEiT backbone and support inference of both v1 and v2 models.
v1.0.0rc3 was released on 21/11/2022.
This version introduces a brand-new and flexible training & test engine, which is still under development. You are welcome to try it following [the documentation](https://mmclassification.readthedocs.io/zh_CN/1.x/).
- Add the **Switch Recipe Hook**. Now we can modify data augmentation, mixup settings, loss settings, etc. during training.
- Add **TIMM and HuggingFace** wrappers. Now we can directly train and use models from TIMM and HuggingFace.
- Support retrieval tasks.
- Reproduce the training accuracy of **MobileOne**.
Meanwhile, the new version contains some changes that are incompatible with the old version. Please check [the migration guide](https://mmclassification.readthedocs.io/zh_CN/1.x/migration.html) for details about these changes.
v1.0.0rc0 was released on 31/08/2022.
This version introduces a brand-new and flexible training & test engine, which is still under development. You are welcome to try it following [the documentation](https://mmclassification.readthedocs.io/zh_CN/1.x/).
Meanwhile, the new version contains some changes that are incompatible with the old version. Please check [the migration guide](https://mmclassification.readthedocs.io/zh_CN/1.x/migration.html) for details about these changes.
The public beta of the new version will last until the end of 2022. During this period, we will develop on the `1.x` branch without merging it into the `master` branch. In addition, we will keep
maintaining the 0.x version until at least the end of 2023.
Please refer to [the changelog](https://mmclassification.readthedocs.io/zh_CN/1.x/notes/changelog.html) for the release history and update details
Please refer to [the changelog](https://mmclassification.readthedocs.io/zh_CN/1.x/notes/changelog.html) for the release history and update details.
## 安装
@@ -115,8 +106,8 @@ MMClassification is an open source image classification toolbox based on PyTorch, and is a member of the [O
conda create -n open-mmlab python=3.8 pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch -y
conda activate open-mmlab
pip3 install openmim
git clone -b 1.x https://github.com/open-mmlab/mmclassification.git
cd mmclassification
git clone https://github.com/open-mmlab/mmpretrain.git
cd mmpretrain
mim install -e .
```
@@ -126,79 +117,136 @@ mim install -e .
We provide a series of basic tutorials for new users:
- [Inference with existing models](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/inference.html)
- [Prepare Dataset](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/dataset_prepare.html)
- [Training and Test](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/train_test.html)
- [Learn about Configs](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/config.html)
- [Fine-tune Models](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/finetune.html)
- [Analysis Tools](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/analysis.html)
- [Visualization Tools](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/visualization.html)
- [Other Useful Tools](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/useful_tools.html)
- [Prepare Dataset](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/dataset_prepare.html)
- [Inference with existing models](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/inference.html)
- [Train](https://mmclassification.readthedocs.io/zh_CN/pretrain/user_guides/train.html)
- [Test](https://mmclassification.readthedocs.io/zh_CN/pretrain/user_guides/test.html)
- [Downstream tasks](https://mmclassification.readthedocs.io/zh_CN/pretrain/user_guides/downstream.html)
For more information, please refer to [our documentation](https://mmclassification.readthedocs.io/zh_CN/pretrain/).
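
To illustrate the fine-tuning guide linked above, here is a hypothetical MMEngine-style config sketch; the base config path, class count, and learning rate are placeholders for your own dataset rather than files touched by this PR:

```python
# configs/my_project/resnet50_finetune.py (hypothetical location)
_base_ = ['../resnet/resnet50_8xb32_in1k.py']  # inherit an existing ImageNet recipe

# Swap the classification head to match the target dataset (10 classes here).
model = dict(head=dict(num_classes=10))

# A smaller learning rate is typical when starting from pretrained weights.
optim_wrapper = dict(optimizer=dict(lr=0.01))

# Point this at a pretrained checkpoint (local path or URL) before training.
load_from = None
```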
## Model zoo
Results and models are available in the [model zoo](https://mmclassification.readthedocs.io/zh_CN/1.x/modelzoo_statistics.html)
Results and models are available in the [model zoo](https://mmclassification.readthedocs.io/zh_CN/1.x/modelzoo_statistics.html).
<details open>
<summary>Supported backbones</summary>
- [x] [VGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vgg)
- [x] [ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnet)
- [x] [ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnext)
- [x] [SE-ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [SE-ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [RegNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/regnet)
- [x] [ShuffleNetV1](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v1)
- [x] [ShuffleNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v2)
- [x] [MobileNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v2)
- [x] [MobileNetV3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v3)
- [x] [Swin-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/swin_transformer)
- [x] [Swin-Transformer V2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/swin_transformer_v2)
- [x] [RepVGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/repvgg)
- [x] [Vision-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vision_transformer)
- [x] [Transformer-in-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/tnt)
- [x] [Res2Net](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/res2net)
- [x] [MLP-Mixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mlp_mixer)
- [x] [DeiT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/deit)
- [x] [DeiT-3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/deit3)
- [x] [Conformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/conformer)
- [x] [T2T-ViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/t2t_vit)
- [x] [Twins](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/twins)
- [x] [EfficientNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/efficientnet)
- [x] [EdgeNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/edgenext)
- [x] [ConvNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convnext)
- [x] [HRNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/hrnet)
- [x] [VAN](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/van)
- [x] [ConvMixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convmixer)
- [x] [CSPNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/cspnet)
- [x] [PoolFormer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/poolformer)
- [x] [Inception V3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/inception_v3)
- [x] [MobileOne](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobileone)
- [x] [EfficientFormer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/efficientformer)
- [x] [MViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mvit)
- [x] [HorNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/hornet)
- [x] [MobileViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilevit)
- [x] [DaViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/davit)
- [x] [RepLKNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/replknet)
- [x] [BEiT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/beit) / [BEiT v2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/beitv2)
- [x] [EVA](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/eva)
- [x] [MixMIM](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mixmim)
</details>
<div align="center">
<b>Overview</b>
</div>
<table align="center">
<tbody>
<tr align="center" valign="bottom">
<td>
<b>Supported Backbones</b>
</td>
<td>
<b>Self-supervised Learning</b>
</td>
<td>
<b>Others</b>
</td>
</tr>
<tr valign="top">
<td>
<ul>
<li><a href="configs/vgg">VGG</a></li>
<li><a href="configs/resnet">ResNet</a></li>
<li><a href="configs/resnext">ResNeXt</a></li>
<li><a href="configs/seresnet">SE-ResNet</a></li>
<li><a href="configs/seresnet">SE-ResNeXt</a></li>
<li><a href="configs/regnet">RegNet</a></li>
<li><a href="configs/shufflenet_v1">ShuffleNet V1</a></li>
<li><a href="configs/shufflenet_v2">ShuffleNet V2</a></li>
<li><a href="configs/mobilenet_v2">MobileNet V2</a></li>
<li><a href="configs/mobilenet_v3">MobileNet V3</a></li>
<li><a href="configs/swin_transformer">Swin-Transformer</a></li>
<li><a href="configs/swin_transformer_v2">Swin-Transformer V2</a></li>
<li><a href="configs/repvgg">RepVGG</a></li>
<li><a href="configs/vision_transformer">Vision-Transformer</a></li>
<li><a href="configs/tnt">Transformer-in-Transformer</a></li>
<li><a href="configs/res2net">Res2Net</a></li>
<li><a href="configs/mlp_mixer">MLP-Mixer</a></li>
<li><a href="configs/deit">DeiT</a></li>
<li><a href="configs/deit3">DeiT-3</a></li>
<li><a href="configs/conformer">Conformer</a></li>
<li><a href="configs/t2t_vit">T2T-ViT</a></li>
<li><a href="configs/twins">Twins</a></li>
<li><a href="configs/efficientnet">EfficientNet</a></li>
<li><a href="configs/edgenext">EdgeNeXt</a></li>
<li><a href="configs/convnext">ConvNeXt</a></li>
<li><a href="configs/hrnet">HRNet</a></li>
<li><a href="configs/van">VAN</a></li>
<li><a href="configs/convmixer">ConvMixer</a></li>
<li><a href="configs/cspnet">CSPNet</a></li>
<li><a href="configs/poolformer">PoolFormer</a></li>
<li><a href="configs/inception_v3">Inception V3</a></li>
<li><a href="configs/mobileone">MobileOne</a></li>
<li><a href="configs/efficientformer">EfficientFormer</a></li>
<li><a href="configs/mvit">MViT</a></li>
<li><a href="configs/hornet">HorNet</a></li>
<li><a href="configs/mobilevit">MobileViT</a></li>
<li><a href="configs/davit">DaViT</a></li>
<li><a href="configs/replknet">RepLKNet</a></li>
<li><a href="configs/beit">BEiT</a></li>
<li><a href="configs/mixmim">MixMIM</a></li>
<li><a href="configs/revvit">RevViT</a></li>
<li><a href="configs/convnext_v2">ConvNeXt V2</a></li>
<li><a href="configs/vig">ViG</a></li>
<li><a href="configs/xcit">XCiT</a></li>
<li><a href="configs/levit">LeViT</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="configs/mocov2">MoCo V1 (CVPR'2020)</a></li>
<li><a href="configs/simclr">SimCLR (ICML'2020)</a></li>
<li><a href="configs/mocov2">MoCo V2 (arXiv'2020)</a></li>
<li><a href="configs/byol">BYOL (NeurIPS'2020)</a></li>
<li><a href="configs/swav">SwAV (NeurIPS'2020)</a></li>
<li><a href="configs/densecl">DenseCL (CVPR'2021)</a></li>
<li><a href="configs/simsiam">SimSiam (CVPR'2021)</a></li>
<li><a href="configs/barlowtwins">Barlow Twins (ICML'2021)</a></li>
<li><a href="configs/mocov3">MoCo V3 (ICCV'2021)</a></li>
<li><a href="configs/beit">BEiT (ICLR'2022)</a></li>
<li><a href="configs/mae">MAE (CVPR'2022)</a></li>
<li><a href="configs/simmim">SimMIM (CVPR'2022)</a></li>
<li><a href="configs/maskfeat">MaskFeat (CVPR'2022)</a></li>
<li><a href="configs/cae">CAE (arXiv'2022)</a></li>
<li><a href="configs/milan">MILAN (arXiv'2022)</a></li>
<li><a href="configs/beitv2">BEiT V2 (arXiv'2022)</a></li>
<li><a href="configs/eva">EVA (CVPR'2023)</a></li>
<li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
</ul>
</td>
<td>
Image Retrieval Task:
<ul>
<li><a href="configs/arcface">ArcFace (CVPR'2019)</a></li>
</ul>
Training & Test Tips:
<ul>
<li><a href="https://arxiv.org/abs/1909.13719">RandAug</a></li>
<li><a href="https://arxiv.org/abs/1805.09501">AutoAug</a></li>
<li><a href="mmpretrain/datasets/samplers/repeat_aug.py">RepeatAugSampler</a></li>
<li><a href="mmpretrain/models/tta/score_tta.py">TTA</a></li>
<li>...</li>
</ul>
</td>
</tbody>
</table>
## Contributing
We appreciate all contributions to improve MMClassification. Please refer to the [contribution guide](https://mmclassification.readthedocs.io/zh_CN/1.x/notes/contribution_guide.html) to learn how to contribute.
We appreciate all contributions to improve MMPreTrain. Please refer to the [contribution guide](https://mmclassification.readthedocs.io/zh_CN/1.x/notes/contribution_guide.html) to learn how to contribute.
## Acknowledgement
MMClassification is an open source project contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
MMPreTrain is an open source project contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
We hope this toolbox and benchmark can provide the community with flexible code tools, so that users can reimplement existing algorithms and develop their own new models, and thus keep contributing to the open-source community.
## Citation
If you use this project's code or benchmark in your research, please cite MMClassification with the following BibTeX entry.
If you use this project's code or benchmark in your research, please cite MMPreTrain with the following BibTeX entry.
```BibTeX
@misc{2020mmclassification,
@@ -219,7 +267,7 @@ MMClassification is an open source project contributed by various colleges and companies.
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision
- [MIM](https://github.com/open-mmlab/mim): MIM is the unified entrance for OpenMMLab projects, algorithms and models
- [MMEval](https://github.com/open-mmlab/mmeval): A unified and open cross-framework evaluation library
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox
- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab deep learning pre-training toolbox
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab object detection toolbox
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark
@@ -243,7 +291,7 @@ MMClassification is an open source project contributed by various colleges and companies.
Scan the QR codes below to follow the OpenMMLab team's [official Zhihu account](https://www.zhihu.com/people/openmmlab), join the OpenMMLab team's [official QQ group](https://jq.qq.com/?_wv=1027&k=aCvMxdr3), or contact the official OpenMMLab WeChat assistant
<div align="center">
<img src="https://github.com/open-mmlab/mmcv/raw/master/docs/en/_static/zhihu_qrcode.jpg" height="400" /> <img src="https://github.com/open-mmlab/mmcv/raw/master/docs/en/_static/qq_group_qrcode.jpg" height="400" /> <img src="https://github.com/open-mmlab/mmcv/raw/master/docs/en/_static/wechat_qrcode.jpg" height="400" />
<img src="./resources/zhihu_qrcode.jpg" height="400"/> <img src="./resources/xiaozhushou_weixin_qrcode.jpeg" height="400"/>
</div>
In the OpenMMLab community, we will


@@ -1,7 +1,7 @@
Welcome to MMPretrain's documentation!
============================================
MMPretrain is a newly upgraded open-source framework for visual pre-training.
MMPretrain is a newly upgraded open-source framework for pre-training.
It has set out to provide multiple powerful pre-trained backbones and
support different pre-training strategies. MMPretrain originated from the
famous open-source projects


@@ -5,7 +5,7 @@ MMPretrain is a newly upgraded open-source pre-training framework that aims to provide various
and supports different pre-training strategies. MMPretrain originated from the famous open-source projects
`MMClassification <https://github.com/open-mmlab/mmclassification/tree/1.x>`_ and
`MMSelfSup <https://github.com/open-mmlab/mmselfsup>`_, and develops many exciting new features.
Nowadays, the pre-training stage is crucial for visual recognition. With rich and strong pre-trained models, we are able to improve various downstream vision tasks.
Nowadays, the pre-training stage is crucial for visual recognition. With rich and strong pre-trained models, we are able to improve various downstream vision tasks.
Our codebase aims to be an easy-to-use and user-friendly library, and to simplify academic research activities and engineering tasks.
We detail the features and design of MMPretrain in the following different sections.

Binary image file added (42 KiB)

Binary image file added (388 KiB)