Bump to v1.0.0rc0 (#1007)

* Update docs.

* Update requirements.

* Update config readme and docstring.

* Update CONTRIBUTING.md

* Update README

* Update requirements/mminstall.txt

Co-authored-by: Yifei Yang <2744335995@qq.com>

* Update MMEngine docs link and add to readthedocs requirement.

Co-authored-by: Yifei Yang <2744335995@qq.com>
Ma Zerun 2022-08-31 23:57:51 +08:00 committed by GitHub
parent c95ab99289
commit 85b1eae7f1
47 changed files with 286 additions and 273 deletions


@ -17,7 +17,7 @@ Thanks for your interest in contributing to MMClassification! All kinds of contr
We recommend the potential contributors follow this workflow for contribution.
1. Fork and pull the latest MMClassification repository, follow [get_started](./docs/en/get_started.md) to setup the environment.
1. Fork and pull the latest MMClassification repository, follow [get started](https://mmclassification.readthedocs.io/en/1.x/get_started.html) to set up the environment.
2. Checkout a new branch (**do not use the master or dev branch** for PRs)
```bash
@ -44,7 +44,7 @@ We use the following tools for linting and formatting:
- [mdformat](https://github.com/executablebooks/mdformat): Mdformat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files.
- [docformatter](https://github.com/myint/docformatter): A formatter to format docstrings.
Style configurations of yapf and isort can be found in [setup.cfg](./setup.cfg).
Style configurations of yapf and isort can be found in [setup.cfg](https://github.com/open-mmlab/mmclassification/blob/1.x/setup.cfg).
### C++ and CUDA
@ -54,7 +54,7 @@ We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppgu
We use a [pre-commit hook](https://pre-commit.com/) that checks and formats `flake8`, `yapf`, `isort`, `trailing whitespaces` and `markdown files`,
fixes `end-of-files`, `double-quoted-strings`, `python-encoding-pragma` and `mixed-line-ending`, and sorts `requirements.txt` automatically on every commit.
The config for a pre-commit hook is stored in [.pre-commit-config](./.pre-commit-config.yaml).
The config for a pre-commit hook is stored in [.pre-commit-config](https://github.com/open-mmlab/mmclassification/blob/1.x/.pre-commit-config.yaml).
After you clone the repository, you will need to install and initialize the pre-commit hook.

README.md

@ -20,17 +20,17 @@
<div>&nbsp;</div>
[![PyPI](https://img.shields.io/pypi/v/mmcls)](https://pypi.org/project/mmcls)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://mmclassification.readthedocs.io/en/latest/)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://mmclassification.readthedocs.io/en/1.x/)
[![Build Status](https://github.com/open-mmlab/mmclassification/workflows/build/badge.svg)](https://github.com/open-mmlab/mmclassification/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmclassification/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmclassification)
[![license](https://img.shields.io/github/license/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/blob/master/LICENSE)
[![codecov](https://codecov.io/gh/open-mmlab/mmclassification/branch/1.x/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmclassification)
[![license](https://img.shields.io/github/license/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/blob/1.x/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/issues)
[📘 Documentation](https://mmclassification.readthedocs.io/en/latest/) |
[🛠️ Installation](https://mmclassification.readthedocs.io/en/latest/install.html) |
[👀 Model Zoo](https://mmclassification.readthedocs.io/en/latest/model_zoo.html) |
[🆕 Update News](https://mmclassification.readthedocs.io/en/latest/changelog.html) |
[📘 Documentation](https://mmclassification.readthedocs.io/en/1.x/) |
[🛠️ Installation](https://mmclassification.readthedocs.io/en/1.x/get_started.html) |
[👀 Model Zoo](https://mmclassification.readthedocs.io/en/1.x/modelzoo_statistics.html) |
[🆕 Update News](https://mmclassification.readthedocs.io/en/1.x/notes/changelog.html) |
[🤔 Reporting Issues](https://github.com/open-mmlab/mmclassification/issues/new/choose)
</div>
@ -42,7 +42,7 @@ English | [简体中文](/README_zh-CN.md)
MMClassification is an open source image classification toolbox based on PyTorch. It is
a part of the [OpenMMLab](https://openmmlab.com/) project.
The master branch works with **PyTorch 1.5+**.
The `1.x` branch works with **PyTorch 1.6+**.
<div align="center">
<img src="https://user-images.githubusercontent.com/9102141/87268895-3e0d0780-c4fe-11ea-849e-6140b7e0d4de.gif" width="70%"/>
@ -58,97 +58,87 @@ The master branch works with **PyTorch 1.5+**.
## What's new
v0.23.0 was released in 1/5/2022.
Highlights of the new version:
v1.0.0rc0 was released on 31/8/2022.
- Support **DenseNet**, **VAN** and **PoolFormer**, and provide pre-trained models.
- Support training on IPU.
- New style API docs, welcome [view it](https://mmclassification.readthedocs.io/en/master/api/models.html).
This release introduces a brand new and flexible training & test engine, which is still a work in progress. You are welcome
to try it out following [the documentation](https://mmclassification.readthedocs.io/en/1.x/).
v0.22.0 was released in 30/3/2022.
There are also some BC-breaking changes. Please check [the migration tutorial](https://mmclassification.readthedocs.io/en/1.x/migration.html).
Highlights of the new version:
The release candidate period will last until the end of 2022, during which we will develop on the `1.x` branch. We will keep maintaining the 0.x versions until at least the end of 2023.
- Support a series of **CSP Network**, such as CSP-ResNet, CSP-ResNeXt and CSP-DarkNet.
- A new `CustomDataset` class to help you **build dataset of yourself**!
- Support new backbones - **ConvMixer**, **RepMLP** and new dataset - **CUB dataset**.
Please refer to [changelog.md](docs/en/changelog.md) for more details and other release history.
Please refer to [changelog.md](https://mmclassification.readthedocs.io/en/1.x/notes/changelog.html) for more details and other release history.
## Installation
Below are quick steps for installation:
```shell
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda create -n open-mmlab python=3.8 pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch -y
conda activate open-mmlab
pip3 install openmim
mim install mmcv-full
git clone https://github.com/open-mmlab/mmclassification.git
pip install openmim
git clone -b 1.x https://github.com/open-mmlab/mmclassification.git
cd mmclassification
pip3 install -e .
mim install -e .
```
Please refer to [install.md](https://mmclassification.readthedocs.io/en/latest/install.html) for more detailed installation and dataset preparation.
Please refer to [install.md](https://mmclassification.readthedocs.io/en/1.x/get_started.html) for more detailed installation and dataset preparation.
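As a quick sanity check after installation, you can confirm that the package imports and reports the expected version. This is a minimal sketch that only assumes the standard `__version__` attribute:

```python
# Verify that MMClassification 1.x is importable (minimal sketch).
import mmcls

print(mmcls.__version__)  # should be 1.0.0rc0 or later on the 1.x branch
```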
## Getting Started
## User Guides
Please see [Getting Started](https://mmclassification.readthedocs.io/en/latest/getting_started.html) for the basic usage of MMClassification. There are also tutorials:
We provide a series of tutorials on the basic usage of MMClassification for new users:
- [Learn about Configs](https://mmclassification.readthedocs.io/en/latest/tutorials/config.html)
- [Fine-tune Models](https://mmclassification.readthedocs.io/en/latest/tutorials/finetune.html)
- [Add New Dataset](https://mmclassification.readthedocs.io/en/latest/tutorials/new_dataset.html)
- [Customizie Data Pipeline](https://mmclassification.readthedocs.io/en/latest/tutorials/data_pipeline.html)
- [Add New Modules](https://mmclassification.readthedocs.io/en/latest/tutorials/new_modules.html)
- [Customizie Schedule](https://mmclassification.readthedocs.io/en/latest/tutorials/schedule.html)
- [Customizie Runtime Settings](https://mmclassification.readthedocs.io/en/latest/tutorials/runtime.html)
Colab tutorials are also provided:
- Learn about MMClassification **Python API**: [Preview the notebook](https://github.com/open-mmlab/mmclassification/blob/master/docs/en/tutorials/MMClassification_python.ipynb) or directly [run on Colab](https://colab.research.google.com/github/open-mmlab/mmclassification/blob/master/docs/en/tutorials/MMClassification_python.ipynb).
- Learn about MMClassification **CLI tools**: [Preview the notebook](https://github.com/open-mmlab/mmclassification/blob/master/docs/en/tutorials/MMClassification_tools.ipynb) or directly [run on Colab](https://colab.research.google.com/github/open-mmlab/mmclassification/blob/master/docs/en/tutorials/MMClassification_tools.ipynb).
- [Inference with existing models](https://mmclassification.readthedocs.io/en/1.x/user_guides/inference.html)
- [Prepare Dataset](https://mmclassification.readthedocs.io/en/1.x/user_guides/dataset_prepare.html)
- [Training and Test](https://mmclassification.readthedocs.io/en/1.x/user_guides/train_test.html)
- [Learn about Configs](https://mmclassification.readthedocs.io/en/1.x/user_guides/config.html)
- [Fine-tune Models](https://mmclassification.readthedocs.io/en/1.x/user_guides/finetune.html)
- [Analysis Tools](https://mmclassification.readthedocs.io/en/1.x/user_guides/analysis.html)
- [Visualization Tools](https://mmclassification.readthedocs.io/en/1.x/user_guides/visualization.html)
- [Other Useful Tools](https://mmclassification.readthedocs.io/en/1.x/user_guides/useful_tools.html)
## Model zoo
Results and models are available in the [model zoo](https://mmclassification.readthedocs.io/en/latest/model_zoo.html).
Results and models are available in the [model zoo](https://mmclassification.readthedocs.io/en/1.x/modelzoo_statistics.html).
<details open>
<summary>Supported backbones</summary>
- [x] [VGG](https://github.com/open-mmlab/mmclassification/tree/master/configs/vgg)
- [x] [ResNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnet)
- [x] [ResNeXt](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnext)
- [x] [SE-ResNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/seresnet)
- [x] [SE-ResNeXt](https://github.com/open-mmlab/mmclassification/tree/master/configs/seresnet)
- [x] [RegNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/regnet)
- [x] [ShuffleNetV1](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v1)
- [x] [ShuffleNetV2](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v2)
- [x] [MobileNetV2](https://github.com/open-mmlab/mmclassification/tree/master/configs/mobilenet_v2)
- [x] [MobileNetV3](https://github.com/open-mmlab/mmclassification/tree/master/configs/mobilenet_v3)
- [x] [Swin-Transformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/swin_transformer)
- [x] [RepVGG](https://github.com/open-mmlab/mmclassification/tree/master/configs/repvgg)
- [x] [Vision-Transformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/vision_transformer)
- [x] [Transformer-in-Transformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/tnt)
- [x] [Res2Net](https://github.com/open-mmlab/mmclassification/tree/master/configs/res2net)
- [x] [MLP-Mixer](https://github.com/open-mmlab/mmclassification/tree/master/configs/mlp_mixer)
- [x] [DeiT](https://github.com/open-mmlab/mmclassification/tree/master/configs/deit)
- [x] [Conformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/conformer)
- [x] [T2T-ViT](https://github.com/open-mmlab/mmclassification/tree/master/configs/t2t_vit)
- [x] [Twins](https://github.com/open-mmlab/mmclassification/tree/master/configs/twins)
- [x] [EfficientNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/efficientnet)
- [x] [ConvNeXt](https://github.com/open-mmlab/mmclassification/tree/master/configs/convnext)
- [x] [HRNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/hrnet)
- [x] [VAN](https://github.com/open-mmlab/mmclassification/tree/master/configs/van)
- [x] [ConvMixer](https://github.com/open-mmlab/mmclassification/tree/master/configs/convmixer)
- [x] [CSPNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/cspnet)
- [x] [PoolFormer](https://github.com/open-mmlab/mmclassification/tree/master/configs/poolformer)
- [x] [VGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vgg)
- [x] [ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnet)
- [x] [ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnext)
- [x] [SE-ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [SE-ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [RegNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/regnet)
- [x] [ShuffleNetV1](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v1)
- [x] [ShuffleNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v2)
- [x] [MobileNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v2)
- [x] [MobileNetV3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v3)
- [x] [Swin-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/swin_transformer)
- [x] [RepVGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/repvgg)
- [x] [Vision-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vision_transformer)
- [x] [Transformer-in-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/tnt)
- [x] [Res2Net](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/res2net)
- [x] [MLP-Mixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mlp_mixer)
- [x] [DeiT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/deit)
- [x] [Conformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/conformer)
- [x] [T2T-ViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/t2t_vit)
- [x] [Twins](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/twins)
- [x] [EfficientNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/efficientnet)
- [x] [ConvNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convnext)
- [x] [HRNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/hrnet)
- [x] [VAN](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/van)
- [x] [ConvMixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convmixer)
- [x] [CSPNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/cspnet)
- [x] [PoolFormer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/poolformer)
- [x] [Inception V3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/inception_v3)
</details>
## Contributing
We appreciate all contributions to improve MMClassification.
Please refer to [CONTRUBUTING.md](https://mmclassification.readthedocs.io/en/latest/community/CONTRIBUTING.html) for the contributing guideline.
Please refer to [CONTRIBUTING.md](https://mmclassification.readthedocs.io/en/1.x/notes/contribution_guide.html) for the contributing guideline.
## Acknowledgement
@ -174,6 +164,7 @@ This project is released under the [Apache 2.0 license](LICENSE).
## Projects in OpenMMLab
- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.


@ -20,17 +20,17 @@
<div>&nbsp;</div>
[![PyPI](https://img.shields.io/pypi/v/mmcls)](https://pypi.org/project/mmcls)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://mmclassification.readthedocs.io/zh_CN/latest/)
[![Docs](https://img.shields.io/badge/docs-latest-blue)](https://mmclassification.readthedocs.io/zh_CN/1.x/)
[![Build Status](https://github.com/open-mmlab/mmclassification/workflows/build/badge.svg)](https://github.com/open-mmlab/mmclassification/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmclassification/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmclassification)
[![license](https://img.shields.io/github/license/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/blob/master/LICENSE)
[![codecov](https://codecov.io/gh/open-mmlab/mmclassification/branch/1.x/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmclassification)
[![license](https://img.shields.io/github/license/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/blob/1.x/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmclassification.svg)](https://github.com/open-mmlab/mmclassification/issues)
[📘 中文文档](https://mmclassification.readthedocs.io/zh_CN/latest/) |
[🛠️ 安装教程](https://mmclassification.readthedocs.io/zh_CN/latest/install.html) |
[👀 模型库](https://mmclassification.readthedocs.io/zh_CN/latest/model_zoo.html) |
[🆕 更新日志](https://mmclassification.readthedocs.io/en/latest/changelog.html) |
[📘 中文文档](https://mmclassification.readthedocs.io/zh_CN/1.x/) |
[🛠️ 安装教程](https://mmclassification.readthedocs.io/zh_CN/1.x/get_started.html) |
[👀 模型库](https://mmclassification.readthedocs.io/zh_CN/1.x/modelzoo_statistics.html) |
[🆕 更新日志](https://mmclassification.readthedocs.io/en/1.x/notes/changelog.html) |
[🤔 报告问题](https://github.com/open-mmlab/mmclassification/issues/new/choose)
</div>
@ -57,97 +57,86 @@ MMClassification 是一款基于 PyTorch 的开源图像分类工具箱,是 [O
## 更新日志
2022/5/1 发布了 v0.23.0 版本
2022/8/31 发布了 v1.0.0rc0 版本
新版本亮点:
这个版本引入一个全新的,可扩展性强的训练和测试引擎,但目前仍在开发中。欢迎根据[文档](https://mmclassification.readthedocs.io/zh_CN/1.x/)进行试用。
- 支持了 **DenseNet**、**VAN** 和 **PoolFormer** 三个网络,并提供了预训练模型。
- 支持在 IPU 上进行训练。
- 更新了 API 文档的样式,更方便查阅,[欢迎查阅](https://mmclassification.readthedocs.io/en/master/api/models.html)。
同时,新版本中存在一些与旧版本不兼容的修改。请查看[迁移文档](https://mmclassification.readthedocs.io/zh_CN/1.x/migration.html)来详细了解这些变动。
2022/3/30 发布了 v0.22.0 版本
新版本的公测将持续到 2022 年末,在此期间,我们将基于 `1.x` 分支进行更新,不会合入到 `master` 分支。另外,至少
到 2023 年末,我们会保持对 0.x 版本的维护。
新版本亮点:
- 支持了一系列 **CSP Net**,包括 CSP-ResNet、CSP-ResNeXt 和 CSP-DarkNet。
- 我们提供了一个新的 `CustomDataset` 类,这个类将帮助你轻松使用**自己的数据集**
- 支持了新的主干网络 **ConvMixer**、**RepMLP** 和一个新的数据集 **CUB dataset**
发布历史和更新细节请参考 [更新日志](docs/en/changelog.md)
发布历史和更新细节请参考 [更新日志](https://mmclassification.readthedocs.io/zh_CN/1.x/notes/changelog.html)
## 安装
以下是安装的简要步骤:
```shell
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda create -n open-mmlab python=3.8 pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch -y
conda activate open-mmlab
pip3 install openmim
mim install mmcv-full
git clone https://github.com/open-mmlab/mmclassification.git
git clone -b 1.x https://github.com/open-mmlab/mmclassification.git
cd mmclassification
pip3 install -e .
mim install -e .
```
更详细的步骤请参考 [安装指南](https://mmclassification.readthedocs.io/zh_CN/latest/install.html) 进行安装。
更详细的步骤请参考 [安装指南](https://mmclassification.readthedocs.io/zh_CN/1.x/get_started.html) 进行安装。
## 基础教程
请参考 [基础教程](https://mmclassification.readthedocs.io/zh_CN/latest/getting_started.html) 来了解 MMClassification 的基本使用。MMClassification 也提供了其他更详细的教程:
我们为新用户提供了一系列基础教程:
- [如何编写配置文件](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/config.html)
- [如何微调模型](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/finetune.html)
- [如何增加新数据集](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/new_dataset.html)
- [如何设计数据处理流程](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/data_pipeline.html)
- [如何增加新模块](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/new_modules.html)
- [如何自定义优化策略](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/schedule.html)
- [如何自定义运行参数](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/runtime.html)
我们也提供了相应的中文 Colab 教程:
- 了解 MMClassification **Python API**:[预览 Notebook](https://github.com/open-mmlab/mmclassification/blob/master/docs/zh_CN/tutorials/MMClassification_python_cn.ipynb) 或者直接[在 Colab 上运行](https://colab.research.google.com/github/open-mmlab/mmclassification/blob/master/docs/zh_CN/tutorials/MMClassification_python_cn.ipynb)。
- 了解 MMClassification **命令行工具**:[预览 Notebook](https://github.com/open-mmlab/mmclassification/blob/master/docs/zh_CN/tutorials/MMClassification_tools_cn.ipynb) 或者直接[在 Colab 上运行](https://colab.research.google.com/github/open-mmlab/mmclassification/blob/master/docs/zh_CN/tutorials/MMClassification_tools_cn.ipynb)。
- [使用现有模型推理](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/inference.html)
- [准备数据集](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/dataset_prepare.html)
- [训练与测试](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/train_test.html)
- [学习配置文件](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/config.html)
- [如何微调模型](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/finetune.html)
- [分析工具](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/analysis.html)
- [可视化工具](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/visualization.html)
- [其他工具](https://mmclassification.readthedocs.io/zh_CN/1.x/user_guides/useful_tools.html)
## 模型库
相关结果和模型可在 [model zoo](https://mmclassification.readthedocs.io/en/latest/model_zoo.html) 中获得
相关结果和模型可在 [model zoo](https://mmclassification.readthedocs.io/zh_CN/1.x/modelzoo_statistics.html) 中获得
<details open>
<summary>支持的主干网络</summary>
- [x] [VGG](https://github.com/open-mmlab/mmclassification/tree/master/configs/vgg)
- [x] [ResNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnet)
- [x] [ResNeXt](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnext)
- [x] [SE-ResNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/seresnet)
- [x] [SE-ResNeXt](https://github.com/open-mmlab/mmclassification/tree/master/configs/seresnet)
- [x] [RegNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/regnet)
- [x] [ShuffleNetV1](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v1)
- [x] [ShuffleNetV2](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v2)
- [x] [MobileNetV2](https://github.com/open-mmlab/mmclassification/tree/master/configs/mobilenet_v2)
- [x] [MobileNetV3](https://github.com/open-mmlab/mmclassification/tree/master/configs/mobilenet_v3)
- [x] [Swin-Transformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/swin_transformer)
- [x] [RepVGG](https://github.com/open-mmlab/mmclassification/tree/master/configs/repvgg)
- [x] [Vision-Transformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/vision_transformer)
- [x] [Transformer-in-Transformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/tnt)
- [x] [Res2Net](https://github.com/open-mmlab/mmclassification/tree/master/configs/res2net)
- [x] [MLP-Mixer](https://github.com/open-mmlab/mmclassification/tree/master/configs/mlp_mixer)
- [x] [DeiT](https://github.com/open-mmlab/mmclassification/tree/master/configs/deit)
- [x] [Conformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/conformer)
- [x] [T2T-ViT](https://github.com/open-mmlab/mmclassification/tree/master/configs/t2t_vit)
- [x] [Twins](https://github.com/open-mmlab/mmclassification/tree/master/configs/twins)
- [x] [EfficientNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/efficientnet)
- [x] [ConvNeXt](https://github.com/open-mmlab/mmclassification/tree/master/configs/convnext)
- [x] [HRNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/hrnet)
- [x] [VAN](https://github.com/open-mmlab/mmclassification/tree/master/configs/van)
- [x] [ConvMixer](https://github.com/open-mmlab/mmclassification/tree/master/configs/convmixer)
- [x] [CSPNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/cspnet)
- [x] [PoolFormer](https://github.com/open-mmlab/mmclassification/tree/master/configs/poolformer)
- [x] [VGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vgg)
- [x] [ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnet)
- [x] [ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/resnext)
- [x] [SE-ResNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [SE-ResNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/seresnet)
- [x] [RegNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/regnet)
- [x] [ShuffleNetV1](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v1)
- [x] [ShuffleNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/shufflenet_v2)
- [x] [MobileNetV2](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v2)
- [x] [MobileNetV3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobilenet_v3)
- [x] [Swin-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/swin_transformer)
- [x] [RepVGG](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/repvgg)
- [x] [Vision-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/vision_transformer)
- [x] [Transformer-in-Transformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/tnt)
- [x] [Res2Net](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/res2net)
- [x] [MLP-Mixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mlp_mixer)
- [x] [DeiT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/deit)
- [x] [Conformer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/conformer)
- [x] [T2T-ViT](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/t2t_vit)
- [x] [Twins](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/twins)
- [x] [EfficientNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/efficientnet)
- [x] [ConvNeXt](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convnext)
- [x] [HRNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/hrnet)
- [x] [VAN](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/van)
- [x] [ConvMixer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/convmixer)
- [x] [CSPNet](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/cspnet)
- [x] [PoolFormer](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/poolformer)
- [x] [Inception V3](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/inception_v3)
</details>
## 参与贡献
我们非常欢迎任何有助于提升 MMClassification 的贡献,请参考 [贡献指南](https://mmclassification.readthedocs.io/zh_CN/latest/community/CONTRIBUTING.html) 来了解如何参与贡献。
我们非常欢迎任何有助于提升 MMClassification 的贡献,请参考 [贡献指南](https://mmclassification.readthedocs.io/zh_CN/1.x/notes/contribution_guide.html) 来了解如何参与贡献。
## 致谢
@ -174,6 +163,7 @@ MMClassification 是一款由不同学校和公司共同贡献的开源项目。
## OpenMMLab 的其他项目
- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab 深度学习模型训练基础库
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab 计算机视觉基础库
- [MIM](https://github.com/open-mmlab/mim): MIM 是 OpenMMlab 项目、算法、模型的统一入口
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab 图像分类工具箱


@ -18,9 +18,9 @@ Representing features at multiple scales is of great importance for numerous vis
| Model | resolution | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------: | :--------: | :-------: | :------: | :-------: | :-------: | :----------------------------------------------------------------: | :-------------------------------------------------------------------: |
| Res2Net-50-14w-8s\* | 224x224 | 25.06 | 4.22 | 78.14 | 93.85 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net50-w14-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth) \| [log](<>) |
| Res2Net-50-26w-8s\* | 224x224 | 48.40 | 8.39 | 79.20 | 94.36 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net50-w26-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w26-s8_3rdparty_8xb32_in1k_20210927-f547a94b.pth) \| [log](<>) |
| Res2Net-101-26w-4s\* | 224x224 | 45.21 | 8.12 | 79.19 | 94.44 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net101-w26-s4_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net101-w26-s4_3rdparty_8xb32_in1k_20210927-870b6c36.pth) \| [log](<>) |
| Res2Net-50-14w-8s\* | 224x224 | 25.06 | 4.22 | 78.14 | 93.85 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net50-w14-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w14-s8_3rdparty_8xb32_in1k_20210927-bc967bf1.pth) |
| Res2Net-50-26w-8s\* | 224x224 | 48.40 | 8.39 | 79.20 | 94.36 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net50-w26-s8_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net50-w26-s8_3rdparty_8xb32_in1k_20210927-f547a94b.pth) |
| Res2Net-101-26w-4s\* | 224x224 | 45.21 | 8.12 | 79.19 | 94.44 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/res2net/res2net101-w26-s4_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/res2net/res2net101-w26-s4_3rdparty_8xb32_in1k_20210927-870b6c36.pth) |
*Models with * are converted from the [official repo](https://github.com/Res2Net/Res2Net-PretrainedModels). The config files of these models are only for validation. We don't guarantee the training accuracy with these config files, and you are welcome to contribute your reproduction results.*


@ -0,0 +1 @@
# Data Flow (TODO)


@ -0,0 +1 @@
# Custom evaluation metrics (TODO)


@ -1,4 +1,4 @@
# Tutorial 7: Customize Runtime Settings
# Customize Runtime Settings (TODO)
In this tutorial, we will introduce how to customize the workflow and hooks when running your own settings for the project.
@ -255,4 +255,4 @@ By default, the hook's priority is set as `NORMAL` during registration.
- `resume_from` : not only imports model weights, but also the optimizer state and current epoch information; it is mainly used to resume training from a checkpoint.
- `init_cfg.Pretrained` : Load weights during weight initialization, and you can specify which module to load. This is usually used when fine-tuning a model, refer to [Tutorial 2: Fine-tune Models](./finetune.md).
- `init_cfg.Pretrained` : Loads weights during weight initialization, and you can specify which module to load. This is usually used when fine-tuning a model; refer to [Tutorial 2: Fine-tune Models](../user_guides/finetune.md) and the config sketch below.
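As an illustration of the `init_cfg.Pretrained` option above, a fine-tuning config fragment might look like the following sketch. The checkpoint path and the `prefix` value are placeholders for illustration, not values from this repository:

```python
# Sketch: load pretrained weights into the backbone during weight initialization.
model = dict(
    backbone=dict(
        init_cfg=dict(
            type='Pretrained',
            checkpoint='path/or/url/to/pretrained.pth',  # placeholder checkpoint
            prefix='backbone',  # load only the backbone weights from the checkpoint
        ),
    ),
)
```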


@ -1,4 +1,4 @@
# Customize models
# Customize Models
In our design, a complete model is defined as an ImageClassifier, which contains four types of model components based on their functionalities.


@ -1,4 +1,4 @@
# Tutorial 4: Custom Data Pipelines
# Customize Data Pipeline (TODO)
## Design of Data pipelines
@ -98,7 +98,7 @@ More supported backends can be found in [mmcv.fileio.FileClient](https://github.
- remove: all other keys except for those specified by `keys`
For more information about other data transformation classes, please refer to [Data Transformations](../api/transforms.rst)
For more information about other data transformation classes, please refer to [Data Transforms](mmcls.datasets.transforms)
## Extend and use custom pipelines
@ -147,4 +147,4 @@ For more information about other data transformation classes, please refer to [D
## Pipeline visualization
After designing data pipelines, you can use the [visualization tools](../tools/visualization.md) to view the performance.
After designing data pipelines, you can use the [visualization tools](../user_guides/visualization.md) to view the performance.


@ -1,4 +1,4 @@
# Tutorial 6: Customize Schedule
# Customize Training Schedule (TODO)
In this tutorial, we will introduce how to construct optimizers, customize learning rate and momentum schedules, configure parameter-wise settings, clip gradients, accumulate gradients, and use self-implemented optimization methods in your project.


@ -43,16 +43,13 @@ for example:
....
)
Every item of a pipeline list is one of the following data transforms class. And if you want to add a custom data transformation class, the tutorial :doc:`Custom Data Pipelines </advanced_guides/pipeline.md>` will help you.
Every item of a pipeline list is one of the following data transform classes. If you want to add a custom data transformation class, the tutorial :doc:`Custom Data Pipelines </advanced_guides/pipeline>` will help you.
.. contents::
:depth: 1
:local:
:backlinks: top
.. module:: mmcls.datasets.transforms
Processing and Augmentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^


@ -52,8 +52,6 @@ extensions = [
'sphinx_copybutton',
]
autodoc_mock_imports = ['mmcv._ext', 'matplotlib']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@ -219,22 +217,33 @@ StandaloneHTMLBuilder.supported_image_types = [
# Ignore >>> when copying code
copybutton_prompt_text = r'>>> |\.\.\. '
copybutton_prompt_is_regexp = True
# Auto-generated header anchors
myst_heading_anchors = 3
# Enable "colon_fence" extension of myst.
myst_enable_extensions = ['colon_fence']
# Configuration for intersphinx
intersphinx_mapping = {
'python': ('https://docs.python.org/3', None),
'numpy': ('https://numpy.org/doc/stable', None),
'torch': ('https://pytorch.org/docs/stable/', None),
'mmcv': ('https://mmcv.readthedocs.io/en/master/', None),
'mmcv': ('https://mmcv.readthedocs.io/en/2.x/', None),
'mmengine': ('https://mmengine.readthedocs.io/en/latest/', None),
}
napoleon_custom_sections = [
# Custom sections for data elements.
('Meta fields', 'params_style'),
('Data fields', 'params_style'),
]
# Disable docstring inheritance
autodoc_inherit_docstrings = False
# Mock some imports when generating API docs.
autodoc_mock_imports = ['mmcv._ext', 'matplotlib']
# Disable displaying type annotations, these can be very verbose
autodoc_typehints = 'none'
def builder_inited_handler(app):
subprocess.run(['./stat.py'])


@ -45,7 +45,7 @@ We recommend that users follow our best practices to install MMClassification. H
```shell
pip install -U openmim
mim install mmengine "mmcv-full>=2.0rc0"
mim install mmengine "mmcv>=2.0.0rc1"
```
**Step 2.** Install MMClassification.
@ -80,7 +80,7 @@ git checkout dev-1.x
Just install with pip.
```shell
pip install "mmcls>=1.0rc0"
pip install "mmcls>=1.0.0rc0"
```
## Verify the installation
@ -145,14 +145,13 @@ MMCV contains C++ and CUDA extensions, thus depending on PyTorch in a complex
way. MIM solves such dependencies automatically and makes the installation
easier. However, it is not a must.
To install MMCV with pip instead of MIM, please follow
[MMCV installation guides](https://mmcv.readthedocs.io/en/dev-2.x/get_started/installation.html).
To install MMCV with pip instead of MIM, please follow {external+mmcv:doc}`MMCV installation guides <get_started/installation>`.
This requires manually specifying a find URL based on the PyTorch version and its CUDA version.
For example, the following command install mmcv-full built for PyTorch 1.10.x and CUDA 11.3.
For example, the following command installs mmcv built for PyTorch 1.10.x and CUDA 11.3.
```shell
pip install "mmcv-full>=2.0rc0" -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
pip install "mmcv>=2.0.0rc1" -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
```
### Install on CPU-only platforms
@ -172,7 +171,7 @@ commands.
```shell
!pip3 install openmim
!mim install mmengine "mmcv-full>=2.0rc0"
!mim install mmengine "mmcv>=2.0.0rc1"
```
**Step 2.** Install MMClassification from the source.
@ -215,6 +214,6 @@ docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmclassification/data mmc
## Troubleshooting
If you have some issues during the installation, please first view the [FAQ](faq.md) page.
If you have any issues during the installation, please first check the [FAQ](./notes/faq.md) page.
You may [open an issue](https://github.com/open-mmlab/mmclassification/issues/new/choose)
on GitHub if no solution is found.


@ -10,7 +10,7 @@ according to the [install tutorial](./get_started.md). Or install the below pack
1. [MMEngine](https://github.com/open-mmlab/mmengine): MMEngine is the core of the OpenMMLab 2.0 architecture,
and we have split many components unrelated to computer vision from MMCV into MMEngine.
2. [MMCV](https://github.com/open-mmlab/mmcv): The computer vision package of OpenMMLab. This is not a new
dependency, but you need to upgrade it to above `2.0.0rc0` version.
dependency, but you need to upgrade it to version `2.0.0rc1` or above (a quick version check is sketched after this list).
3. [rich](https://github.com/Textualize/rich): A terminal formatting package, and we use it to beautify some
outputs in the terminal.
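To confirm these packages are in place, a quick check of the installed versions could look like the sketch below; only the standard `__version__` attributes are assumed:

```python
# Sketch: check that the upgraded dependencies are importable and recent enough.
import mmcv
import mmengine

print('mmcv:', mmcv.__version__)          # expected to be >= 2.0.0rc1
print('mmengine:', mmengine.__version__)
```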
@ -29,9 +29,9 @@ No changes in `model.backbone`, `model.neck` and `model.head` fields.
Changes in **`model.train_cfg`**:
- `BatchMixup` is renamed to [`Mixup`](Mixup).
- `BatchCutMix` is renamed to [`CutMix`](CutMix).
- `BatchResizeMix` is renamed to [`ResizeMix`](ResizeMix).
- `BatchMixup` is renamed to [`Mixup`](mmcls.models.utils.batch_augments.Mixup).
- `BatchCutMix` is renamed to [`CutMix`](mmcls.models.utils.batch_augments.CutMix).
- `BatchResizeMix` is renamed to [`ResizeMix`](mmcls.models.utils.batch_augments.ResizeMix).
- The `prob` argument is removed from all augments settings, and you can use the `probs` field in `train_cfg` to
specify the probability of each augment, as in the sketch below. If there is no `probs` field, one augment is chosen randomly with equal probability.
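Putting these changes together, a new-style `train_cfg` might look like the following sketch; the augment types come from the renames above, while the `alpha` values are illustrative:

```python
# Sketch of the new-style batch augments configuration in `model.train_cfg`.
model = dict(
    # backbone / neck / head are unchanged and omitted here
    train_cfg=dict(
        augments=[
            dict(type='Mixup', alpha=0.8),
            dict(type='CutMix', alpha=1.0),
        ],
        # probs=[0.3, 0.7],  # optional; if omitted, one augment is chosen with equal probability
    ),
)
```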
@ -121,14 +121,14 @@ test_dataloader = val_dataloader
Changes in **`pipeline`**:
- The original formatting transforms **`ToTensor`**、**`ImageToTensor`**、**`Collect`** are combined as [`PackClsInputs`](PackClsInputs).
- The original formatting transforms **`ToTensor`**、**`ImageToTensor`**、**`Collect`** are combined as [`PackClsInputs`](mmcls.datasets.transforms.PackClsInputs).
- We don't recommend doing **`Normalize`** in the dataset pipeline. Please remove it from pipelines and set it in the `data_preprocessor` field instead.
- The argument `flip_prob` in [**`RandomFlip`**](RandomFlip) is renamed to `flip`.
- The argument `size` in [**`RandomCrop`**](RandomCrop) is renamed to `crop_size`.
- The argument `size` in [**`RandomResizedCrop`**](RandomResizedCrop) is renamed to `scale`.
- The argument `size` in [**`Resize`**](Resize) is renamed to `scale`. And `Resize` won't support size like `(256, -1)`, please use [`ResizeEdge`](ResizeEdge) to replace it.
- The argument `policies` in [**`AutoAugment`**](AutoAugment) and [**`RandAugment`**](RandAugment) supports using string to specify preset policies. `AutoAugment` supports "imagenet" and `RandAugment` supports "timm_increasing".
- **`RandomResizedCrop`** and **`CenterCrop`** won't supports `efficientnet_style`, and please use [`EfficientnNetRandomCrop`](EfficientnNetRandomCrop) and [`EfficientNetCenterCrop`](EfficientNetCenterCrop) to replace them.
- The argument `flip_prob` in [**`RandomFlip`**](mmcv.transforms.RandomFlip) is renamed to `prob`.
- The argument `size` in [**`RandomCrop`**](mmcls.datasets.transforms.RandomCrop) is renamed to `crop_size`.
- The argument `size` in [**`RandomResizedCrop`**](mmcls.datasets.transforms.RandomResizedCrop) is renamed to `scale`.
- The argument `size` in [**`Resize`**](mmcv.transforms.Resize) is renamed to `scale`. And `Resize` no longer supports a size like `(256, -1)`; please use [`ResizeEdge`](mmcls.datasets.transforms.ResizeEdge) to replace it.
- The argument `policies` in [**`AutoAugment`**](mmcls.datasets.transforms.AutoAugment) and [**`RandAugment`**](mmcls.datasets.transforms.RandAugment) supports using string to specify preset policies. `AutoAugment` supports "imagenet" and `RandAugment` supports "timm_increasing".
- **`RandomResizedCrop`** and **`CenterCrop`** no longer support `efficientnet_style`; please use [`EfficientNetRandomCrop`](mmcls.datasets.transforms.EfficientNetRandomCrop) and [`EfficientNetCenterCrop`](mmcls.datasets.transforms.EfficientNetCenterCrop) to replace them (a pipeline sketch follows this list).
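Reflecting the renames above, a minimal 1.x-style training pipeline might look like this sketch; the crop scale and flip probability are illustrative values:

```python
# Sketch of a 1.x-style training pipeline.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=224),                  # `size` was renamed to `scale`
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),  # `flip_prob` was renamed to `prob`
    dict(type='PackClsInputs'),  # replaces ToTensor / ImageToTensor / Collect
]
```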
```{note}
We move some work of data transforms to the data preprocessor, like normalization, see [the documentation](mmcls.models.utils.data_preprocessor) for
@ -521,15 +521,15 @@ the combination of parameter schedulers, see [the tutorial](./advanced_guides/sc
The documentation can be found [here](mmcls.datasets).
| Dataset class | Changes |
| :-----------------------------------------------: | :----------------------------------------------------------------------------- |
| [`CustomDataset`](CustomDataset) | Add `data_root` argument as the common prefix of `data_prefix` and `ann_file`. |
| [`ImageNet`](ImageNet) | Same as `CustomDataset`. |
| [`ImageNet21k`](ImageNet21k) | Same as `CustomDataset`. |
| [`CIFAR10`](CIFAR10) & [`CIFAR100`](CIFAR100) | The `test_mode` argument is a required argument now. |
| [`MNIST`](MNIST) & [`FashionMNIST`](FashionMNIST) | The `test_mode` argument is a required argument now. |
| [`VOC`](VOC) | Requires `data_root`, `image_set_path` and `test_mode` now. |
| [`CUB`](CUB) | Requires `data_root` and `test_mode` now. |
| Dataset class | Changes |
| :-----------------------------------------------------------------------------: | :----------------------------------------------------------------------------- |
| [`CustomDataset`](mmcls.datasets.CustomDataset) | Add `data_root` argument as the common prefix of `data_prefix` and `ann_file`. |
| [`ImageNet`](mmcls.datasets.ImageNet) | Same as `CustomDataset`. |
| [`ImageNet21k`](mmcls.datasets.ImageNet21k) | Same as `CustomDataset`. |
| [`CIFAR10`](mmcls.datasets.CIFAR10) & [`CIFAR100`](mmcls.datasets.CIFAR100) | The `test_mode` argument is a required argument now. |
| [`MNIST`](mmcls.datasets.MNIST) & [`FashionMNIST`](mmcls.datasets.FashionMNIST) | The `test_mode` argument is a required argument now. |
| [`VOC`](mmcls.datasets.VOC) | Requires `data_root`, `image_set_path` and `test_mode` now. |
| [`CUB`](mmcls.datasets.CUB) | Requires `data_root` and `test_mode` now. |
The `mmcls.datasets.pipelines` is renamed to `mmcls.datasets.transforms`.
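For example, a dataset config using the new `data_root` argument might look like this sketch; the paths are placeholders and the pipeline is shortened for brevity:

```python
# Sketch of a 1.x-style dataset config with `data_root` as the common prefix.
train_dataloader = dict(
    batch_size=32,
    dataset=dict(
        type='CustomDataset',
        data_root='data/my_dataset',   # placeholder root directory
        ann_file='meta/train.txt',     # resolved relative to data_root
        data_prefix='train',           # resolved relative to data_root
        pipeline=[dict(type='LoadImageFromFile'), dict(type='PackClsInputs')],
    ),
)
```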
@ -538,9 +538,9 @@ The `mmcls.datasets.pipelines` is renamed to `mmcls.datasets.transforms`.
| `LoadImageFromFile` | Removed, use [`mmcv.transforms.LoadImageFromFile`](mmcv.transforms.LoadImageFromFile). |
| `RandomFlip` | Removed, use [`mmcv.transforms.RandomFlip`](mmcv.transforms.RandomFlip). The argument `flip_prob` is renamed to `prob`. |
| `RandomCrop` | The argument `size` is renamed to `crop_size`. |
| `RandomResizedCrop` | The argument `size` is renamed to `scale`. The argument `scale` is renamed to `crop_ratio_range`. Won't support `efficientnet_style`, use [`EfficientNetRandomCrop`](EfficientNetRandomCrop). |
| `CenterCrop` | Removed, use [`mmcv.transforms.CenterCrop`](mmcv.transforms.CenterCrop). Won't support `efficientnet_style`, use [`EfficientNetCenterCrop`](EfficientNetCenterCrop). |
| `Resize` | Removed, use [`mmcv.transforms.Resize`](mmcv.transforms.Resize). The argument `size` is renamed to `scale`. Won't support size like `(256, -1)`, use [`ResizeEdge`](ResizeEdge). |
| `RandomResizedCrop` | The argument `size` is renamed to `scale`. The argument `scale` is renamed to `crop_ratio_range`. Won't support `efficientnet_style`, use [`EfficientNetRandomCrop`](mmcls.datasets.transforms.EfficientNetRandomCrop). |
| `CenterCrop` | Removed, use [`mmcv.transforms.CenterCrop`](mmcv.transforms.CenterCrop). Won't support `efficientnet_style`, use [`EfficientNetCenterCrop`](mmcls.datasets.transforms.EfficientNetCenterCrop). |
| `Resize` | Removed, use [`mmcv.transforms.Resize`](mmcv.transforms.Resize). The argument `size` is renamed to `scale`. Won't support size like `(256, -1)`, use [`ResizeEdge`](mmcls.datasets.transforms.ResizeEdge). |
| `AutoAugment` & `RandAugment` | The argument `policies` supports using a string to specify preset policies. |
| `Compose` | Removed, use [`mmcv.transforms.Compose`](mmcv.transforms.Compose). |
@ -548,27 +548,27 @@ The `mmcls.datasets.pipelines` is renamed to `mmcls.datasets.transforms`.
The documentation can be found [here](mmcls.models). The interface of all **backbones**, **necks** and **losses** didn't change.
Changes in [`ImageClassifier`](ImageClassifier):
Changes in [`ImageClassifier`](mmcls.models.classifiers.ImageClassifier):
| Method of classifiers | Changes |
| :-------------------: | :---------------------------------------------------------------------------------------------------------------------------------------- |
| `extract_feat` | No changes |
| `forward` | Now only accepts three arguments: `inputs`, `data_samples` and `mode`. See [the documentation](ImageClassifier.forward) for more details. |
| `forward_train` | Replaced by `loss`. |
| `simple_test` | Replaced by `predict`. |
| `train_step` | The `optimizer` argument is replaced by `optim_wrapper` and it accepts [`OptimWrapper`](mmengine.optim.OptimWrapper). |
| `val_step` | The original `val_step` is the same as `train_step`, now it calls `predict`. |
| `test_step` | New method, and it's the same as `val_step`. |
| Method of classifiers | Changes |
| :-------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `extract_feat` | No changes |
| `forward` | Now only accepts three arguments: `inputs`, `data_samples` and `mode`. See [the documentation](mmcls.models.classifiers.ImageClassifier.forward) for more details. |
| `forward_train` | Replaced by `loss`. |
| `simple_test` | Replaced by `predict`. |
| `train_step` | The `optimizer` argument is replaced by `optim_wrapper` and it accepts [`OptimWrapper`](mmengine.optim.OptimWrapper). |
| `val_step` | The original `val_step` is the same as `train_step`, now it calls `predict`. |
| `test_step` | New method, and it's the same as `val_step`. |
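The new mode-based interface can be exercised as in the sketch below; the classifier, inputs and data samples are assumed to be built elsewhere (for example by the runner), and only the documented call signature plus the standard MMEngine `'tensor'` mode are used:

```python
# Sketch of the mode-based forward interface of ImageClassifier in 1.x.
import torch


def run_classifier(classifier, inputs: torch.Tensor, data_samples: list):
    losses = classifier(inputs, data_samples, mode='loss')          # replaces forward_train
    predictions = classifier(inputs, data_samples, mode='predict')  # replaces simple_test
    raw_outputs = classifier(inputs, mode='tensor')                 # raw outputs (MMEngine convention)
    return losses, predictions, raw_outputs
```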
Changes in [heads](mmcls.models.heads):
| Method of heads | Changes |
| :-------------: | :---------------------------------------------------------------------------------------------------------------------------------- |
| `pre_logits` | No changes |
| `forward_train` | Replaced by `loss`. |
| `simple_test` | Replaced by `predict`. |
| `loss` | It accepts `data_samples` instead of `gt_labels` to calculate loss. The `data_samples` is a list of [ClsDataSample](ClsDataSample). |
| `forward` | New method, and it returns the output of the classification head without any post-processs like softmax or sigmoid. |
| Method of heads | Changes |
| :-------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------- |
| `pre_logits` | No changes |
| `forward_train` | Replaced by `loss`. |
| `simple_test` | Replaced by `predict`. |
| `loss` | It accepts `data_samples` instead of `gt_labels` to calculate loss. The `data_samples` is a list of [ClsDataSample](mmcls.structures.ClsDataSample). |
| `forward` | New method, and it returns the output of the classification head without any post-processing like softmax or sigmoid. |
### `mmcls.utils`


@ -1,5 +1,13 @@
# Changelog
## v1.0.0rc0 (31/8/2022)
MMClassification 1.0.0rc0 is the first version of MMClassification 1.x, a part of the OpenMMLab 2.0 projects.
Built upon the new [training engine](https://github.com/open-mmlab/mmengine), MMClassification 1.x unifies the interfaces of datasets, models, evaluation, and visualization.
There are also some BC-breaking changes. Please check [the migration tutorial](https://mmclassification.readthedocs.io/en/1.x/migration.html) for more details.
## v0.23.1 (2/6/2022)
### New Features


@ -17,7 +17,7 @@ and make sure you fill in all required information in the template.
| MMClassification version | MMCV version |
| :----------------------: | :--------------------: |
| dev | mmcv>=1.5.0, \<1.6.0 |
| 1.0.0rc0 (1.x) | mmcv>=2.0.0rc1 |
| 0.23.1 (master) | mmcv>=1.4.2, \<1.6.0 |
| 0.22.1 | mmcv>=1.4.2, \<1.6.0 |
| 0.21.0 | mmcv>=1.4.2, \<=1.5.0 |
@ -59,7 +59,7 @@ and make sure you fill in all required information in the template.
- Do I need to reinstall mmcls after some code modifications?
If you follow [the best practice](install.md) and install mmcls from source,
If you follow [the best practice](../get_started.md#best-practices) and install mmcls from source,
any local modifications made to the code will take effect without
reinstallation.


@ -1,4 +1,4 @@
# Analysis Tools
# Analysis Tools (TODO)
<!-- TOC -->
@ -127,7 +127,7 @@ Description of all arguments:
- `config` : The path of the model config file.
- `result`: The output result file in json/pickle format from `tools/test.py`.
- `--metrics` : Evaluation metrics, the acceptable values depend on the dataset.
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Tutorial 1: Learn about Configs](../tutorials/config.md)
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Learn about Configs](./config.md)
- `--metric-options`: If specified, the key-value pair arguments will be passed to the `metric_options` argument of dataset's `evaluate` function.
```{note}
@ -159,7 +159,7 @@ python tools/analysis_tools/analyze_results.py \
- `result`: Output result file in json/pickle format from `tools/test.py`.
- `--out_dir`: Directory to store output files.
- `--topk`: The number of images in successful or failed prediction with the highest `topk` scores to save. If not specified, it will be set to 20.
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Tutorial 1: Learn about Configs](../tutorials/config.md)
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Learn about Configs](./config.md)
```{note}
In `tools/test.py`, we support using `--out-items` option to select which kind of results will be saved. Please ensure the result file includes "pred_score", "pred_label" and "pred_class" to use this tool.


@ -6,7 +6,7 @@ MMClassification supports following datasets:
- [ImageNet](#imagenet)
- [CIFAR](#cifar)
- [MNIST](#mnist)
- [OpenMMLab 2.0 Standard Dataset](#openmmlab-2-0-standard-dataset)
- [OpenMMLab 2.0 Standard Dataset](#openmmlab-20-standard-dataset)
- [Other Datasets](#other-datasets)
- [Dataset Wrappers](#dataset-wrappers)
@ -99,7 +99,7 @@ train_dataloader = dict(
data_root='path/to/data_root',
ann_file='meta/train_annfile.txt',
data_prefix='train',
classes=['A', 'B', 'C', 'D', ....],
classes=['A', 'B', 'C', 'D', ...],
pipeline=...,
)
)


@ -3,7 +3,7 @@
MMClassification provides pre-trained models for classification in [Model Zoo](../model_zoo.md).
This note will show **how to use existing models to perform inference on given images**.
As for how to test existing models on standard datasets, please see this [guide](./train_test.md#Test)
As for how to test existing models on standard datasets, please see this [guide](./train_test.md#test)
## Inference on a given image


@ -39,11 +39,11 @@ We provide a shell script to start a multi-GPUs task with `torch.distributed.lau
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [PY_ARGS]
```
| ARGS | Description |
| ------------- | ---------------------------------------------------------------------------------- |
| `CONFIG_FILE` | The path to the config file. |
| `GPU_NUM` | The number of GPUs to be used. |
| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#train-with-your-pc). |
| ARGS | Description |
| ------------- | ------------------------------------------------------------------------------------- |
| `CONFIG_FILE` | The path to the config file. |
| `GPU_NUM` | The number of GPUs to be used. |
| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#training-with-your-pc). |
You can also specify extra arguments of the launcher by environment variables. For example, change the
communication port of the launcher to 29666 by the below command:
@ -99,13 +99,13 @@ If you run MMClassification on a cluster managed with [slurm](https://slurm.sche
Here is a description of the script's arguments.
| ARGS | Description |
| ------------- | ---------------------------------------------------------------------------------- |
| `PARTITION` | The partition to use in your cluster. |
| `JOB_NAME` | The name of your job, you can name it as you like. |
| `CONFIG_FILE` | The path to the config file. |
| `WORK_DIR` | The target folder to save logs and checkpoints. |
| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#train-with-your-pc). |
| ARGS | Description |
| ------------- | ------------------------------------------------------------------------------------- |
| `PARTITION` | The partition to use in your cluster. |
| `JOB_NAME` | The name of your job, you can name it as you like. |
| `CONFIG_FILE` | The path to the config file. |
| `WORK_DIR` | The target folder to save logs and checkpoints. |
| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#training-with-your-pc). |
Here are the environment variables that can be used to configure the slurm job.


@ -1,4 +1,4 @@
# Miscellaneous
# Useful Tools (TODO)
<!-- TOC -->
@ -19,7 +19,7 @@ python tools/misc/print_config.py ${CONFIG} [--cfg-options ${CFG_OPTIONS}]
Description of all arguments:
- `config` : The path of the model config file.
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Tutorial 1: Learn about Configs](../tutorials/config.md)
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Learn about Configs](./config.md)
**Examples**:
@ -46,7 +46,7 @@ python tools/print_config.py \
- `--out-path` : The path to save the verification result; if not set, defaults to 'brokenfiles.log'.
- `--phase` : The phase of the dataset to verify; accepts "train", "test" and "val", and defaults to "train" if not set.
- `--num-process` : The number of processes to use; if not set, defaults to 1.
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Tutorial 1: Learn about Configs](../tutorials/config.md)
- `--cfg-options`: If specified, the key-value pair config will be merged into the config file, for more details please refer to [Learn about Configs](./config.md)
**Examples**:


@ -1,4 +1,4 @@
# Visualization
# Visualization Tools (TODO)
<!-- TOC -->


@ -0,0 +1 @@
# 数据流(待更新)


@ -1,4 +1,4 @@
# 如何添加新数据集
# 添加新数据集
用户可以编写一个继承自 [`BaseDataset`](https://mmclassification.readthedocs.io/zh_CN/latest/_modules/mmcls/datasets/base_dataset.html#BaseDataset) 的新数据集类,并重载 `load_data_list(self)` 方法,类似 [CIFAR10](https://github.com/open-mmlab/mmclassification/blob/master/mmcls/datasets/cifar.py) 和 [ImageNet](https://github.com/open-mmlab/mmclassification/blob/master/mmcls/datasets/imagenet.py)。


@ -0,0 +1 @@
# 自定义评估指标(待更新)


@ -1,4 +1,4 @@
# 教程 7:如何自定义模型运行参数
# 自定义模型运行参数(待更新)
在本教程中,我们将介绍如何在运行自定义模型时,进行自定义工作流和钩子的方法。
@ -258,4 +258,4 @@ custom_hooks = [
- `resume_from` :不仅导入模型权重,还会导入优化器信息、当前轮次(epoch)信息,主要用于从断点继续训练。
- `init_cfg.Pretrained` :在权重初始化期间加载权重,您可以指定要加载的模块。 这通常在微调模型时使用,请参阅[教程 2:如何微调模型](./finetune.md)
- `init_cfg.Pretrained` :在权重初始化期间加载权重,您可以指定要加载的模块。 这通常在微调模型时使用,请参阅[如何微调模型](../user_guides/finetune.md)

View File

@ -1,4 +1,4 @@
# How to Add Custom Models
# Customize Models
In our design, a complete model is defined as an `ImageClassifier`. Depending on their functions, an `ImageClassifier` basically consists of the following 4 types of model components.
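For orientation, a hedged sketch of how these components are usually assembled in a config is shown below; the specific component types and arguments are assumptions based on common MMClassification configs, not a quotation from this page.

```python
# Illustrative only: an ImageClassifier is assembled from a backbone, an
# optional neck, a head and a data preprocessor.
model = dict(
    type='ImageClassifier',
    backbone=dict(type='ResNet', depth=50),   # feature extraction
    neck=dict(type='GlobalAveragePooling'),   # feature aggregation
    head=dict(type='LinearClsHead', num_classes=1000, in_channels=2048),
    data_preprocessor=dict(
        mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375]),
)
```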

View File

@ -1,4 +1,4 @@
# Tutorial 4: How to Design Data Pipelines
# Customize Data Pipelines (To be updated)
## Design Data Pipelines
@ -145,4 +145,4 @@ train_pipeline = [
## Pipeline Visualization
After designing the data pipeline, you can use the [visualization tools](../tools/visualization.md) to check the actual effect.
After designing the data pipeline, you can use the [visualization tools](../user_guides/visualization.md) to check the actual effect.

View File

@ -1,4 +1,4 @@
# Tutorial 6: How to Customize Optimization Schedules
# Customize Optimization Schedules (To be updated)
In this tutorial, we will introduce how to construct optimizers, customize the learning rate and momentum schedules, clip gradients, accumulate gradients, and use user-defined optimization methods when running your own model.
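As a rough example of the config entries this tutorial discusses, a sketch is given below; the `optim_wrapper`/`param_scheduler` keys follow the MMEngine-style convention and the values are placeholders, not the exact snippet from the tutorial.

```python
# Illustrative only: an MMEngine-style optimizer wrapper with gradient
# clipping and gradient accumulation, plus a step learning rate schedule.
optim_wrapper = dict(
    optimizer=dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=1e-4),
    clip_grad=dict(max_norm=35, norm_type=2),  # gradient clipping
    accumulative_counts=4,                     # gradient accumulation
)
param_scheduler = dict(type='MultiStepLR', by_epoch=True, milestones=[30, 60, 90])
```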

View File

@ -52,8 +52,6 @@ extensions = [
'sphinx_copybutton',
]
autodoc_mock_imports = ['mmcv._ext', 'matplotlib']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@ -206,22 +204,33 @@ StandaloneHTMLBuilder.supported_image_types = [
# Ignore >>> when copying code
copybutton_prompt_text = r'>>> |\.\.\. '
copybutton_prompt_is_regexp = True
# Auto-generated header anchors
myst_heading_anchors = 3
# Enable "colon_fence" extension of myst.
myst_enable_extensions = ['colon_fence']
# Configuration for intersphinx
intersphinx_mapping = {
'python': ('https://docs.python.org/3', None),
'numpy': ('https://numpy.org/doc/stable', None),
'torch': ('https://pytorch.org/docs/stable/', None),
'mmcv': ('https://mmcv.readthedocs.io/en/master/', None),
'mmcv': ('https://mmcv.readthedocs.io/zh_CN/2.x/', None),
'mmengine': ('https://mmengine.readthedocs.io/zh_CN/latest/', None),
}
napoleon_custom_sections = [
# Custom sections for data elements.
('Meta fields', 'params_style'),
('Data fields', 'params_style'),
]
# Disable docstring inheritance
autodoc_inherit_docstrings = False
# Mock some imports during generate API docs.
autodoc_mock_imports = ['mmcv._ext', 'matplotlib']
# Disable displaying type annotations, these can be very verbose
autodoc_typehints = 'none'
def builder_inited_handler(app):
subprocess.run(['./stat.py'])

View File

@ -209,5 +209,5 @@ docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmclassification/data mmc
## Troubleshooting
If you encounter any problems during installation, please first consult the [FAQ](faq.md). If no solution is found, you can
If you encounter any problems during installation, please first consult the [FAQ](./notes/faq.md). If no solution is found, you can
[open an issue](https://github.com/open-mmlab/mmclassification/issues/new/choose) on GitHub.

View File

@ -0,0 +1 @@
# Migrating from MMClassification 0.x (To be updated)

View File

@ -32,12 +32,12 @@
- [mdformat](https://github.com/executablebooks/mdformat): a tool to check markdown files
- [docformatter](https://github.com/myint/docformatter): a docstring formatting tool.
The style configurations of yapf and isort are in [setup.cfg](https://github.com/open-mmlab/mmclassification/blob/master/setup.cfg).
The style configurations of yapf and isort are in [setup.cfg](https://github.com/open-mmlab/mmclassification/blob/1.x/setup.cfg).
We use [pre-commit hook](https://pre-commit.com/) to automatically check and format the code on every commit.
The enabled features include `flake8`, `yapf`, `isort`, `trailing whitespaces`, `markdown files`, fixing `end-of-files`, `double-quoted-strings`,
`python-encoding-pragma`, `mixed-line-ending`, sorting `requirements.txt`, and so on.
The config for the pre-commit hook is stored in [.pre-commit-config](https://github.com/open-mmlab/mmclassification/blob/master/.pre-commit-config.yaml).
The config for the pre-commit hook is stored in [.pre-commit-config](https://github.com/open-mmlab/mmclassification/blob/1.x/.pre-commit-config.yaml).
After you clone the repository, you need to install and initialize the pre-commit hook as follows.

View File

@ -15,7 +15,7 @@
| MMClassification version | MMCV version |
| :----------------------: | :--------------------: |
| dev | mmcv>=1.5.0, \<1.6.0 |
| 1.0.0rc0 (1.x) | mmcv>=2.0.0rc1 |
| 0.23.1 (master) | mmcv>=1.4.2, \<1.6.0 |
| 0.22.1 | mmcv>=1.4.2, \<1.6.0 |
| 0.21.0 | mmcv>=1.4.2, \<=1.5.0 |
@ -55,7 +55,7 @@
- Do I need to reinstall mmcls after making some changes to the source code?
If you follow the [best practice](install.md) and install mmcls from source, any local modifications will take effect without reinstalling.
If you follow the [best practice](../get_started.md#最佳实践) and install mmcls from source, any local modifications will take effect without reinstalling.
- How to develop with multiple MMClassification versions?

View File

@ -0,0 +1 @@
# Projects Based on MMClassification (To be updated)

View File

@ -1,4 +1,4 @@
# Analysis
# Analysis Tools (To be updated)
<!-- TOC -->
@ -127,7 +127,7 @@ python tools/analysis_tools/eval_metric.py \
- `config`: the path of the config file.
- `result`: the output result file of `tools/test.py`.
- `metrics`: the evaluation metrics; the acceptable values depend on the dataset class.
- `--cfg-options`: extra config options that will be merged into the config file, see [Tutorial 1: Learn about Configs](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/config.html).
- `--cfg-options`: extra config options that will be merged into the config file, see [Learn about Configs](./config.md).
- `--metric-options`: if specified, these options will be passed to the `metric_options` argument of the dataset's `evaluate` function.
```{note}
@ -159,7 +159,7 @@ python tools/analysis_tools/analyze_results.py \
- `result`: the output result file of `tools/test.py`.
- `--out_dir`: the directory to save the result analysis.
- `--topk`: the number of images with successful/failed predictions to save respectively. Defaults to `20` if not specified.
- `--cfg-options`: extra config options that will be merged into the config file, see [Tutorial 1: Learn about Configs](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/config.html).
- `--cfg-options`: extra config options that will be merged into the config file, see [Learn about Configs](./config.md).
```{note}
In `tools/test.py`, we support using the `--out-items` option to select which results to save. To use this tool, please make sure the result file contains "pred_score", "pred_label" and "pred_class".

View File

@ -163,7 +163,7 @@ test_dataloader = val_dataloader # test dataloader config, directly the same as val_
test_evaluator = val_evaluator # evaluation config of the test set, directly the same as val_evaluator
```
```note
```{note}
'model.data_preprocessor' can be defined either in `model=dict(data_preprocessor=dict())` or with the `data_preprocessor` here; if both are configured, the `model.data_preprocessor` configuration takes precedence.
```
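To make the precedence rule above concrete, here is a hedged sketch (the normalization values are placeholders) showing the two places the preprocessor can be configured:

```python
# Illustrative only: if both entries below exist, the one inside `model`
# overrides the top-level `data_preprocessor`.
data_preprocessor = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375])

model = dict(
    type='ImageClassifier',  # other model fields omitted for brevity
    data_preprocessor=dict(
        mean=[127.5, 127.5, 127.5], std=[127.5, 127.5, 127.5]),
)
```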

View File

@ -6,17 +6,17 @@
- [ImageNet](#imagenet)
- [CIFAR](#cifar)
- [MNIST](#mnist)
- [OpenMMLab 2.0 Standard Dataset](#openmmlab-2-0-standard-dataset)
- [Other Datasets](#other-datasets)
- [Dataset Wrappers](#dataset-wrappers)
- [OpenMMLab 2.0 Standard Dataset](#openmmlab-20-标准数据集)
- [Other Datasets](#其他数据集)
- [Dataset Wrappers](#数据集包装)
If the dataset you use is not among the public datasets listed above, you need to convert the dataset format to fit **`CustomDataset`**.
## Adapting to CustomDataset
## CustomDataset
[`CustomDataset`](mmcls.datasets.CustomDataset) is a general dataset class for you to use your own datasets. Currently, `CustomDataset` supports the following two data formats:
[`CustomDataset`](mmcls.datasets.CustomDataset) is a general dataset class for you to use your own datasets. Currently, `CustomDataset` supports the following two ways to organize your dataset files:
### Subfolders
### Subfolders
The subfolder format distinguishes the categories of images by folders; as shown below, class_1 and class_2 represent different categories.
@ -40,13 +40,13 @@ train_dataloader = dict(
# training dataset settings
dataset=dict(
type='CustomDataset',
data_prefix='path/to/data_prefix,
data_prefix='path/to/data_prefix',
pipeline=...
)
)
```
### Annotation files
### Annotation files
The annotation file format mainly uses text files to store category information; `data_prefix` stores the images and `ann_file` stores the annotation category information.
@ -59,8 +59,7 @@ data_root/
│ └── ...
├── data_prefix/
│ ├── folder_1
│ │ ├── xxx.png
│ │ ├── xxy.png
│   │   ├── xxx.png
│   │   ├── xxy.png
│ │ └── ...
│ ├── 123.png
│ ├── nsdf3.png
@ -91,10 +90,10 @@ train_dataloader = dict(
type='CustomDataset',
ann_file='path/to/ann_file_path',
data_prefix='path/to/images',
classes=['A', 'B', 'C', 'D'....]
pipeline=transfrom_list
classes=['A', 'B', 'C', 'D', ...]
pipeline=...,
)
)
```
```{note}
@ -221,7 +220,7 @@ val_dataloader = dict(
test_dataloader = val_dataloader
```
### OpenMMLab 2.0 standard datasets
## OpenMMLab 2.0 Standard Dataset
To unify the dataset interfaces across different tasks and facilitate multi-task model training, OpenMMLab has formulated the **OpenMMLab 2.0 dataset format specification**. Dataset annotation files must conform to this specification, and the dataset base class reads and parses annotation files based on it. If the annotation files provided by the user do not conform to the specified format, the user can choose to convert them into the specified format and use OpenMMLab's algorithm libraries to train and test algorithms based on the converted annotation files.
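For reference, here is a hedged sketch of the structure an annotation file following this specification commonly takes; the `metainfo`/`data_list` layout and field names are assumptions drawn from typical OpenMMLab 2.0 datasets, not a quotation from this page.

```python
# Illustrative only: the typical layout of an OpenMMLab 2.0 style annotation
# (usually stored as a JSON file), written here as a Python dict.
annotation = {
    'metainfo': {
        'classes': ('cat', 'dog'),  # dataset-level meta information
    },
    'data_list': [
        {'img_path': 'a/1.png', 'gt_label': 0},  # one sample per entry
        {'img_path': 'b/2.png', 'gt_label': 1},
    ],
}
```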
@ -281,7 +280,7 @@ dataset_cfg=dict(
MMClassification also supports many other datasets; you can look up their configuration information in the [dataset documentation](mmcls.datasets).
## Dataset Wrappers
## Dataset Wrappers
MMEngine supports the following dataset wrappers; you can refer to the [MMEngine tutorial](TODO:) to learn how to use them.
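As a rough illustration (the wrapper name and its arguments follow MMEngine's common usage and are an assumption here), a wrapped dataset in a config could look like:

```python
# Illustrative only: repeat a dataset 3 times within one epoch by wrapping it
# with MMEngine's RepeatDataset.
dataset = dict(
    type='RepeatDataset',
    times=3,
    dataset=dict(
        type='CustomDataset',
        data_prefix='path/to/data_prefix',  # placeholder path
        pipeline=[],
    ),
)
```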

View File

@ -3,7 +3,7 @@
MMClassification provides pre-trained models for classification in the [Model Zoo](../model_zoo.md).
This note will show **how to use existing models to perform inference on given images**.
As for how to test existing models on standard datasets, please refer to this [guide](./train_test.md#推理).
As for how to test existing models on standard datasets, please refer to this [guide](./train_test.md#测试).
## Inference on a single image
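Below is a hedged sketch of what single-image inference usually looks like with the Python API; the `init_model`/`inference_model` helpers and the file paths are assumptions based on common MMClassification usage, not the exact snippet from this page.

```python
# Illustrative only: build a model from a config and a checkpoint, then run
# inference on a single image. The paths below are placeholders.
from mmcls.apis import inference_model, init_model

config_file = 'configs/resnet/resnet50_8xb32_in1k.py'
checkpoint_file = 'checkpoints/resnet50.pth'  # hypothetical local checkpoint

model = init_model(config_file, checkpoint_file, device='cpu')
result = inference_model(model, 'demo/demo.JPEG')
print(result)
```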

View File

@ -1,4 +1,4 @@
# Miscellaneous Tools
# Miscellaneous Tools (To be updated)
<!-- TOC -->
@ -19,7 +19,7 @@ python tools/misc/print_config.py ${CONFIG} [--cfg-options ${CFG_OPTIONS}]
**Description of all arguments**:
- `config`: the path of the config file.
- `--cfg-options`: extra config options that will be merged into the config file, see [Tutorial 1: Learn about Configs](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/config.html).
- `--cfg-options`: extra config options that will be merged into the config file, see [Learn about Configs](./config.md).
**Examples**:
@ -46,7 +46,7 @@ python tools/print_config.py \
- `--out-path`: the output path of the result, defaults to 'brokenfiles.log'.
- `--phase`: which phase of the dataset to check; the available values are "train", "test" and "val", defaults to "train".
- `--num-process`: the number of processes to use, defaults to 1.
- `--cfg-options`: extra config options that will be merged into the config file, see [Tutorial 1: Learn about Configs](https://mmclassification.readthedocs.io/zh_CN/latest/tutorials/config.html).
- `--cfg-options`: extra config options that will be merged into the config file, see [Learn about Configs](./config.md).
**Examples**:

View File

@ -1,4 +1,4 @@
# Visualization
# Visualization Tools (To be updated)
<!-- TOC -->

View File

@ -5,7 +5,7 @@ from mmengine.utils import digit_version
from .version import __version__
mmcv_minimum_version = '2.0.0rc0'
mmcv_minimum_version = '2.0.0rc1'
mmcv_maximum_version = '2.0.0'
mmcv_version = digit_version(mmcv.__version__)

View File

@ -597,7 +597,7 @@ class RandomErasing(BaseTransform):
@TRANSFORMS.register_module()
class EfficientNetCenterCrop(BaseTransform):
"""EfficientNet style center crop.
r"""EfficientNet style center crop.
**Required Keys:**
@ -624,9 +624,12 @@ class EfficientNetCenterCrop(BaseTransform):
image.
- The pipeline will first perform the center crop with the ``crop_size_`` as:
.. math::
\text{crop_size_} = \frac{\text{crop_size}}{\text{crop_size} +
\text{crop_padding}} \times \text{short_edge}
And then the pipeline resizes the img to the input crop size.
"""

View File

@ -1 +1,2 @@
mmcv-full>=2.0.0rc0
mmcv>=2.0.0rc1
mmengine

View File

@ -1,3 +1,4 @@
mmcv>=1.4.2
mmcv>=2.0.0rc1
mmengine
torch
torchvision

View File

@ -2,7 +2,7 @@ codecov
flake8
interrogate
isort==4.3.21
mmdet
mmdet>=3.0.0rc0
pytest
sklearn
xdoctest >= 0.10.0