[Docs] Update migration.md (#1417)

* update migration

* refine table

* update zh_cn

* fix lint

* Polish the documentation by ChatGPT.

* Update sphinx version and fix some warning.

---------

Co-authored-by: mzr1996 <mzr1996@163.com>
Yixiao Fang 2023-03-17 10:30:09 +08:00 committed by GitHub
parent 76a1f3f735
commit 8875e9da92
9 changed files with 869 additions and 524 deletions


@@ -4,7 +4,7 @@ formats:
- epub
python:
version: 3.8
install:
- requirements: requirements/readthedocs.txt


@@ -53,6 +53,7 @@ extensions = [
'sphinx_copybutton',
'sphinx_tabs.tabs',
'notfound.extension',
'sphinxcontrib.jquery',
]
# Add any paths that contain templates here, relative to this directory.


@@ -1,24 +1,311 @@
# Migration
We introduce some modifications in MMPretrain 1.x, and some of them are BC-breaking. To migrate your projects from **MMClassification 0.x** or **MMSelfSup 0.x** smoothly, please read this tutorial.
- [Migration](#migration)
- [New dependencies](#new-dependencies)
- [General change of config](#general-change-of-config)
- [Schedule settings](#schedule-settings)
- [Runtime settings](#runtime-settings)
- [Other changes](#other-changes)
- [Migration from MMClassification 0.x](#migration-from-mmclassification-0x)
- [Config files](#config-files)
- [Model settings](#model-settings)
- [Data settings](#data-settings)
- [Packages](#packages)
- [`mmpretrain.apis`](#mmpretrainapis)
- [`mmpretrain.core`](#mmpretraincore)
- [`mmpretrain.datasets`](#mmpretraindatasets)
- [`mmpretrain.models`](#mmpretrainmodels)
- [`mmpretrain.utils`](#mmpretrainutils)
- [Migration from MMSelfSup 0.x](#migration-from-mmselfsup-0x)
- [Config](#config)
- [Dataset settings](#dataset-settings)
- [Model settings](#model-settings-1)
- [Package](#package)
## New dependencies
```{warning}
MMPretrain 1.x has new package dependencies, and a new environment should be created for MMPretrain 1.x even if you already have a working MMClassification 0.x or MMSelfSup 0.x environment. Please refer to the [installation tutorial](./get_started.md) for the required packages, or install them manually.
```
1. [MMEngine](https://github.com/open-mmlab/mmengine): MMEngine is the core of the OpenMMLab 2.0 architecture,
and we have split many components unrelated to computer vision from MMCV to MMEngine.
2. [MMCV](https://github.com/open-mmlab/mmcv): The computer vision package of OpenMMLab. This is not a new
dependency, but it should be upgraded to version `2.0.0rc1` or above.
3. [rich](https://github.com/Textualize/rich): A terminal formatting package, and we use it to enhance some
outputs in the terminal.
4. [einops](https://github.com/arogozhnikov/einops): Operators for Einstein notations.
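You can quickly check the installed versions in Python (a minimal sanity check; `__version__` is the standard version attribute of both packages):
```python
import mmcv
import mmengine

# MMCV must be 2.0.0rc1 or above for MMPretrain 1.x.
print('mmengine:', mmengine.__version__)
print('mmcv:', mmcv.__version__)
```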
# General change of config
In this section, we introduce the general differences between the old versions (**MMClassification 0.x** or **MMSelfSup 0.x**) and **MMPretrain 1.x**.
<!-- TODO: migration tool -->
## Schedule settings
| MMCls or MMSelfSup 0.x | MMPretrain 1.x | Remark |
| ---------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| optimizer_config | / | It has been **removed**. |
| / | optim_wrapper | The `optim_wrapper` provides a common interface for updating parameters. |
| lr_config | param_scheduler | The `param_scheduler` is a list to set learning rate or other parameters, which is more flexible. |
| runner                 | train_cfg       | The loop settings (`EpochBasedTrainLoop`, `IterBasedTrainLoop`) in `train_cfg` control the workflow of training.                 |
Changes in **`optimizer`** and **`optimizer_config`**:
- Now we use the `optim_wrapper` field to specify all configurations related to the optimization process. The
`optimizer` has become a subfield of `optim_wrapper`.
- The `paramwise_cfg` field is also a subfield of `optim_wrapper`, instead of `optimizer`.
- The `optimizer_config` field has been removed, and all of its configurations have been moved to `optim_wrapper`.
- The `grad_clip` field has been renamed to `clip_grad`.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
optimizer = dict(
    type='AdamW',
    lr=0.0015,
    weight_decay=0.3,
    paramwise_cfg = dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
    ))
optimizer_config = dict(grad_clip=dict(max_norm=1.0))
```
</td>
<tr>
<td>New</td>
<td>
```python
optim_wrapper = dict(
    optimizer=dict(type='AdamW', lr=0.0015, weight_decay=0.3),
    paramwise_cfg = dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
    ),
    clip_grad=dict(max_norm=1.0),
)
```
</td>
</tr>
</table>
Changes in **`lr_config`**:
- The `lr_config` field has been removed and replaced by the new `param_scheduler`.
- The `warmup` related arguments have also been removed since we use a combination of schedulers to implement this
functionality.
The new scheduler combination mechanism is highly flexible and enables the design of various learning rate/momentum curves.
For more details, see the {external+mmengine:doc}`parameter schedulers tutorial <tutorials/param_scheduler>`.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
lr_config = dict(
    policy='CosineAnnealing',
    min_lr=0,
    warmup='linear',
    warmup_iters=5,
    warmup_ratio=0.01,
    warmup_by_epoch=True)
```
</td>
<tr>
<td>New</td>
<td>
```python
param_scheduler = [
    # warmup
    dict(
        type='LinearLR',
        start_factor=0.01,
        by_epoch=True,
        end=5,
        # Update the learning rate after every iteration.
        convert_to_iter_based=True),
    # main learning rate scheduler
    dict(type='CosineAnnealingLR', by_epoch=True, begin=5),
]
```
</td>
</tr>
</table>
Changes in **`runner`**:
Most of the configurations that were originally in the `runner` field have been moved to `train_cfg`, `val_cfg`, and `test_cfg`.
These fields are used to configure the loop for training, validation, and testing.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
runner = dict(type='EpochBasedRunner', max_epochs=100)
```
</td>
<tr>
<td>New</td>
<td>
```python
# The `val_interval` is the original `evaluation.interval`.
train_cfg = dict(by_epoch=True, max_epochs=100, val_interval=1)
val_cfg = dict() # Use the default validation loop.
test_cfg = dict() # Use the default test loop.
```
</td>
</tr>
</table>
In OpenMMLab 2.0, we introduced `Loop` to control the behaviors in training, validation and testing. As a result, the functionalities of `Runner` have also been changed.
More details can be found in the {external+mmengine:doc}`MMEngine tutorials <design/runner>`.
## Runtime settings
Changes in **`checkpoint_config`** and **`log_config`**:
The `checkpoint_config` has been moved to `default_hooks.checkpoint`, and `log_config` has been moved to
`default_hooks.logger`. Additionally, many hook settings that were previously included in the script code have
been moved to the `default_hooks` field in the runtime configuration.
```python
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type='IterTimerHook'),
    # print log every 100 iterations.
    logger=dict(type='LoggerHook', interval=100),
    # enable the parameter scheduler.
    param_scheduler=dict(type='ParamSchedulerHook'),
    # save checkpoint per epoch, and automatically save the best checkpoint.
    checkpoint=dict(type='CheckpointHook', interval=1, save_best='auto'),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type='DistSamplerSeedHook'),
    # validation results visualization, set True to enable it.
    visualization=dict(type='VisualizationHook', enable=False),
)
```
In OpenMMLab 2.0, we have split the original logger into a logger and a visualizer. The logger is used to record
information, while the visualizer is used to display the logs in different backends such as the terminal,
TensorBoard, and Wandb.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
log_config = dict(
    interval=100,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook'),
    ])
```
</td>
<tr>
<td>New</td>
<td>
```python
default_hooks = dict(
    ...
    logger=dict(type='LoggerHook', interval=100),
)
visualizer = dict(
    type='UniversalVisualizer',
    vis_backends=[dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend')],
)
```
</td>
</tr>
</table>
Changes in **`load_from`** and **`resume_from`**:
- The `resume_from` field has been removed, and we use `resume` and `load_from` instead (see the sketch below).
  - If `resume=True` and `load_from` is not None, training is resumed from the checkpoint in `load_from`.
  - If `resume=True` and `load_from` is None, the latest checkpoint in the work directory is used for resuming.
  - If `resume=False` and `load_from` is not None, only the checkpoint is loaded without resuming training.
  - If `resume=False` and `load_from` is None, neither a checkpoint is loaded nor is training resumed.
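A minimal sketch of how the two fields combine (the checkpoint path is hypothetical):
```python
# Load the given checkpoint and resume training from it.
load_from = 'work_dirs/my_config/epoch_50.pth'  # hypothetical path
resume = True
```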
Changes in **`dist_params`**: The `dist_params` field has become a subfield of `env_cfg` now.
Additionally, some new configurations have been added to `env_cfg`.
```python
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)
```
Changes in **`workflow`**: `workflow` related functionalities are removed.
New field **`visualizer`**: The visualizer is a new design in OpenMMLab 2.0 architecture. The runner uses an
instance of the visualizer to handle result and log visualization, as well as to save to different backends.
For more information, please refer to the {external+mmengine:doc}`MMEngine tutorial <advanced_tutorials/visualization>`.
```python
visualizer = dict(
    type='UniversalVisualizer',
    vis_backends=[
        dict(type='LocalVisBackend'),
        # Uncomment the line below to save the log and visualization results to TensorBoard.
        # dict(type='TensorboardVisBackend')
    ]
)
```
New field **`default_scope`**: The default scope used to search modules for all registries. The `default_scope` in MMPretrain is `mmpretrain`. See {external+mmengine:doc}`the registry tutorial <advanced_tutorials/registry>` for more details.
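For example, the following top-level config line sets this scope to the value stated above:
```python
# All registries search for modules under the `mmpretrain` scope by default.
default_scope = 'mmpretrain'
```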
## Other changes
We moved the definition of all registries in different packages to the `mmpretrain.registry` package.
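As a sketch, the registries are now imported from this single package (`MODELS` and `DATASETS` are common OpenMMLab registry names):
```python
from mmpretrain.registry import DATASETS, MODELS

# Modules are built from config dicts through the registries, e.g.:
# model = MODELS.build(cfg.model)
```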
# Migration from MMClassification 0.x
## Config files
In MMPretrain 1.x, we refactored the structure of configuration files, and the original files are no longer usable.
In this section, we introduce all changes of the configuration files, assuming you are already familiar with the
[config files](./user_guides/config.md).
@@ -243,246 +530,6 @@ test_evaluator = val_evaluator
</tr>
</table>
## Packages
### `mmpretrain.apis`
@@ -509,7 +556,7 @@ The `mmpretrain.core` package is renamed to [`mmpretrain.engine`](mmpretrain.eng
| `evaluation`    | Removed, use the metrics in [`mmpretrain.evaluation`](mmpretrain.evaluation). |
| `hook`          | Moved to [`mmpretrain.engine.hooks`](mmpretrain.engine.hooks). |
| `optimizers`    | Moved to [`mmpretrain.engine.optimizers`](mmpretrain.engine.optimizers). |
| `utils`         | Removed, the distributed environment related functions can be found in the [`mmengine.dist`](api/dist) package. |
| `visualization` | Removed, the related functionalities are implemented in [`mmengine.visualization.Visualizer`](mmengine.visualization.Visualizer). |
The `MMClsWandbHook` in the `hooks` package has not been implemented yet.
@@ -521,15 +568,15 @@ the combination of parameter schedulers, see [the tutorial](./advanced_guides/sc
The documentation can be found [here](mmpretrain.datasets).
| Dataset class                                                                              | Changes                                                                                                                    |
| :----------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------- |
| [`CustomDataset`](mmpretrain.datasets.CustomDataset)                                       | Added the `data_root` argument as the common prefix of `data_prefix` and `ann_file`, and supports loading unlabeled data.    |
| [`ImageNet`](mmpretrain.datasets.ImageNet)                                                 | Same as `CustomDataset`.                                                                                                      |
| [`ImageNet21k`](mmpretrain.datasets.ImageNet21k)                                           | Same as `CustomDataset`.                                                                                                      |
| [`CIFAR10`](mmpretrain.datasets.CIFAR10) & [`CIFAR100`](mmpretrain.datasets.CIFAR100)     | The `test_mode` argument is a required argument now.                                                                          |
| [`MNIST`](mmpretrain.datasets.MNIST) & [`FashionMNIST`](mmpretrain.datasets.FashionMNIST) | The `test_mode` argument is a required argument now.                                                                          |
| [`VOC`](mmpretrain.datasets.VOC)                                                           | Requires `data_root`, `image_set_path` and `test_mode` now.                                                                   |
| [`CUB`](mmpretrain.datasets.CUB)                                                           | Requires `data_root` and `test_mode` now.                                                                                     |
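As an illustration, a `CustomDataset` config under the new convention might look like this (the paths and the `train_dataset` variable are hypothetical):
```python
train_dataset = dict(
    type='CustomDataset',
    data_root='data/my_dataset',  # common prefix of the paths below
    ann_file='meta/train.txt',    # relative to `data_root`
    data_prefix='train/',         # relative to `data_root`
    pipeline=[...],
)
```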
The `mmpretrain.datasets.pipelines` is renamed to `mmpretrain.datasets.transforms`.
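Imports need to be updated accordingly; for example (`RandomResizedCrop` is one of the transforms used in this document):
```python
# Before: from mmpretrain.datasets.pipelines import RandomResizedCrop
from mmpretrain.datasets.transforms import RandomResizedCrop
```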
@@ -562,13 +609,13 @@ Changes in [`ImageClassifier`](mmpretrain.models.classifiers.ImageClassifier):
Changes in [heads](mmpretrain.models.heads):
| Method of heads | Changes                                                                                                                                              |
| :-------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------- |
| `pre_logits`    | No changes.                                                                                                                                            |
| `forward_train` | Replaced by `loss`.                                                                                                                                    |
| `simple_test`   | Replaced by `predict`.                                                                                                                                 |
| `loss`          | It accepts `data_samples` instead of `gt_labels` to calculate loss. The `data_samples` is a list of [DataSample](mmpretrain.structures.DataSample).   |
| `forward`       | New method, and it returns the output of the classification head without any post-processing like softmax or sigmoid.                                 |
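A rough sketch of the new head interface (assuming the `get_model` helper from the updated high-level APIs; the model name is only an example):
```python
import torch
from mmpretrain import get_model

model = get_model('resnet18_8xb32_in1k')  # example model name
model.eval()
feats = model.backbone(torch.rand(1, 3, 224, 224))  # tuple of features
logits = model.head(feats)  # `forward`: raw logits, no softmax/sigmoid
```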
### `mmpretrain.utils`
@@ -582,6 +629,144 @@ Changes in [heads](mmpretrain.models.heads):
| `wrap_distributed_model` | Removed, the runner now wraps the model automatically. |
| `auto_select_device`     | Removed, the runner now selects the device automatically. |
# Migration from MMSelfSup 0.x
## Config
This section illustrates the changes of our config files in the `_base_` folder, which include the following three parts:
- Datasets: `configs/_base_/datasets`
- Models: `configs/_base_/models`
- Schedules: `configs/_base_/schedules`
### Dataset settings
In **MMSelfSup 0.x**, we use the `data` field to summarize all data related information, such as `samples_per_gpu`, `train`, `val`, etc.
In **MMPretrain 1.x**, we use `train_dataloader` and `val_dataloader` to summarize this information correspondingly, and the `data` field has been **removed**.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
data = dict(
    samples_per_gpu=32,  # total 32*8(gpu)=256
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/train',
            ann_file='data/imagenet/meta/train.txt',
        ),
        num_views=[1, 1],
        pipelines=[train_pipeline1, train_pipeline2],
        prefetch=prefetch,
    ),
    val=...)
```
</td>
<tr>
<td>New</td>
<td>
```python
train_dataloader = dict(
    batch_size=32,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='meta/train.txt',
        data_prefix=dict(img_path='train/'),
        pipeline=train_pipeline))
val_dataloader = ...
```
</td>
</tr>
</table>
Besides, we have **removed** the `data_source` key to keep the pipeline format consistent with that in other OpenMMLab projects. Please refer to [Config](user_guides/config.md) for more details.
Changes in **`pipeline`**:
Take the `pipeline` of MAE as an example:
```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='RandomResizedCrop',
        scale=224,
        crop_ratio_range=(0.2, 1.0),
        backend='pillow',
        interpolation='bicubic'),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackInputs')
]
```
### Model settings
In the config of models, there are two main differences from MMSelfSup 0.x.
1. There is a new key called `data_preprocessor`, which is responsible for preprocessing the data, like normalization, channel conversion, etc. For example:
```python
data_preprocessor=dict(
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True)
model = dict(
    type='MAE',
    data_preprocessor=dict(
        mean=[127.5, 127.5, 127.5],
        std=[127.5, 127.5, 127.5],
        bgr_to_rgb=True),
    backbone=...,
    neck=...,
    head=...,
    init_cfg=...)
```
2. There is a new key `loss` in `head` in MMPretrain 1.x, to determine the loss function of the algorithm. For example:
```python
model = dict(
    type='MAE',
    backbone=...,
    neck=...,
    head=dict(
        type='MAEPretrainHead',
        norm_pix=True,
        patch_size=16,
        loss=dict(type='MAEReconstructionLoss')),
    init_cfg=...)
```
## Package
The table below records the general modifications of folders and files.
| MMSelfSup 0.x            | MMPretrain 1.x      | Remark                                                                                                                                        |
| ------------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| apis                     | apis                | The high level APIs are updated.                                                                                                                  |
| core                     | engine              | The `core` folder has been renamed to `engine`, which includes `hooks` and `optimizers`. ([API link](mmpretrain.engine))                          |
| datasets                 | datasets            | The datasets are implemented according to different datasets, such as ImageNet and Places205. ([API link](mmpretrain.datasets))                   |
| datasets/data_sources    | /                   | The `data_sources` folder has been **removed**, and the directory of `datasets` is now consistent with other OpenMMLab projects.                  |
| datasets/pipelines       | datasets/transforms | The `pipelines` folder has been renamed to `transforms`. ([API link](mmpretrain.datasets.transforms))                                             |
| /                        | evaluation          | The `evaluation` folder is created for evaluation functions and classes. ([API link](mmpretrain.evaluation))                                      |
| models/algorithms        | selfsup             | The algorithms have been moved to the `selfsup` folder. ([API link](mmpretrain.models.selfsup))                                                   |
| models/backbones         | selfsup             | The re-implemented backbones have been moved into the corresponding self-supervised learning algorithm `.py` files. ([API link](mmpretrain.models.selfsup)) |
| models/target_generators | selfsup             | The target generators have been moved into the corresponding self-supervised learning algorithm `.py` files. ([API link](mmpretrain.models.selfsup)) |
| /                        | models/losses       | The `losses` folder is created to provide different loss implementations, which were previously part of `heads`. ([API link](mmpretrain.models.losses)) |
| /                        | structures          | The `structures` folder is for the implementation of data structures. In MMPretrain, we implement a new data structure, `DataSample`, to pass and receive data throughout the training/validation process. ([API link](mmpretrain.structures)) |
| /                        | visualization       | The `visualization` folder contains the visualizer, which is responsible for visualization tasks like visualizing data augmentation. ([API link](mmpretrain.visualization)) |
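A quick sanity check of the new layout (a sketch; `MAE` is the algorithm class used in the examples above):
```python
# Self-supervised algorithms now live in `mmpretrain.models.selfsup`.
from mmpretrain.models.selfsup import MAE
```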


@@ -247,7 +247,7 @@ The OpenMMLab 2.0 Dataset Format Specification stipulates that the annotation fi
The following is an example of a JSON annotation file (in this example, each data item contains only one train/test sample):
```
{
'metainfo':
{


@@ -53,6 +53,7 @@ extensions = [
'sphinx_copybutton',
'sphinx_tabs.tabs',
'notfound.extension',
'sphinxcontrib.jquery',
]
# Add any paths that contain templates here, relative to this directory.


@@ -1,18 +1,270 @@
# Migration
We introduced some modifications in MMPretrain 1.x that may cause compatibility issues. Please follow this tutorial to migrate your projects from MMClassification 0.x or MMSelfSup 0.x.
## New dependencies
```{warning}
MMPretrain 1.x depends on some new packages. You should create a new environment according to the [installation tutorial](./get_started.md), even if you already have an environment that runs MMClassification 0.x or MMSelfSup 0.x properly, and install the dependencies accordingly.
```
1. [MMEngine](https://github.com/open-mmlab/mmengine): MMEngine is the core library of the OpenMMLab 2.0 architecture. We have split many components unrelated to computer vision from MMCV to MMEngine.
2. [MMCV](https://github.com/open-mmlab/mmcv): The computer vision foundation library of OpenMMLab. This is not a new dependency, but it needs to be upgraded to version `2.0.0rc1` or above.
3. [rich](https://github.com/Textualize/rich): A terminal formatting package, used to render more beautiful output in the terminal.
# General change of config
In this section, we introduce the general changes between the old versions (**MMClassification 0.x** or **MMSelfSup 0.x**) and **MMPretrain 1.x**.
## Schedule settings
| MMCls or MMSelfSup 0.x | MMPretrain 1.x  | Remark                                                                                                                   |
| ---------------------- | --------------- | -------------------------------------------------------------------------------------------------------------------------- |
| optimizer_config       | /               | `optimizer_config` has been **removed**.                                                                                     |
| /                      | optim_wrapper   | The `optim_wrapper` field holds the configurations related to parameter updating.                                            |
| lr_config              | param_scheduler | The `param_scheduler` is a list that sets the learning rate or other parameters, which is more flexible than before.         |
| runner                 | train_cfg       | The loop settings (such as `EpochBasedTrainLoop` and `IterBasedTrainLoop`) in `train_cfg` control the training workflow.     |
Changes in **`optimizer`** and **`optimizer_config`**:
- Now we use the `optim_wrapper` field to specify all configurations related to the optimization process, and `optimizer` is a subfield of `optim_wrapper`.
- `paramwise_cfg` is no longer a subfield of `optimizer`, but a subfield of `optim_wrapper`.
- The `optimizer_config` field has been removed, and its options have been moved into `optim_wrapper`.
- `grad_clip` has been renamed to `clip_grad`.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
optimizer = dict(
    type='AdamW',
    lr=0.0015,
    weight_decay=0.3,
    paramwise_cfg = dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
    ))
optimizer_config = dict(grad_clip=dict(max_norm=1.0))
```
</td>
<tr>
<td>New</td>
<td>
```python
optim_wrapper = dict(
    optimizer=dict(type='AdamW', lr=0.0015, weight_decay=0.3),
    paramwise_cfg = dict(
        norm_decay_mult=0.0,
        bias_decay_mult=0.0,
    ),
    clip_grad=dict(max_norm=1.0),
)
```
</td>
</tr>
</table>
Changes in **`lr_config`**:
- The `lr_config` field has been removed, and we use the new `param_scheduler` to replace it.
- The `warmup` related fields have all been removed, since learning rate warmup can be implemented by combining multiple schedulers and is no longer implemented separately.
The new parameter scheduler combination mechanism is very flexible, and you can use it to design various learning rate and momentum curves. See the {external+mmengine:doc}`tutorial in MMEngine <tutorials/param_scheduler>` for more details.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
lr_config = dict(
    policy='CosineAnnealing',
    min_lr=0,
    warmup='linear',
    warmup_iters=5,
    warmup_ratio=0.01,
    warmup_by_epoch=True)
```
</td>
<tr>
<td>New</td>
<td>
```python
param_scheduler = [
    # warmup
    dict(
        type='LinearLR',
        start_factor=0.01,
        by_epoch=True,
        end=5,
        # Update the learning rate after every iteration instead of every epoch.
        convert_to_iter_based=True),
    # main learning rate scheduler
    dict(type='CosineAnnealingLR', by_epoch=True, begin=5),
]
```
</td>
</tr>
</table>
Changes in **`runner`**:
The `runner` field has been split into `train_cfg`, `val_cfg` and `test_cfg`, which configure the training, validation and test loops respectively.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
runner = dict(type='EpochBasedRunner', max_epochs=100)
```
</td>
<tr>
<td>New</td>
<td>
```python
# The `val_interval` field comes from the `evaluation.interval` field in the original config.
train_cfg = dict(by_epoch=True, max_epochs=100, val_interval=1)
val_cfg = dict()   # an empty dict means using the default validation loop
test_cfg = dict()  # an empty dict means using the default test loop
```
</td>
</tr>
</table>
In OpenMMLab 2.0, we introduced loop controllers to control the training, validation and test behaviors, and the functionality of the original `Runner` has changed accordingly. See the {external+mmengine:doc}`runner tutorial <design/runner>` in MMEngine for details.
## Runtime settings
Changes in **`checkpoint_config`** and **`log_config`**:
`checkpoint_config` has been moved to `default_hooks.checkpoint` and `log_config` has been moved to `default_hooks.logger`. Meanwhile,
many hooks that were previously defined implicitly in the training scripts have been moved to the `default_hooks` field.
```python
default_hooks = dict(
    # record the time of every iteration
    timer=dict(type='IterTimerHook'),
    # print log every 100 iterations
    logger=dict(type='LoggerHook', interval=100),
    # enable the parameter scheduler
    param_scheduler=dict(type='ParamSchedulerHook'),
    # save checkpoint per epoch, and automatically save the best checkpoint
    checkpoint=dict(type='CheckpointHook', interval=1, save_best='auto'),
    # set sampler seed in distributed environment
    sampler_seed=dict(type='DistSamplerSeedHook'),
    # visualize validation results, set `enable` to True to enable it
    visualization=dict(type='VisualizationHook', enable=False),
)
```
In addition, we have split the original logging functionality into logging and the visualizer. Logging is responsible for saving the log data at the specified interval and processing it (such as smoothing), while the visualizer records the logs to different backends, such as the terminal, TensorBoard and WandB.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
log_config = dict(
    interval=100,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook'),
    ])
```
</td>
<tr>
<td>New</td>
<td>
```python
default_hooks = dict(
    ...
    logger=dict(type='LoggerHook', interval=100),
)
visualizer = dict(
    type='UniversalVisualizer',
    vis_backends=[dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend')],
)
```
</td>
</tr>
</table>
Changes in **`load_from`** and **`resume_from`**:
- The `resume_from` field has been removed. We now use `resume` and `load_from` to implement the following functionalities:
  - If `resume=True` and `load_from` is not None, resume training from the checkpoint specified by `load_from`.
  - If `resume=True` and `load_from` is None, try to resume from the latest checkpoint in the work directory.
  - If `resume=False` and `load_from` is not None, only load the specified checkpoint without resuming training.
  - If `resume=False` and `load_from` is None, do nothing.
Changes in **`dist_params`**: The `dist_params` field is now a subfield of the `env_cfg` field. The following are all
the options of `env_cfg`:
```python
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi-process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)
```
Changes in **`workflow`**: `workflow` related functionalities have been removed.
New field **`visualizer`**: The visualizer is a new design in the OpenMMLab 2.0 architecture. We use the visualizer for the
visualization of logs and results, and for saving them to multiple backends. See the {external+mmengine:doc}`visualization tutorial <advanced_tutorials/visualization>` in MMEngine for details.
```python
visualizer = dict(
    type='UniversalVisualizer',
    vis_backends=[
        dict(type='LocalVisBackend'),
        # Uncomment the line below to save logs and visualization results to TensorBoard.
        # dict(type='TensorboardVisBackend')
    ]
)
```
New field **`default_scope`**: The default scope used by all registries to search modules. The `default_scope` in MMPretrain is `mmpretrain`, and in most cases it does not need to be modified. See the {external+mmengine:doc}`registry tutorial <advanced_tutorials/registry>` in MMEngine for details.
## Other changes
We have moved the definitions of all registries from the individual packages to the `mmpretrain.registry` package.
# Migration from MMClassification 0.x
## Config files
In MMPretrain 1.x, we refactored the structure of configuration files, and most of the original configuration files cannot be used directly.
In this section, we introduce all changes of the configuration files. We assume you are already familiar with the [config files](./user_guides/config.md).
@@ -237,239 +489,6 @@ test_evaluator = val_evaluator
</tr>
</table>
## Packages
### `mmpretrain.apis`
@@ -496,7 +515,7 @@ visualizer = dict(
| `evaluation`    | Removed, use [`mmpretrain.evaluation`](mmpretrain.evaluation) instead. |
| `hook`          | Moved to [`mmpretrain.engine.hooks`](mmpretrain.engine.hooks). |
| `optimizers`    | Moved to [`mmpretrain.engine.optimizers`](mmpretrain.engine.optimizers). |
| `utils`         | Removed, the distributed environment related functions are unified in the [`mmengine.dist`](api/dist) package. |
| `visualization` | Removed, the visualization related functionalities have been moved to [`mmpretrain.visualization.UniversalVisualizer`](mmpretrain.visualization.UniversalVisualizer). |
The `MMClsWandbHook` in the `hooks` package has not been implemented yet.
@@ -548,13 +567,13 @@ visualizer = dict(
Changes in [heads](mmpretrain.models.heads):
| Method of heads | Changes                                                                                                                                          |
| :-------------: | :------------------------------------------------------------------------------------------------------------------------------------------------ |
| `pre_logits`    | No changes.                                                                                                                                         |
| `forward_train` | Replaced by the `loss` method.                                                                                                                      |
| `simple_test`   | Replaced by the `predict` method.                                                                                                                   |
| `loss`          | It now accepts `data_samples` instead of `gt_labels`, where `data_samples` should be a list of [DataSample](mmpretrain.structures.DataSample).     |
| `forward`       | New method. It returns the output of the classification head without any post-processing (such as softmax or sigmoid).                             |
### `mmpretrain.utils`
@@ -570,6 +589,144 @@ visualizer = dict(
| `wrap_distributed_model` | Removed, the runner now wraps the model automatically. |
| `auto_select_device`     | Removed, the runner now selects the device automatically. |
# Migration from MMSelfSup 0.x
## Config
This section describes the changes of the config files in the `_base_` folder, which include the following three parts:
- Datasets: `configs/_base_/datasets`
- Models: `configs/_base_/models`
- Schedules: `configs/_base_/schedules`
### Dataset settings
In **MMSelfSup 0.x**, we used the `data` field to organize all data related information, such as `samples_per_gpu`, `train`, `val`, etc.
In **MMPretrain 1.x**, we use `train_dataloader` and `val_dataloader` to organize the training and validation data information respectively, and the `data` field has been **removed**.
<table class="docutils">
<tr>
<td>Original</td>
<td>
```python
data = dict(
    samples_per_gpu=32,  # total 32*8(gpu)=256
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/train',
            ann_file='data/imagenet/meta/train.txt',
        ),
        num_views=[1, 1],
        pipelines=[train_pipeline1, train_pipeline2],
        prefetch=prefetch,
    ),
    val=...)
```
</td>
<tr>
<td>New</td>
<td>
```python
train_dataloader = dict(
    batch_size=32,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='meta/train.txt',
        data_prefix=dict(img_path='train/'),
        pipeline=train_pipeline))
val_dataloader = ...
```
</td>
</tr>
</table>
In addition, we have **removed** the `data_source` field to keep our data flow consistent with other OpenMMLab projects. Please refer to [Config](user_guides/config.md) for more details.
Changes in **`pipeline`**:
Take the `pipeline` of MAE as an example; the new format is as follows:
```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='RandomResizedCrop',
        size=224,
        scale=(0.2, 1.0),
        backend='pillow',
        interpolation='bicubic'),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackSelfSupInputs', meta_keys=['img_path'])
]
```
### Model settings
In the model config files, there are two main differences from MMSelfSup 0.x.
1. There is a new field `data_preprocessor`, which is responsible for preprocessing the data, such as normalization and channel conversion. For example:
```python
data_preprocessor=dict(
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True)
model = dict(
    type='MAE',
    data_preprocessor=dict(
        mean=[127.5, 127.5, 127.5],
        std=[127.5, 127.5, 127.5],
        bgr_to_rgb=True),
    backbone=...,
    neck=...,
    head=...,
    init_cfg=...)
```
2. There is a new `loss` field in `head`, which is responsible for building the loss function. For example:
```python
model = dict(
    type='MAE',
    backbone=...,
    neck=...,
    head=dict(
        type='MAEPretrainHead',
        norm_pix=True,
        patch_size=16,
        loss=dict(type='MAEReconstructionLoss')),
    init_cfg=...)
```
## Package
The table below records the main changes of modules and folders.
| MMSelfSup 0.x            | MMPretrain 1.x      | Remark                                                                                                                                        |
| ------------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| apis                     | /                   | The `apis` folder has been temporarily **removed**; it may be added back in the future.                                                           |
| core                     | engine              | The `core` folder has been renamed to `engine`, which includes `hooks` and `optimizers`. ([API link](mmpretrain.engine))                          |
| datasets                 | datasets            | The dataset classes are implemented for different datasets, such as ImageNet and Places205. ([API link](mmpretrain.datasets))                     |
| datasets/data_sources    | /                   | `data_sources` has been **removed**, and the logic of `datasets` is now consistent with other OpenMMLab projects.                                 |
| datasets/pipelines       | datasets/transforms | The `pipelines` folder has been renamed to `transforms`. ([API link](mmpretrain.datasets.transforms))                                             |
| /                        | evaluation          | The `evaluation` folder manages evaluation functions and classes. ([API link](mmpretrain.evaluation))                                             |
| models/algorithms        | selfsup             | The algorithm files have been moved to the `selfsup` folder. ([API link](mmpretrain.models.selfsup))                                              |
| models/backbones         | selfsup             | The re-implemented backbones for self-supervised learning have been moved into the corresponding algorithm `.py` files. ([API link](mmpretrain.models.selfsup)) |
| models/target_generators | selfsup             | The target generators have been moved into the corresponding algorithm `.py` files. ([API link](mmpretrain.models.selfsup))                       |
| /                        | models/losses       | The `losses` folder provides implementations of various loss functions. ([API link](mmpretrain.models.losses))                                    |
| /                        | structures          | The `structures` folder provides implementations of data structures. In MMPretrain, we implement a new data structure, `DataSample`, to pass and receive data during training/validation. ([API link](mmpretrain.structures)) |
| /                        | visualization       | The `visualization` folder contains the visualizer, which is responsible for visualization tasks such as visualizing data augmentation. ([API link](mmpretrain.visualization)) |


@@ -228,7 +228,7 @@ The OpenMMLab 2.0 Dataset Format Specification stipulates that the annotation file must be `json` or `ya
Assuming you want to use the training dataset, the config is as follows:
```
{
'metainfo':


@@ -457,9 +457,9 @@ class AveragePrecision(BaseMetric):
References
----------
1. `Wikipedia entry for the Average precision
<https://en.wikipedia.org/w/index.php?title=Information_retrieval&
oldid=793358396#Average_precision>`_
Examples:
>>> import torch


@@ -1,9 +1,10 @@
docutils==0.18.1
modelindex
myst-parser
git+https://github.com/mzr1996/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme
sphinx==6.1.3
sphinx-copybutton
sphinx-notfound-page
sphinx-tabs
sphinxcontrib-jquery
tabulate