In **MMSelfSup 0.x**, we use the key `data` to collect all data-related settings, such as `samples_per_gpu`, `train`, `val`, etc.
In **MMSelfSup 1.x**, we use separate `train_dataloader` and `val_dataloader` keys to summarize the corresponding information, and the key `data` has been **removed**.
Besides, we remove the `data_source` key to keep the pipeline format consistent with other OpenMMLab projects. Please refer to [Config](user_guides/1_config.md) for more details.
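As a rough sketch of the new format (the dataset type, paths, and numbers below are illustrative assumptions, not taken from a real config):

```python
# MMSelfSup 1.x style: per-split dataloaders replace the single `data` dict.
# Dataset type, paths, and values here are illustrative assumptions.
train_dataloader = dict(
    batch_size=32,   # replaces `samples_per_gpu` in 0.x
    num_workers=4,   # replaces `workers_per_gpu` in 0.x
    dataset=dict(
        type='ImageNet',
        data_root='data/imagenet',
        pipeline=[...]))  # transforms live here directly; no `data_source` wrapper
```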
In the config of models, there are two main differences from MMSelfSup 0.x.
1. There is a new key called `data_preprocessor`, which is responsible for preprocessing the data, like normalization, channel conversion, etc. For example:
```python
model = dict(
    type='MAE',
    data_preprocessor=dict(
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=...,
    neck=...,
    head=...,
    init_cfg=...)
```
2. There is a new key `loss` under `head` in MMSelfSup 1.x, which specifies the loss function of the algorithm. For example:
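A minimal sketch using MAE (the head and loss type names are assumptions based on typical 1.x configs):

```python
model = dict(
    type='MAE',
    backbone=...,
    neck=...,
    head=dict(
        type='MAEPretrainHead',
        # New in 1.x: the loss function is configured explicitly in the head.
        loss=dict(type='MAEReconstructionLoss')))
```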
| MMSelfSup 0.x | MMSelfSup 1.x | Remark |
| :--------------: | :-------------: | :----- |
| optimizer_config | / | It has been **removed**. |
| / | optim_wrapper | The `optim_wrapper` provides a common interface for updating parameters. |
| lr_config | param_scheduler | The `param_scheduler` is a list to set learning rate or other parameters, which is more flexible. |
| runner | train_cfg | The loop setting (`EpochBasedTrainLoop`, `IterBasedTrainLoop`) in `train_cfg` controls the workflow of algorithm training. |
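The new schedule fields above can be sketched together as follows (the optimizer and scheduler settings are illustrative assumptions; `LinearLR` and `CosineAnnealingLR` are MMEngine parameter schedulers):

```python
# `optim_wrapper` wraps the optimizer and related update settings.
optim_wrapper = dict(
    optimizer=dict(type='AdamW', lr=1.5e-4, weight_decay=0.05))

# `param_scheduler` is a list; schedulers can be chained via `begin`/`end`.
param_scheduler = [
    dict(type='LinearLR', start_factor=1e-4, by_epoch=True, begin=0, end=40),
    dict(type='CosineAnnealingLR', T_max=260, by_epoch=True, begin=40, end=300)
]

# `train_cfg` selects the training loop that replaces the 0.x `runner`.
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=300)
```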
1. Changes in **`optimizer`** and **`optimizer_config`**:
   - Now we use the `optim_wrapper` field to specify all configuration related to the optimization process, and `optimizer` is now a sub-field of `optim_wrapper`.
   - `paramwise_cfg` is also a sub-field of `optim_wrapper`, instead of `optimizer`.
   - `optimizer_config` has been removed, and all of its configurations have been moved to `optim_wrapper`.
4. Changes in **`workflow`**: `workflow` related functionalities are **removed**.
5. New field **`visualizer`**:
The visualizer is a new design in the OpenMMLab 2.0 architecture. The runner uses a visualizer instance to handle result and log visualization, and to save the results to different backends.
See the [MMEngine tutorial](TODO) for more details.
```python
visualizer = dict(
    type='SelfSupVisualizer',
    vis_backends=[
        dict(type='LocalVisBackend'),
        # Uncomment the line below to save the log and visualization results to TensorBoard.
        # dict(type='TensorboardVisBackend')
    ]
)
```
6. New field **`default_scope`**: The start point to search module for all registries. The `default_scope` in MMSelfSup is `mmselfsup`. See [the registry tutorial](TODO) for more details.
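In a config file this field is a single line (the value comes from the text above):

```python
# Registries search the `mmselfsup` scope first when resolving module types.
default_scope = 'mmselfsup'
```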
## Package
The table below records the general modifications of folders and files.
| MMSelfSup 0.x | MMSelfSup 1.x | Remark |
| :-----------: | :-----------: | :----- |
| apis | / | Currently, the `apis` folder has been **removed**; it might be added back in the future. |
| core | engine | The `core` folder has been renamed to `engine`, which includes `hooks` and `optimizers`. |
| datasets | datasets | Datasets are implemented for different dataset types, such as ImageNet and Places205. |
| datasets/data_sources | / | `data_sources` has been **removed**, and the `datasets` directory is now consistent with other OpenMMLab projects. |
| datasets/pipelines | datasets/transforms | The `pipelines` folder has been renamed to `transforms`. |
| / | evaluation | The `evaluation` folder provides evaluation functions and classes, such as the KNN function or layers for detection. |
| / | models/losses | The `losses` folder provides different loss implementations, which were previously part of `heads`. |
| / | structures | The `structures` folder is for the implementation of data structures. In MMSelfSup, we implement a new data structure, `selfsup_data_sample`, to pass and receive data throughout the training/validation process. |
| / | visualization | The `visualization` folder contains the visualizer, which is responsible for some visualization tasks like visualizing data augmentation. |