# Tutorial 0: Learn about Configs
MMSelfSup mainly uses python files as configs. The design of our configuration file system integrates modularity and inheritance, facilitating users to conduct various experiments. All configuration files are placed in the `configs` folder. If you wish to inspect the config file in summary, you may run `python tools/misc/print_config.py` to see the complete config.
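For example, to print the complete MoCo v2 config that this tutorial assembles step by step, you could run something like:

```
python tools/misc/print_config.py configs/selfsup/mocov2/mocov2_resnet50_8xb32-coslr-200e_in1k.py
```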
<!-- TOC -->

- [Tutorial 0: Learn about Configs](#tutorial-0-learn-about-configs)
  - [Config File and Checkpoint Naming Convention](#config-file-and-checkpoint-naming-convention)
    - [Algorithm information](#algorithm-information)
    - [Module information](#module-information)
    - [Training information](#training-information)
    - [Data information](#data-information)
    - [Config File Name Example](#config-file-name-example)
    - [Checkpoint Naming Convention](#checkpoint-naming-convention)
  - [Config File Structure](#config-file-structure)
  - [Inherit and Modify Config File](#inherit-and-modify-config-file)
    - [Use intermediate variables in configs](#use-intermediate-variables-in-configs)
    - [Ignore some fields in the base configs](#ignore-some-fields-in-the-base-configs)
    - [Use some fields in the base configs](#use-some-fields-in-the-base-configs)
  - [Modify config through script arguments](#modify-config-through-script-arguments)
  - [Import modules from other MM-codebases](#import-modules-from-other-mm-codebases)

<!-- TOC -->
## Config File and Checkpoint Naming Convention
We follow the conventions below to name config files, and contributors are advised to follow the same conventions. The name of a config file is divided into four parts: `algorithm info`, `module info`, `training info` and `data info`. Logically, different parts are concatenated by underscores `'_'`, and info belonging to the same part is concatenated by dashes `'-'`.
The following example is for illustration:
```
{algorithm_info}_{module_info}_{training_info}_{data_info}.py
```
- `algorithm_info`: algorithm information, e.g. the algorithm name, such as simclr, mocov2, etc.;
- `module_info`: module information, which denotes backbones, necks, heads and losses;
- `training_info`: training information, e.g. training schedules, including batch size, lr schedule and data augmentation;
- `data_info`: data information, e.g. dataset name and input size;
We detail the naming convention for each part in the name of the config file:
### Algorithm information
```
{algorithm}-{misc}
```
`algorithm` generally denotes the abbreviation of the algorithm from the paper and its version. For example:
- `relative-loc`: words within the algorithm name are concatenated by dashes `'-'`
- `simclr`
- `mocov2`
`misc` offers some other algorithm-related information. For example:
- `npid-ensure-neg`
- `deepcluster-sobel`
### Module information
```
{backbone_setting}-{neck_setting}-{head_setting}-{loss_setting}
```
The module information mainly includes the backbone information, e.g.:

- `resnet50`

Or there are some special settings that need to be mentioned in the config name, e.g.:
- `resnet50-nofrz`: in some downstream tasks, the backbone does not freeze its stages during training
The `neck_setting`, `head_setting` and `loss_setting` parts are optional.
### Training information
Training related settings, including batch size, lr schedule, data augmentation, etc.:

- Batch size: the format is `{gpu x batch_per_gpu}`, e.g. `8xb32`;
- Training recipes: they are arranged in the order `{pipeline aug}-{train aug}-{scheduler}-{epochs}`.
E.g. `8xb32-mcrop-2-6-coslr-200e`, which is broken down in the config file name example below.
### Data information

Data information contains the dataset, input size, etc. E.g:
- `in1k` : `ImageNet1k` dataset; the input image size is 224x224 by default
- `in1k-384px` : Indicates that the input image size is 384x384
- `cifar10`
- `inat18` : `iNaturalist2018` dataset, which has 8142 classes
- `places205`
### Config File Name Example
Here, we give a concrete file name to explain the naming conventions.
```
swav_resnet50_8xb32-mcrop-2-6-coslr-200e_in1k-224-96.py
```
- `swav`: algorithm information
- `resnet50`: module information
- `8xb32-mcrop-2-6-coslr-200e`: training information
  - `8xb32`: use 8 GPUs in total, and the batch size is 32 per GPU
  - `mcrop-2-6`: use the multi-crop data augmentation method
  - `coslr`: use the cosine learning rate scheduler
  - `200e`: train the model for 200 epochs
- `in1k-224-96`: data information; the model is trained on the ImageNet1k dataset with input sizes of 224x224 and 96x96

### Checkpoint Naming Convention
The name of a checkpoint mainly consists of the config file name, the date and the hash value.
```
{config_name}_{date}-{hash}.pth
```
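For example, a checkpoint of the MoCo v2 config above might be named as follows, where the date and hash are purely illustrative:

```
mocov2_resnet50_8xb32-coslr-200e_in1k_20220608-abcd1234.pth
```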
## Config File Structure
There are four kinds of basic files in the `configs/_base_` folder, namely:
- models
- datasets
- schedules
- runtime
All these basic files define the basic elements, such as the train/val/test loop and the optimizer, needed to run an experiment.

You can easily build your own training config file by inheriting some base config files. The configs that are composed of components from `_base_` are called _primitive_.
For easy understanding, we use MoCo v2 as an example and comment on the meaning of each line. For more details, please refer to the API documentation.
```python
_base_ = [
    '../_base_/models/mocov2.py',  # model settings
    '../_base_/datasets/imagenet_mocov2.py',  # data settings
    '../_base_/schedules/sgd_coslr-200e_in1k.py',  # training schedule
    '../_base_/default_runtime.py',  # runtime setting
]
# Here we inherit the default runtime settings and modify the ``CheckpointHook``.
# ``max_keep_ckpts`` controls the max number of checkpoint files kept in your
# work_dirs: if it is 3, the ``CheckpointHook`` will save the latest 3
# checkpoints, and when there are more than 3 checkpoints in work_dirs, it
# will remove the oldest one to keep the total number at 3.
default_hooks = dict(
    checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=3)
)
```
```{note}
The `type` field in the configuration file is not a constructor argument but a class name.
```
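In other words, the registry looks up the class registered under the given `type` and passes the remaining fields to its constructor. Roughly, it behaves like the following sketch (the real logic lives in the registry's build function; the names here are simplified):

```python
def build_from_cfg(cfg, registry):
    """A simplified sketch of how a config dict becomes an object."""
    cfg = dict(cfg)  # copy, so that popping does not mutate the config
    cls = registry.get(cfg.pop('type'))  # e.g. 'MoCo' -> the MoCo class
    return cls(**cfg)  # the remaining fields, e.g. queue_len, become kwargs
```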
`../_base_/models/mocov2.py` is the base configuration file for the model of MoCo v2.
```python
# type='MoCo' specifies that we will use MoCo as the algorithm. We split the
# model into four parts: backbone, neck, head and loss. 'queue_len',
# 'feat_dim' and 'momentum' are required by MoCo during training.
model = dict(
    type='MoCo',  # Algorithm name
    queue_len=65536,  # Number of negative keys maintained in the queue
    feat_dim=128,  # Dimension of compact feature vectors, equal to the out_channels of the neck
    momentum=0.999,  # Momentum coefficient for the momentum-updated encoder
    backbone=dict(
        type='ResNet',  # Backbone name
        depth=50,  # Depth of the backbone; ResNet has options of 18, 34, 50, 101 and 152
        in_channels=3,  # Number of channels of the input images
        out_indices=[4],  # Output indices of the feature maps: 0 for conv-1, x for stage-x
        norm_cfg=dict(type='BN')),  # Config dict for the norm layer
    neck=dict(
        type='MoCoV2Neck',  # Neck name
        in_channels=2048,  # Number of input channels
        hid_channels=2048,  # Number of hidden channels
        out_channels=128,  # Number of output channels
        with_avg_pool=True),  # Whether to apply global average pooling after the backbone
    head=dict(type='ContrastiveHead', temperature=0.2),  # Contrastive head; the temperature hyper-parameter controls the concentration level of the distribution
    loss=dict(type='mmcls.CrossEntropyLoss'))  # Loss implemented in mmclassification
```
`../_base_/datasets/imagenet_mocov2.py` is the base configuration file for the dataset of MoCo v2. It specifies the settings for the dataset and the dataloader.
```python
# dataset settings
# We use the ``ImageNet`` dataset implemented by mmclassification, so there
# is a ``mmcls`` prefix.
dataset_type = 'mmcls.ImageNet'
data_root = 'data/imagenet/'
file_client_args = dict(backend='disk')
# Since we use ``ImageNet`` from mmclassification, we need to set
# ``custom_imports`` here.
custom_imports = dict(imports='mmcls.datasets', allow_failed_imports=False)
# The difference between mocov2 and mocov1 is the transforms in the pipeline
view_pipeline = [
    dict(type='RandomResizedCrop', size=224, scale=(0.2, 1.)),
    dict(
        type='RandomApply',  # Randomly apply ColorJitter with probability 0.8
        transforms=[
            dict(
                type='ColorJitter',
                brightness=0.4,
                contrast=0.4,
                saturation=0.4,
                hue=0.1)
        ],
        prob=0.8),
    dict(type='RandomGrayscale', prob=0.2, keep_channels=True),  # RandomGrayscale with probability 0.2
    dict(type='RandomGaussianBlur', sigma_min=0.1, sigma_max=2.0, prob=0.5),  # Random GaussianBlur with probability 0.5
    dict(type='RandomFlip', prob=0.5),  # Randomly flip the picture horizontally
]

train_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=file_client_args),
    dict(type='MultiView', num_views=2, transforms=[view_pipeline]),
    dict(type='PackSelfSupInputs', meta_keys=['img_path'])
]
# dataset summary
train_dataloader = dict(
    batch_size=32,  # Batch size of a single GPU, 32*8=256 in total
    num_workers=4,  # Number of workers to pre-fetch data for each single GPU
    persistent_workers=True,  # Keep workers alive between epochs to speed up data loading
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='meta/train.txt',
        data_prefix=dict(img_path='train/'),
        pipeline=train_pipeline))
```
`../_base_/schedules/sgd_coslr-200e_in1k.py` is the base configuration file for the training schedules of MoCo v2.
```python
# optimizer
optimizer_wrapper = dict(
    optimizer=dict(type='SGD', lr=0.03, weight_decay=1e-4, momentum=0.9))
# learning rate scheduler
# use cosine learning rate decay here
param_scheduler = [
    dict(type='CosineAnnealingLR', T_max=200, by_epoch=True, begin=0, end=200)
]
# loop settings
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=200)
```
`../_base_/default_runtime.py` contains the default runtime settings. These include basic components used during training, such as `default_hooks` and `log_processor`.
```python
default_scope = 'mmselfsup'
default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook'),  # update runtime information, e.g. the current iteration and epoch
    optimizer=dict(type='OptimizerHook', grad_clip=None),  # optimizer hook; gradient clipping can be set here
    timer=dict(type='IterTimerHook'),  # record the time spent on every iteration
    logger=dict(type='LoggerHook', interval=50),  # print logs every 50 iterations
    param_scheduler=dict(type='ParamSchedulerHook'),  # update some hyper-parameters, e.g. the learning rate
    checkpoint=dict(type='CheckpointHook', interval=10),  # save checkpoints with interval 10
    sampler_seed=dict(type='DistSamplerSeedHook'),  # set the seed for the distributed sampler
)
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'),
)

log_processor = dict(
    interval=50,
    custom_keys=[dict(data_src='', method='mean', windows_size='global')])

vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='SelfSupLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer')
# custom_hooks = [dict(type='SelfSupVisualizationHook', interval=10)]
log_level = 'INFO'
load_from = None
resume = False
```
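If you also want to log to TensorBoard, for instance, you could override `vis_backends` in your own config. A sketch, assuming mmengine's `TensorboardVisBackend` is available in your environment:

```python
vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='TensorboardVisBackend'),  # assumption: provided by mmengine
]
visualizer = dict(
    type='SelfSupLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer')
```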
## Inherit and Modify Config File
For easy understanding, we recommend contributors inherit from existing configurations.
For all configs under the same folder, it is recommended to have only **one** _primitive_ config. All other configs should inherit from the _primitive_ config. In this way, the maximum inheritance level is 3.
For example, if your config file is based on MoCo v2 with some other modifications, you can first inherit the basic configuration of MoCo v2 by specifying `_base_ = './mocov2_resnet50_8xb32-coslr-200e_in1k.py'` (the path relative to your config file), and then modify the necessary parameters in your customized config file. As a more specific example, if we want to use almost all the configs in `configs/selfsup/mocov2/mocov2_resnet50_8xb32-coslr-200e_in1k.py` but change the number of training epochs from 200 to 800, we can create a new config file `configs/selfsup/mocov2/mocov2_resnet50_8xb32-coslr-800e_in1k.py` with the content below:
```python
_base_ = './mocov2_resnet50_8xb32-coslr-200e_in1k.py'

# training schedule: train for 800 epochs instead of 200
train_cfg = dict(max_epochs=800)
```

### Use intermediate variables in configs
Some intermediate variables are used in the configuration files. Intermediate variables make the configuration file clearer and easier to modify.

For example, `dataset_type`, `train_pipeline` and `file_client_args` are the intermediate variables of the data. We first define them and then pass them to `train_dataloader`.
```python
# dataset settings
# We use the ``ImageNet`` dataset implemented by mmclassification, so there
# is a ``mmcls`` prefix.
dataset_type = 'mmcls.ImageNet'
data_root = 'data/imagenet/'
file_client_args = dict(backend='disk')

# Since we use ``ImageNet`` from mmclassification, we need to set
# ``custom_imports`` here.
custom_imports = dict(imports='mmcls.datasets', allow_failed_imports=False)

# The difference between mocov2 and mocov1 is the transforms in the pipeline
view_pipeline = [
    dict(type='RandomResizedCrop', size=224, scale=(0.2, 1.)),
    dict(
        type='RandomApply',
        transforms=[
            dict(
                type='ColorJitter',
                brightness=0.4,
                contrast=0.4,
                saturation=0.4,
                hue=0.1)
        ],
        prob=0.8),
    dict(type='RandomGrayscale', prob=0.2, keep_channels=True),
    dict(type='RandomGaussianBlur', sigma_min=0.1, sigma_max=2.0, prob=0.5),
    dict(type='RandomFlip', prob=0.5),
]

train_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=file_client_args),
    dict(type='MultiView', num_views=2, transforms=[view_pipeline]),
    dict(type='PackSelfSupInputs', meta_keys=['img_path'])
]

train_dataloader = dict(
    batch_size=32,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='meta/train.txt',
        data_prefix=dict(img_path='train/'),
        pipeline=train_pipeline))
```
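Note that redefining an intermediate variable in a child config has no effect by itself: every field that consumes it must be rebuilt and passed again. A sketch, where the base file path and the smaller crop size are purely illustrative:

```python
_base_ = '../_base_/datasets/imagenet_mocov2.py'  # illustrative path

# Redefine the intermediate variable ...
view_pipeline = [
    dict(type='RandomResizedCrop', size=96, scale=(0.2, 1.)),
    dict(type='RandomFlip', prob=0.5),
]

# ... and pass it again to every field that consumes it.
train_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=dict(backend='disk')),
    dict(type='MultiView', num_views=2, transforms=[view_pipeline]),
    dict(type='PackSelfSupInputs', meta_keys=['img_path'])
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
```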
### Ignore some fields in the base configs
Sometimes, you need to set `_delete_=True` to ignore some of the content in the base configuration file. You can refer to [mmengine](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/config.md) for more instructions.
The following is an example. If you want to use `MoCoV2Neck` in SimCLR, simply inheriting and directly modifying the config will report the `get unexpected keyword 'num_layers'` error, because the `'num_layers'` field of the base config is reserved in the `model.neck` domain. You need to add `_delete_=True` to ignore the original content of `model.neck` in the base configuration file:
```python
_base_ = 'simclr_resnet50_8xb32-coslr-200e_in1k.py'

model = dict(
    neck=dict(
        _delete_=True,  # ignore the neck settings inherited from the base config
        type='MoCoV2Neck',
        in_channels=2048,
        hid_channels=2048,
        out_channels=128,
        with_avg_pool=True))
```

### Use some fields in the base configs
Sometimes, you may refer to some fields in the `_base_` config to avoid duplication of definitions. You can refer to [mmengine](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/config.md) for more instructions.
The following is an example of using the `num_classes` variable defined in the base configuration file; please refer to `configs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k.py`.
```python
_base_ = [
    ...  # the base config files are omitted here
]

model = dict(
    memory_bank=dict(num_classes={{_base_.num_classes}}),
)
```
## Modify config through script arguments

When users use the script "tools/train.py" or "tools/test.py" to submit tasks, or use some other tools, they can directly modify the content of the configuration file by specifying the `--cfg-options` argument (see the example after this list).

- Update values of list/tuples.
  If the value to be updated is a list or a tuple: for example, some configuration files contain `param_scheduler = "[dict(type='CosineAnnealingLR',T_max=200,by_epoch=True,begin=0,end=200)]"`. If you want to change this key, you may specify `--cfg-options param_scheduler="[dict(type='LinearLR',start_factor=1e-4,by_epoch=True,begin=0,end=40,convert_to_iter_based=True)]"`. Note that the quotation mark " is necessary to support list/tuple data types, and that **NO** white space is allowed inside the quotation marks in the specified value.
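For instance, a training command that overrides a scalar value and a nested key could look like the following; the keys come from the MoCo v2 config files shown earlier, and the values are illustrative:

```
python tools/train.py configs/selfsup/mocov2/mocov2_resnet50_8xb32-coslr-200e_in1k.py \
    --cfg-options optimizer_wrapper.optimizer.lr=0.06 train_dataloader.batch_size=64
```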
## Import modules from other MM-codebases
```{note}
This part may only be needed when you use another MM-codebase, like mmcls, as a third-party library to build your own project. Beginners can skip it.
```
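A minimal sketch of what this looks like in a config, based on the dataset settings shown earlier and assuming mmcls is installed in your environment:

```python
# Import the mmcls datasets so that types with the ``mmcls.`` prefix
# can be resolved by the registry.
custom_imports = dict(imports='mmcls.datasets', allow_failed_imports=False)

train_dataloader = dict(
    dataset=dict(type='mmcls.ImageNet'))  # dataset implemented in mmclassification
```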