* add mytrain.py for test
* test before layers
* test attr in layers
* test classifier
* delete mytrain.py
* add rand_bbox_minmax, rand_bbox and cutmix_bbox_and_lam to BaseCutMixLayer (see the sketch below)
* add mixup_prob to BatchMixupLayer
* add cutmixup
* add cutmixup to __init__
* test classifier with cutmixup
* delete some comments
* set mixup_prob default to 1.0
* add cutmixup to classifier
* use cutmixup
* fix bugs
* test cutmixup
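
  A minimal sketch of the CutMix helpers referenced above, following the standard recipe from the CutMix paper; this is not the exact `BaseCutMixLayer` code, and `rand_bbox_minmax` (which samples the box from a min/max area range instead of from lam) is omitted:

  ```python
  import numpy as np

  def rand_bbox(img_shape, lam):
      """Sample a random bbox covering roughly (1 - lam) of the image.

      ``img_shape`` is (H, W). The box may be clipped at the borders,
      so the true area ratio can differ from 1 - lam.
      """
      height, width = img_shape
      cut_ratio = np.sqrt(1. - lam)
      cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
      # uniformly sample the box center, then clip the corners
      cy, cx = np.random.randint(height), np.random.randint(width)
      y1 = np.clip(cy - cut_h // 2, 0, height)
      y2 = np.clip(cy + cut_h // 2, 0, height)
      x1 = np.clip(cx - cut_w // 2, 0, width)
      x2 = np.clip(cx + cut_w // 2, 0, width)
      return y1, y2, x1, x2

  def cutmix_bbox_and_lam(img_shape, lam):
      """Sample a bbox and correct lam to the actual (clipped) area ratio."""
      y1, y2, x1, x2 = rand_bbox(img_shape, lam)
      bbox_area = (y2 - y1) * (x2 - x1)
      lam = 1. - bbox_area / float(img_shape[0] * img_shape[1])
      return (y1, y2, x1, x2), lam
  ```
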
* move mixup and cutmix to augment
* inherit from BaseAugment
* add BaseAugment
* inherit from BaseAugment
* rename identity.py
* add @ decorator for module registration
* build augment
* register module
* rename to augment.py
* delete cutmixup.py
* do not inherit from BaseAugment
* add augments
* use augments in classifier
* set prob default to 1.0
* add comments
* use augments
* assert that the sum of augmentation probabilities equals 1
* make augmentation probabilities sum to 1
* calculate Identity prob (see the Augments sketch below)
* replace xxx with self.xxx
* add comments
* sync with augments
* for backward compatibility (BC-breaking)
* delete useless comments in mixup.py
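
  A minimal sketch of the `Augments` dispatcher these commits describe, assuming each built augment exposes a `prob` attribute; the real class builds its augments from config dicts through the registry:

  ```python
  import random

  class Identity:
      """Fallback augment that leaves the batch unchanged."""
      def __init__(self, prob):
          self.prob = prob
      def __call__(self, img, gt_label):
          return img, gt_label

  class Augments:
      """Pick one batch augment per call according to its probability.

      If the given probs sum to less than 1, the remainder is assigned
      to Identity, so the probabilities always total 1.
      """
      def __init__(self, augments):
          total_prob = sum(aug.prob for aug in augments)
          assert total_prob <= 1.0, \
              'The sum of augmentation probabilities should be <= 1'
          if total_prob < 1.0:
              augments = list(augments) + [Identity(prob=1.0 - total_prob)]
          self.augments = augments
          self.probs = [aug.prob for aug in augments]

      def __call__(self, img, gt_label):
          # draw exactly one augment per batch instead of composing them,
          # which keeps the per-batch label semantics of Mixup/CutMix intact
          aug = random.choices(self.augments, weights=self.probs)[0]
          return aug(img, gt_label)
  ```
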
* register custom_hooks in runner
* set custom_hooks_config to cfg.get('custom_hooks', None)
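
  A hedged sketch of how the training entry point can forward hook configs to the runner via the `custom_hooks_config` argument of mmcv's `register_training_hooks`; `runner`, `cfg` and `optimizer_config` are assumed to come from the surrounding training code:

  ```python
  def register_hooks(runner, cfg, optimizer_config):
      # register the standard training hooks plus any user-defined hooks
      # listed under cfg.custom_hooks (e.g. dict(type='EMAHook'))
      runner.register_training_hooks(
          cfg.lr_config,
          optimizer_config,
          cfg.checkpoint_config,
          cfg.log_config,
          cfg.get('momentum_config', None),
          custom_hooks_config=cfg.get('custom_hooks', None))
  ```
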
* move init_cfg to parameter
* isort
* Use a sentinel value to denote the default init_cfg
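
  A sketch of the sentinel pattern the last commit refers to; `ExampleHead` and `_SENTINEL` are hypothetical names. The point is to distinguish "caller passed nothing" (use the class default) from an explicit `init_cfg=None` (disable initialization):

  ```python
  _SENTINEL = object()  # unique marker: "caller did not pass init_cfg"

  class ExampleHead:
      def __init__(self, init_cfg=_SENTINEL):
          if init_cfg is _SENTINEL:
              # fall back to the class default only when nothing was passed
              init_cfg = dict(type='Normal', layer='Linear', std=0.01)
          self.init_cfg = init_cfg  # may legitimately be None (no init)
  ```
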
* Refactor label smooth loss; it now supports the modes `original`, `classy_vision`
  and `multi_label`.
* Add unittests for label smooth loss.
* Improve docstring of LSR
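
  A sketch of the first two soft-label modes, using a hypothetical `smooth_labels` helper; the `multi_label` mode, which feeds smoothed targets to a sigmoid-based loss, is omitted:

  ```python
  import torch
  import torch.nn.functional as F

  def smooth_labels(gt_label, num_classes, eps=0.1, mode='original'):
      """Build soft labels for label smoothing (sketch).

      original:      (1 - eps) * one_hot + eps / K
      classy_vision: (one_hot + eps / K), renormalized so rows sum to 1
      """
      one_hot = F.one_hot(gt_label, num_classes).float()
      if mode == 'original':
          return one_hot * (1 - eps) + eps / num_classes
      if mode == 'classy_vision':
          smooth = one_hot + eps / num_classes
          return smooth / smooth.sum(dim=1, keepdim=True)
      raise ValueError(f'unsupported mode: {mode!r}')

  # usage: soft targets for labels [0, 2] over 3 classes
  targets = smooth_labels(torch.tensor([0, 2]), num_classes=3)
  ```
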
* add increasing in solarize and posterize
* fix linting
* Revert "add increasing in solarize and posterize"
This reverts commit 128af36e9b.
* revise according to comments
* Add parameter magnitude_std in RandAugment to allow random jitter of the magnitude value
* Add unittest for magnitude_std
* Improve docstring of magnitude_std
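
  A sketch of the magnitude jitter with a hypothetical `sample_magnitude` helper; the range bounds are illustrative:

  ```python
  import random

  def sample_magnitude(magnitude, magnitude_std=0.0, min_mag=0, max_mag=10):
      """Jitter the RandAugment magnitude (sketch).

      With magnitude_std > 0 the applied magnitude is drawn from a normal
      distribution centered on ``magnitude`` and clamped to a valid range,
      so each call applies a slightly different strength.
      """
      if magnitude_std > 0:
          magnitude = random.gauss(magnitude, magnitude_std)
      return max(min_mag, min(magnitude, max_mag))
  ```
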
* Make GlobalAveragePooling support 1d, 2d and 3d inputs via a parameter, and add a neck test
* Improve neck test
* Change 'mode' attribute in GAP to 'dim', and add docstring
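
  A sketch of the resulting neck, assuming the `dim` attribute selects among PyTorch's adaptive pooling layers:

  ```python
  import torch.nn as nn

  class GlobalAveragePooling(nn.Module):
      """GAP neck selecting 1d/2d/3d pooling via ``dim`` (sketch)."""

      def __init__(self, dim=2):
          super().__init__()
          assert dim in (1, 2, 3), f'dim must be 1, 2 or 3, got {dim}'
          if dim == 1:
              self.gap = nn.AdaptiveAvgPool1d(1)
          elif dim == 2:
              self.gap = nn.AdaptiveAvgPool2d((1, 1))
          else:
              self.gap = nn.AdaptiveAvgPool3d((1, 1, 1))

      def forward(self, x):
          # pool away all spatial dims, then flatten to (N, C)
          return self.gap(x).view(x.size(0), -1)
  ```
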
* make cal_acc in ClsHead default to False
* make cal_acc default to False
* use *args, **kwargs instead
* change batch size from 16 to 3 in test_image_classifier_vit
* fix some comments
* change cal_acc=True
* test LinearClsHead
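
  A sketch of what the `cal_acc` flag controls in a head's loss computation; the function and names here are illustrative, not the ClsHead API:

  ```python
  import torch.nn.functional as F

  def head_loss(cls_score, gt_label, cal_acc=False, topk=(1,)):
      """Compute the classification loss and, optionally, accuracies.

      ``cal_acc`` defaults to False so training skips the purely
      diagnostic accuracy computation unless explicitly requested.
      """
      losses = {'loss': F.cross_entropy(cls_score, gt_label)}
      if cal_acc:
          maxk = max(topk)
          pred = cls_score.topk(maxk, dim=1).indices       # (N, maxk)
          correct = pred.eq(gt_label.unsqueeze(1))         # (N, maxk)
          for k in topk:
              acc = correct[:, :k].any(dim=1).float().mean() * 100
              losses[f'top-{k}'] = acc
      return losses
  ```
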
* add imagenet_bs4096_AdamW.py
* delete 2 lines of comments
* change bs to 64
* fix bug
* add vit to model_zoo.md
* rename
* add imagenet bs 4096
* add vit_base_patch16_224_finetune
* add vit_base_patch16_224_pretrain
* add vit_base_patch16_384_finetune
* add vit_b_p16_224_finetune_imagenet
* add vit_b_p16_224_pretrain_imagenet
* add vit_b_p16_384_finetune_imagenet
* add vit
* add vit head
* vit unittest
* keep up with ClsHead
* test vit
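
  A sketch of a ViT-style classification head operating on the class token; `VisionTransformerClsHead` here is a hypothetical simplification (the hidden `Tanh` pre-logits layer mirrors the original ViT design), while the real head follows ClsHead:

  ```python
  import torch.nn as nn

  class VisionTransformerClsHead(nn.Module):
      """Linear classifier on the ViT class token (sketch)."""

      def __init__(self, num_classes, in_channels, hidden_dim=None):
          super().__init__()
          if hidden_dim is None:
              self.layers = nn.Linear(in_channels, num_classes)
          else:
              # pre-logits variant used when training from scratch
              self.layers = nn.Sequential(
                  nn.Linear(in_channels, hidden_dim),
                  nn.Tanh(),
                  nn.Linear(hidden_dim, num_classes))

      def forward(self, cls_token):
          return self.layers(cls_token)
  ```
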
* add flag to determine whether to calculate acc during training
* Changes related to mmcv 1.3.0
* change checkpoint saving interval to 10
* add label smooth
* restore default_runtime.py
* docformatter
* delete 2 lines of comments
* delete configs/_base_/schedules/imagenet_bs4096.py
* add configs/_base_/schedules/imagenet_bs2048_AdamW.py
* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py
* add AutoAugment
* fix weight decay in vit
* change eval interval to 10
* delete @torch.jit.ignore
* change eval interval back to 1
* add some comments to imagenet_bs2048_AdamW
* add some comments
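
  A hedged sketch of what an `imagenet_bs2048_AdamW.py` schedule could contain, in mmcls config style; every value below (lr, warmup length, decay multipliers) is illustrative, not the shipped config:

  ```python
  # exempt ViT tokens from weight decay, matching the "fix weight decay
  # in vit" commit above (keys are illustrative)
  paramwise_cfg = dict(
      custom_keys={
          '.cls_token': dict(decay_mult=0.0),
          '.pos_embed': dict(decay_mult=0.0),
      })
  optimizer = dict(
      type='AdamW',
      lr=0.003,            # scaled for a total batch size of 2048
      weight_decay=0.3,
      paramwise_cfg=paramwise_cfg)
  optimizer_config = dict(grad_clip=dict(max_norm=1.0))
  lr_config = dict(
      policy='CosineAnnealing',
      min_lr=0,
      warmup='linear',
      warmup_iters=10000,
      warmup_ratio=1e-4)
  runner = dict(type='EpochBasedRunner', max_epochs=300)
  ```
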