Commit Graph

188 Commits (076ee10cacbda16d7c1bcd5514a64df8a826682b)

Author SHA1 Message Date
Ma Zerun 076ee10cac
[Feature] Add swin-transformer model. (#271)
* Add swin transformer archs S, B and L.

* Add SwinTransformer configs

* Add train config files of swin.

* Align init method with original code

* Use nn.Unfold to merge patches

* Change all ConfigDict to dict

* Add init_cfg for all subclasses of BaseModule.

* Use mmcv version init function

* Add Swin README

* Use safer cfg copy method

* Improve docstring and variable name.

* Fix some differences in RandAugment

Fix BGR bug, align scheduler config.

Fix label smoothing parameter difference.

* Fix missing droppath in attn

* Fix bug of relative position table if window width is not equal to height.

* Make `PatchMerging` more general, support kernel, stride, padding and
dilation.

* Rename `residual` to `identity` in attention and FFN.

* Add `auto_pad` option to automatically pad the feature map

* Improve docstring.

* Fix bug in ShiftWMSA padding.

* Remove unused `key` and `value` in ShiftWMSA

* Move `PatchMerging` into utils and use common `PatchEmbed`.

* Use latest `LinearClsHead`, train augments and label smooth settings, and remove the original `SwinLinearClsHead`.

* Mark some configs as "Evaluation Only".

* Remove useless comment in config

* 1. Move ShiftWindowMSA and WindowMSA to `utils/attention.py`
2. Add docstrings of each module.
3. Fix some variables' names.
4. Other small improvement.

* Add unit tests of swin-transformer and patchmerging.

* Fix some bugs in unit tests.

* Fix bug of rel_position_index if window is not square.

* Make WindowMSA implicit, and add unit tests.

* Add metafile.yml, update readme and model_zoo.
2021-07-01 09:30:42 +08:00
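
For readers unfamiliar with the `nn.Unfold`-based patch merging mentioned in the commit above, here is a minimal sketch of the idea. It is not the repository's actual `PatchMerging` (which also supports configurable kernel, stride, padding, dilation and `auto_pad`); the tensor layout and the reduction layer are assumptions for illustration.

```python
import torch
import torch.nn as nn


class SimplePatchMerging(nn.Module):
    """Merge 2x2 neighbouring patches and reduce the channel dimension."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # nn.Unfold gathers each 2x2 window into one vector of length 4*C.
        self.sampler = nn.Unfold(kernel_size=2, stride=2)
        self.norm = nn.LayerNorm(4 * in_channels)
        self.reduction = nn.Linear(4 * in_channels, out_channels, bias=False)

    def forward(self, x, hw_shape):
        B, L, C = x.shape
        H, W = hw_shape
        x = x.transpose(1, 2).reshape(B, C, H, W)
        x = self.sampler(x)        # (B, 4*C, H/2 * W/2)
        x = x.transpose(1, 2)      # (B, L/4, 4*C)
        x = self.reduction(self.norm(x))
        return x, (H // 2, W // 2)


# e.g. a 56x56 feature map with 96 channels -> 28x28 with 192 channels
feat = torch.randn(2, 56 * 56, 96)
out, shape = SimplePatchMerging(96, 192)(feat, (56, 56))
print(out.shape, shape)  # torch.Size([2, 784, 192]) (28, 28)
```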
Mingqiang Ning 4ebee155e8
fix a bug when samples_per_gpu==1 (#311) 2021-06-30 20:57:21 +08:00
whcao bee0ac6b56
[Refactor]Modify patchembed (#330)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* add patchembed and hybridembed

* add patchembed and hybridembed to __init__

* test patchembed and hybridembed

* fix some comments
2021-06-30 20:48:04 +08:00
Ezra-Yu 06bbd6940d
[Fix] Fix init_weights bug in some backbones and update Readme (#318)
* some init_weights bugs

* init_weights

* add import Basemodule

* code stye

* isort

* Use recommend init_weights override method.

Add init_cfg parameter in InvertedResidual.

Remove some useless comment.

Co-authored-by: mzr1996 <mzr1996@163.com>
2021-06-30 19:41:58 +08:00
Ma Zerun 19cfb25e5e
Use parameter default value to control default behavior of init_cfg in `LinearClsHead` and `MultiLabelLinearClsHead` (#319)

And remove the verbose `_init_layers` method of `LinearClsHead` and
`MultiLabelLinearClsHead`.
2021-06-30 19:13:27 +08:00
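
The commit above expresses the heads' default initialization through Python's ordinary default-argument mechanism instead of a separate `_init_layers` method. A minimal sketch of the pattern; the class name and the exact default `init_cfg` dict are illustrative.

```python
import torch.nn as nn


class LinearHead(nn.Module):
    """Sketch: control the default init behaviour via the parameter default."""

    # Passing nothing uses the default normal init below; passing
    # init_cfg=None (or another dict) overrides it explicitly.
    def __init__(self, num_classes, in_channels,
                 init_cfg=dict(type='Normal', layer='Linear', std=0.01)):
        super().__init__()
        self.init_cfg = init_cfg
        self.fc = nn.Linear(in_channels, num_classes)


head = LinearHead(num_classes=1000, in_channels=2048)  # uses the default init_cfg
```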
Ma Zerun 65410b05ad
Fix Mobilenetv3 structure and add pretrained model (#291)
* Refactor Mobilenetv3 structure and add ConvClsHead.

* Change model's name from 'MobileNetv3' to 'MobileNetV3'

* Modify configs for MobileNetV3 on CIFAR10.

And add MobileNetV3 configs for imagenet

* Fix activate setting bugs in MobileNetV3.

And remove bias in SELayer.

* Modify unittest

* Remove useless config and file.

* Fix mobilenetv3-large arch setting

* Add dropout option in ConvClsHead

* Fix MobilenetV3 structure according to torchvision version.

1. Remove the with_expand_conv option in InvertedResidual; it should be decided by the channels.

2. Revert the activation function; it should come before the SE layer.

* Format code.

* Rename MobilenetV3 arch "big" to "large".

* Add mobilenetv3_small torchvision training recipe

* Modify default `out_indices` of MobilenetV3; now it changes according to `arch` if not specified.

* Add MobilenetV3 large config.

* Add mobilenetv3 README

* Modify InvertedResidual unit test.

* Refactor ConvClsHead to StackedLinearClsHead, and add unit tests.

* Add unit test for `simple_test` of `StackedLinearClsHead`.

* Fix typo

Co-authored-by: Yidi Shao <ydshao@smail.nju.edu.cn>
2021-06-27 23:19:36 +08:00
Ma Zerun 53c0df271f
Fix magnitude_std bug in RandAugment, and update unit tests. (#309) 2021-06-21 11:25:11 +08:00
whcao 3a08db9182
[Feature]Add augments to models/utils (#278)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* add rand_bbox_minmax, rand_bbox and cutmix_bbox_and_lam to BaseCutMixLayer

* add mixup_prob to BatchMixupLayer

* add cutmixup

* add cutmixup to __init__

* test classifier with cutmixup

* delete some comments

* set mixup_prob default to 1.0

* add cutmixup to classifier

* use cutmixup

* use cutmixup

* fix bugs

* test cutmixup

* move mixup and cutmix to augment

* inherit from BaseAugment

* add BaseAugment

* inherit from BaseAugment

* rename identity.py

* add @

* build augment

* register module

* rename to augment.py

* delete cutmixup.py

* do not inherit from BaseAugment

* add augments

* use augments in classifier

* prob default to 1.0

* add comments

* use augments

* use augments

* assert that the sum of augmentation probabilities equals 1

* augmentation probabilities equal to 1

* calculate Identity prob

* replace xxx with self.xxx

* add comments

* sync with augments

* for BC-breaking

* delete useless comments in mixup.py
2021-06-20 09:44:51 +08:00
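
A minimal sketch of the batch-augment selection logic described in the bullets above: the user-specified probabilities must sum to at most 1, and the remaining probability mass is treated as the identity augment. Class, function and argument names here are illustrative, not the actual mmcls API.

```python
import numpy as np
import torch


class RandomBatchAugment:
    """Pick one batch augment per step according to its probability."""

    def __init__(self, augments, probs):
        assert len(augments) == len(probs)
        total = sum(probs)
        assert 0 <= total <= 1, 'augment probabilities must sum to at most 1'
        # The leftover probability mass goes to the identity augment.
        self.augments = list(augments) + [lambda img, label: (img, label)]
        self.probs = list(probs) + [1 - total]

    def __call__(self, img, label):
        index = np.random.choice(len(self.augments), p=self.probs)
        return self.augments[index](img, label)


def mixup(img, label, alpha=0.2):
    # Simple batch mixup on one-hot labels.
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(img.size(0))
    return lam * img + (1 - lam) * img[perm], lam * label + (1 - lam) * label[perm]


aug = RandomBatchAugment([mixup], probs=[0.5])
imgs = torch.randn(4, 3, 32, 32)
labels = torch.eye(10)[torch.randint(10, (4,))]
imgs, labels = aug(imgs, labels)
```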
whcao b9879a4667
[Bug]Fix linearclshead (#307)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* fix init_cfg bug
2021-06-16 00:37:16 +08:00
whcao d62f198d2b
[Feature]Support custom hooks (#305)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* register custom_hooks in runner

* set custom_hooks_config to cfg.get('custom_hooks', None)
2021-06-15 21:09:58 +08:00
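
As context for the bullets above: hooks listed under the `custom_hooks` key of the config are read with `cfg.get('custom_hooks', None)` and registered on the runner. A hedged config snippet; the hook type and its arguments are examples, not something this PR adds.

```python
custom_hooks = [
    dict(type='EMAHook', momentum=0.0002),
]
```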
whcao 438c9da6eb
[Feature]Fix linear cls head (#303)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* move init_cfg to parameter

* isort

* Use a sentinel value to denote the default init_cfg
2021-06-15 21:08:30 +08:00
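
A minimal sketch of the sentinel-value idea in the last bullet: a unique object distinguishes "argument not given" (fall back to the head's default init_cfg) from an explicit `init_cfg=None`. The names below are illustrative.

```python
_SENTINEL = object()


class Head:
    """Sketch: use a sentinel to detect whether init_cfg was passed."""

    DEFAULT_INIT = dict(type='Normal', layer='Linear', std=0.01)

    def __init__(self, init_cfg=_SENTINEL):
        if init_cfg is _SENTINEL:
            # Argument omitted: use the head's default initialization.
            init_cfg = self.DEFAULT_INIT
        self.init_cfg = init_cfg


print(Head().init_cfg)               # falls back to the default init
print(Head(init_cfg=None).init_cfg)  # explicitly disabled -> None
```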
AllentDan a24a9f6faa
[Fix] Build compatible with low pytorch versions (#301)
* add version compatible for torchscript

* doc

* doc again

* fix lint

* fix lint isort
2021-06-14 23:25:35 +08:00
WRH b99bd4fa88
Fix bug for CPU training (#286)
* remove MMDataParallel when using cpu

* support cpu testing

* fix lint
2021-06-12 22:26:33 +08:00
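
A rough sketch of the CPU-friendly wrapping logic described above: only wrap the model in `MMDataParallel` when GPUs are actually used. The helper name and its arguments are assumptions, not the script's actual code.

```python
import torch
from mmcv.parallel import MMDataParallel


def wrap_model(model, device='cuda', gpu_ids=(0,)):
    # On CPU (or when CUDA is unavailable) return the bare model; the
    # parallel wrapper is only needed for GPU training/testing.
    if device == 'cpu' or not torch.cuda.is_available():
        return model
    return MMDataParallel(model.cuda(gpu_ids[0]), device_ids=list(gpu_ids))
```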
AllentDan c2f01e0dcd
[Feature] Add torchscript deployment (#279)
* add torchscript deploy

* fix lint

* add check and delete \
2021-06-12 21:50:48 +08:00
q.yao dbddde52ef
[Feature] TensorRT test tools. (#284)
* first commit

* update resnext result

* update docs

* update docstring
2021-06-12 21:47:10 +08:00
whcao 5e1a02103f
[Feature]Delete comments (#298)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete comments
2021-06-12 21:45:22 +08:00
Ma Zerun 84a939f858
Refactor LabelSmoothLoss (#285)
* Refactor label smooth loss; now supports modes `original`, `classy_vision` and `multi_label`.

* Add unittests for label smooth loss.

* Improve docstring of LSR
2021-06-12 21:32:18 +08:00
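
For reference, a minimal sketch of the `original` label-smoothing mode mentioned above; the actual loss also implements the `classy_vision` and `multi_label` variants plus reduction and weighting options.

```python
import torch
import torch.nn.functional as F


def label_smooth_loss(logits, target, label_smooth_val=0.1, num_classes=10):
    # Mix the one-hot target with a uniform distribution, then take the
    # cross entropy against the smoothed target.
    one_hot = F.one_hot(target, num_classes).float()
    smooth = one_hot * (1 - label_smooth_val) + label_smooth_val / num_classes
    log_prob = F.log_softmax(logits, dim=1)
    return -(smooth * log_prob).sum(dim=1).mean()


loss = label_smooth_loss(torch.randn(4, 10), torch.randint(10, (4,)))
```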
LXXXXR 6de635a81c
[Bug] Missing test data when num_imgs cannot be evenly divided by num_gpus (#299)
* fix bug in test

* remove unnecessary code
2021-06-11 15:53:23 +08:00
Miao Zheng 4ca21c7d03
[WIP] Refactoring weights initialization (#270)
* [WIP] Refactoring weights initialization

* fix lint and constant init cfg

* fix pretrained bug

* fix typo

* fix isort

* revise model utils
2021-06-10 10:54:34 +08:00
LXXXXR 27a49a9646
bump version to v0.12.0 (#287) 2021-06-03 11:42:34 +08:00
LXXXXR 2c9e12f850
[Feature] Add an argument `efficientnet_style` to `RandomResizedCrop` and `CenterCrop` (#268)
* add config for resnest test

* fix config

* add label smoothing

* add memcached

* minor fix

* fix bug

* fix config

* add config

* minor fix

* fix configs

* use EResize

* change interpolation

* add more configs

* add docstring

* add unittest

* remove unnecessary changes

* minor fix

* add more docstring

* fix linting

* refactor

* add resize in crop to ensure the crop size matches the output size

* fix bug and add comments

* fix bug
2021-05-31 14:10:57 +08:00
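
A hedged example of how the new argument might appear in a test pipeline; aside from `efficientnet_style` itself, the other keys and values are illustrative.

```python
test_pipeline = [
    dict(type='LoadImageFromFile'),
    # efficientnet_style=True uses the EfficientNet-style center crop
    # (crop first, then resize so the crop size matches the output size).
    dict(type='CenterCrop', crop_size=224, efficientnet_style=True),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='Collect', keys=['img']),
]
```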
LXXXXR b23b319f56
fix version (#276) 2021-05-29 10:47:16 +08:00
LXXXXR bd9411d743
[Bug] Download dataset only on rank 0 (#273)
* only download dataset on rank 0

* download only on rank 0

* fix bug

* fix error message
2021-05-29 10:45:58 +08:00
Yinhao Li b67a11c548
Fix kwargss to kwargs. (#274) 2021-05-26 19:27:30 +08:00
LXXXXR dac090162d
Bump version to v0.11.1 (#256)
* bump version to v0.11.1

* minor fix

* minor fix

* minor fix

* minor fix
2021-05-21 16:36:08 +08:00
Y. Xiong 82e3937174
[Fix] Only allow directory operation when rank==0 when testing (#258)
* only allow dir operation when rank==0

* move check dir to multi_gpu_test
2021-05-21 14:04:46 +08:00
Ma Zerun 09597e5a4c
Add transform `RandomErasing` (#248)
* Add transform `RandomErasing`.

* Add unittests of `RandomErasing`

* Fix typo in docstring

* Improve docstring and unittests.
2021-05-19 22:35:26 +08:00
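
A minimal NumPy sketch of what the `RandomErasing` transform does; the argument names are illustrative, and the actual transform offers more options (e.g. different fill modes).

```python
import numpy as np


def random_erase(img, erase_prob=0.5, min_area=0.02, max_area=0.33, fill=128):
    # With probability erase_prob, overwrite a random rectangle of the
    # HxWxC image with a constant fill value.
    if np.random.rand() > erase_prob:
        return img
    h, w = img.shape[:2]
    area = np.random.uniform(min_area, max_area) * h * w
    ratio = np.random.uniform(0.3, 1 / 0.3)  # aspect ratio of the erased box
    eh = int(round(np.sqrt(area * ratio)))
    ew = int(round(np.sqrt(area / ratio)))
    if eh >= h or ew >= w:
        return img
    top = np.random.randint(0, h - eh)
    left = np.random.randint(0, w - ew)
    img = img.copy()
    img[top:top + eh, left:left + ew] = fill
    return img


img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
out = random_erase(img)
```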
LXXXXR dc296f64c6
[Fix] Fix multi-node test tmp dir (#251)
* fix multi-node test tmp dir

* fix mmcv version
2021-05-16 21:53:13 +08:00
Wenwei Zhang 5ee08767f2
inherits mmcv registry (#252) 2021-05-14 23:36:56 +08:00
LXXXXR 8c90a879ce
[Fix] Fix magnitude_range in RandAug (#249)
* add increasing in solarize and posterize

* fix linting

* Revert "add increasing in solarize and posterize"

This reverts commit 128af36e9b.

* revise according to comments
2021-05-12 15:21:55 +08:00
mzr1996 a3b8d6015d
[Feature] Add RandAUG magnitude noise (#240)
* Add parameter magnitude_std in RandAugment to allow random movement of magnitude_value

* Add unittest for magnitude_std

* Improve docstring of magnitude_std
2021-05-10 17:13:41 +08:00
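
The idea behind `magnitude_std`, sketched below; the exact sampling and clamping rule inside RandAugment may differ.

```python
import numpy as np


def sample_magnitude(magnitude_level, magnitude_std, total_level=30):
    # Jitter the configured magnitude with Gaussian noise of the given std,
    # then clamp back into the valid range.
    magnitude = magnitude_level
    if magnitude_std > 0:
        magnitude = np.random.normal(magnitude_level, magnitude_std)
    return float(np.clip(magnitude, 0, total_level))


print(sample_magnitude(10, magnitude_std=0.5))
```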
Y. Xiong 22b40696d6
add tempdir check (#242) 2021-05-10 17:12:43 +08:00
mzr1996 8128900a12
GlobalAveragePooling support 1d, 2d and 3d by param, and add neck test (#236)
* GlobalAveragePooling support 1d, 2d and 3d by param, and add neck test

* Improve neck test

* Change 'mode' attribute in GAP to 'dim', and add docstring
2021-05-10 15:00:50 +08:00
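
A minimal sketch of dimension-aware global average pooling as described above: the `dim` argument (named `mode` before the rename) picks the matching adaptive pooling op.

```python
import torch
import torch.nn as nn


class GlobalAveragePooling(nn.Module):
    """Sketch: select AdaptiveAvgPool1d/2d/3d according to `dim`."""

    def __init__(self, dim=2):
        super().__init__()
        assert dim in (1, 2, 3)
        pools = {1: nn.AdaptiveAvgPool1d(1),
                 2: nn.AdaptiveAvgPool2d((1, 1)),
                 3: nn.AdaptiveAvgPool3d((1, 1, 1))}
        self.gap = pools[dim]

    def forward(self, x):
        return self.gap(x).flatten(1)


print(GlobalAveragePooling(dim=2)(torch.randn(2, 512, 7, 7)).shape)  # (2, 512)
```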
whcao b30f79ea4c
[Feature]Modify Parameters Passing in models.heads (#239)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* set cal_acc in ClsHead defaults to False

* set cal_acc defaults to False

* use *args, **kwargs instead

* change bs16 to 3 in test_image_classifier_vit

* fix some comments

* change cal_acc=True

* test LinearClsHead
2021-05-10 14:56:55 +08:00
LXXXXR 37167158e7
bump version to v0.11.0 (#233) 2021-05-01 22:26:39 +08:00
whcao 16947f1239
[Bug]Fix weight decay (#227)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unit test

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add AutoAugment

* fix weight decay in vit

* change eval interval to 10

* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete @torch.jit.ignore

* change eval interval back to 1

* add some comments to imagenet_bs2048_AdamW

* add some comments
2021-04-28 17:16:43 +08:00
QingChuanWS 01d2849b76
[Feature]: add onnxruntime test tool (#212)
* [draft] add onnxruntime accuracy verification

* fix a bug

* update code

* fix lint

* fix lint

* update code and doc

* update doc

* update code

* update code

* update doc and update code

* update doc and fix some bug

* update doc

* update doc

* update doc

* update doc

* update doc

* update doc

* fix bug

* update doc

* update code

* move CUDAExecutionProvider to first place

* update resnext accuracy

* update doc

Co-authored-by: maningsheng <maningsheng@sensetime.com>
2021-04-26 13:57:08 +08:00
WRH 9be435846c
Support training on CPU (#219)
* draft

* add parameter for training tools

* Update .pre-commit-config.yaml
2021-04-26 13:56:45 +08:00
whcao 31a6a362ba
Add some vit configs (#217)
* add vit_base_patch32_384_finetune.py

* add vit_base_patch32_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch16_384_finetune.py to models

* add vit_large_patch16_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch32_384_finetune to models

* add vit_large_patch32_384_finetune_imagenet to vision_transformer

* add vit_large_patch16_224_finetune.py to models

* add vit_large_patch16_224_finetune_imagenet.py to vision_transformer

* delete some useless comments
2021-04-20 11:32:20 +08:00
whcao affb39fe07
[Feature]Add Vit (#214)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unit test

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add helpers.py

* test vit hybrid backbone

* fix HybridEmbed

* use to_2tuple instead
2021-04-16 19:22:41 +08:00
LXXXXR 7d618e6606
[Fix] Fix version (#209)
* fix version

* add projects in openmmlab

* minor fix

* empty

* add mmocr

* empty

* empty

* fix linting
2021-04-16 19:07:17 +08:00
agim-a 3affc481c8
[Fix] check for CLASSES in checkpoint meta (#207)
- check for CLASSES in checkpoint meta when the key `meta` does not exist
2021-04-15 22:19:23 +08:00
whcao 1cde6f6e65
[Feature] Add cutmix option (#198)
* Add cutmix option

* fix code style

* add some annotations

* add annotation about custom_hooks

* check constraint of alpha > 0

* add test cutmix

* fix code style

* add cutmix to configs/models

* add cutmix to configs/resnet

* flake8

* empty
2021-04-14 21:27:42 +08:00
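
A minimal sketch of the CutMix operation added above: paste a random box from a shuffled copy of the batch and mix the one-hot labels by the box's area ratio, keeping the `alpha > 0` constraint mentioned in the commit. Function and argument names are illustrative, not the actual mmcls API.

```python
import numpy as np
import torch


def cutmix(img, label, alpha=1.0):
    assert alpha > 0
    lam = np.random.beta(alpha, alpha)
    B, _, H, W = img.shape
    perm = torch.randperm(B)
    # Box side lengths follow sqrt(1 - lam), so the box area ratio is ~1 - lam.
    cut_h, cut_w = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = np.clip(cy - cut_h // 2, 0, H), np.clip(cy + cut_h // 2, 0, H)
    x1, x2 = np.clip(cx - cut_w // 2, 0, W), np.clip(cx + cut_w // 2, 0, W)
    img[:, :, y1:y2, x1:x2] = img[perm, :, y1:y2, x1:x2]
    # Recompute lam from the actual (clipped) box area.
    lam = 1.0 - float((y2 - y1) * (x2 - x1)) / (H * W)
    return img, lam * label + (1 - lam) * label[perm]


imgs = torch.randn(4, 3, 32, 32)
labels = torch.eye(10)[torch.randint(10, (4,))]
imgs, labels = cutmix(imgs, labels)
```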
mzr1996 b7b520881f
Update CONTRIBUTING.md according to mmcv (#210)
* Update CONTRIBUTING.md according to mmcv

* Docstring formatting by docformatter

* Update openmmlab website.
2021-04-14 21:22:37 +08:00
whcao af83e981ac
[Bug]Fix label smooth bug (#203)
* add convert_to_one_hot

* add test_label_smooth_loss

* add my label_smooth_loss

* fix CELoss bug

* test new label smooth loss

* LabelSmoothLoss backward compatibility

* add some comments

* remove the old version of LabelSmoothLoss

* add some comments

* add some comments

* add some comments

* add label smooth to config
2021-04-13 13:53:56 +08:00
whcao dcf61173f6
[Feature]Add cal_acc to cls_head.py (#206)
* add cal_acc to cls_head.py

* test ClsHead with cal_acc

* 4 spaces indentation
2021-04-13 13:52:14 +08:00
LXXXXR 5195932952
[Feature] Support random augmentation (#201)
* support random augmentation

* minor fix on posterize

* minor fix on posterize

* minor fix on cutout

* minor fix on cutout

* fix bug in solarize add

* revised according to comments
2021-04-09 14:02:50 +08:00
LXXXXR 4d1fb1a662
[Feature] ColorJitter and Lighting (#190)
* add configs

* remove config

* add color jitter and lighting

* revised according to comments
2021-04-02 19:23:39 +08:00
LXXXXR 1f6549eeee
bump version to 0.10.0 (#194) 2021-04-01 10:39:18 +08:00
LXXXXR 93cd960466
[Feature] Support AutoAug, AutoContrast, Equalize, Contrast, Brightness and Sharpness (#179)
* add AutoContrast, Equalize, Contrast, Brightness and Sharpness pipelines

* add ImageNetPolicy

* add configs

* add unittest

* remove config

* rerun CI

* rerun CI

* [Fix] Update pip install mmcv command in ci (#187)

* update pip install mmcv command in ci

* update pip install mmcv command in ci

* fix ci

* fix ci
2021-03-30 15:38:55 +08:00