Commit Graph

93 Commits (f9eb9b409b65361167446c7c28e240f64aa7cf18)

Author SHA1 Message Date
Ma Zerun f9eb9b409b
[Docs] Add Copyright information. (#413) 2021-08-17 19:52:42 +08:00
whcao 359f56ad58
[Feature] Add Transformer in Transformer (#339)
* add tnt_small configs

* add tnt backbone

* test tnt

* add tnt to model_zoo

* rename the config file name

* add optimizer

* move tnt backbone unittest

* add metric

* fix keyname in arch

* encapsulate "inner transformer block" and "outer transformer block"

* fix TnT

* Use `inner_block_cfg` and `outer_block_cfg` instead of `args` and
`kwargs`.

Co-authored-by: mzr1996 <mzr1996@163.com>
2021-07-30 09:16:27 +08:00
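
For context on the `inner_block_cfg` / `outer_block_cfg` item in the commit above, the sketch below shows one way a TNT block can be driven by two explicit config dicts instead of `*args`/`**kwargs`. This is an illustrative PyTorch sketch, not the code merged in #339; the names `TransformerBlock`, `TnTBlock`, `num_pixels` and `ffn_ratio` are assumptions.

```python
import torch
import torch.nn as nn


class TransformerBlock(nn.Module):
    """Plain pre-norm transformer block reused for both levels (illustrative)."""

    def __init__(self, dim, num_heads, ffn_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * ffn_ratio), nn.GELU(),
            nn.Linear(dim * ffn_ratio, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ffn(self.norm2(x))


class TnTBlock(nn.Module):
    """Inner block refines pixel (word) embeddings, outer block refines patch
    (sentence) embeddings; the inner result is projected and added to the
    patch tokens before the outer block runs."""

    def __init__(self, inner_block_cfg, outer_block_cfg, num_pixels):
        super().__init__()
        self.inner = TransformerBlock(**inner_block_cfg)
        self.outer = TransformerBlock(**outer_block_cfg)
        self.proj = nn.Linear(inner_block_cfg['dim'] * num_pixels,
                              outer_block_cfg['dim'])

    def forward(self, pixel_embed, patch_embed):
        # pixel_embed: (B * num_patches, num_pixels, inner_dim)
        # patch_embed: (B, num_patches + 1, outer_dim); token 0 is the cls token
        pixel_embed = self.inner(pixel_embed)
        b, n = patch_embed.shape[0], patch_embed.shape[1] - 1
        patch_embed = torch.cat([
            patch_embed[:, :1],
            patch_embed[:, 1:] + self.proj(pixel_embed.reshape(b, n, -1)),
        ], dim=1)
        return pixel_embed, self.outer(patch_embed)


# e.g. block = TnTBlock(dict(dim=24, num_heads=4), dict(dim=384, num_heads=6),
#                       num_pixels=16)
```
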
Junjun2016 e3b77e7508
[Fix] Fix `patch_cfg` argument bug in SwinTransformer (#368) 2021-07-27 19:19:44 +08:00
Junjun2016 1585ca83c8
[Fix] Fix docstring typo and init error in ShuffleNetV1 (#374) 2021-07-27 19:17:42 +08:00
Ma Zerun 899047a3b3
Fix duplicate `init_weights` call in ViT init function. (#373) 2021-07-26 05:33:11 -04:00
Ma Zerun a97ccd5579
[Fix] Fix swin transformer config (#355)
* Fix config bug in swin

* Format config and checkpoint name of swin transformer.

* Fix link in model zoo
2021-07-14 15:10:20 +08:00
Ma Zerun d04ebc1eb5
[Docs] Add API Reference in the docs (#342)
* Add API reference in the docs and fix readthedocs config.

* Replace some relative links in docs.

* Format docstring for reStructuredText syntax.

* Fix vit paper link

* Fix docstring of `show_results` function in `BaseClassifier`.
2021-07-14 15:06:50 +08:00
Ma Zerun 076ee10cac
[Feature] Add swin-transformer model. (#271)
* Add swin transformer archs S, B and L.

* Add SwinTransformer configs

* Add train config files of swin.

* Align init method with original code

* Use nn.Unfold to merge patch

* Change all ConfigDict to dict

* Add init_cfg for all subclasses of BaseModule.

* Use mmcv version init function

* Add Swin README

* Use safer cfg copy method

* Improve docstring and variable name.

* Fix some difference in randaug

Fix BGR bug, align scheduler config.

Fix label smoothing parameter difference.

* Fix missing droppath in attn

* Fix bug of relative position table if window width is not equal to height.

* Make `PatchMerging` more general, support kernel, stride, padding and
dilation.

* Rename `residual` to `identity` in attention and FFN.

* Add `auto_pad` option to automatically pad the feature map

* Improve docstring.

* Fix bug in ShiftWMSA padding.

* Remove unused `key` and `value` in ShiftWMSA

* Move `PatchMerging` into utils and use common `PatchEmbed`.

* Use latest `LinearClsHead`, training augments and label smoothing settings,
and remove the original `SwinLinearClsHead`.

* Mark some configs as "Evaluation Only".

* Remove useless comment in config

* 1. Move ShiftWindowMSA and WindowMSA to `utils/attention.py`
2. Add docstrings of each module.
3. Fix some variables' names.
4. Other small improvement.

* Add unit tests of swin-transformer and patchmerging.

* Fix some bugs in unit tests.

* Fix bug of rel_position_index if window is not square.

* Make WindowMSA implicit, and add unit tests.

* Add metafile.yml, update readme and model_zoo.
2021-07-01 09:30:42 +08:00
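
The "Use nn.Unfold to merge patch" and "Make `PatchMerging` more general" items in the commit above refer to the patch-merging downsample step. The sketch below only illustrates the idea under simplified assumptions (kernel == stride, no padding or dilation, input height/width divisible by the kernel); it is not the merged implementation, and the attribute names such as `sampler` and `hw_shape` are illustrative.

```python
import torch
import torch.nn as nn


class PatchMerging(nn.Module):
    """Downsample a (B, H*W, C) token sequence by gathering k x k neighbourhoods
    with nn.Unfold and projecting each group to a new embedding dimension."""

    def __init__(self, in_channels, out_channels, kernel_size=2):
        super().__init__()
        self.kernel_size = kernel_size
        self.sampler = nn.Unfold(kernel_size=kernel_size, stride=kernel_size)
        sample_dim = in_channels * kernel_size ** 2
        self.norm = nn.LayerNorm(sample_dim)
        self.reduction = nn.Linear(sample_dim, out_channels, bias=False)

    def forward(self, x, hw_shape):
        B, L, C = x.shape
        H, W = hw_shape
        x = x.transpose(1, 2).reshape(B, C, H, W)
        x = self.sampler(x)           # (B, C * k * k, H/k * W/k)
        x = x.transpose(1, 2)         # (B, H/k * W/k, C * k * k)
        out_hw = (H // self.kernel_size, W // self.kernel_size)
        return self.reduction(self.norm(x)), out_hw


# e.g. merge = PatchMerging(96, 192)
#      y, hw = merge(torch.rand(2, 56 * 56, 96), (56, 56))   # y: (2, 784, 192)
```
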
Ezra-Yu 06bbd6940d
[Fix] Fix init_weights bug in some backbones and update Readme (#318)
* fix some init_weights bugs

* init_weights

* add import Basemodule

* code style

* isort

* Use the recommended init_weights override method.

Add init_cfg parameter in InvertedResidual.

Remove some useless comment.

Co-authored-by: mzr1996 <mzr1996@163.com>
2021-06-30 19:41:58 +08:00
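
The "recommended init_weights override method" mentioned in the commit above follows mmcv's `BaseModule` convention (initialization driven by `init_cfg`, with `super().init_weights()` called before any extra logic). A minimal sketch of that pattern follows; the backbone name and the init types are hypothetical, chosen only for illustration.

```python
import torch.nn as nn
from mmcv.runner import BaseModule


class ExampleBackbone(BaseModule):
    """Hypothetical backbone showing the init_cfg + init_weights pattern."""

    def __init__(self, init_cfg=dict(type='Kaiming', layer='Conv2d')):
        super().__init__(init_cfg=init_cfg)
        self.conv = nn.Conv2d(3, 16, 3)

    def init_weights(self):
        # Let BaseModule apply init_cfg (or load a pretrained checkpoint)
        # first, then add any extra layer-specific initialization on top.
        super().init_weights()
        nn.init.zeros_(self.conv.bias)
```
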
Ma Zerun 65410b05ad
Fix MobileNetV3 structure and add pretrained model (#291)
* Refactor Mobilenetv3 structure and add ConvClsHead.

* Change model's name from 'MobileNetv3' to 'MobileNetV3'

* Modify configs for MobileNetV3 on CIFAR10.

And add MobileNetV3 configs for imagenet

* Fix activation setting bugs in MobileNetV3.

And remove bias in SELayer.

* Modify unittest

* Remove useless config and file.

* Fix mobilenetv3-large arch setting

* Add dropout option in ConvClsHead

* Fix MobilenetV3 structure according to torchvision version.

1. Remove the with_expand_conv option in InvertedResidual; whether to expand should be decided by the channels.

2. Revert the activation function; it should come before the SE layer.

* Format code.

* Rename MobilenetV3 arch "big" to "large".

* Add mobilenetv3_small torchvision training recipe

* Modify default `out_indices` of MobilenetV3; now it changes according to
`arch` if not specified.

* Add MobilenetV3 large config.

* Add mobilenetv3 README

* Modify InvertedResidual unit test.

* Refactor ConvClsHead to StackedLinearClsHead, and add unit tests.

* Add unit test for `simple_test` of `StackedLinearClsHead`.

* Fix typo

Co-authored-by: Yidi Shao <ydshao@smail.nju.edu.cn>
2021-06-27 23:19:36 +08:00
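
To make the two structural points in the commit above concrete (expansion conv created only when the channels require it, and the depthwise activation placed before the SE layer), here is a simplified PyTorch sketch. It is not the torchvision-aligned code from #291; details such as the activation choices and the SE ratio are assumptions.

```python
import torch
import torch.nn as nn


class SELayer(nn.Module):
    """Squeeze-and-Excitation without bias in its convs (per the commit note)."""

    def __init__(self, channels, ratio=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // ratio, 1, bias=False), nn.ReLU(),
            nn.Conv2d(channels // ratio, channels, 1, bias=False),
            nn.Hardsigmoid())

    def forward(self, x):
        return x * self.fc(self.pool(x))


class InvertedResidual(nn.Module):
    """MobileNetV3-style block: the 1x1 expansion conv exists only when
    mid_channels != in_channels (no with_expand_conv flag), and the depthwise
    activation runs before the SE layer."""

    def __init__(self, in_channels, mid_channels, out_channels,
                 kernel_size=3, stride=1, with_se=True, act=nn.Hardswish):
        super().__init__()
        self.with_res = stride == 1 and in_channels == out_channels
        layers = []
        if mid_channels != in_channels:          # expansion decided by channels
            layers += [nn.Conv2d(in_channels, mid_channels, 1, bias=False),
                       nn.BatchNorm2d(mid_channels), act()]
        layers += [nn.Conv2d(mid_channels, mid_channels, kernel_size, stride,
                             kernel_size // 2, groups=mid_channels, bias=False),
                   nn.BatchNorm2d(mid_channels), act()]   # activation first ...
        if with_se:
            layers.append(SELayer(mid_channels))          # ... then SE
        layers += [nn.Conv2d(mid_channels, out_channels, 1, bias=False),
                   nn.BatchNorm2d(out_channels)]
        self.conv = nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv(x)
        return x + out if self.with_res else out
```
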
whcao 5e1a02103f
[Feature] Delete comments (#298)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete comments
2021-06-12 21:45:22 +08:00
Miao Zheng 4ca21c7d03
[WIP] Refactoring weights initialization (#270)
* [WIP] Refactoring weights initialization

* fix lint and constant init cfg

* fix pretrained bug

* fix typo

* fix isort

* revise model utils
2021-06-10 10:54:34 +08:00
whcao 16947f1239
[Bug] Fix weight decay (#227)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unittest

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add AutoAugment

* fix weight decay in vit

* change eval interval to 10

* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete @torch.jit.ignore

* change eval interval back to 1

* add some comments to imagenet_bs2048_AdamW

* add some comments
2021-04-28 17:16:43 +08:00
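
The weight-decay fix and the `imagenet_bs2048_AdamW.py` schedule in the commit above revolve around keeping decay off parameters that should not be regularized. Below is a hedged config sketch in mmcv's DefaultOptimizerConstructor style; the learning rate, decay value, and the keys under `custom_keys` are illustrative, not the values from #227.

```python
# optimizer settings (mmcv DefaultOptimizerConstructor style; values illustrative)
optimizer = dict(
    type='AdamW',
    lr=0.003,
    weight_decay=0.3,
    paramwise_cfg=dict(
        norm_decay_mult=0.0,   # no weight decay on normalization layers
        bias_decay_mult=0.0,   # no weight decay on bias terms
        custom_keys={
            '.cls_token': dict(decay_mult=0.0),   # key names are assumptions
            '.pos_embed': dict(decay_mult=0.0),
        }))
optimizer_config = dict(grad_clip=dict(max_norm=1.0))
```
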
whcao 31a6a362ba
Add some vit configs (#217)
* add vit_base_patch32_384_finetune.py

* add vit_base_patch32_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch16_384_finetune.py to models

* add vit_large_patch16_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch32_384_finetune to models

* add vit_large_patch32_384_finetune_imagenet to vision_transformer

* add vit_large_patch16_224_finetune.py to models

* add vit_large_patch16_224_finetune_imagenet.py to vision_transformer

* delete some useless comments
2021-04-20 11:32:20 +08:00
whcao affb39fe07
[Feature] Add ViT (#214)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unittest

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add helpers.py

* test vit hybrid backbone

* fix HybridEmbed

* use to_2tuple instead
2021-04-16 19:22:41 +08:00
mzr1996 b7b520881f
Update CONTRIBUTING.md according to mmcv (#210)
* Update CONTRIBUTING.md according to mmcv

* Docstring formatting by docformatter

* Update openmmlab website.
2021-04-14 21:22:37 +08:00
LXXXXR b98b317012
fix bug in vgg weight_init (#140) 2021-01-15 10:23:11 +08:00
LXXXXR 63f38988eb
[Fix] Fix optional issues in docstring (#138)
* fix optional issue in docstring

* revised according to comments

* add optional
2021-01-14 11:09:08 +08:00
Lei Yang bc1b08ba41
Add VGG and pretrained models (#27)
* add vgg

* add vgg model conversion tool

* fix out_indices and docstring

* add vgg models in configs

* add params, flops and accuracy in docs

* add pretrained models url

* use ConvModule and refine var names

* update vgg conversion tool

* modify bn config

* add docs for arch_setting

* add unit test for vgg

* rm debug code

* update vgg pretrained models
2020-09-29 17:49:42 +08:00
Jerry Jiarui XU 8d3acce307
Add ResNeSt (#25)
* Add ResNeSt

* fixed test

* refactor

* add ResNeSt base

* update modelzoo

* update modelzoo

* Add S-200, S-269 _base_
2020-09-22 10:41:51 +08:00
Xiaojie Li 7af9419ffa
Fix init_weights in 'shufflenet_v2.py'. (#29)
* fix init_weights in shufflenetv2

* fix doc

* fix doc

Co-authored-by: lixiaojie <lixiaojie@sensetime.com>
2020-08-13 09:49:41 +08:00
Lei Yang 408d92bcbe
revise docstring of AlexNet to correct input_size (#26) 2020-08-12 16:14:19 +08:00
yanglei 9a661ef981 Add ResNet_CIFAR 2020-07-12 00:06:56 +08:00
lixiaojie c1d0090700 Change the init_weight for shufflenet models 2020-07-12 00:06:56 +08:00
lixiaojie 1d5f3d8c24 update shufflenet_v1 2020-07-12 00:06:55 +08:00
lixiaojie f6260b33bf Add RandomCrop, RandomResizedCrop, RandomGrayscale, impad 2020-07-12 00:06:55 +08:00
yanglei 8995e16834 Add AlexNet and LeNet5 2020-07-12 00:06:54 +08:00
yanglei 17092d8be4 Fix the mid_channels of SEResNeXt 2020-07-07 11:25:48 +08:00
lixiaojie eb24a94b68 add mobilenetv2 convert code 2020-07-05 21:59:22 +08:00
lixiaojie 046aa5c003 add shufflenet_v2 convert code 2020-07-03 19:21:18 +08:00
chenkai 91a43c5bac Add RegNet 2020-07-01 14:21:45 +08:00
louzan 03b75789c6 Dev mobilenetv3 2020-06-30 15:50:36 +08:00
chenkai 02e11cc1f3 Refactoring for ResNet family 2020-06-25 11:57:50 +08:00
lixiaojie 50208498dd Update mobilenetv2 2020-06-22 19:40:31 +08:00
yangmingmin 29e17ab326 Dev se resnext 2020-06-19 11:45:42 +08:00
yangmingmin f729a60f87 Dev se resnet 2020-06-17 14:20:20 +08:00
chenkai c168aa786e Merge branch 'fix/resnet_zero_init_residual' into 'master'
Set default zero_init_residual of ResNet to True

See merge request open-mmlab/mmclassification!22
2020-06-16 22:09:48 +08:00
louzan deee5d61d7 add mobilenetv2 2020-06-16 14:37:03 +08:00
yanglei 800ace3137 set default zero_init_residual to True 2020-06-16 11:36:54 +08:00
lixiaojie fcb1ad80e4 merge master 2020-06-16 00:49:55 +08:00
lixiaojie 66e9bb017d fix unit test 2020-06-16 00:03:45 +08:00
lixiaojie 8060e0f620 merge master 2020-06-15 20:54:52 +08:00
lixiaojie 1ce116a898 Merge branch 'master' of gitlab.sz.sensetime.com:open-mmlab/mmclassification into dev_shufflenetv1 2020-06-15 20:52:27 +08:00
lixiaojie edceab13e3 modify self.layers 2020-06-15 20:52:17 +08:00
qianchen 5662daeea8 Merge branch 'dev_shufflenetv1' into 'master'
add shufflenetv1

See merge request open-mmlab/mmclassification!10
2020-06-15 20:49:44 +08:00
lixiaojie 8249ad7205 merge master 2020-06-15 20:46:30 +08:00
lixiaojie cf85db0f7c Merge branch 'master' of gitlab.sz.sensetime.com:open-mmlab/mmclassification into dev_shufflenetv2 2020-06-15 20:42:22 +08:00
lixiaojie 703714b78e modify format of self.layers 2020-06-15 20:42:04 +08:00
wangshiguang a1da2013ad add pat ci image 2020-06-15 17:43:40 +08:00
lixiaojie 3bf971238a fix grammar 2020-06-15 16:29:36 +08:00