Commit Graph

7 Commits (6eff94165c93705d1ef59f5742e8f4fb906451f7)

Author SHA1 Message Date
谢昕辰 dff7a968a3
[Fix] fix patch_embed and pos_embed mismatch error (#685)
* fix patch_embed and pos_embed mismatch error

* add docstring

* update unittest

* use downsampled image shape

* use tuple

* remove unused parameters and add doc

* fix init weights function

* revise docstring

* Update vit.py

If -> Whether

* fix lint

Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
2021-07-19 09:27:10 -07:00
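
The fix above resolves the mismatch by resizing the pretrained position embedding to the patch grid that patch_embed actually produces for the downsampled image. A minimal sketch of that idea (the function name `resize_pos_embed`, the bicubic mode, and the square-grid assumption are illustrative, not the exact mmsegmentation code):

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, grid_hw, num_extra_tokens=1, mode='bicubic'):
    """Interpolate a (1, 1 + H0*W0, C) position embedding to a new patch grid.

    pos_embed: pretrained positional embedding, class token first.
    grid_hw:   (H, W) of the feature map produced by patch_embed.
    """
    cls_pos = pos_embed[:, :num_extra_tokens]        # keep the class-token embedding
    patch_pos = pos_embed[:, num_extra_tokens:]      # (1, H0*W0, C)
    h, w = grid_hw
    h0 = w0 = int(patch_pos.shape[1] ** 0.5)         # assume a square pretrained grid
    patch_pos = patch_pos.reshape(1, h0, w0, -1).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(h, w), mode=mode, align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, h * w, -1)
    return torch.cat([cls_pos, patch_pos], dim=1)
```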
Ze Liu b6c7c77a08
[WIP] Add Swin Transformer (#511)
* add Swin Transformer

* add Swin Transformer

* fixed import

* Add some swin training settings.

* Fix some filename errors.

* Fix attribute name: pretrain -> pretrained

* Upload mmcls implementation of swin transformer.

* Refactor Swin Transformer to follow mmcls style.

* Refactor init_weights of swin_transformer.py

* Fix lint

* Match inference precision

* Add some comments

* Add swin_convert to load official style ckpt

* Remove arg: auto_pad

* 1. Complete comments for each block;

2. Correct the weight conversion function;

3. Fix the padding of Patch Merging;

* Clean function args.

* Fix vit unit test.

* 1. Add swin transformer unit tests;

2. Fix some padding bugs;

3. Modify configs to adapt to the new swin implementation;

* Modify config arg

* Update readme.md of swin

* Fix config arg errors and add some swin benchmark info.

* Add memory (Mem) and multi-scale (ms) test content to readme.md of swin transformer.

* Fix doc string of swin module

* 1. Register swin transformer in the model list;

2. Modify the pth url to one that keeps the meta attribute;

* Update swin.py

* Merge config settings.

* Modify config style.

* Update README.md

Add ViT link

* Modify main readme.md

Co-authored-by: Jiarui XU <xvjiarui0826@gmail.com>
Co-authored-by: sennnnn <201730271412@mail.scut.edu.cn>
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
2021-07-01 23:41:55 +08:00
sennnnn 0c4c3b790d
[Fix] Fix some vit init bugs (#609)
* [Fix] Fix vit init bug

* Add some vit unit tests

* Modify module import

* Fix pretrain weights bug

* Modify the pretrained-weights check

* Add some unit tests to improve code cov

* Optimize code

* Fix vit unit test
2021-06-19 15:53:13 -07:00
sennnnn 8f8abe373f
[Refactor] Using mmcv transformer bricks to refactor vit. (#571)
* [Refactor] Using mmcv bricks to refactor vit

* Follow the vit code structure from mmclassification

* Add MMCV install into CI system.

* Add  to 'Install MMCV' CI item

* Add 'Install MMCV_CPU' and 'Install MMCV_GPU' CI items

* Fix & Add

1. Fix low code coverage of vit.py;

2. Remove HybridEmbed;

3. Fix the docstring of VisionTransformer;

* Add helpers unit test.

* Add converter to convert vit pretrain weights from timm style to mmcls style.

* Clean up some redundant code and refactor init

1. Use timm-style init_weights;

2. Remove to_xtuple and trunc_normal_;

* Add comments for VisionTransformer.init_weights()

* Add arg: pretrain_style to choose timm or mmcls vit pretrain weights.
2021-06-17 10:41:25 -07:00
sennnnn aa9b609f11
Add option for output shape of ViT (#530)
* Add arg: final_reshape to control whether the output features are converted from NLC to NCHW;

* Fix the default value of final_reshape;

* Rename arg: final_reshape to arg: out_shape;

* Fix some unit test bugs;
2021-05-05 22:49:28 -07:00
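
The out_shape option above switches the backbone output between the token-sequence layout (NLC) and the feature-map layout (NCHW). A hedged sketch of the conversion, with the helper name `to_out_shape` chosen here for illustration:

```python
import torch

def to_out_shape(x, hw, out_shape='NCHW'):
    """Convert transformer tokens (N, L, C) to a feature map (N, C, H, W) if requested."""
    if out_shape == 'NLC':
        return x                      # keep the token sequence unchanged
    n, l, c = x.shape
    h, w = hw
    assert l == h * w, 'token length must match the patch grid'
    return x.reshape(n, h, w, c).permute(0, 3, 1, 2).contiguous()

# e.g. a 14x14 patch grid with 768-dim tokens
tokens = torch.randn(2, 14 * 14, 768)
feat = to_out_shape(tokens, (14, 14))  # -> (2, 768, 14, 14)
```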
sennnnn cf2cb542f7
Adjust vision transformer backbone architectures (#524)
* Adjust vision transformer backbone architectures;

* Add DropPath, trunc_normal_ for VisionTransformer implementation;

* Add the class token during the intermediate stage and remove it in the final stage;

* Fix some parameter loss bugs;

* * Store intermediate token features without applying any processing to them;

* Remove the class token and reshape the entire token feature from NLC to NCHW;

* Fix some doc errors

* Add an arg for the VisionTransformer backbone to control whether the class token is fed into the transformer;

* Add stochastic depth decay rule for DropPath;

* * Fix output bug when input_cls_token=False;

* Add related unit test;

* * Add arg: out_indices to control model output;

* Add unit test for DropPath;

* Apply suggestions from code review

Co-authored-by: Jerry Jiarui XU <xvjiarui0826@gmail.com>
2021-04-30 10:37:47 -07:00
谢昕辰 5b33faa146
support transformer backbone (#465)
* vit backbone

* fix lint

* add docstrings and fix the problem of pretrained pos_embed dim mismatch

* add unittest for vit

* fix lint

* add vit based fcn configs

* fix import error

* support multiple resolution input images

* upsample pos_embed at init_weights

* support resize pos_embed at evaluation

* fix training errors

* add more unit test code for vit backbone

* unit tests for uncovered code

* add norm_eval unittest

* refactor _pos_embeding

* minor change

* change var name

* refactor init_weight

* load weights after resize

* ignore 'module' in pretrain checkpoint

* add with_cp

* add with_cp

Co-authored-by: Jiarui XU <xvjiarui0826@gmail.com>
2021-04-21 20:19:55 -07:00