Commit Graph

37 Commits (fa53174fd9a2bad710e4049fe59ae90b26fe24a9)

Author SHA1 Message Date
fanqiNO1 64c446d507
[Feature] Support LoRA. (#1687)
* [Feature] Support LoRA

* [Feature] Support LoRA

* [Fix] Fix bugs

* [Refactor] Add copyright

* [Fix] Fix bugs

* [Enhancement] Add

* [Fix] Fix bugs

* [Fix] Fix bugs

* [Fix] Fix bugs

* [Fix] Fix bugs

* [Fix] Fix bugs

* [Docs] Update docstring

* [Docs] Update docstring

* [Refactor] Reformat with yapf

* [Docs] Update docstring

* [Refactor] Docformat

* [Refactor] Fix double-quote-string

* [Fix] fix pytorch version

* [Fix] isort

* [Fix] isort

* [Enhancement] Extend forward

* [Enhancement] Extend test

* [Fix] Fix targets

* [Enhancement] Extend LoRA to frozen models

* [Fix] Fix spelling

* [Fix] Override __getattr__

* [Fix] Add init_cfg

* [Enhancement] Add example config

* [Fix] Fix init_cfg

* [Enhancement] Add merging script

* [Fix] Remove init_cfg

* [Fix] Change lora key

* [Fix] Fix merge scripts

* [Fix] Fix merge scripts

* [Docs] Add docs

* [Fix] fix
2023-07-24 11:30:57 +08:00
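The LoRA feature added in this commit attaches trainable low-rank adapters to frozen layers and ships a script to merge them back into the base weights. Below is a minimal sketch of the underlying idea on a plain PyTorch linear layer; class and argument names are illustrative, not the mmpretrain API itself.

```python
# Illustrative sketch of low-rank adaptation on a frozen linear layer;
# names and hyperparameters are assumptions, not the mmpretrain implementation.
import math
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable update W + (alpha / rank) * B @ A."""

    def __init__(self, linear: nn.Linear, rank: int = 4, alpha: int = 16):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # keep the original weights frozen
        self.scaling = alpha / rank
        self.lora_down = nn.Linear(linear.in_features, rank, bias=False)
        self.lora_up = nn.Linear(rank, linear.out_features, bias=False)
        nn.init.kaiming_uniform_(self.lora_down.weight, a=math.sqrt(5))
        nn.init.zeros_(self.lora_up.weight)  # adapter starts as a zero update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) + self.lora_up(self.lora_down(x)) * self.scaling

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        """Fold the adapter into the base weight, as a merging script would."""
        self.linear.weight += self.lora_up.weight @ self.lora_down.weight * self.scaling
        return self.linear
```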
fanqiNO1 7cbfb36c14
[Refactor] Fix spelling (#1681)
* [Refactor] Fix spelling

* [Refactor] Fix spelling

* [Refactor] Fix spelling

* [Refactor] Fix spelling
2023-07-05 11:07:43 +08:00
Yixiao Fang 1ee9bbe050
[Docs] Update links (#1457)
* update links

* update readthedocs

* update

* update

* fix lint

* update

* update

* update

* update cov branch

* update

* update

* update
2023-04-06 20:58:52 +08:00
Ma Zerun a05c79e806
[Refactor] Move transforms in mmselfsup to mmpretrain. (#1396)
* [Refactor] Move transforms in mmselfsup to mmpretrain.

* Update transform docs and configs. And register some mmcv transforms in
mmpretrain.

* Fix missing transform wrapper.

* update selfsup transforms

* Fix UT

* Fix UT

* update gaussianblur in configs

---------

Co-authored-by: fangyixiao18 <fangyx18@hotmail.com>
2023-03-03 15:01:11 +08:00
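With the self-supervised transforms moved into mmpretrain, augmentations such as Gaussian blur become usable in ordinary pipeline configs. A hedged sketch of what such a pipeline could look like; transform arguments and the packing step name are assumptions rather than the exact configs touched by this commit.

```python
# Hedged sketch of a self-supervised augmentation pipeline in mmpretrain config
# style; arguments are assumptions, not the exact values from this commit.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=224),
    dict(type='RandomFlip', prob=0.5),
    dict(type='GaussianBlur', magnitude_range=(0.1, 2.0), prob=0.5),
    dict(type='PackInputs'),  # packing step; the exact name in this era may differ
]
```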
Ma Zerun dda3d6565b
[Docs] Update generate_readme.py and readme files. (#1388)
* Update generate_readme.py and readme files.

* Update readme

* Update docs

* update metafile

* update simmim readme

* update

* update mae

* fix lint

* update mocov2

* update readme pic

* fix lint

* Fix mmcls download links.

* Fix Chinese docs.

* Decrease readthedocs requirements.

---------

Co-authored-by: fangyixiao18 <fangyx18@hotmail.com>
2023-03-02 13:29:07 +08:00
Yixiao Fang 89000c10eb
[Refactor] Refactor configs and metafile (#1369)
* update base datasets

* update base

* update barlowtwins

* update with new convention

* update

* update

* update

* add schedule

* add densecl

* add eva

* add mae

* add maskfeat

* add milan and mixmim

* add moco

* add swav simclr

* add simmim and simsiam

* refine

* update

* add to model index

* update config inheritance

* fix error in metafile

* Update pre-commit and metafile check script

* update metafile

* fix name error

* Fix classification model name and config name

---------

Co-authored-by: mzr1996 <mzr1996@163.com>
2023-02-23 11:17:16 +08:00
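The refactored configs follow the OpenMMLab `_base_` inheritance convention, where an algorithm config composes base files for model, dataset, schedule, and runtime, then overrides only what differs. A hedged sketch with illustrative file names:

```python
# Hedged sketch of the `_base_` inheritance convention; the base file names
# below are illustrative, not the exact files touched by this commit.
_base_ = [
    '../_base_/models/vit-base-p16.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py',
]

# A leaf config then overrides only what differs from its base files.
train_dataloader = dict(batch_size=128)
```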
Hubert 6203fd6cc9
[Docs] Improve ViT and MobileViT model pages. (#1155)
* [Docs] Improve the ViT model page

* [Docs] Improve the MobileViT model page

* fix
2022-11-04 14:53:26 +08:00
Ma Zerun 29f066f7fb
[Improve] Speed up data preprocessor. (#1064)
* [Improve] Speed up data preprocessor.

* Add ClsDataSample serialization override functions.

* Add unit tests

* Modify configs to fit new mixup args.

* Fix `num_classes` of the ImageNet-21k config.

* Update docs.
2022-10-17 17:08:18 +08:00
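The data preprocessor change moves normalization and batch augmentation (the new mixup args) onto a `data_preprocessor`, so `num_classes` lives there rather than on each augment. A hedged sketch of the resulting config shape; all values are illustrative.

```python
# Hedged sketch of a data preprocessor config with batch augmentations;
# field names follow the mmpretrain convention of that period, values are illustrative.
data_preprocessor = dict(
    num_classes=1000,
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    to_rgb=True,
)

model = dict(
    train_cfg=dict(augments=[
        dict(type='Mixup', alpha=0.8),
        dict(type='CutMix', alpha=1.0),
    ]),
)
```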
mzr1996 d90dfc3d99 [Docs] Use relative link to config instead of abs link in README. 2022-09-22 09:59:06 +08:00
Ezra-Yu 0e3e8e91cc
[Refactor] Refactor Config Doc. (#987)
* refactor config doc

* refine

* Fix the config docs.

* update CN config

* Fix some grammar errors.

* Fix Chinese docs.

Co-authored-by: mzr1996 <mzr1996@163.com>
2022-08-29 11:10:05 +08:00
mzr1996 0c7a04b1c7 Fix lint 2022-07-18 14:14:31 +08:00
mzr1996 735a3ee11f Update auto_scale_lr fields 2022-07-18 11:11:13 +08:00
yingfhu a667b488ae minor fix 2022-07-18 11:11:13 +08:00
yingfhu ce81a07059 [Refactor] add auto_scale_lr 2022-07-18 11:11:13 +08:00
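The `auto_scale_lr` commits let the runner rescale the learning rate linearly against a reference batch size. A hedged config sketch; the base batch size value is illustrative.

```python
# Hedged sketch: declare the reference batch size so the runner can rescale the
# learning rate when the actual total batch size differs. The value is illustrative;
# scaling is typically switched on via an `--auto-scale-lr` style flag at launch.
auto_scale_lr = dict(base_batch_size=1024)
```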
mzr1996 c0f3ba68a8 Add MAE version ViT-base training results and fix some errors in configs. 2022-07-18 11:11:13 +08:00
mzr1996 f3299b4ca2 [Refactor] Refactor batch augmentations 2022-07-18 11:11:13 +08:00
mzr1996 548db6f4ac [Refactor] Update optimizer related registries and configs. 2022-07-18 10:53:56 +08:00
mzr1996 995b1d0d58 [Refactor] Add `ResizeEdge` and refactor all dataset configs. 2022-07-18 10:53:56 +08:00
yingfhu 65f3b2221d Modify config relates to logger and checkpoint 2022-07-18 10:53:28 +08:00
Ezra-Yu 58d9f649ed Refactor scheduler configuration 2022-07-18 10:53:28 +08:00
Ezra-Yu 2f2aa3037c Refactor default hooks configs 2022-07-18 10:53:28 +08:00
Ezra-Yu 93a27c8324 [Feature] Add `PackClsInputs` and use `LoadImageFromFile`, `Resize` & `RandomFlip` in MMCV. 2022-07-18 10:53:28 +08:00
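This commit switches the classification pipeline to MMCV transforms plus a packing step (with `ResizeEdge` from the related commit above). A hedged sketch of pipelines in that style; argument values are illustrative.

```python
# Hedged sketch of classification pipelines built from MMCV transforms plus the
# new packing step; argument values are illustrative.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=224),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackClsInputs'),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='ResizeEdge', scale=256, edge='short'),
    dict(type='CenterCrop', crop_size=224),
    dict(type='PackClsInputs'),
]
```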
mzr1996 58b21ee56f [Model] Add IPU ViT model 2022-06-03 18:15:28 +08:00
Ma Zerun 824fbcbaae
[Refactor] Use mdformat instead of markdownlint to format markdown. (#844)
* [Refactor] Use mdformat instead of markdownlint to format markdown.

* Update unavailable api links in tutorials

* Update CONTRIBUTING.md

* Use mdformat==0.7.9 to support Python 3.6
2022-06-02 15:22:01 +08:00
mzr1996 1d6fbe0efe [Fix] Fix lint and mmcv version requirement for IPU. 2022-04-29 22:33:29 +08:00
Hu Di b4eefe4794
[Enhance] Support training on IPU and add fine-tuning configs of ViT. (#723)
* implement training and evaluation on IPU

* fp16 SOTA

* Throughput reaches 5600

* 123

* add poptorch dataloader

* change ipu_replicas to ipu-replicas

* add noqa to config long line (website)

* remove ipu dataloader test code

* del one blank line in test_builder

* refine the dataloader initialization

* fix a typo

* refine args for dataloader

* remove an annotated line

* process one more conflict

* adjust code structure in mmcv.ipu

* adjust ipu code structure in mmcv

* IPUDataloader to IPUDataLoader

* align with mmcv

* adjust according to mmcv

* mmcv code structure fixed

Co-authored-by: hudi <dihu@graphcore.ai>
2022-04-29 22:22:19 +08:00
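The IPU support builds on Graphcore's poptorch, which compiles a model that returns its loss from `forward` into a training model. A minimal sketch of that pattern, independent of the mmcls integration in this PR; model, optimizer, and option values are illustrative.

```python
# Minimal sketch of IPU training with Graphcore's poptorch, independent of the
# mmcls integration in the commit above; model and option values are illustrative.
import poptorch
import torch
import torch.nn as nn


class ClsModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(3 * 224 * 224, 1000)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        logits = self.backbone(x.flatten(1))
        if labels is None:
            return logits
        # poptorch expects the loss to be returned from forward during training.
        return logits, self.loss_fn(logits, labels)


opts = poptorch.Options()
opts.replicationFactor(1)  # exposed in the PR as an `--ipu-replicas` style option

model = ClsModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)
# Batches would come from poptorch.DataLoader(opts, dataset, batch_size=...) in practice.
```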
Ma Zerun 833152b1f4
[Docs] Update README in configs according to OpenMMLab standard. (#672)
* Update README according to OpenMMLab standard.

* Update model zoo docs generation.

* Revert modification for paper link
2022-01-26 18:26:01 +08:00
Ma Zerun 159b38d276
[Reproduction] Reproduce training results of T2T-ViT (#610)
* Add cosine cool down lr updater

* Use ema hook

* Update decay mult

* Update configs.

* Update T2T-ViT readme and format all readme

* Update swin readme

* Update tnt readme

* Add docstring for `CosineAnnealingCooldownLrUpdaterHook`.

* Update t2t readme and metafile
2021-12-28 15:09:40 +08:00
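The T2T-ViT reproduction relies on a cosine schedule with a cool-down phase plus an EMA hook. A hedged sketch in the mmcv 1.x config style of the time; option names and values are assumptions apart from the hook class named in the commit.

```python
# Hedged sketch, mmcv 1.x-era config style: cosine annealing LR with a cool-down
# phase plus an EMA hook, in the spirit of the T2T-ViT reproduction above.
# Option names/values are assumptions apart from the hook named in the commit.
lr_config = dict(
    policy='CosineAnnealingCooldown',  # built by CosineAnnealingCooldownLrUpdaterHook
    min_lr=1e-5,
    cool_down_time=10,
    warmup='linear',
    warmup_iters=10,
    warmup_by_epoch=True,
)
custom_hooks = [dict(type='EMAHook', momentum=4e-5, priority='ABOVE_NORMAL')]
```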
Ma Zerun f3fbc8b90b
[Docs] Add abstract and image for every paper. (#546) 2021-11-24 17:23:37 +08:00
Ma Zerun d1473e4a7f
[Dependency] Update mmcv dependency version (#509)
* Update mmcv dependency version

* Add code info in some metafiles
2021-11-02 18:08:30 +08:00
Ma Zerun 2932f9d8a3
[Refactor] Refactor ViT (Continue #295) (#395)
* [Squash] Refactor ViT (from #295)

* Use base variable to simplify auto_aug setting

* Use common PatchEmbed, remove HybridEmbed and refactor ViT init
structure.

* Add `output_cls_token` option and change the output format of ViT and
input format of ViT head.

* Update unit tests and add test for `output_cls_token`.

* Support out_indices.

* Standardize config files

* Support resize position embedding.

* Add readme file of vit

* Rename config file

* Improve docs about ViT.

* Update docstring

* Use local version `MultiheadAttention` instead of mmcv version.

* Fix MultiheadAttention

* Support `qk_scale` argument in `MultiheadAttention`

* Improve docs and change `layer_cfg` to `layer_cfgs` and support
sequence.

* Use init_cfg to init Linear layer in VisionTransformerHead

* update metafile

* Update checkpoints and configs

* Improve docstring.

* Update README

* Revert GAP modification.
2021-10-18 16:07:00 +08:00
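After this refactor the ViT backbone exposes options such as `out_indices` and `output_cls_token` mentioned in the commits above. A hedged sketch of a classifier config using them; values are illustrative.

```python
# Hedged sketch of configuring the refactored ViT backbone; values are illustrative
# and follow the options named in the commit messages above.
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='VisionTransformer',
        arch='b',               # ViT-Base
        img_size=224,
        patch_size=16,
        out_indices=-1,         # which transformer layers to return
        output_cls_token=True,  # return the class token alongside patch tokens
        drop_rate=0.1,
    ),
    neck=None,
    head=dict(
        type='VisionTransformerClsHead',
        num_classes=1000,
        in_channels=768,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
    ),
)
```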
Ma Zerun fd0f5cce92
[Docs] Add model-pages in Model Zoo (#480)
* Add model-pages

* Add shortname in configs

* Use link directly instead of `switch_language.md`

* Auto collapse model-zoo pages.

* Fix link in RepVGG

* Add link replace

* fix lint
2021-10-14 15:26:47 +08:00
Ma Zerun a9d65271ab
[Docs] Add algorithm readme and update meta yml (#418)
* Add README.md for models without checkpoints.

* Update model-index.yml

* Update metafile.yml of seresnet
2021-08-24 17:46:46 +08:00
whcao 3be95b99c2
[Feature] Modify model zoo readme (#230)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* add imagenet_bs4096_AdamW.py

* delete 2 lines of comments

* change bs to 64

* fix bug

* add vit to model_zoo.md

* rename
2021-04-29 15:18:55 +08:00
whcao 16947f1239
[Bug] Fix weight decay (#227)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unit test

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv 1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add AutoAugment

* fix weight decay in vit

* change eval interval to 10

* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete @torch.jit.ignore

* change eval interval back to 1

* add some comments to imagenet_bs2048_AdamW

* add some comments
2021-04-28 17:16:43 +08:00
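The weight-decay fix for ViT amounts to exempting parameters such as the class token and position embedding from decay in the AdamW schedule. A hedged sketch of such a `paramwise_cfg`; values are illustrative, not the exact contents of imagenet_bs2048_AdamW.py.

```python
# Hedged sketch of an AdamW schedule that exempts ViT-specific parameters from
# weight decay, in the spirit of the fix above; all values are illustrative.
paramwise_cfg = dict(
    custom_keys={
        '.cls_token': dict(decay_mult=0.0),
        '.pos_embed': dict(decay_mult=0.0),
    })

optimizer = dict(
    type='AdamW',
    lr=0.003,
    weight_decay=0.3,
    paramwise_cfg=paramwise_cfg,
)
```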
whcao 31a6a362ba
Add some vit configs (#217)
* add vit_base_patch32_384_finetune.py

* add vit_base_patch32_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch16_384_finetune.py to models

* add vit_large_patch16_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch32_384_finetune to models

* add vit_large_patch32_384_finetune_imagenet to vision_transformer

* add vit_large_patch16_224_finetune.py to models

* add vit_large_patch16_224_finetune_imagenet.py to vision_transformer

* delete some useless comments
2021-04-20 11:32:20 +08:00
whcao affb39fe07
[Feature] Add ViT (#214)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unit test

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv 1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add helpers.py

* test vit hybrid backbone

* fix HybridEmbed

* use to_2tuple instead
2021-04-16 19:22:41 +08:00