# Copyright (c) OpenMMLab. All rights reserved.
import torch
import torch.nn.functional as F
from mmselfsup.models.heads import (ClsHead, ContrastiveHead, LatentClsHead,
                                    LatentCrossCorrelationHead,
                                    LatentPredictHead, MAEFinetuneHead,
                                    MAEPretrainHead, MaskFeatFinetuneHead,
                                    MaskFeatPretrainHead, MultiClsHead,
                                    SwAVHead)
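
# Each test below builds a head on toy-sized tensors and checks output
# shapes and the sign of the returned loss.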


def test_cls_head():
    # test ClsHead
    head = ClsHead()
    fake_cls_score = [torch.rand(4, 3)]
    fake_gt_label = torch.randint(0, 2, (4, ))
    loss = head.loss(fake_cls_score, fake_gt_label)
    assert loss['loss'].item() > 0


def test_contrastive_head():
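    # ContrastiveHead computes an InfoNCE-style loss: each of the N samples
    # has one positive similarity and k negative similarities.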
    head = ContrastiveHead()
    fake_pos = torch.rand(32, 1)  # N, 1
    fake_neg = torch.rand(32, 100)  # N, k
    loss = head.forward(fake_pos, fake_neg)
    assert loss['loss'].item() > 0


def test_latent_predict_head():
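    # The predictor is a small MLP neck built from config (as used by
    # BYOL-style methods): 64 -> 128 -> 64, with BN and no average pooling.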
    predictor = dict(
        type='NonLinearNeck',
        in_channels=64,
        hid_channels=128,
        out_channels=64,
        with_bias=True,
        with_last_bn=True,
        with_avg_pool=False,
        norm_cfg=dict(type='BN1d'))
    head = LatentPredictHead(predictor=predictor)
    fake_input = torch.rand(32, 64)  # N, C
    fake_target = torch.rand(32, 64)  # N, C
    loss = head.forward(fake_input, fake_target)
    # the similarity-based loss can be negative, hence the looser bound
    assert loss['loss'].item() > -1


def test_latent_cls_head():
    head = LatentClsHead(64, 10)
    fake_input = torch.rand(32, 64)  # N, C
    fake_target = torch.rand(32, 64)  # N, C
    loss = head.forward(fake_input, fake_target)
    assert loss['loss'].item() > 0


def test_latent_cross_correlation_head():
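    # Barlow Twins-style cross-correlation head: in_channels=2, with the
    # trade-off weight lambd=0.0051 used in the original paper.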
    head = LatentCrossCorrelationHead(2, 0.0051)
    fake_input = torch.rand(32, 2)  # N, C
    fake_target = torch.rand(32, 2)  # N, C
    loss = head.forward(fake_input, fake_target)
    assert loss['loss'].item() > 0


def test_multi_cls_head():
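    # MultiClsHead attaches a classifier to each backbone stage listed in
    # in_indices, so forward returns one score per selected stage.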
    head = MultiClsHead(in_indices=(0, 1))
    fake_input = [torch.rand(8, 64, 5, 5), torch.rand(8, 256, 14, 14)]
    out = head.forward(fake_input)
    assert isinstance(out, list)

    fake_cls_score = [torch.rand(4, 3)]
    fake_gt_label = torch.randint(0, 2, (4, ))
    loss = head.loss(fake_cls_score, fake_gt_label)
    print(loss.keys())
    for k in loss.keys():
        if 'loss' in k:
            assert loss[k].item() > 0


def test_swav_head():
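    # SwAV head computes the swapped-prediction loss over multi-crop
    # features: num_crops=[2, 6] means 2 global and 6 local crops.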
    head = SwAVHead(feat_dim=128, num_crops=[2, 6])
    fake_input = torch.rand(32, 128)  # N, C
    loss = head.forward(fake_input)
    assert loss['loss'].item() > 0


def test_mae_pretrain_head():
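    # MAEPretrainHead computes a pixel-reconstruction loss on masked
    # patches: a 224x224 image with patch_size=16 gives (224/16)**2 = 196
    # patches, each flattened to 16**2 * 3 = 768 values.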
    head = MAEPretrainHead(norm_pix=False, patch_size=16)
    fake_input = torch.rand((2, 3, 224, 224))
    fake_mask = torch.ones((2, 196))
    fake_pred = torch.rand((2, 196, 768))
    loss = head.forward(fake_input, fake_pred, fake_mask)
    assert loss['loss'].item() > 0

    head_norm_pixel = MAEPretrainHead(norm_pix=True, patch_size=16)
    loss_norm_pixel = head_norm_pixel.forward(fake_input, fake_pred, fake_mask)
    assert loss_norm_pixel['loss'].item() > 0

    # unpatchify maps (N, L, patch_size**2 * 3) back to (N, 3, H, W):
    # 4 patches of 16x16 pixels reassemble into a 32x32 image
    x = torch.rand((1, 4, 16**2 * 3))
    imgs = head_norm_pixel.unpatchify(x)
    assert imgs.size() == torch.Size((1, 3, 32, 32))


def test_mae_finetune_head():
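    # L2-normalized random vectors stand in for soft (mixup-style) targets
    # over 1000 classes.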
    head = MAEFinetuneHead(num_classes=1000, embed_dim=768)
    fake_input = torch.rand((2, 768))
    fake_labels = F.normalize(torch.rand((2, 1000)), dim=-1)
    fake_features = head.forward(fake_input)
    assert list(fake_features[0].shape) == [2, 1000]

    loss = head.loss(fake_features, fake_labels)
    assert loss['loss'].item() > 0


def test_maskfeat_pretrain_head():
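    # Predictions include a cls token (197 = 196 patch tokens + 1);
    # targets are 108-dim HOG features for the 14x14 patch grid.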
    head = MaskFeatPretrainHead(hog_dim=108)
    fake_mask = torch.ones((2, 14, 14)).bool()
    fake_pred = torch.rand((2, 197, 768))
    fake_hog = torch.rand((2, 196, 108))
    loss = head.forward(fake_pred, fake_hog, fake_mask)
    assert loss['loss'].item() > 0


def test_maskfeat_finetune_head():
    head = MaskFeatFinetuneHead(num_classes=1000, embed_dim=768)
    fake_input = torch.rand((2, 768))
    fake_labels = F.normalize(torch.rand((2, 1000)), dim=-1)
    fake_features = head.forward(fake_input)
    assert list(fake_features[0].shape) == [2, 1000]

    loss = head.loss(fake_features, fake_labels)
    assert loss['loss'].item() > 0
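

# These tests are ordinarily collected by pytest; a minimal invocation
# (assuming pytest is installed) would be:
#   pytest tests/test_models/test_heads.py -v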