* Add wrapper to use backbones from timm
* Add tests
* Remove timm from optional deps and modify GitHub workflow.
Co-authored-by: mzr1996 <mzr1996@163.com>
* Improve result visualization to support wait time and change the backend
to matplotlib.
* Add unit test for visualization
* Add adaptive dpi function
* Rename `imshow_cls_result` to `imshow_infos`.
* Support str in `imshow_infos`
* Improve docstring.
* Add class-wise weight in losses: CE, BCE, softBCE
* Update unit test
* rm some extra code
* fix broadcast
* update unit tests
* use new_tensor
* fix lint
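The class-wise weight described above can be sketched in plain Python; this is a hypothetical minimal version (the actual losses operate on tensors, which is where the broadcast fixes apply):

```python
import math

def weighted_cross_entropy(probs, target, class_weight=None):
    """Cross-entropy with an optional per-class weight.

    probs: predicted probabilities per class (already softmaxed).
    target: integer index of the ground-truth class.
    class_weight: optional list with one weight per class.
    """
    loss = -math.log(probs[target])
    if class_weight is not None:
        # Scale the loss by the weight of the ground-truth class.
        loss *= class_weight[target]
    return loss

# Unweighted vs. class-weighted loss for the same prediction.
probs = [0.7, 0.2, 0.1]
plain = weighted_cross_entropy(probs, target=0)
weighted = weighted_cross_entropy(probs, target=0, class_weight=[2.0, 1.0, 1.0])
```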
* Add hparams argument in `AutoAugment` and `RandAugment`.
And `pad_val` supports any sequence instead of only tuples.
* Add unit tests for `AutoAugment` and `hparams` in `RandAugment`.
* Use smaller test image to speed up unit tests.
* Use hparams to simplify RandAugment config in swin-transformer.
* Rename augment config name from `pipeline` to `pipelines`.
* Add some comments and docstrings.
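One way the shared `hparams` might be resolved into per-policy settings (illustrative names and keys, not the exact mmcls API):

```python
def apply_hparams(policies, hparams):
    """Merge shared hyper-parameters into each augment policy.

    Explicit keys in a policy take precedence over the shared hparams,
    so `hparams` only fills in what a policy leaves unspecified.
    """
    return [dict(hparams, **policy) for policy in policies]

policies = [
    dict(type='Rotate'),
    dict(type='Shear', pad_val=0),   # overrides the shared pad_val
]
resolved = apply_hparams(policies, hparams=dict(pad_val=128))
```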
* Refactor unit tests folder structure.
* Remove label smooth and Vit test in `test_classifiers.py`
* Rename test_utils in dataset to test_dataset_utils
* Split test_models/test_utils/test_utils.py to multiple sub files.
* Add unit tests of classifiers and heads
* Use patch context manager.
* Add unit test of `is_tracing`, and add warning in `is_tracing` if torch
version is smaller than 1.6.0
* Add swin transformer archs S, B and L.
* Add SwinTransformer configs
* Add train config files of swin.
* Align init method with original code
* Use nn.Unfold to merge patch
* Change all ConfigDict to dict
* Add init_cfg for all subclasses of BaseModule.
* Use mmcv version init function
* Add Swin README
* Use safer cfg copy method
* Improve docstring and variable name.
* Fix some difference in randaug
Fix BGR bug, align scheduler config.
Fix label smoothing parameter difference.
* Fix missing droppath in attn
* Fix bug of relative position table if window width is not equal to
height.
* Make `PatchMerging` more general, support kernel, stride, padding and
dilation.
* Rename `residual` to `identity` in attention and FFN.
* Add `auto_pad` option to auto pad feature map
* Improve docstring.
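A plausible sketch of the `auto_pad` arithmetic, assuming SAME-style padding so the merge window covers the whole feature map (an assumption about the behavior, not the exact implementation):

```python
import math

def auto_pad(size, kernel, stride):
    """Total padding so a window of `kernel`/`stride` covers an input of
    length `size` exactly, i.e. no pixels are dropped at the border."""
    out = math.ceil(size / stride)
    return max((out - 1) * stride + kernel - size, 0)

# A 7-pixel side with a 2x2 merge window of stride 2 needs 1 pixel of padding;
# an 8-pixel side needs none.
pad = auto_pad(7, kernel=2, stride=2)
```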
* Fix bug in ShiftWMSA padding.
* Remove unused `key` and `value` in ShiftWMSA
* Move `PatchMerging` into utils and use common `PatchEmbed`.
* Use latest `LinearClsHead`, train augments and label smooth settings.
And remove original `SwinLinearClsHead`.
* Mark some configs as "Evaluation Only".
* Remove useless comment in config
* 1. Move ShiftWindowMSA and WindowMSA to `utils/attention.py`
2. Add docstrings of each module.
3. Fix some variables' names.
4. Other small improvement.
* Add unit tests of swin-transformer and patchmerging.
* Fix some bugs in unit tests.
* Fix bug of rel_position_index if window is not square.
* Make WindowMSA implicit, and add unit tests.
* Add metafile.yml, update readme and model_zoo.
* add mytrain.py for test
* test before layers
* test attr in layers
* test classifier
* delete mytrain.py
* add patchembed and hybridembed
* add patchembed and hybridembed to __init__
* test patchembed and hybridembed
* fix some comments
* Refactor Mobilenetv3 structure and add ConvClsHead.
* Change model's name from 'MobileNetv3' to 'MobileNetV3'
* Modify configs for MobileNetV3 on CIFAR10.
And add MobileNetV3 configs for imagenet
* Fix activate setting bugs in MobileNetV3.
And remove bias in SELayer.
* Modify unittest
* Remove useless config and file.
* Fix mobilenetv3-large arch setting
* Add dropout option in ConvClsHead
* Fix MobilenetV3 structure according to torchvision version.
1. Remove with_expand_conv option in InvertedResidual, it should be decided by channels.
2. Revert activation function position; it should come before the SE layer.
* Format code.
* Rename MobilenetV3 arch "big" to "large".
* Add mobilenetv3_small torchvision training recipe
* Modify default `out_indices` of MobilenetV3, now it will change
according to `arch` if not specified.
* Add MobilenetV3 large config.
* Add mobilenetv3 README
* Modify InvertedResidual unit test.
* Refactor ConvClsHead to StackedLinearClsHead, and add unit tests.
* Add unit test for `simple_test` of `StackedLinearClsHead`.
* Fix typo
Co-authored-by: Yidi Shao <ydshao@smail.nju.edu.cn>
* add `rand_bbox_minmax`, `rand_bbox` and `cutmix_bbox_and_lam` to BaseCutMixLayer
* add mixup_prob to BatchMixupLayer
* add cutmixup
* add cutmixup to __init__
* test classifier with cutmixup
* delete some comments
* set mixup_prob default to 1.0
* add cutmixup to classifier
* use cutmixup
* use cutmixup
* fix bugs
* test cutmixup
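The CutMix box sampling can be sketched as follows (a simplified stand-in for `rand_bbox`; the real layer also adjusts `lam` to the clipped box area):

```python
import math
import random

def rand_bbox(img_w, img_h, lam, rng=random):
    """Sample a CutMix box whose area is roughly (1 - lam) of the image."""
    cut_ratio = math.sqrt(1.0 - lam)
    cut_w, cut_h = int(img_w * cut_ratio), int(img_h * cut_ratio)
    # Uniformly sample the box center, then clip the box to the image.
    cx, cy = rng.randrange(img_w), rng.randrange(img_h)
    x1 = max(cx - cut_w // 2, 0)
    y1 = max(cy - cut_h // 2, 0)
    x2 = min(cx + cut_w // 2, img_w)
    y2 = min(cy + cut_h // 2, img_h)
    return x1, y1, x2, y2

x1, y1, x2, y2 = rand_bbox(224, 224, lam=0.75)
```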
* move mixup and cutmix to augment
* inherit from BaseAugment
* add BaseAugment
* inherit from BaseAugment
* rename identity.py
* add @
* build augment
* register module
* rename to augment.py
* delete cutmixup.py
* do not inherit from BaseAugment
* add augments
* use augments in classifier
* prob default to 1.0
* add comments
* use augments
* use augments
* assert that the sum of augmentation probabilities equals 1
* make augmentation probabilities sum to 1
* calculate Identity prob
* replace xxx with self.xxx
* add comments
* sync with augments
* for BC-breaking
* delete useless comments in mixup.py
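The probability bookkeeping from the commits above, sketched in plain Python (names are illustrative, not the exact `Augments` API):

```python
import random

def build_probs(augment_probs):
    """Derive the implicit Identity probability so everything sums to 1
    (asserted, as in the commits above)."""
    total = sum(augment_probs)
    assert total <= 1.0, 'augment probabilities must not exceed 1'
    return augment_probs + [1.0 - total]

def pick_augment(augments, probs, rng=random):
    """Choose one augment per batch; None stands for Identity."""
    return rng.choices(augments + [None], weights=probs, k=1)[0]

probs = build_probs([0.5, 0.3])   # mixup 0.5, cutmix 0.3 -> identity 0.2
choice = pick_augment(['mixup', 'cutmix'], probs)
```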
* Refactor label smooth loss, now support mode `original`, `classy_vision`
and `multi_label`.
* Add unittests for label smooth loss.
* Improve docstring of LSR
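The `original` smoothing mode can be sketched as below; the `classy_vision` and `multi_label` modes differ in normalization and per-label handling and are omitted here:

```python
def smooth_one_hot(target, num_classes, eps=0.1):
    """'original' label smoothing: the true class keeps 1 - eps and the
    remaining eps is spread uniformly over all classes."""
    smooth = [eps / num_classes] * num_classes
    smooth[target] += 1.0 - eps
    return smooth

labels = smooth_one_hot(target=1, num_classes=4, eps=0.1)
```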
* add increasing in solarize and posterize
* fix linting
* Revert "add increasing in solarize and posterize"
This reverts commit 128af36e9b.
* revise according to comments
* Add parameter `magnitude_std` in RandAugment to allow random movement of the magnitude value
* Add unittest for magnitude_std
* Improve docstring of magnitude_std
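The `magnitude_std` jitter might look like this (a sketch of the idea, not the exact RandAugment API):

```python
import random

def sample_magnitude(level, total_level=10, magnitude_std=0.5, rng=random):
    """Jitter the magnitude around `level` with a Gaussian of the given std,
    then clamp into the valid [0, total_level] range."""
    magnitude = rng.gauss(level, magnitude_std) if magnitude_std > 0 else level
    return min(max(magnitude, 0), total_level)

m = sample_magnitude(5)
```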
* GlobalAveragePooling supports 1d, 2d and 3d via a parameter, and add neck test
* Improve neck test
* Change 'mode' attribute in GAP to 'dim', and add docstring
* set cal_acc in ClsHead defaults to False
* set cal_acc defaults to False
* use *args, **kwargs instead
* change bs16 to 3 in test_image_classifier_vit
* fix some comments
* change cal_acc=True
* test LinearClsHead
* add convert_to_one_hot
* add test_label_smooth_loss
* add my label_smooth_loss
* fix CELoss bug
* test new label smooth loss
* LabelSmoothLoss downward compatibility
* add some comments
* remove the old version of LabelSmoothLoss
* add some comments
* add some comments
* add some comments
* add label smooth to config
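`convert_to_one_hot` can be sketched on plain lists (the real utility works on tensors):

```python
def convert_to_one_hot(targets, num_classes):
    """Turn a list of class indices into one-hot rows."""
    return [[1 if c == t else 0 for c in range(num_classes)]
            for t in targets]

one_hot = convert_to_one_hot([2, 0], num_classes=3)
```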
* support random augmentation
* minor fix on posterize
* minor fix on posterize
* minor fix on cutout
* minor fix on cutout
* fix bug in solarize add
* revised according to comments
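The solarize-add fix is consistent with the common AutoAugment definition, sketched here on a flat list of pixel values (assumed behavior, not the exact implementation):

```python
def solarize_add(pixels, add=110, threshold=128):
    """SolarizeAdd: brighten pixels below the threshold, leave the rest.

    Results are clipped to the valid 8-bit range.
    """
    return [min(p + add, 255) if p < threshold else p for p in pixels]

out = solarize_add([10, 200])
```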
* Add mixup option
* Modify the structure of mixup and add configs
* Clean configs
* Add test for mixup and SoftCrossEntropyLoss
* Add simple test for ImageClassifier
* Fix bug in test_losses.py
* Add assertion in CrossEntropyLoss
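The mixup option can be sketched with a Beta-sampled coefficient (a simplified stand-in for the batch-level layer; targets are mixed with the same `lam` and trained with the soft cross-entropy loss):

```python
import random

def mixup(x1, x2, alpha=0.2, rng=random):
    """Mix two samples with a Beta(alpha, alpha) coefficient."""
    lam = rng.betavariate(alpha, alpha)
    mixed = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    return mixed, lam

mixed, lam = mixup([0.0, 1.0], [1.0, 0.0])
```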
* resolve conflicts
add heads and config for multilabel tasks
* minor change
* remove evaluating mAP in head
* add baseline config
* add configs
* reserve only one config
* minor change
* fix minor bug
* minor change
* minor change
* add unittests and fix docstrings
* support thr
* replace thrs with thr
* fix docstring
* minor change
* revise according to comments
* revised according to comments
* revise according to comments
* rewrite basedataset.evaluate to avoid duplicate calculation
* minor change
* change thr to thrs
* add more unit test
* support the `support` metric, support class-wise evaluation results and move eval_metrics.py
* Fix docstring
* change average to be non-optional
* revise according to comments
* add more unittest
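Threshold-based (`thrs`) evaluation for the multi-label heads might look like this simplified sketch (class-agnostic precision only; the real metrics also support class-wise and averaged results):

```python
def precision_at_thrs(scores, gts, thrs=(0.3, 0.5)):
    """Precision at several score thresholds for a multi-label task.

    scores/gts: parallel lists of per-label score and 0/1 ground truth.
    """
    results = {}
    for thr in thrs:
        preds = [s >= thr for s in scores]
        tp = sum(p and g for p, g in zip(preds, gts))
        total_pred = sum(preds)
        results[thr] = tp / total_pred if total_pred else 0.0
    return results

res = precision_at_thrs([0.9, 0.4, 0.6], [1, 0, 0])
```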
* add bce loss for multilabel task
* minor change
* apply class wise sum
* fix docstring
* do not apply sum over classes and fix docstring
* fix docstring
* fix weight shape
* fix weight shape
* fix docstring
* fix linting issue
Co-authored-by: Y. Xiong <xiongyuxy@gmail.com>
* add focal loss
* apply class wise sum
* fix docstring
* do not apply sum over classes and fix docstring
* fix docstring
* fix weight shape
* fix weight shape
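The focal loss here presumably follows the standard formulation, sketched for the binary case (the repo's version additionally applies the class-wise weighting discussed above):

```python
import math

def focal_loss(p, target, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction p in (0, 1) and target in {0, 1}.

    The (1 - p_t) ** gamma factor down-weights well-classified examples.
    """
    p_t = p if target == 1 else 1.0 - p
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.9, 1)   # well-classified: heavily down-weighted
hard = focal_loss(0.1, 1)   # misclassified: near-full weight
```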