Commit Graph

199 Commits (088d5b5addc2ecf8dca0ca1500bae9b90bcb97e1)

Author SHA1 Message Date
Ezra-Yu 34d5a25281
[Refactor] Support passing arguments to loss from head. (#523) 2021-11-10 17:12:34 +08:00
Ma Zerun fffa30dd48
[Feature] Add Tokens-to-Token ViT backbone and converted checkpoints. (#467)
* add t2t backbone

* register t2t_vit

* add t2t_vit config

* [Temp] Align posterize transform with timm.

* Fix lint

* Refactor t2t-vit

* Add config for t2t-vit

* Add metafile and README for t2t-vit

* Add unit tests

* configs

* Update metafile and README

* Improve docstring

* Fix batch size which should be 8x64 instead of 8x128

* Fix typo

* Update model zoo

* Update training augments config.

* Move some arguments of T2TModule to T2TViT

* Update docs.

* Update unit test

Co-authored-by: HIT-cwh <2892770585@qq.com>
2021-10-29 10:37:16 +08:00
imyhxy 671414becb
[Fix] Fix missing imports `Compose` and `Normalize`.
* Fixed missing import 'Compose'

* Fixed mistyped `Compose` in `mmcls/datasets/__init__.py`

* Fixed missing import `Normalize`

* [Docs] Fix typos in doctest

* [Fix] Sort import module
2021-10-28 15:21:05 +08:00
Zhicheng Chen 6d6ce215a2
[Feature] Add seesaw loss. (#500)
* migrate seesaw loss from mmdet

* add assertion and doc string fixes

* add error information

* docstring fixes

* minor doc update
2021-10-27 10:11:59 +08:00
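
For context, the migrated seesaw loss plugs into the classification head's loss config. A minimal sketch, assuming the loss keeps the mmdet argument names (`p`, `q`, `num_classes`):

```python
# Sketch only: SeesawLoss as the head loss; argument names follow the mmdet
# implementation this commit migrates from and are assumptions here.
head = dict(
    type='LinearClsHead',
    num_classes=1000,
    in_channels=2048,
    loss=dict(type='SeesawLoss', p=0.8, q=2.0, num_classes=1000))
```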
Ma Zerun 77a3834531
[Feature] Add Res2Net backbone and converted weights. (#465)
* Add Res2Net from mmdet, and change it to mmcls style.

* Align structure with official repo

* Support `deep_stem` and `avg_down` option

* Add Res2Net configs

* Add metafile&README and update model zoo

* Add unit tests

* Improve docstring.

* Improve according to comments.
2021-10-20 16:34:22 +08:00
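
A minimal backbone config sketch for the new Res2Net with the `deep_stem`/`avg_down` options mentioned above; the `scales` and `base_width` values are illustrative assumptions:

```python
# Sketch: Res2Net-50 (26w x 4s) backbone; deep_stem/avg_down come from the
# commit messages above, the remaining values are assumptions.
backbone = dict(
    type='Res2Net',
    depth=50,
    scales=4,
    base_width=26,
    deep_stem=False,
    avg_down=True)
```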
Ma Zerun 2932f9d8a3
[Refactor] Refactor ViT (Continue #295) (#395)
* [Squash] Refactor ViT (from #295)

* Use base variable to simplify auto_aug setting

* Use common PatchEmbed, remove HybridEmbed and refactor ViT init
structure.

* Add `output_cls_token` option and change the output format of ViT and
input format of ViT head.

* Update unit tests and add test for `output_cls_token`.

* Support out_indices.

* Standardize config files

* Support resize position embedding.

* Add readme file of vit

* Rename config file

* Improve docs about ViT.

* Update docstring

* Use local version `MultiheadAttention` instead of mmcv version.

* Fix MultiheadAttention

* Support `qk_scale` argument in `MultiheadAttention`

* Improve docs and change `layer_cfg` to `layer_cfgs` and support
sequence.

* Use init_cfg to init Linear layer in VisionTransformerHead

* update metafile

* Update checkpoints and configs

* Improve docstring.

* Update README

* Revert GAP modification.
2021-10-18 16:07:00 +08:00
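
A backbone config sketch showing the `output_cls_token` and `out_indices` options introduced in this refactor; the `arch` alias and other argument names are assumptions:

```python
# Sketch: VisionTransformer backbone using the new options. With
# output_cls_token=True the backbone returns (patch_tokens, cls_token) per
# selected stage, which the ViT head then consumes.
backbone = dict(
    type='VisionTransformer',
    arch='b',              # assumed alias for ViT-Base
    img_size=224,
    patch_size=16,
    out_indices=-1,        # only the last layer
    output_cls_token=True)
```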
Ma Zerun 2e6c7cf87d
[Docs] Add code-spell pre-commit hook and fix a large number of typos. (#470)
* Add code spell check hook

* Add codespell config

* Fix a lot of typos.

* Add formating.py to keep compatibility.
2021-10-13 14:33:07 +08:00
zhangrui_wolf 90496b4687
[Feature] Add RepVGG backbone and checkpoints. (#414)
* Add RepVGG code.

* Add se_module as plugin.

* Add the repvggA0 primitive config

* Change repvggA0.py to fit mmcls

* Add RepVGG configs

* Add repvgg_to_mmcls

* Add tools/deployment/convert_repvggblock_param_to_deploy.py

* Change configs/repvgg/README.md

* Reduce the number of configuration files.

* Fix lints

* Delete plugins

* Delete code about plugin.

* Modify the code for using se module.

* Modify config to fit repvgg with se.

* Change se_cfg to allow loading of pre-training parameters.

* Reduce the complexity of the configuration file.

* Finish unit tests for repvgg.

* Fix bug about se in repvgg_to_mmcls.

* Rename convert_repvggblock_param_to_deploy.py to reparameterize_repvgg.py, and delete setting about device.

* test commit

* test commit

* test commit command

* Modify repvgg.py to make the code more readable.

* Add value=0 in F.pad()

* Add se_cfg to arch_settings.

* Fix bug.

* Modify some attr names and update unit tests

* rename stage_0 to stem and branch_identity to branch_norm

* update unit tests

* add m.eval in unit tests

* [Enhance] Enhance SE layer to support custom squeeze channels. (#417)

* add enhanced SE

* Update

* rm basechannel

* fix docstring

* Update se_layer.py

fix docstring

* [Docs] Add algorithm readme and update meta yml (#418)

* Add README.md for models without checkpoints.

* Update model-index.yml

* Update metafile.yml of seresnet

* [Enhance] Add `hparams` argument in `AutoAugment` and `RandAugment` and some other improvement. (#398)

* Add hparams argument in `AutoAugment` and `RandAugment`.

And `pad_val` now supports a sequence instead of only a tuple.

* Add unit tests for `AutoAugment` and `hparams` in `RandAugment`.

* Use smaller test images to speed up unit tests.

* Use hparams to simplify RandAugment config in swin-transformer.

* Rename augment config name from `pipeline` to `pipelines`.

* Add some comments and docstrings.

* [Feature] Support classwise weight in losses (#388)

* Add classwise weight in losses: CE, BCE, softBCE

* Update unit test

* rm some extra code

* rm some extra code

* fix broadcast

* fix broadcast

* update unit tests

* use new_tensor

* fix lint

* [Enhance] Better result visualization (#419)

* Improve result visualization to support wait time and change the backend
to matplotlib.

* Add unit test for visualization

* Add adaptive dpi function

* Rename `imshow_cls_result` to `imshow_infos`.

* Support str in `imshow_infos`

* Improve docstring.

* Bump version to v0.15.0 (#426)

* [CI] Add PyTorch 1.9 and Python 3.9 build workflow, and remove some CI. (#422)

* Add PyTorch 1.9 build workflow, and remove some CI.

* Add Python 3.9 CI

* Show Python 3.9 support.

* [Enhance] Rename the option `--options` in some tools to `--cfg-options`. (#425)

* [Docs] Fix sphinx version (#429)

* [Docs] Add `CITATION.cff` (#428)

* Add CITATION.cff

* Fix typo in setup.py

* Change author in setup.py

* Modify some attr names and update unit tests

* rename stage_0 to stem and branch_identity to branch_norm

* update unit tests

* add m.eval in unit tests

* Update unit tests

* refactor

* refactor

* Align inference accuracy

* Update configs, readme and metafile

* Update readme

* return tuple and fix metafile

* fix unit test

* rm regnet and classifiers changes

* update auto_aug

* update metafile & readme

* use delattr

* rename cfgs

* Update checkpoint url

* Update readme

* Rename config files.

* Update readme and metafile

* add comment

* Update mmcls/models/backbones/repvgg.py

Co-authored-by: Ma Zerun <mzr1996@163.com>

* Update docstring

* Improve docstring.

* Update unittest_testblock

Co-authored-by: Ezra-Yu <1105212286@qq.com>
Co-authored-by: Ma Zerun <mzr1996@163.com>
2021-09-29 11:06:23 +08:00
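
A minimal sketch of the RepVGG backbone before and after reparameterization, assuming a `deploy` flag switches to the single-branch inference structure (the conversion itself is done by the `reparameterize_repvgg.py` tool mentioned above):

```python
# Sketch: training-time (multi-branch) vs. deploy-time (reparameterized)
# RepVGG backbone configs; the 'A0' arch name follows the commit messages,
# the `deploy` flag is an assumption to be checked against the merged code.
train_backbone = dict(type='RepVGG', arch='A0', deploy=False)
deploy_backbone = dict(type='RepVGG', arch='A0', deploy=True)
```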
Ma Zerun 75b087f27e
[Refactor] Fix TnT compatibility and verbose warning. (#436)
* Support return tuple in TnT

* Fix verbose warnings.
2021-09-22 19:37:24 +08:00
Ma Zerun a8f4f82b8e
[Enhance] Improve downstream repositories compatibility (#421)
* Defaults to return tuple in all backbones.

* Compat downstream of swin transformer.

* Support tuple input for multi label head and stacked head.

* Fix backbone unit tests for tuple output.

* Add downstream inference unit tests for mmdet.

* Update gitignore

* Add unit tests for `return_tuple` option

* Add unit tests for head input tuple.

* Add warning in `simple_test`

* Add TIMMBackbone return tuple.

* Modify timm backbone unit test.
2021-09-08 10:38:57 +08:00
Miras Amir 5cfaed6807
[Feature] timm backbones wrapper (#427)
* Add wrapper to use backbones from timm

* Add tests

* Remove timm from optional deps and modify GitHub workflow.

Co-authored-by: mzr1996 <mzr1996@163.com>
2021-09-06 11:05:31 +08:00
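
A config sketch of the timm wrapper, assuming it is registered as `TIMMBackbone` and forwards `model_name`/`pretrained` to `timm.create_model`:

```python
# Sketch: use a timm model as an mmcls backbone; extra kwargs are assumed to
# be forwarded to timm.create_model.
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='TIMMBackbone',
        model_name='resnet50',
        pretrained=True),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0)))
```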
Ma Zerun 5383787512
[Enhance] Better result visualization (#419)
* Improve result visualization to support wait time and change the backend
to matplotlib.

* Add unit test for visualization

* Add adaptive dpi function

* Rename `imshow_cls_result` to `imshow_infos`.

* Support str in `imshow_infos`

* Improve docstring.
2021-08-31 10:50:28 +08:00
Ezra-Yu 192b79eea0
[Feature] Support classwise weight in losses (#388)
* Add classwise weight in losses: CE, BCE, softBCE

* Update unit test

* rm some extra code

* rm some extra code

* fix broadcast

* fix broadcast

* update unit tests

* use new_tensor

* fix lint
2021-08-31 10:44:12 +08:00
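
A loss config sketch for the class-wise weight; the argument name `class_weight` is an assumption inferred from this change:

```python
# Sketch: per-class weights for a 3-class problem; the `class_weight`
# argument name is an assumption based on this commit.
loss = dict(
    type='CrossEntropyLoss',
    loss_weight=1.0,
    class_weight=[0.5, 1.0, 2.0])  # up-weight the rare class
```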
Ezra-Yu 0184527bd4
[Enhance] Enhance SE layer to support custom squeeze channels. (#417)
* add enhanced SE

* Update

* rm basechannel

* fix docstring

* Update se_layer.py

fix docstring
2021-08-20 13:31:44 +08:00
Ma Zerun f9eb9b409b
[Docs] Add Copyright information. (#413) 2021-08-17 19:52:42 +08:00
Ma Zerun f8f1700860
[Refactor] Use post_process function to handle pred result processing. (#390)
Use post_process function to handle pred result processing in `simple_test`.
2021-08-12 11:54:24 +08:00
whcao 359f56ad58
[Feature] Add transformer in transformer (#339)
* add tnt_small configs

* add tnt backbone

* test tnt

* add tnt to model_zoo

* rename the config file name

* add optimizer

* move tnt backbone unit test

* add metric

* fix keyname in arch

* encapsulate "inner transformer block" and "outer transformer block"

* fix TnT

* Use `inner_block_cfg` and `outer_block_cfg` instead of `args` and
`kwargs`.

Co-authored-by: mzr1996 <mzr1996@163.com>
2021-07-30 09:16:27 +08:00
Junjun2016 e3b77e7508
[Fix] Fix `patch_cfg` argument bug in SwinTransformer (#368) 2021-07-27 19:19:44 +08:00
Junjun2016 1585ca83c8
[Fix] Fix docstring typo and init error in ShuffleNetV1 (#374) 2021-07-27 19:17:42 +08:00
Ma Zerun 33a4c0fe9f
[Fix] Use local ATTENTION registery to avoid conflict with other repo. (#375) 2021-07-27 17:59:05 +08:00
Ma Zerun 899047a3b3
Fix duplicate `init_weights` call in ViT init function. (#373) 2021-07-26 05:33:11 -04:00
Ma Zerun 15cd34bbef
[Fix] Use zero as default value of `thrs` in metrics. (#341)
* Use zero as default value of `thrs` in metrics. And it accepts a number
instead of only a float now.

* Fix unit test comment

* Don't pass thrs if no thrs.
2021-07-18 16:57:21 +08:00
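
A usage sketch of the updated metric call, assuming the helper is exposed as `mmcls.core.precision_recall_f1` with a `thrs` argument defaulting to 0:

```python
# Sketch: thrs now defaults to 0 and accepts a plain number (or a tuple of
# numbers). The import path and signature are assumptions based on this commit.
import numpy as np
from mmcls.core import precision_recall_f1

pred = np.random.rand(16, 10)               # (num_samples, num_classes) scores
target = np.random.randint(0, 10, (16, ))   # ground-truth labels
p, r, f1 = precision_recall_f1(pred, target, average_mode='macro', thrs=0.5)
```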
Ma Zerun a97ccd5579
[Fix] Fix swin transformer config (#355)
* Fix config bug in swin

* Format config and checkpoint name of swin transformer.

* Fix link in model zoo
2021-07-14 15:10:20 +08:00
Ma Zerun d04ebc1eb5
[Docs] Add API Reference in the docs (#342)
* Add API inference in the docs and fix readthedocs config.

* Replace some relative link in docs.

* Format docstring for reStructuredText syntax.

* Fix vit paper link

* Fix docstring of `show_results` function in `BaseClassifier`.
2021-07-14 15:06:50 +08:00
Ma Zerun 1a7cebe4b9
[Refactor] Refactor unittest (#321)
* Refactor unit tests folder structure.

* Remove label smooth and Vit test in `test_classifiers.py`

* Rename test_utils in dataset to test_dataset_utils

* Split test_models/test_utils/test_utils.py to multiple sub files.

* Add unit tests of classifiers and heads

* Use patch context manager.

* Add unit test of `is_tracing`, and add warning in `is_tracing` if torch
verison is smaller than 1.6.0
2021-07-08 22:49:05 +08:00
Ma Zerun 71621a5f62
Add `is_tracing` helper function to fix a tracing bug in PyTorch 1.6 (#347) 2021-07-07 11:55:53 +08:00
Ma Zerun 076ee10cac
[Feature] Add swin-transformer model. (#271)
* Add swin transformer archs S, B and L.

* Add SwinTransformer configs

* Add train config files of swin.

* Align init method with original code

* Use nn.Unfold to merge patch

* Change all ConfigDict to dict

* Add init_cfg for all subclasses of BaseModule.

* Use mmcv version init function

* Add Swin README

* Use safer cfg copy method

* Improve docstring and variable name.

* Fix some differences in randaug

Fix BGR bug, align scheduler config.

Fix label smoothing parameter difference.

* Fix missing droppath in attn

* Fix bug of relative position table if window width is not equal to
height.

* Make `PatchMerging` more general, support kernel, stride, padding and
dilation.

* Rename `residual` to `identity` in attention and FFN.

* Add `auto_pad` option to auto pad feature map

* Improve docstring.

* Fix bug in ShiftWMSA padding.

* Remove unused `key` and `value` in ShiftWMSA

* Move `PatchMerging` into utils and use common `PatchEmbed`.

* Use latest `LinearClsHead`, train augments and label smooth settings.
And remove original `SwinLinearClsHead`.

* Mark some configs as "Evaluation Only".

* Remove useless comment in config

* 1. Move ShiftWindowMSA and WindowMSA to `utils/attention.py`
2. Add docstrings of each module.
3. Fix some variables' names.
4. Other small improvement.

* Add unit tests of swin-transformer and patchmerging.

* Fix some bugs in unit tests.

* Fix bug of rel_position_index if window is not square.

* Make WindowMSA implicit, and add unit tests.

* Add metafile.yml, update readme and model_zoo.
2021-07-01 09:30:42 +08:00
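
A minimal Swin backbone config sketch; the `arch` alias and the extra options are assumptions based on the commit messages above:

```python
# Sketch: Swin-Tiny backbone; arch alias and option names are assumptions.
backbone = dict(
    type='SwinTransformer',
    arch='tiny',
    img_size=224,
    drop_path_rate=0.2)
```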
Mingqiang Ning 4ebee155e8
fix a bug when samples_per_gpu==1 (#311) 2021-06-30 20:57:21 +08:00
whcao bee0ac6b56
[Refactor] Modify patchembed (#330)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* add patchembed and hybridembed

* add patchembed and hybridembed to __init__

* test patchembed and hybridembed

* fix some comments
2021-06-30 20:48:04 +08:00
Ezra-Yu 06bbd6940d
[Fix] Fix init_weights bug in some backbones and update Readme (#318)
* some init_weights bugs

* init_weights

* add import Basemodule

* code style

* isort

* Use recommend init_weights override method.

Add init_cfg parameter in InvertedResidual.

Remove some useless comment.

Co-authored-by: mzr1996 <mzr1996@163.com>
2021-06-30 19:41:58 +08:00
Ma Zerun 19cfb25e5e
Use parameter default value to control default behavior of init_cfg in `LinearClsHead` and `MultiLabelLinearClsHead` (#319)

And remove the verbose `_init_layers` method of `LinearClsHead` and
`MultiLabelLinearClsHead`.
2021-06-30 19:13:27 +08:00
Ma Zerun 65410b05ad
Fix Mobilenetv3 structure and add pretrained model (#291)
* Refactor Mobilenetv3 structure and add ConvClsHead.

* Change model's name from 'MobileNetv3' to 'MobileNetV3'

* Modify configs for MobileNetV3 on CIFAR10.

And add MobileNetV3 configs for imagenet

* Fix activate setting bugs in MobileNetV3.

And remove bias in SELayer.

* Modify unittest

* Remove useless config and file.

* Fix mobilenetv3-large arch setting

* Add dropout option in ConvClsHead

* Fix MobilenetV3 structure according to torchvision version.

1. Remove with_expand_conv option in InvertedResidual, it should be decided by channels.

2. Revert activation function, should before SE layer.

* Format code.

* Rename MobilenetV3 arch "big" to "large".

* Add mobilenetv3_small torchvision training recipe

* Modify default `out_indices` of MobilenetV3, now it will change
according to `arch` if not specified.

* Add MobilenetV3 large config.

* Add mobilenetv3 README

* Modify InvertedResidual unit test.

* Refactor ConvClsHead to StackedLinearClsHead, and add unit tests.

* Add unit test for `simple_test` of `StackedLinearClsHead`.

* Fix typo

Co-authored-by: Yidi Shao <ydshao@smail.nju.edu.cn>
2021-06-27 23:19:36 +08:00
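
A config sketch of MobileNetV3-Small with the new `StackedLinearClsHead`; channel numbers and argument names beyond the head type are assumptions:

```python
# Sketch: MobileNetV3-Small classifier with StackedLinearClsHead; the
# in_channels / mid_channels values are illustrative assumptions.
model = dict(
    type='ImageClassifier',
    backbone=dict(type='MobileNetV3', arch='small'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='StackedLinearClsHead',
        num_classes=1000,
        in_channels=576,
        mid_channels=[1024],
        dropout_rate=0.2,
        act_cfg=dict(type='HSwish'),
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        topk=(1, 5)))
```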
whcao 3a08db9182
[Feature] Add augments to models/utils (#278)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* add rand_bbox_minmax rand_bbox and cutmix_bbox_and_lam to BaseCutMixLayer

* add mixup_prob to BatchMixupLayer

* add cutmixup

* add cutmixup to __init__

* test classifier with cutmixup

* delete some comments

* set mixup_prob default to 1.0

* add cutmixup to classifier

* use cutmixup

* use cutmixup

* fix bugs

* test cutmixup

* move mixup and cutmix to augment

* inherit from BaseAugment

* add BaseAugment

* inherit from BaseAugment

* rename identity.py

* add @

* build augment

* register module

* rename to augment.py

* delete cutmixup.py

* do not inherit from BaseAugment

* add augments

* use augments in classifier

* prob default to 1.0

* add comments

* use augments

* use augments

* assert that the sum of augmentation probabilities equals 1

* augmentation probabilities equal to 1

* calculate Identity prob

* replace xxx with self.xxx

* add comments

* sync with augments

* for BC-breaking

* delete useless comments in mixup.py
2021-06-20 09:44:51 +08:00
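
After this change, mixup and cutmix are configured together through an `augments` list in the classifier's train config, with any leftover probability falling back to identity. A sketch, assuming the batch augment types are registered as `BatchMixup` and `BatchCutMix`:

```python
# Sketch: combined batch augments; probabilities should sum to at most 1,
# with the remainder going to identity (no augmentation).
model = dict(
    type='ImageClassifier',
    # backbone / neck / head omitted for brevity
    train_cfg=dict(augments=[
        dict(type='BatchMixup', alpha=0.2, num_classes=1000, prob=0.3),
        dict(type='BatchCutMix', alpha=1.0, num_classes=1000, prob=0.3),
    ]))
```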
whcao b9879a4667
[Bug] Fix LinearClsHead (#307)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* fix init_cfg bug
2021-06-16 00:37:16 +08:00
whcao 438c9da6eb
[Feature] Fix linear cls head (#303)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* move init_cfg to parameter

* isort

* Use a sentinel value to denote the default init_cfg
2021-06-15 21:08:30 +08:00
AllentDan a24a9f6faa
[Fix] Build compatible with low pytorch versions (#301)
* add version compatible for torchscript

* doc

* doc again

* fix lint

* fix lint isort
2021-06-14 23:25:35 +08:00
AllentDan c2f01e0dcd
[Feature] Add torchscript deployment (#279)
* add torchscript deploy

* fix lint

* add check and delete \
2021-06-12 21:50:48 +08:00
whcao 5e1a02103f
[Feature] Delete comments (#298)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete comments
2021-06-12 21:45:22 +08:00
Ma Zerun 84a939f858
Refactor LabelSmoothLoss (#285)
* Refactor label smooth loss; it now supports the modes `original`, `classy_vision`
and `multi_label`.

* Add unittests for label smooth loss.

* Improve docstring of LSR
2021-06-12 21:32:18 +08:00
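
A loss config sketch for the refactored label smooth loss and its three modes; the `label_smooth_val` argument name is an assumption:

```python
# Sketch: LabelSmoothLoss with one of the three supported modes
# ('original', 'classy_vision', 'multi_label').
loss = dict(
    type='LabelSmoothLoss',
    label_smooth_val=0.1,
    mode='original')
```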
Miao Zheng 4ca21c7d03
[WIP] Refactoring weights initialization (#270)
* [WIP] Refactoring weights initialization

* fix lint and constant init cfg

* fix pretrained bug

* fix typo

* fix isort

* revise model utils
2021-06-10 10:54:34 +08:00
Yinhao Li b67a11c548
Fix kwargss to kwargs. (#274) 2021-05-26 19:27:30 +08:00
Wenwei Zhang 5ee08767f2
inherits mmcv registry (#252) 2021-05-14 23:36:56 +08:00
mzr1996 8128900a12
GlobalAveragePooling: support 1d, 2d and 3d by param, and add neck test (#236)
* GlobalAveragePooling: support 1d, 2d and 3d by param, and add neck test

* Improve neck test

* Change 'mode' attribute in GAP to 'dim', and add docstring
2021-05-10 15:00:50 +08:00
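
A neck config sketch for the renamed `dim` option:

```python
# Sketch: choose the pooling dimensionality explicitly; dim=1/2/3 pools
# (N, C, L) / (N, C, H, W) / (N, C, D, H, W) features respectively.
neck = dict(type='GlobalAveragePooling', dim=2)
```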
whcao b30f79ea4c
[Feature] Modify parameter passing in models.heads (#239)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* set cal_acc in ClsHead defaults to False

* set cal_acc defaults to False

* use *args, **kwargs instead

* change bs16 to 3 in test_image_classifier_vit

* fix some comments

* change cal_acc=True

* test LinearClsHead
2021-05-10 14:56:55 +08:00
whcao 16947f1239
[Bug] Fix weight decay (#227)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unit test

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add AutoAugment

* fix weight decay in vit

* change eval interval to 10

* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete @torch.jit.ignore

* change eval interval back to 1

* add some comments to imagenet_bs2048_AdamW

* add some comments
2021-04-28 17:16:43 +08:00
whcao 31a6a362ba
Add some vit configs (#217)
* add vit_base_patch32_384_finetune.py

* add vit_base_patch32_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch16_384_finetune.py to models

* add vit_large_patch16_384_finetune_imagenet.py to vision_transformer

* add vit_large_patch32_384_finetune to models

* add vit_large_patch32_384_finetune_imagenet to vision_transformer

* add vit_large_patch16_224_finetune.py to models

* add vit_large_patch16_224_finetune_imagenet.py to vision_transformer

* delete some useless comments
2021-04-20 11:32:20 +08:00
whcao affb39fe07
[Feature] Add ViT (#214)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unit test

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add helpers.py

* test vit hybrid backbone

* fix HybridEmbed

* use to_2tuple instead
2021-04-16 19:22:41 +08:00
LXXXXR 7d618e6606
[Fix] Fix version (#209)
* fix version

* add projects in openmmlab

* minor fix

* empty

* add mmocr

* empty

* empty

* fix linting
2021-04-16 19:07:17 +08:00
whcao 1cde6f6e65
[Feature] Add cutmix option (#198)
* Add cutmix option

* fix code style

* add some annotations

* add annotation about custom_hooks

* check constraint of alpha > 0

* add test cutmix

* fix code style

* add cutmix to configs/models

* add cutmix to configs/resnet

* flake8

* empty
2021-04-14 21:27:42 +08:00
mzr1996 b7b520881f
Update CONTRIBUTING.md according to mmcv (#210)
* Update CONTRIBUTING.md according to mmcv

* Docstring formatting by docformatter

* Update openmmlab website.
2021-04-14 21:22:37 +08:00
whcao af83e981ac
[Bug] Fix label smooth bug (#203)
* add convert_to_one_hot

* add test_label_smooth_loss

* add my label_smooth_loss

* fix CELoss bug

* test new label smooth loss

* LabelSmoothLoss downward compatibility

* add some comments

* remove the old version of LabelSmoothLoss

* add some comments

* add some comments

* add some comments

* add label smooth to config
2021-04-13 13:53:56 +08:00
whcao dcf61173f6
[Feature] Add cal_acc to cls_head.py (#206)
* add cal_acc to cls_head.py

* test ClsHead with cal_acc

* 4 spaces indentation
2021-04-13 13:52:14 +08:00
LXXXXR e76c5a368d
[Feature] Support fp16 training (#178)
* change mmcls fp16 to mmcv hook

* support fp16

* clean unnecessary stuff
2021-03-17 15:53:55 +08:00
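
With this change, mixed-precision training is turned on through the standard mmcv fp16 hook from the config:

```python
# Sketch: enable fp16 training via the mmcv hook; a fixed loss scale of 512
# is a common choice, dynamic loss scaling is also possible.
fp16 = dict(loss_scale=512.)
```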
ftbabi bdd6b01ae7
[Feature] Add "mixup" from Bag of Tricks (#160)
* Add mixup option

* Modify the structure of mixup and add configs

* Clean configs

* Add test for mixup and SoftCrossEntropyLoss

* Add simple test for ImageClassifier

* Fix bug in test_losses.py

* Add assertion in CrossEntropyLoss
2021-02-25 14:06:58 +08:00
LXXXXR a225cb6bdb
fix img_metas bug (#152) 2021-01-26 11:24:08 +08:00
LXXXXR 07bb15e5fd
[Feature] Add heads and config for multilabel task (#145)
* resolve conflicts
add heads and config for multilabel tasks

* minor change

* remove evaluating mAP in head

* add baseline config

* add configs

* reserve only one config

* minor change

* fix minor bug

* minor change

* minor change

* add unittests and fix docstrings
2021-01-25 18:10:14 +08:00
LXXXXR 13c1210741
[Feature] Add thrs in eval_metrics (#146)
* support thr

* replace thrs with thr

* fix docstring

* minor change

* revise according to comments

* revised according to comments

* revise according to comments

* rewrite basedataset.evaluate to avoid duplicate calculation

* minor change

* change thr to thrs

* add more unit test
2021-01-25 17:54:22 +08:00
LXXXXR 8e990b5654
[Feature] Support the `support` metric and class-wise evaluation results (#143)
* support the `support` metric, support class-wise evaluation results and move eval_metrics.py

* Fix docstring

* change average to be non-optional

* revise according to comments

* add more unittest
2021-01-19 16:42:16 +08:00
LXXXXR b98b317012
fix bug in vgg weight_init (#140) 2021-01-15 10:23:11 +08:00
LXXXXR 63f38988eb
[Fix] Fix optional issues in docstring (#138)
* fix optional issue in docstring

* revised according to comments

* add optional
2021-01-14 11:09:08 +08:00
LXXXXR 6916f33d56
[Feature] Add asymmetric loss for multilabel task (#132)
* add asymmetric loss

* minor change

* fix docstring

* do not apply sum over classes and fix docstring

* fix docstring

* fix weight shape

* fix weight shape

* add reference

* fix linting issue

Co-authored-by: Y. Xiong <xiongyuxy@gmail.com>
2021-01-11 11:22:22 +08:00
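
A multi-label head sketch using the new asymmetric loss; the `gamma_pos`/`gamma_neg`/`clip` defaults follow the ASL paper and are assumptions here:

```python
# Sketch: asymmetric loss for a multi-label head; argument names and values
# are assumptions based on this commit and the referenced paper defaults.
head = dict(
    type='MultiLabelLinearClsHead',
    num_classes=20,
    in_channels=2048,
    loss=dict(
        type='AsymmetricLoss',
        gamma_pos=0.0,
        gamma_neg=4.0,
        clip=0.05,
        loss_weight=1.0))
```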
LXXXXR 194ab7efda
[Feature] Add bce loss for multilabel task (#130)
* add bce loss for multilabel task

* minor change

* apply class wise sum

* fix docstring

* do not apply sum over classes and fix docstring

* fix docstring

* fix weight shape

* fix weight shape

* fix docstring

* fix linting issue

Co-authored-by: Y. Xiong <xiongyuxy@gmail.com>
2021-01-11 11:05:24 +08:00
LXXXXR 9578bfa0f1
[Feature] Add focal loss for multilabel task (#131)
* add focal loss

* apply class wise sum

* fix docstring

* do not apply sum over classes and fix docstring

* fix docstring

* fix weight shape

* fix weight shape
2021-01-08 20:44:23 +08:00
LXXXXR 4203b94643
fix bug in eval_metrics (#122) 2020-12-23 16:20:47 +08:00
LXXXXR b1e91f256b
fix bug in recall and precision (#112) 2020-12-09 16:27:42 +08:00
Ülkü Tuncer Küçüktaş 6f7698cb7c
Update accuracy.py (#104)
Co-authored-by: Ülkü Tuncer Küçüktaş <UlkuTuncerKucuktas@users.noreply.github.com>
2020-12-01 14:25:18 +08:00
WRH 44bbc71e14
Fix bug and optimize MNIST config (#98)
* add simple_test to ClsHead

* optimize lenet training config

* recover path setting
2020-11-26 15:27:04 +08:00
LXXXXR 21fd5019fb
add macro-averaged precision, recall, f1 options in evaluation (#93)
* add macro-averaged precision, recall, f1 options in evaluation

* remove unnecessary comments

* Revise according to comments

* Revise according to comments
2020-11-25 16:13:54 +08:00
Lei Yang 24fd4fb627
Visualize results on image demo (#58)
* visualize results on image demo

* add matplotlib in requirements
2020-10-10 16:33:27 +08:00
Lei Yang 9547e7b7a5
Add model inference (#16)
* add model inference on single image

* rm --eval

* revise doc

* add inference tool and demo

* fix linting

* rename inference_image to inference_model

* infer pred_label and pred_score

* fix linting

* add docstr for inference

* add remove_keys

* add doc for inference

* dump results rather than outputs

* add class_names

* add related infer scripts

* add demo image and the first part of colab tutorial

* conduct evaluation in dataset

* return lst in simple_test

* compute topk accuracy with numpy

* return outputs in test api

* merge inference and evaluation tool

* fix typo

* rm gt_labels in test config

* get gt_labels during evaluation

* separate the ipython notebook to another PR

* return tensor for onnx_export

* detach var in simple_test

* rm inference script

* rm inference script

* construct data dict to replace LoadImage

* print first predicted result if args.out is None

* modify test_pipeline in inference

* refactor class_names of imagenet

* set class_to_idx as a property in base dataset

* output pred_class during inference

* remove unused docstr
2020-09-30 19:00:20 +08:00
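
A usage sketch of the single-image inference API added here, assuming it is exposed as `mmcls.apis.init_model` / `inference_model`; the paths are placeholders:

```python
# Sketch: single-image inference; config/checkpoint/image paths are placeholders.
from mmcls.apis import inference_model, init_model

model = init_model('configs/resnet/resnet50_b32x8_imagenet.py',
                   'checkpoints/resnet50.pth', device='cuda:0')
result = inference_model(model, 'demo/demo.JPEG')
# result is a dict with keys such as 'pred_label', 'pred_score', 'pred_class'
print(result)
```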
Lei Yang bc1b08ba41
Add VGG and pretrained models (#27)
* add vgg

* add vgg model conversion tool

* fix out_indices and docstr

* add vgg models in configs

* add params, flops and accuracy in docs

* add pretrained models url

* use ConvModule and refine var names

* update vgg conversion tool

* modify bn config

* add docs for arch_setting

* add unit test for vgg

* rm debug code

* update vgg pretrained models
2020-09-29 17:49:42 +08:00
Jerry Jiarui XU 8d3acce307
Add ResNeSt (#25)
* Add ResNeSt

* fixed test

* refactor

* add ResNeSt base

* update modelzoo

* update modelzoo

* Add S-200, S-269 _base_
2020-09-22 10:41:51 +08:00
Xiaojie Li 7af9419ffa
Fix init_weights in 'shufflenet_v2.py'. (#29)
* fix init_weights in shufflenetv2

* fix doc

* fix doc

Co-authored-by: lixiaojie <lixiaojie@sensetime.com>
2020-08-13 09:49:41 +08:00
Lei Yang 408d92bcbe
revise docstr of alexnet to correct input_size (#26) 2020-08-12 16:14:19 +08:00
yanglei 9a661ef981 Add ResNet_CIFAR 2020-07-12 00:06:56 +08:00
lixiaojie c1d0090700 Change the init_weight for shufflenet models 2020-07-12 00:06:56 +08:00
yanglei 45812e87bd add weighted_loss 2020-07-12 00:06:55 +08:00
lixiaojie 1d5f3d8c24 update shufflenet_v1 2020-07-12 00:06:55 +08:00
lixiaojie f6260b33bf Add RandomCrop, RandomResizedCrop, RandomGrayscale, impad 2020-07-12 00:06:55 +08:00
yanglei 8995e16834 Add AlexNet and LeNet5 2020-07-12 00:06:54 +08:00
yanglei fbc68bf266 Add classifiers, heads, necks and losses 2020-07-07 19:32:06 +08:00
yanglei 17092d8be4 Fix the mid_channels of SEResNeXt 2020-07-07 11:25:48 +08:00
lixiaojie eb24a94b68 add mobilenetv2 convert code 2020-07-05 21:59:22 +08:00
lixiaojie 046aa5c003 add shufflenet_v2 convert code 2020-07-03 19:21:18 +08:00
chenkai 91a43c5bac Add RegNet 2020-07-01 14:21:45 +08:00
louzan 03b75789c6 Dev mobilenetv3 2020-06-30 15:50:36 +08:00
chenkai 02e11cc1f3 Refactoring for ResNet family 2020-06-25 11:57:50 +08:00
lixiaojie 50208498dd Update mobilenetv2 2020-06-22 19:40:31 +08:00
yangmingmin 29e17ab326 Dev se resnext 2020-06-19 11:45:42 +08:00
yangmingmin f729a60f87 Dev se resnet 2020-06-17 14:20:20 +08:00
chenkai c168aa786e Merge branch 'fix/resnet_zero_init_residual' into 'master'
Set default zero_init_residual of ResNet to True

See merge request open-mmlab/mmclassification!22
2020-06-16 22:09:48 +08:00
louzan deee5d61d7 add mobilenetv2 2020-06-16 14:37:03 +08:00
yanglei 800ace3137 set default zero_init_residual to True 2020-06-16 11:36:54 +08:00
lixiaojie fcb1ad80e4 merge master 2020-06-16 00:49:55 +08:00
lixiaojie 66e9bb017d fix unit test 2020-06-16 00:03:45 +08:00
lixiaojie 8060e0f620 merge master 2020-06-15 20:54:52 +08:00
lixiaojie 1ce116a898 Merge branch 'master' of gitlab.sz.sensetime.com:open-mmlab/mmclassification into dev_shufflenetv1 2020-06-15 20:52:27 +08:00
lixiaojie edceab13e3 modify self.layers 2020-06-15 20:52:17 +08:00
qianchen 5662daeea8 Merge branch 'dev_shufflenetv1' into 'master'
add shufflenetv1

See merge request open-mmlab/mmclassification!10
2020-06-15 20:49:44 +08:00
lixiaojie 8249ad7205 merge master 2020-06-15 20:46:30 +08:00