Commit Graph

27 Commits (a865d64208a1c022d29c7ba3860a530e31c43d7b)

Ma Zerun 643fb192cd
[Enhance] Enhance feature extraction function. (#593)
* Fix MobileNet V3 configs

* Refactor to support more powerful feature extraction.

* Add unit tests

* Fix unit test

* Improve according to comments

* Update checkpoints path

* Fix unit tests

* Add docstring of `simple_test`

* Add docstring of `extract_feat`

* Update model zoo
2021-12-17 15:55:02 +08:00
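The feature-extraction refactor in the entry above exposes intermediate backbone stages. A minimal concept sketch in plain PyTorch; the `out_indices` name and the tuple-of-stages return value mirror the mmcls convention but are assumptions here, not the repository's exact code:

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Toy backbone that returns only the stages selected by `out_indices`."""

    def __init__(self, out_indices=(0, 1, 2)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Conv2d(3, 8, 3, stride=2, padding=1),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
        ])
        self.out_indices = out_indices

    def forward(self, x):
        outs = []
        for i, stage in enumerate(self.stages):
            x = stage(x)
            if i in self.out_indices:
                outs.append(x)
        return tuple(outs)  # always a tuple, even when a single stage is selected

feats = TinyBackbone(out_indices=(1, 2))(torch.rand(1, 3, 224, 224))
print([f.shape for f in feats])
```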
Ma Zerun f9a2b04cee
[Feature] Add DeiT backbone and checkpoints. (#576)
* Support DeiT backbone.

* Use hook to automatically resize pos embed

* Update ViT training setting

* Add deit configs and update docs

* Fix vit arch assertion

* Remove useless init function

* Add unit tests.

* Fix resize_pos_embed for DeiT

* Improve according to comments.
2021-12-15 22:44:57 +08:00
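The automatic position-embedding resize mentioned in the entry above amounts to interpolating the patch-token embeddings to the new grid size. A minimal sketch, assuming bicubic interpolation and a single leading class token (DeiT additionally carries a distillation token); the function name and tensor layout are assumptions:

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, src_hw, dst_hw):
    """pos_embed: (1, 1 + src_h*src_w, C) with one leading class token."""
    cls_token, patch_tokens = pos_embed[:, :1], pos_embed[:, 1:]
    src_h, src_w = src_hw
    dst_h, dst_w = dst_hw
    c = patch_tokens.shape[-1]
    patch_tokens = patch_tokens.reshape(1, src_h, src_w, c).permute(0, 3, 1, 2)
    patch_tokens = F.interpolate(
        patch_tokens, size=(dst_h, dst_w), mode='bicubic', align_corners=False)
    patch_tokens = patch_tokens.permute(0, 2, 3, 1).reshape(1, dst_h * dst_w, c)
    return torch.cat([cls_token, patch_tokens], dim=1)

new = resize_pos_embed(torch.rand(1, 1 + 14 * 14, 768), (14, 14), (16, 16))
print(new.shape)  # torch.Size([1, 257, 768])
```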
Zhiliang Peng 18f6bb0b10
[Feature] Implement the conformer backbone. (#494)
* implement the conformer

* format code style

* format code style

* reuse the TransformerEncoderLayer in vision_transformer.py

* Modify variable name

* delete unused params

* Remove warning info in Conformer head since it already exists in
Conformer.

* Rename some variables

* Add unit tests

* Use `getattr` instead of `get_submodule`.

* Remove some useless layers

* Refactor conformer and add configs

* Update configs and add metafile.

* Fix unit tests

* Update README

Co-authored-by: mzr1996 <mzr1996@163.com>
2021-12-07 14:00:17 +08:00
Ezra-Yu 34d5a25281
[Refactor] Support passing arguments to loss from head. (#523) 2021-11-10 17:12:34 +08:00
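One way to read "passing arguments to loss from head" is that the head forwards extra keyword arguments from its `loss` call to the loss module. A hedged sketch of that pattern, not the repository's exact implementation:

```python
import torch
import torch.nn.functional as F

class ClsHeadSketch:
    """Hedged sketch: the head forwards extra keyword args to its loss."""

    def __init__(self, loss_fn=F.cross_entropy):
        self.compute_loss = loss_fn

    def loss(self, cls_score, gt_label, **kwargs):
        # kwargs (e.g. per-class `weight`) go straight to the loss function
        return {'loss': self.compute_loss(cls_score, gt_label, **kwargs)}

head = ClsHeadSketch()
scores, labels = torch.randn(4, 10), torch.randint(0, 10, (4,))
print(head.loss(scores, labels, weight=torch.ones(10)))
```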
Ma Zerun 2932f9d8a3
[Refactor] Refactor ViT (Continue #295) (#395)
* [Squash] Refactor ViT (from #295)

* Use base variable to simplify auto_aug setting

* Use common PatchEmbed, remove HybridEmbed and refactor ViT init
structure.

* Add `output_cls_token` option and change the output format of ViT and
input format of ViT head.

* Update unit tests and add test for `output_cls_token`.

* Support out_indices.

* Standardize config files

* Support resize position embedding.

* Add readme file of vit

* Rename config file

* Improve docs about ViT.

* Update docstring

* Use local version `MultiheadAttention` instead of mmcv version.

* Fix MultiheadAttention

* Support `qk_scale` argument in `MultiheadAttention`

* Improve docs and change `layer_cfg` to `layer_cfgs` and support
sequence.

* Use init_cfg to init Linear layer in VisionTransformerHead

* update metafile

* Update checkpoints and configs

* Improve docstring.

* Update README

* Revert GAP modification.
2021-10-18 16:07:00 +08:00
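The `output_cls_token` option in the entry above changes what the ViT backbone hands to the head. A small illustration of the two output formats; the shapes and the tuple layout are assumptions:

```python
import torch

B, C = 2, 768
patch_token = torch.rand(B, C, 14, 14)   # patch tokens reshaped into a feature map
cls_token = torch.rand(B, C)             # class token

# output_cls_token=True: the backbone yields a (patch_token, cls_token) pair.
# output_cls_token=False: it yields only patch_token.
out = (patch_token, cls_token)
print(out[0].shape, out[1].shape)
```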
Ma Zerun 2e6c7cf87d
[Docs] Add code-spell pre-commit hook and fix a large amount of typos. (#470)
* Add code spell check hook

* Add codespell config

* Fix a lot of typos.

* Add formating.py to keep compatibility.
2021-10-13 14:33:07 +08:00
Ma Zerun a8f4f82b8e
[Enhance] Improve downstream repositories compatibility (#421)
* Defaults to return tuple in all backbones.

* Compat downstream of swin transformer.

* Support tuple input for multi label head and stacked head.

* Fix backbone unit tests for tuple output.

* Add downstream inference unit tests for mmdet.

* Update gitignore

* Add unit tests for `return_tuple` option

* Add unit tests for head input tuple.

* Add warning in `simple_test`

* Add TIMMBackbone return tuple.

* Modify timm backbone unit test.
2021-09-08 10:38:57 +08:00
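With every backbone returning a tuple, as described in the entry above, head-side code can normalize its input and operate on the last stage. A small concept sketch of that assumed convention:

```python
import torch

def pre_logits(x):
    """Sketch: accept either a plain tensor or a tuple of stage outputs
    and operate on the last stage (an assumed convention)."""
    if isinstance(x, tuple):
        x = x[-1]
    return x

print(pre_logits(torch.rand(2, 2048)).shape)
print(pre_logits((torch.rand(2, 1024), torch.rand(2, 2048))).shape)
```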
Ma Zerun f9eb9b409b
[Docs] Add Copyright information. (#413) 2021-08-17 19:52:42 +08:00
Ma Zerun f8f1700860
[Refactor] Use post_process function to handle pred result processing. (#390)
Use post_process function to handle pred result processing in `simple_test`.
2021-08-12 11:54:24 +08:00
Ma Zerun 71621a5f62
Add `is_tracing` helper function to fix a tracing bug in PyTorch 1.6 (#347) 2021-07-07 11:55:53 +08:00
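A helper like the `is_tracing` above is a best-effort check for TorchScript tracing with a fallback for older PyTorch; a hedged sketch, not the repository's exact code:

```python
import torch

def is_tracing() -> bool:
    """Return True while the model is being traced by torch.jit.trace.

    Falls back to the private torch._C._is_tracing() when the public
    torch.jit.is_tracing() is unavailable or unreliable (older releases).
    """
    if hasattr(torch.jit, 'is_tracing'):
        return torch.jit.is_tracing()
    if hasattr(torch._C, '_is_tracing'):
        return torch._C._is_tracing()
    return False

print(is_tracing())  # False outside of tracing
```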
Ma Zerun 19cfb25e5e
Use parameter default value to control default behavior of init_cfg in
`LinearClsHead` and `MultiLabelLinearClsHead` (#319)

And remove the verbose `_init_layers` method of `LinearClsHead` and
`MultiLabelLinearClsHead`.
2021-06-30 19:13:27 +08:00
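Controlling the default `init_cfg` through the parameter default, as the entry above describes, can look roughly like this; the class name and the init_cfg values are assumptions:

```python
class LinearClsHeadSketch:
    """Hedged sketch: the default init_cfg lives in the signature, so omitting
    it keeps the default while passing init_cfg=None disables it explicitly."""

    def __init__(self,
                 num_classes,
                 in_channels,
                 init_cfg=dict(type='Normal', layer='Linear', std=0.01)):
        self.fc_shape = (in_channels, num_classes)
        self.init_cfg = init_cfg

print(LinearClsHeadSketch(1000, 2048).init_cfg)             # default kept
print(LinearClsHeadSketch(1000, 2048, init_cfg=None).init_cfg)  # explicitly disabled
```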
Ma Zerun 65410b05ad
Fix Mobilenetv3 structure and add pretrained model (#291)
* Refactor Mobilenetv3 structure and add ConvClsHead.

* Change model's name from 'MobileNetv3' to 'MobileNetV3'

* Modify configs for MobileNetV3 on CIFAR10.

And add MobileNetV3 configs for ImageNet

* Fix activation setting bugs in MobileNetV3.

And remove bias in SELayer.

* Modify unittest

* Remove useless config and file.

* Fix mobilenetv3-large arch setting

* Add dropout option in ConvClsHead

* Fix MobilenetV3 structure according to torchvision version.

1. Remove with_expand_conv option in InvertedResidual, it should be decided by channels.

2. Revert activation function; it should come before the SE layer.

* Format code.

* Rename MobilenetV3 arch "big" to "large".

* Add mobilenetv3_small torchvision training recipe

* Modify default `out_indices` of MobilenetV3; now it changes according to `arch` if not specified.

* Add MobilenetV3 large config.

* Add mobilenetv3 README

* Modify InvertedResidual unit test.

* Refactor ConvClsHead to StackedLinearClsHead, and add unit tests.

* Add unit test for `simple_test` of `StackedLinearClsHead`.

* Fix typo

Co-authored-by: Yidi Shao <ydshao@smail.nju.edu.cn>
2021-06-27 23:19:36 +08:00
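The arch-dependent `out_indices` default mentioned in the entry above can be expressed as a small lookup; a hedged sketch in which the index values are illustrative, not the real defaults:

```python
# Pick a per-arch default only when the caller did not specify out_indices.
ARCH_DEFAULT_OUT_INDICES = {'small': (12,), 'large': (16,)}  # illustrative values

def resolve_out_indices(arch, out_indices=None):
    if out_indices is None:
        return ARCH_DEFAULT_OUT_INDICES[arch]
    return tuple(out_indices)

print(resolve_out_indices('small'))
print(resolve_out_indices('large', out_indices=(3, 8, 16)))
```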
whcao b9879a4667
[Bug] Fix LinearClsHead (#307)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* fix init_cfg bug
2021-06-16 00:37:16 +08:00
whcao 438c9da6eb
[Feature] Fix linear cls head (#303)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* move init_cfg to parameter

* isort

* Use a sentinel value to denote the default init_cfg
2021-06-15 21:08:30 +08:00
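The sentinel value mentioned in the last bullet above distinguishes "argument omitted" from an explicit `None`; a minimal sketch of the pattern (names and default values are assumptions):

```python
_DEFAULT = object()  # sentinel: "caller did not pass init_cfg at all"

def resolve_init_cfg(init_cfg=_DEFAULT):
    if init_cfg is _DEFAULT:
        return dict(type='Normal', layer='Linear', std=0.01)
    return init_cfg  # an explicit None (or custom cfg) is respected

print(resolve_init_cfg())      # falls back to the default config
print(resolve_init_cfg(None))  # prints None
```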
AllentDan a24a9f6faa
[Fix] Build compatible with low pytorch versions (#301)
* add version compatible for torchscript

* doc

* doc again

* fix lint

* fix lint isort
2021-06-14 23:25:35 +08:00
AllentDan c2f01e0dcd
[Feature] Add torchscript deployment (#279)
* add torchscript deploy

* fix lint

* add check and delete \
2021-06-12 21:50:48 +08:00
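TorchScript deployment, as introduced in the entry above, typically traces an eval-mode model with an example input and saves the result; a generic hedged sketch using a toy model rather than an mmcls classifier:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)
).eval()

traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
traced.save('model.pt')  # deployable TorchScript archive
```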
whcao 5e1a02103f
[Feature] Delete comments (#298)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* delete comments
2021-06-12 21:45:22 +08:00
Miao Zheng 4ca21c7d03
[WIP] Refactoring weights initialization (#270)
* [WIP] Refactoring weights initialization

* fix lint and constant init cfg

* fix pretrained bug

* fix typo

* fix isort

* revise model utils
2021-06-10 10:54:34 +08:00
Yinhao Li b67a11c548
Fix kwargss to kwargs. (#274) 2021-05-26 19:27:30 +08:00
whcao b30f79ea4c
[Feature] Modify parameter passing in models.heads (#239)
* add mytrain.py for test

* test before layers

* test attr in layers

* test classifier

* delete mytrain.py

* set cal_acc in ClsHead defaults to False

* set cal_acc defaults to False

* use *args, **kwargs instead

* change bs16 to 3 in test_image_classifier_vit

* fix some comments

* change cal_acc=True

* test LinearClsHead
2021-05-10 14:56:55 +08:00
whcao affb39fe07
[Feature] Add ViT (#214)
* add imagenet bs 4096

* add vit_base_patch16_224_finetune

* add vit_base_patch16_224_pretrain

* add vit_base_patch16_384_finetune

* add vit_base_patch16_384_finetune

* add vit_b_p16_224_finetune_imagenet

* add vit_b_p16_224_pretrain_imagenet

* add vit_b_p16_384_finetune_imagenet

* add vit

* add vit

* add vit head

* vit unit test

* keep up with ClsHead

* test vit

* add flag to determine whether to calculate acc during training

* Changes related to mmcv 1.3.0

* change checkpoint saving interval to 10

* add label smooth

* default_runtime.py recovery

* docformatter

* docformatter

* delete 2 lines of comments

* delete configs/_base_/schedules/imagenet_bs4096.py

* add configs/_base_/schedules/imagenet_bs2048_AdamW.py

* rename imagenet_bs4096.py to imagenet_bs2048_AdamW.py

* add helpers.py

* test vit hybrid backbone

* fix HybridEmbed

* use to_2tuple instead
2021-04-16 19:22:41 +08:00
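`to_2tuple`, mentioned in the last bullet of the entry above, is a tiny helper for accepting either a single size or an (h, w) pair; a typical hedged implementation mirroring the common timm-style helper:

```python
from collections.abc import Iterable

def to_2tuple(x):
    """Return x as a pair: pass iterables through, duplicate scalars."""
    if isinstance(x, Iterable) and not isinstance(x, str):
        return tuple(x)
    return (x, x)

print(to_2tuple(224))         # (224, 224)
print(to_2tuple((224, 192)))  # (224, 192)
```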
mzr1996 b7b520881f
Update CONTRIBUTING.md according to mmcv (#210)
* Update CONTRIBUTING.md according to mmcv

* Docstring formatting by docformatter

* Update openmmlab website.
2021-04-14 21:22:37 +08:00
whcao dcf61173f6
[Feature] Add cal_acc to cls_head.py (#206)
* add cal_acc to cls_head.py

* test ClsHead with cal_acc

* 4 spaces indentation
2021-04-13 13:52:14 +08:00
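The `cal_acc` flag from the entry above gates accuracy computation during training; a minimal sketch of the assumed semantics:

```python
import torch
import torch.nn.functional as F

def head_loss(cls_score, gt_label, cal_acc=False):
    """Sketch: compute the loss, and attach top-1 accuracy only if cal_acc=True."""
    losses = {'loss': F.cross_entropy(cls_score, gt_label)}
    if cal_acc:
        pred = cls_score.argmax(dim=1)
        losses['accuracy'] = (pred == gt_label).float().mean()
    return losses

scores, labels = torch.randn(8, 5), torch.randint(0, 5, (8,))
print(head_loss(scores, labels, cal_acc=True))
```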
LXXXXR 07bb15e5fd
[Feature] Add heads and config for multilabel task (#145)
* resolve conflicts
add heads and config for multilabel tasks

* minor change

* remove evaluating mAP in head

* add baseline config

* add configs

* reserve only one config

* minor change

* fix minor bug

* minor change

* minor change

* add unittests and fix docstrings
2021-01-25 18:10:14 +08:00
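Multi-label heads like those added in the entry above typically replace softmax plus cross-entropy with per-class sigmoid plus binary cross-entropy; a minimal sketch of that standard pattern:

```python
import torch
import torch.nn.functional as F

def multilabel_loss(cls_score, gt_label):
    """gt_label is a multi-hot tensor of shape (N, num_classes)."""
    return F.binary_cross_entropy_with_logits(cls_score, gt_label.float())

scores = torch.randn(4, 20)
targets = torch.randint(0, 2, (4, 20))
print(multilabel_loss(scores, targets))
```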
WRH 44bbc71e14
Fix bug and optimize MNIST config (#98)
* add simple_test to ClsHead

* optimize lenet training config

* recover path setting
2020-11-26 15:27:04 +08:00
Lei Yang 9547e7b7a5
Add model inference (#16)
* add model inference on single image

* rm --eval

* revise doc

* add inference tool and demo

* fix linting

* rename inference_image to inference_model

* infer pred_label and pred_score

* fix linting

* add docstr for inference

* add remove_keys

* add doc for inference

* dump results rather than outputs

* add class_names

* add related infer scripts

* add demo image and the first part of colab tutorial

* conduct evaluation in dataset

* return lst in simple_test

* compute top-k accuracy with numpy

* return outputs in test api

* merge inference and evaluation tool

* fix typo

* rm gt_labels in test config

* get gt_labels during evaluation

* separate the ipython notebook into another PR

* return tensor for onnx_export

* detach var in simple_test

* rm inference script

* rm inference script

* construct data dict to replace LoadImage

* print first predicted result if args.out is None

* modify test_pipeline in inference

* refactor class_names of imagenet

* set class_to_idx as a property in base dataset

* output pred_class during inference

* remove unused docstr
2020-09-30 19:00:20 +08:00
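The single-image inference flow introduced in the entry above is exposed through `mmcls.apis`; a hedged usage sketch in which the config and checkpoint paths are placeholders and the result keys follow the bullets above (`pred_label`, `pred_score`, `pred_class`):

```python
from mmcls.apis import inference_model, init_model

# Placeholder paths: point these at a real mmcls config and checkpoint.
model = init_model('path/to/config.py', 'path/to/checkpoint.pth', device='cpu')
result = inference_model(model, 'demo/demo.JPEG')
print(result)  # e.g. {'pred_label': ..., 'pred_score': ..., 'pred_class': ...}
```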
yanglei fbc68bf266 Add classifiers, heads, necks and losses 2020-07-07 19:32:06 +08:00