zhuyipin
28dc67e3e2
convert NPU roll op into Paddle roll (#3139)
2024-05-15 17:11:28 +08:00
zhangyubo0722
25b65bb796
del load pretrained from url for resnet (#2998)
* del load pretrained from url for resnet
* del load_dygraph_pretrain_from_url
* modify save_load
* modify save_load
2023-10-30 13:45:53 +08:00
cuicheng01
aa5d103139
[cherry-pick] update PP-HGNetV2 (#2994)
* add hgnetv2 (#2987)
* support load ssld stage1 pretrain (#2988)
* update PP-HGNetV2
2023-10-07 16:44:38 +08:00
cuicheng01
43e6382aa3
add hgnetv2 (#2987) (#2989)
2023-09-26 23:59:38 +08:00
zhangyubo0722
a119eb4191
fix hgnet kwargs (#2950)
2023-09-01 23:45:04 +08:00
zhangyubo0722
82317a21e3
del head_init_scale (#2948)
2023-09-01 20:15:35 +08:00
Tingquan Gao
28ef1c00d8
[cherry-pick] some fixes (#2920)
* fix model name
* fix: bug when using distillation
* modify some default hyperparameters to adapt to fine-tuning on downstream tasks
1. unset EMA because most downstream datasets are relatively small;
2. use the ImageNet (IMN) mean and std.
2023-08-25 22:02:57 +08:00
gaotingquan
cf5d629a64
fix
2023-06-06 11:19:01 +08:00
gaotingquan
4643fdee09
update pretrained url
2023-06-06 11:19:01 +08:00
gaotingquan
2823e48be5
fix head_init_scale
2023-05-26 15:40:48 +08:00
gaotingquan
6d924f85ee
fix for clip
1. set bias_attr to False for the conv of PatchEmbed;
2. support return_tokens_mean for the Head of CLIP;
3. support remove_cls_token_in_forward for CLIP;
4. support the head_init_scale argument for the ViT backbone;
5. support get_num_layers() and no_weight_decay() for the ViT backbone.
2023-05-26 15:40:48 +08:00
gaotingquan
bb831c3baa
code style
2023-05-17 15:19:13 +08:00
gaotingquan
07b9162bc0
fix pretrained url
2023-05-17 15:19:13 +08:00
gaotingquan
a1fa19cd29
rename: v3 -> V3
2023-05-17 15:19:13 +08:00
gaotingquan
2091a59ff5
fix reference url
2023-05-17 15:19:13 +08:00
gaotingquan
fc9c59c4b1
update pretrained url
2023-05-17 15:19:13 +08:00
Yang Nie
c351dac67e
add tinynet
2023-05-06 19:04:37 +08:00
zh-hike
d7bd275379
update foundation_vit from EVA_vit_huge to EVA_vit_giant
2023-04-23 10:16:08 +08:00
Yang Nie
cdd3c3a05c
clear type hint
2023-04-19 21:21:06 +08:00
Yang Nie
692204eee6
fix code style
2023-04-19 21:21:06 +08:00
Yang Nie
deb8e98779
rename v2 to V2
2023-04-19 21:21:06 +08:00
Yang Nie
be6a22be18
add MobileViTv2
2023-04-19 21:21:06 +08:00
Yang Nie
b8a1589377
update data augmentation and init method for MobileViTv3-v2
2023-04-06 15:33:30 +08:00
Yang Nie
de4129baa6
update
2023-04-06 15:33:30 +08:00
Yang Nie
dc4fdba0ab
add MobileViTv3
2023-04-06 15:33:30 +08:00
Yang Nie
beca8b2c1b
add mobilenext
* add cooldown config
* update optimizer
* fix ParamAttr & update test_tipc
* fix tipc
* update tipc config
* remove docs of `_make_divisible`
* refactor the implementation of "no weight decay"
* fix model name
* remove cooldown config
2023-04-05 00:41:19 +08:00
Yang Nie
e0daf82dc0
rename micronet_m(\d) to MicroNet_M(\d)
2023-04-04 20:37:22 +08:00
Yang Nie
8a578a083e
remove the comma at the end
2023-04-04 20:37:22 +08:00
Yang Nie
4962f71289
remove `ChannelShuffle2`
2023-04-04 20:37:22 +08:00
Yang Nie
a881c7a7fa
remove useless comments
2023-04-04 20:37:22 +08:00
Yang Nie
d76defdefc
fix import bug
2023-04-04 20:37:22 +08:00
Yang Nie
e262a5f64d
add micronet
2023-04-04 20:37:22 +08:00
gaotingquan
6fdaf94a0d
fix concat error when fp16
2023-04-04 19:49:00 +08:00
Yang Nie
1433161edd
fix typo
2023-04-04 18:44:44 +08:00
Yang Nie
a2052232e6
add support for `CvT_21_224`, `CvT_13_384`, `CvT_21_384` and `CvT_W24_384`
2023-04-04 18:44:44 +08:00
Yang Nie
4cfd2159e5
rename cvt_{depth}_{size}x{size} to CvT_{depth}_{size}
2023-04-04 18:44:44 +08:00
Yang Nie
d7a1127559
add CvT
2023-04-04 18:44:44 +08:00
gaotingquan
5c39dfa6ba
rename gvt.py -> twins.py & twins-svt -> twins-alt-gvt
2023-03-30 17:29:49 +08:00
gaotingquan
0b3b621a81
fix concat error when fp16
2023-03-21 14:23:09 +08:00
gaotingquan
4e988692dd
fix concat error when fp16
2023-03-21 14:23:09 +08:00
Tingquan Gao
f91811dab9
Revert "use decorator to parse batch"
This reverts commit 97935164fe.
2023-03-14 16:47:13 +08:00
Tingquan Gao
aa52682c55
Revert "rm amp code from train and eval & use decorator for amp training"
This reverts commit d3941dc1e9.
2023-03-14 16:47:13 +08:00
Tingquan Gao
0a38723196
Revert "add amp decorator and parse_batch decorator"
This reverts commit 4008588343.
2023-03-14 16:47:13 +08:00
Tingquan Gao
7865207096
Revert "revert for running"
This reverts commit 392b75b1ac.
2023-03-14 16:47:13 +08:00
gaotingquan
392b75b1ac
revert for running
2023-03-10 16:56:55 +08:00
gaotingquan
4008588343
add amp decorator and parse_batch decorator
2023-03-10 16:56:55 +08:00
gaotingquan
d3941dc1e9
rm amp code from train and eval & use decorator for amp training
2023-03-10 16:56:55 +08:00
gaotingquan
97935164fe
use decorator to parse batch
2023-03-10 16:56:55 +08:00
tianyi1997
e0847f1800
Update pretrained backbone
2023-02-28 15:01:21 +08:00
tianyi1997
7c3bb2754b
Update files according to reviews
https://github.com/PaddlePaddle/PaddleClas/pull/2633
2023-02-28 15:01:21 +08:00