gaotingquan
0f86c55576
add amp args, use_amp=False
2023-05-29 19:52:09 +08:00
gaotingquan
2d8346cd3b
fix _init_amp when exporting
2023-05-29 19:52:09 +08:00
gaotingquan
f67cfe2c2a
fix ema: set_value() -> paddle.assign()
2023-05-26 15:40:48 +08:00
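The EMA fix above replaces Tensor.set_value() with paddle.assign(). A minimal sketch of the idea, assuming an EMA helper shaped like the following (the class and attribute names are illustrative, not the repository's actual implementation):

    import paddle

    class ExponentialMovingAverage:
        """Keep a shadow copy of model parameters updated as an EMA."""

        def __init__(self, model, decay=0.9999):
            self.decay = decay
            # detached copies of the current parameters act as the shadow weights
            self.shadow = {n: p.detach().clone() for n, p in model.named_parameters()}

        @paddle.no_grad()
        def update(self, model):
            for n, p in model.named_parameters():
                new_val = self.decay * self.shadow[n] + (1.0 - self.decay) * p
                # paddle.assign writes in place into the shadow tensor,
                # doing the job of the earlier set_value() call
                paddle.assign(new_val, output=self.shadow[n])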
gaotingquan
2823e48be5
fix head_init_scale
2023-05-26 15:40:48 +08:00
gaotingquan
042d1e7ef8
fix layer key name for dynamic lr in the AdamWDL optimizer
2023-05-26 15:40:48 +08:00
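The AdamWDL fix above concerns how parameter names are mapped to layer indices for layer-wise (dynamic) lr decay. A hedged sketch of such a mapping, assuming typical ViT parameter-name prefixes (the patterns below are assumptions, not the repo's actual key names):

    def get_layer_id(param_name, num_layers):
        """Map a ViT parameter name to a layer index for layer-wise lr decay."""
        if param_name.startswith(("cls_token", "pos_embed", "patch_embed")):
            return 0
        if param_name.startswith("blocks."):
            # e.g. "blocks.7.attn.qkv.weight" -> layer 8
            return int(param_name.split(".")[1]) + 1
        return num_layers  # head and anything after the backbone

    def layerwise_lr_ratio(param_name, num_layers, decay_rate=0.65):
        # deeper layers keep an lr close to the base lr, earlier layers get less
        return decay_rate ** (num_layers - get_layer_id(param_name, num_layers))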
gaotingquan
80ae9079cd
add clip finetune config
2023-05-26 15:40:48 +08:00
gaotingquan
6d924f85ee
fixes for CLIP
...
1. set bias_attr to False for the conv in PatchEmbed;
2. support return_tokens_mean for the CLIP Head;
3. support remove_cls_token_in_forward for CLIP;
4. support the head_init_scale argument for the ViT backbone;
5. support get_num_layers() and no_weight_decay() for the ViT backbone (see the sketch below).
2023-05-26 15:40:48 +08:00
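Items 4 and 5 above extend the ViT backbone interface. A hedged sketch of what those hooks typically look like (the class body below is illustrative and omits the actual transformer blocks):

    import paddle
    import paddle.nn as nn

    class VisionTransformer(nn.Layer):
        def __init__(self, depth=12, embed_dim=768, class_num=1000, head_init_scale=1.0):
            super().__init__()
            self.depth = depth
            self.head = nn.Linear(embed_dim, class_num)
            # head_init_scale shrinks the freshly initialized classifier weights,
            # a common trick when fine-tuning CLIP-pretrained backbones
            with paddle.no_grad():
                paddle.assign(self.head.weight * head_init_scale, output=self.head.weight)
                paddle.assign(self.head.bias * head_init_scale, output=self.head.bias)

        def get_num_layers(self):
            # number of transformer blocks, used by layer-wise lr decay
            return self.depth

        def no_weight_decay(self):
            # parameter names to exclude from weight decay
            return {"pos_embed", "cls_token"}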
gaotingquan
74e6c8aa33
add fp32 and ampo2 (AMP O2) ultra configs
2023-05-25 16:57:16 +08:00
gaotingquan
f469dfe8d2
decrease batch size
2023-05-25 16:57:16 +08:00
gaotingquan
53ac4675ad
warmup 5 epochs
2023-05-25 16:57:16 +08:00
gaotingquan
c2802b90aa
increase batch size and num_workers to speed up
2023-05-25 16:57:16 +08:00
gaotingquan
b2ba6994a0
add ultra configs
2023-05-25 16:57:16 +08:00
gaotingquan
14d06fb6bd
support AMP.use_amp arg
2023-05-25 16:16:02 +08:00
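The AMP.use_amp switch above decides whether the forward pass runs under an auto-cast context at all. A minimal sketch, assuming a dict-like AMP config section (key names are illustrative):

    import contextlib
    import paddle

    def build_autocast(amp_cfg):
        """Return an auto_cast context when AMP is enabled, else a no-op context."""
        if not amp_cfg.get("use_amp", False):
            return contextlib.nullcontext()
        return paddle.amp.auto_cast(level=amp_cfg.get("level", "O1"))

    # usage inside the train loop (model, images, loss_fn defined elsewhere):
    # with build_autocast({"use_amp": True, "level": "O2"}):
    #     loss = loss_fn(model(images))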
gaotingquan
b0877289f4
disable promote kernel for amp training
...
compatible with Paddle 2.5 and older versions.
ref: https://github.com/PaddlePaddle/PaddleClas/pull/2798
2023-05-25 11:58:05 +08:00
gaotingquan
162f013ebe
fix: minimize() doesn't support parameter_list of type dict
...
there are diffs between step()+update() and minimize().
this will be fixed in https://github.com/PaddlePaddle/Paddle/pull/53773.
2023-05-25 11:58:05 +08:00
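The note above refers to the two equivalent ways of driving paddle.amp.GradScaler; only minimize() had the dict parameter_list limitation. A minimal sketch of both paths (variable names are illustrative):

    import paddle

    scaler = paddle.amp.GradScaler(init_loss_scaling=2.0 ** 15)

    def amp_backward_and_step(optimizer, loss, use_minimize=False):
        scaled = scaler.scale(loss)      # scale the loss before backward
        scaled.backward()
        if use_minimize:
            # minimize() unscales, skips the step on inf/nan and updates the
            # loss scaling in one call, but (per the commit above) it did not
            # accept a dict-style parameter_list at the time
            scaler.minimize(optimizer, scaled)
        else:
            scaler.step(optimizer)       # unscale grads and run the optimizer step
            scaler.update()              # adjust the loss scaling for the next iter
        optimizer.clear_grad()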
gaotingquan
8b218b01ac
refactor amp auto_cast context manager & loss scaler
2023-05-25 11:58:05 +08:00
gaotingquan
f884f28853
refactor amp
2023-05-25 11:58:05 +08:00
gaotingquan
b3678234fe
fix bug when update_freq > iter_per_epoch
2023-05-17 15:19:13 +08:00
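update_freq here is the gradient-accumulation factor; if it exceeds the number of iterations in an epoch, the optimizer would never step. A hedged sketch of the kind of guard the fix implies (function and argument names are illustrative):

    def should_step(iter_id, iter_per_epoch, update_freq):
        """Step every update_freq iters, and always at the end of the epoch so
        accumulated gradients are not silently dropped when
        update_freq > iter_per_epoch."""
        return (iter_id + 1) % update_freq == 0 or (iter_id + 1) == iter_per_epoch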
gaotingquan
377950865c
getargspec -> getfullargspec
...
getargspec doesn't support param annotations
2023-05-17 15:19:13 +08:00
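The switch above matters because inspect.getargspec raises ValueError for functions with parameter annotations (and was removed in Python 3.11), while getfullargspec handles them. A small illustration:

    import inspect

    def forward(x: "paddle.Tensor", scale: float = 1.0):
        return x * scale

    # inspect.getargspec(forward) raises ValueError because the function is annotated
    spec = inspect.getfullargspec(forward)
    print(spec.args)         # ['x', 'scale']
    print(spec.annotations)  # {'x': 'paddle.Tensor', 'scale': <class 'float'>}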
gaotingquan
bb831c3baa
code style
2023-05-17 15:19:13 +08:00
gaotingquan
a3e9e99fa0
revert: fix bs
2023-05-17 15:19:13 +08:00
gaotingquan
f42d6b6204
fix name: w24 -> W24
2023-05-17 15:19:13 +08:00
gaotingquan
07b9162bc0
fix pretrained url
2023-05-17 15:19:13 +08:00
gaotingquan
a1fa19cd29
rename: v3 -> V3
2023-05-17 15:19:13 +08:00
gaotingquan
2091a59ff5
fix reference url
2023-05-17 15:19:13 +08:00
gaotingquan
890f77411a
fix batch size and unset update_freq to adapt to 8 GPUs
2023-05-17 15:19:13 +08:00
gaotingquan
fc9c59c4b1
update pretrained url
2023-05-17 15:19:13 +08:00
Yang Nie
b66ee6384b
fix RMSProp one_dim_param_no_weight_decay
2023-05-06 19:04:37 +08:00
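one_dim_param_no_weight_decay excludes 1-D parameters (biases, norm scales) from weight decay. A hedged sketch of the grouping it implies (the helper name is illustrative):

    import paddle

    def split_params_by_ndim(model):
        """Put 1-D parameters (biases, norm weights) into a no-weight-decay group."""
        decay, no_decay = [], []
        for p in model.parameters():
            if p.stop_gradient:
                continue
            (no_decay if p.ndim == 1 else decay).append(p)
        return [
            {"params": decay},
            {"params": no_decay, "weight_decay": 0.0},
        ]

    # e.g. paddle.optimizer.RMSProp(learning_rate=0.01, weight_decay=1e-5,
    #                               parameters=split_params_by_ndim(model))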
Yang Nie
c351dac67e
add tinynet
2023-05-06 19:04:37 +08:00
zhangting2020
e7bef51f9e
fix data dtype for amp training
2023-04-26 18:40:20 +08:00
kangguangli
731006f1fc
set seed by configs
2023-04-25 17:39:55 +08:00
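Setting the seed from the config usually means feeding one Global.seed value to every RNG involved. A minimal sketch, assuming the config is a nested dict with a Global.seed entry (key names are illustrative):

    import random
    import numpy as np
    import paddle

    def set_seed(config):
        seed = config.get("Global", {}).get("seed", None)
        if seed is not None:
            # seed every framework RNG so runs are reproducible
            random.seed(seed)
            np.random.seed(seed)
            paddle.seed(seed)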
kangguangli
293a216a0b
fix random seed
2023-04-25 17:39:55 +08:00
zh-hike
d7bd275379
update foundation_vit from EVA_vit_huge to EVA_vit_giant
2023-04-23 10:16:08 +08:00
Yang Nie
0af4680f86
set num_workers to 8
2023-04-19 21:21:06 +08:00
Yang Nie
cdd3c3a05c
clear type hint
2023-04-19 21:21:06 +08:00
Yang Nie
692204eee6
fix code style
2023-04-19 21:21:06 +08:00
Yang Nie
e7ad3909c8
update configs for 8 GPUs
2023-04-19 21:21:06 +08:00
Yang Nie
deb8e98779
rename v2 to V2
2023-04-19 21:21:06 +08:00
Yang Nie
be6a22be18
add MobileViTv2
2023-04-19 21:21:06 +08:00
gaotingquan
9f621279b8
fix infer output
2023-04-17 20:28:40 +08:00
gaotingquan
73f4d8e4ce
to avoid causing issues for models that don't set no_weight_decay.
...
there seems to be a diff in the optimizer between using [] and [{"params":}, {"params":}] params
2023-04-12 20:55:38 +08:00
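The diff mentioned above is between handing the optimizer a plain parameter list and handing it a list of param-group dicts, which is what models with no_weight_decay() end up using. A hedged sketch of the two call shapes (AdamW is used here only as an example):

    import paddle

    model = paddle.nn.Linear(8, 2)

    # plain list of parameters
    opt_flat = paddle.optimizer.AdamW(learning_rate=1e-3,
                                      parameters=model.parameters())

    # list of param-group dicts, as used when some params should skip weight decay
    opt_grouped = paddle.optimizer.AdamW(
        learning_rate=1e-3,
        parameters=[{"params": model.parameters()}])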
gaotingquan
31ea33c884
revert the cutmix, mixup, fmix fixes
...
because this change (commit: df31d808fc) causes other issues, such as a change in the QA monitoring value, so revert temporarily.
2023-04-12 20:55:38 +08:00
parap1uie-s
52f16cc85d
Update engine.py
2023-04-11 19:23:57 +08:00
parap1uie-s
6e6586f59b
Fixed the incorrect infer outputs
2023-04-11 19:23:57 +08:00
Yang Nie
f36ffbc492
fix
2023-04-10 15:02:54 +08:00
Yang Nie
a69bc945bf
modified batch_size and update_freq & add more tipc_test configs
2023-04-06 15:33:30 +08:00
Yang Nie
e135e2cd37
modified batch_size and update_freq
...
modified per-GPU batch_size and update_freq in MobileViTv3_S.yaml for training with 4 GPUs
2023-04-06 15:33:30 +08:00
Yang Nie
b8a1589377
update data augmentation and init method for MobileViTv3-v2
2023-04-06 15:33:30 +08:00
Yang Nie
c32e2b098a
Revert "Speedup EMA"
...
This reverts commit 35fc732dadac4761852b18512b5c5df8785e36df.
2023-04-06 15:33:30 +08:00
Yang Nie
001cdb0955
update MobileViTv3-v2 configs
2023-04-06 15:33:30 +08:00