zhuyipin
3dac3be506
adapt AdaptiveAvgPool2D for NPU for PPHGNet ( #3163 ) ( #3223 )
2024-08-26 11:57:09 +08:00
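The commit above only names the change; as illustration, a common way to adapt AdaptiveAvgPool2D for devices that lack the adaptive-pooling kernel is to fall back to a plain spatial mean when the output size is 1x1. This is a minimal sketch, not the actual patch; the GlobalAvgPool2D class and the device check are assumptions.

```python
import paddle
import paddle.nn as nn

class GlobalAvgPool2D(nn.Layer):
    """Illustrative drop-in for nn.AdaptiveAvgPool2D(output_size=1) on
    devices (e.g. some NPUs) where the adaptive pooling kernel is missing."""
    def forward(self, x):
        # A global mean over H and W is equivalent to adaptive pooling to 1x1.
        return x.mean(axis=[2, 3], keepdim=True)

# Hypothetical selection: keep the native layer elsewhere, swap it on NPU.
device = paddle.device.get_device()
pool = GlobalAvgPool2D() if device.startswith("npu") else nn.AdaptiveAvgPool2D(1)
```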
Tingquan Gao
82034d1b3f
fix: weight ratio may be missing from multilabel dataset labels ( #3226 )
2024-08-23 14:40:09 +08:00
zhuyipin
28dc67e3e2
convert NPU roll op into paddle roll ( #3139 )
2024-05-15 17:11:28 +08:00
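For context on the roll conversion: paddle.roll is the framework-level equivalent of a device-specific roll kernel, shifting elements along chosen axes with wrap-around (the pattern used, for example, in shifted-window attention). A small sketch with made-up shapes:

```python
import paddle

x = paddle.arange(16, dtype="float32").reshape([1, 4, 4])

# Shift by -1 along both spatial axes; elements that fall off one edge
# re-enter from the opposite edge, matching the semantics of a roll op.
shifted = paddle.roll(x, shifts=[-1, -1], axis=[1, 2])
print(shifted)
```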
cuicheng01
168097fd61
remove PP-HGNetV2_B7 config ( #3024 )
2023-10-30 20:21:52 +08:00
zhangyubo0722
25b65bb796
del load pretrained from url for resnet ( #2998 )
...
* del load pretrained from url for resnet
* del load_dygraph_pretrain_from_url
* modify save_load
* modify save_load
2023-10-30 13:45:53 +08:00
cuicheng01
aa5d103139
[cherry-pick] update PP-HGNetV2 ( #2994 )
...
* add hgnetv2 (#2987 )
* support load ssld state1 pretrain (#2988 )
* update PP-HGNetV2
2023-10-07 16:44:38 +08:00
cuicheng01
bfea8e83a6
Release/2.5.1 ( #2990 )
...
* add hgnetv2 (#2987 )
* support load ssld state1 pretrain (#2988 )
2023-09-27 00:03:01 +08:00
cuicheng01
43e6382aa3
add hgnetv2 ( #2987 ) ( #2989 )
2023-09-26 23:59:38 +08:00
gaotingquan
10cf2775e4
fix Infer.transforms.ResizeImage
2023-09-19 16:02:03 +08:00
gaotingquan
b462847663
fix
2023-09-19 16:02:03 +08:00
zhangyubo0722
9e9d8c9d2b
support loading pretrained weights via HTTP
2023-09-12 18:00:49 +08:00
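This entry presumably means a pretrained path can now be a URL. A hedged sketch of how such loading is commonly wired up with Paddle's download utility (the load_pretrained helper and its signature are illustrative, not the repo's actual function):

```python
import paddle
from paddle.utils.download import get_weights_path_from_url

def load_pretrained(model, pretrained):
    # If the path is an HTTP(S) URL, download it to the local cache first.
    if pretrained.startswith(("http://", "https://")):
        pretrained = get_weights_path_from_url(pretrained)
    state_dict = paddle.load(pretrained)
    model.set_state_dict(state_dict)
```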
zhangyubo0722
a119eb4191
fix hgnet kwargs ( #2950 )
2023-09-01 23:45:04 +08:00
zhangyubo0722
82317a21e3
del head_init_scale ( #2948 )
2023-09-01 20:15:35 +08:00
zhangyubo0722
f4de50226a
fix GBK encoding issue ( #2943 )
...
* fix gbk
* fix_bug
2023-09-01 19:29:25 +08:00
gaotingquan
8312dc3677
debug: fix model name
2023-08-29 16:40:37 +08:00
Tingquan Gao
48fb2c86f2
to be compatible with training and evaluation ( #2932 )
2023-08-29 11:43:41 +08:00
zhangyubo0722
a60d18a3d8
[uapi] save_predict_result ( #2928 )
...
* save_predict_result
* save_predict_result
2023-08-28 19:40:51 +08:00
Tingquan Gao
b34ed83708
always log 'topk=1' when k < output_dims to ensure consistent log formatting ( #2925 )
2023-08-28 14:55:32 +08:00
Tingquan Gao
28ef1c00d8
[cherry-pick] some fix ( #2920 )
...
* fix model name
* fix: bug when distillation
* modify some default hyperparameters to better suit fine-tuning on downstream tasks
1. disable EMA because most downstream datasets are relatively small;
2. use the mean and std of ImageNet (IMN).
2023-08-25 22:02:57 +08:00
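Regarding point 2 of the commit body, "IMN" refers to ImageNet; its commonly used normalization statistics are shown below (the transform call is illustrative, not the project's config):

```python
import paddle.vision.transforms as T

# Standard ImageNet statistics (RGB, values in [0, 1]).
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

normalize = T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD)
```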
gaotingquan
353c3f512b
debug: fix Piecewise.learning_rate when total epochs < 30
2023-08-07 11:30:25 +08:00
Tingquan Gao
4247fac82e
support Piecewise.learning_rate ( #2899 )
2023-08-04 12:16:01 +08:00
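As background for the Piecewise.learning_rate support, Paddle exposes a piecewise (step) schedule as paddle.optimizer.lr.PiecewiseDecay; the boundaries and values below are assumed for illustration:

```python
import paddle

# The lr stays at values[i] until the step counter reaches boundaries[i],
# then drops to the next value; len(values) == len(boundaries) + 1.
scheduler = paddle.optimizer.lr.PiecewiseDecay(
    boundaries=[30, 60, 90],
    values=[0.1, 0.01, 0.001, 0.0001],
)

for epoch in range(100):
    # ... one epoch of training ...
    scheduler.step()  # stepping once per epoch here, by assumption
```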
wuyefeilin
657037c4e7
add dataset alias for PaddleX ( #2892 )
2023-08-02 10:41:40 +08:00
cuicheng01
4da6f35bf2
support inference on images in nested directories
2023-06-29 19:43:40 +08:00
baocheny
75a5bb17ba
add 2 more custom devices intel_gpu and apple mps
2023-06-29 19:42:38 +08:00
baocheny
3d0c0eb59d
add 2 more custom devices intel_gpu and apple mps
2023-06-29 19:42:38 +08:00
Bobholamovic
bda65e0c87
Remove ClasModels_general_quantization.yaml
2023-06-26 14:20:38 +08:00
Bobholamovic
de5c4e1b1c
Change vdl dir
2023-06-26 14:20:38 +08:00
Bobholamovic
b4f10436cf
Rename variable
2023-06-26 14:20:38 +08:00
Bobholamovic
d6137854e2
Accommodate UAPI
2023-06-26 14:20:38 +08:00
gaotingquan
07b597f56e
increase batch size to avoid OOM
2023-06-06 11:19:01 +08:00
gaotingquan
caa6393cd4
set drop_last to False in train data
2023-06-06 11:19:01 +08:00
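On the drop_last change: with drop_last=False the final, smaller batch is kept rather than discarded, so small training sets are not shortened. A toy sketch (dataset and shapes are made up):

```python
import paddle
from paddle.io import DataLoader, TensorDataset

# 1000 toy samples; with batch_size=256 the last batch holds 232 samples
# and is only yielded because drop_last=False.
features = paddle.randn([1000, 8])
labels = paddle.randint(0, 10, [1000])
loader = DataLoader(TensorDataset([features, labels]),
                    batch_size=256, shuffle=True, drop_last=False)
print(len(loader))  # 4 batches instead of 3
```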
gaotingquan
cf5d629a64
fix
2023-06-06 11:19:01 +08:00
gaotingquan
4643fdee09
update pretrained url
2023-06-06 11:19:01 +08:00
mmglove
dd9b186e82
ppcls/utils/profiler.py
2023-06-05 21:23:35 +08:00
mmglove
54d27a1204
fix profiler
2023-06-05 21:23:35 +08:00
mmglove
259c0ca9de
fix profiler
2023-06-05 21:23:35 +08:00
gaotingquan
bdfa1feb2f
update for amp config refactoring
2023-05-29 19:52:09 +08:00
gaotingquan
09817fe859
complete amp args
2023-05-29 19:52:09 +08:00
gaotingquan
b3f7e3b974
unify comments
2023-05-29 19:52:09 +08:00
gaotingquan
8405882f11
debug
2023-05-29 19:52:09 +08:00
gaotingquan
0f86c55576
add amp args, use_amp=False
2023-05-29 19:52:09 +08:00
gaotingquan
2d8346cd3b
fix _init_amp when export
2023-05-29 19:52:09 +08:00
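For the use_amp switch added in the commits above, a minimal sketch of how mixed-precision training is typically wired up in Paddle (model, optimizer, and hyperparameters here are placeholders):

```python
import paddle

use_amp = False  # mirrors the default use_amp=False added above
model = paddle.nn.Linear(16, 4)
opt = paddle.optimizer.Momentum(learning_rate=0.1, parameters=model.parameters())
scaler = paddle.amp.GradScaler(enable=use_amp, init_loss_scaling=1024)

x = paddle.randn([8, 16])
with paddle.amp.auto_cast(enable=use_amp, level="O1"):
    loss = model(x).mean()

scaler.scale(loss).backward()  # plain loss.backward() when AMP is disabled
scaler.step(opt)               # unscales gradients before the update
scaler.update()
opt.clear_grad()
```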
gaotingquan
f67cfe2c2a
fix ema: set_value() -> paddle.assign()
2023-05-26 15:40:48 +08:00
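For the EMA fix, the message indicates the shadow-weight update switched from Tensor.set_value() to paddle.assign(). A minimal sketch of an EMA holder using that pattern (the class name and decay value are illustrative):

```python
import paddle

class ModelEMA:
    """Keeps an exponential moving average of a model's parameters."""
    def __init__(self, model, decay=0.9999):
        self.decay = decay
        self.shadow = {k: v.detach().clone() for k, v in model.state_dict().items()}

    @paddle.no_grad()
    def update(self, model):
        for k, v in model.state_dict().items():
            ema_v = self.decay * self.shadow[k] + (1.0 - self.decay) * v
            # previously: self.shadow[k].set_value(ema_v)
            paddle.assign(ema_v, output=self.shadow[k])
```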
gaotingquan
2823e48be5
fix head_init_scale
2023-05-26 15:40:48 +08:00
gaotingquan
042d1e7ef8
fix layer key name for dynamic lr in adamwdl optimizer
2023-05-26 15:40:48 +08:00
gaotingquan
80ae9079cd
add clip finetune config
2023-05-26 15:40:48 +08:00
gaotingquan
6d924f85ee
fix for clip
...
1. fix bias_attr to False for conv of PatchEmbed;
2. support return_tokens_mean for Head of CLIP;
3. support remove_cls_token_in_forward for CLIP;
4. support head_init_scale argument for ViT backbone;
5. support get_num_layers() and no_weight_decay() for ViT backbone.
2023-05-26 15:40:48 +08:00
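To illustrate point 1 of the commit body above, disabling the bias on the PatchEmbed projection looks like this in Paddle (channel and patch sizes are assumed ViT-B/16 defaults, not taken from the repo):

```python
import paddle.nn as nn

# Patch embedding as a strided convolution with the bias disabled.
proj = nn.Conv2D(in_channels=3, out_channels=768,
                 kernel_size=16, stride=16, bias_attr=False)
```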
gaotingquan
74e6c8aa33
add fp32 and ampo2 ultra configs
2023-05-25 16:57:16 +08:00
gaotingquan
f469dfe8d2
decrease batch size
2023-05-25 16:57:16 +08:00
gaotingquan
53ac4675ad
warmup 5 epochs
2023-05-25 16:57:16 +08:00
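For the 5-epoch warmup mentioned above, Paddle's LinearWarmup wrapper is the usual mechanism; the base schedule and learning-rate values below are placeholders:

```python
import paddle

base = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.1, T_max=95)
# Ramp the lr linearly from 0 to 0.1 over the first 5 steps of the scheduler
# (5 epochs if it is stepped once per epoch), then hand over to `base`.
scheduler = paddle.optimizer.lr.LinearWarmup(
    learning_rate=base, warmup_steps=5, start_lr=0.0, end_lr=0.1)
```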