56 Commits

Author SHA1 Message Date
gaotingquan
8b218b01ac refactor amp auto_cast context manager & loss scaler 2023-05-25 11:58:05 +08:00
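A minimal Paddle sketch (not code from this repository) of the two pieces this commit refactors, the paddle.amp.auto_cast context manager and a GradScaler loss scaler; the model, optimizer, and data below are placeholders.

```python
import paddle

# Placeholder model/optimizer, only to make the AMP step runnable.
model = paddle.nn.Linear(10, 10)
optimizer = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)  # dynamic loss scaling

x = paddle.randn([4, 10])
with paddle.amp.auto_cast(level='O1'):  # run eligible ops in fp16
    loss = model(x).mean()

scaled = scaler.scale(loss)         # scale the loss to avoid fp16 gradient underflow
scaled.backward()
scaler.minimize(optimizer, scaled)  # unscale gradients, apply them, update the scale
optimizer.clear_grad()
```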
gaotingquan
f37cb543b1 rm op black list in amp
the ops flatten_contiguous_range and greater_than have supported amp mode since Paddle 2.4
2023-03-29 14:57:02 +08:00
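For context, a hedged sketch of what the removed op black list would have looked like with paddle.amp.auto_cast; per the commit message, these entries are unnecessary on Paddle 2.4+.

```python
import paddle

# Ops named in custom_black_list are kept in fp32 inside the auto_cast context.
# Since Paddle 2.4 both ops support AMP, so the list could be dropped.
with paddle.amp.auto_cast(custom_black_list={'flatten_contiguous_range', 'greater_than'}):
    pass
```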
Tingquan Gao
f2fc43baeb Revert "refactor: mv all dataloaders to engine.dataloader_dict"
This reverts commit 284e2a67564d4b1f5f4a9c04c7c1ab0e8d3ada75.
2023-03-14 16:47:13 +08:00
Tingquan Gao
a1e840e0da Revert "refactor: iter_per_epoch -> max_iter"
This reverts commit a38e42f644fcba8b60a9672762211b6f7054b290.
2023-03-14 16:47:13 +08:00
Tingquan Gao
915dde176a Revert "refactor: rm train and eval from engine"
This reverts commit 5a6fe171a7cfab842adc6a744f11a2e24deb5384.
2023-03-14 16:47:13 +08:00
Tingquan Gao
aa52682c55 Revert "rm amp code from train and eval & use decorator for amp training"
This reverts commit d3941dc1e9628fa7cc83de7c3a6da3dfcd03b5de.
2023-03-14 16:47:13 +08:00
Tingquan Gao
85e200edb6 Revert "refactor"
This reverts commit 32593b63751b922e17b59384ed64654e6fcef42d.
2023-03-14 16:47:13 +08:00
Tingquan Gao
03795249c1 Revert "revert for running"
This reverts commit d3374e897e162053d93a20c21142135c3e7ee11c.
2023-03-14 16:47:13 +08:00
Tingquan Gao
7865207096 Revert "revert for running"
This reverts commit 392b75b1acac742b74e808353059d0281df26dcc.
2023-03-14 16:47:13 +08:00
gaotingquan
392b75b1ac revert for running 2023-03-10 16:56:55 +08:00
gaotingquan
d3374e897e revert for running 2023-03-10 16:56:55 +08:00
gaotingquan
32593b6375 refactor 2023-03-10 16:56:55 +08:00
gaotingquan
d3941dc1e9 rm amp code from train and eval & use decorator for amp training 2023-03-10 16:56:55 +08:00
gaotingquan
5a6fe171a7 refactor: rm train and eval from engine 2023-03-10 16:56:55 +08:00
gaotingquan
a38e42f644 refactor: iter_per_epoch -> max_iter 2023-03-10 16:56:55 +08:00
gaotingquan
284e2a6756 refactor: mv all dataloaders to engine.dataloader_dict 2023-03-10 16:56:55 +08:00
HydrogenSulfate
7d9f4dcb59 change Tensor.numpy()[0] to float(Tensor) for 0-D tensor case 2022-11-01 14:37:11 +08:00
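A minimal sketch of the change described in the commit above (illustrative only): recent Paddle versions return 0-D tensors for scalar results, where indexing the NumPy array with [0] fails but float(tensor) works for both 0-D and shape-[1] tensors.

```python
import paddle

loss = paddle.randn([8]).mean()  # a 0-D tensor on recent Paddle versions
# old: value = loss.numpy()[0]   # IndexError when loss is 0-D
value = float(loss)              # works for both 0-D and shape-[1] tensors
```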
HydrogenSulfate
61b4153907 add batch Tensor collate to simplify dali code in train/eval/retrieval code 2022-10-13 12:05:50 +08:00
cuicheng01
50fc7d0eae fix bugs to adapt to the new framework 2022-09-20 10:06:20 +00:00
cuicheng01
cc46db1586 fix bugs to adapt to the new framework 2022-09-19 02:01:31 +00:00
HydrogenSulfate
266db4d89c fix classification bug 2022-07-06 20:38:40 +08:00
gaotingquan
df3e75dde4 fix: warn when topk parameter setting is wrong 2022-06-06 16:47:57 +08:00
zhiboniu
edf1129e5d match new eval function 2022-05-23 10:27:55 +00:00
zhiboniu
699c10aaeb Merge remote-tracking branch 'ppcls/develop' into develop 2022-05-23 08:03:46 +00:00
zhiboniu
05ecf1d045 multi-card eval support 2022-05-18 04:54:44 +00:00
zhiboniu
50900443f3 remove strongbaseline_attr, etc... 2022-05-18 04:54:43 +00:00
zhiboniu
26d5b7d1cc adapted dataset and loss 2022-05-18 04:54:43 +00:00
zhiboniu
0a3ecf60b4 add attribute strongbaseline 2022-05-18 04:54:43 +00:00
cuicheng01
45b1296c25 Add cls_demo_person code 2022-05-14 09:31:52 +00:00
Wei Shengyu
4bc4b7e0e2
Merge pull request #1876 from TingquanGao/dev/fix_amp_eval
fix: amp eval
2022-05-05 20:39:02 +08:00
littletomatodonkey
bb13f3c4f5
fix single card dist (#1889)
* fix single card logit

* fix distillation yaml files
2022-05-05 09:48:56 +08:00
gaotingquan
275945dff9
fix: compatible with Paddle 2.2, 2.3, and develop. 2022-04-29 10:21:09 +00:00
gaotingquan
59a3dcfc1c
fix: amp eval 2022-04-26 12:26:17 +00:00
cuicheng01
4e6c36e269
Merge pull request #1833 from TingquanGao/dev/fix_dist_loss
fix metric calculation and loss calculation errors in distributed training.
2022-04-22 22:41:57 +08:00
gaotingquan
83ed5195c3
fix: set use_fp16_test to True when AMP O2 is enabled 2022-04-18 06:14:43 +00:00
gaotingquan
efde56ffc6
fix: only fp16 evaluation is supported when AMP O2 is enabled 2022-04-13 12:14:14 +00:00
gaotingquan
474c918b27
fix: batch_size statistics error 2022-04-13 09:19:30 +00:00
gaotingquan
c46189bad0
fix: loss calculation bug in distributed training 2022-04-12 06:56:44 +00:00
gaotingquan
b761325faa fix: fp32 eval by default when amp is enabled
To eval with fp16 when amp is enabled, set Amp.use_fp16_test=True (False by default).
2022-04-02 19:22:10 +08:00
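A hedged sketch of the behavior this commit describes (function and argument names are illustrative, not this repository's actual code): evaluation runs in fp32 by default and only uses auto_cast when the fp16-test flag is enabled.

```python
import paddle

def evaluate(model, loader, use_fp16_test=False):
    # fp32 eval by default; fp16 only when use_fp16_test (cf. Amp.use_fp16_test) is True
    model.eval()
    with paddle.no_grad():
        for batch in loader:
            with paddle.amp.auto_cast(enable=use_fp16_test):
                out = model(batch)
            # ... compute metrics from `out` in fp32 ...
```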
WangChen0902
7595ba6d70
add AFD (#1683)
* add AFD
2022-02-28 19:11:50 +08:00
gaotingquan
7040ce8314 refactor: change params to be consistent with amp 2022-01-25 11:58:07 +08:00
zhangbo9674
cd039a7b37 add save_dtype 2022-01-10 18:19:03 +08:00
zhangbo9674
d437bb0a7e use fp32 to eval 2022-01-10 18:19:03 +08:00
zhangbo9674
bb19c1f7a6 fix eval bug 2022-01-10 18:19:03 +08:00
littletomatodonkey
aea712cc87
add dist of rec model (#1574)
* add distillation loss func and rec distillation
2022-01-05 19:25:36 +08:00
gaotingquan
7732a69f1b fix: fix key error in distillation 2021-12-16 18:21:08 +08:00
dongshuilong
f7ccc874e2 fix dali distributed eval bug 2021-11-16 11:09:21 +08:00
dongshuilong
278f6d8050 fix googlenet distributed eval bug 2021-10-26 11:56:30 +00:00
dongshuilong
fd6f1ad2ca fix clas distributed eval bug 2021-10-21 03:47:03 +00:00
dongshuilong
c93d638f4c fix clas distributed eval bug 2021-10-20 11:22:37 +00:00