Commit Graph

1783 Commits (0e0948d10b572c1df163bf6535b1eaf28a76043d)

Author SHA1 Message Date
夜雨飘零 5ca584ff19
Add a separator for MultiScaleDataset (#2853)
* Add a separator for MultiScaleDataset

* remove encoding
2023-10-10 20:07:44 +08:00
thsno02 6cd88dfea6
fix: support for cases without a weight function (#2973) 2023-10-10 20:02:11 +08:00
cuicheng01 e9a59678f0
update PP-HGNetV2 (#2993) 2023-10-07 16:45:16 +08:00
cuicheng01 203eba4d6d
support loading ssld state1 pretrain (#2988) 2023-09-26 23:48:27 +08:00
cuicheng01 5a28bab6d0
add hgnetv2 (#2987) 2023-09-26 23:22:48 +08:00
Tingquan Gao fe12d63181
debug (#2934)
* debug: fix model name

* fix

* fix Infer.transforms.ResizeImage
2023-09-19 16:08:09 +08:00
xiongkun 2200f30052
fix sot-slow problem (#2976)
* fix sot-slow problem

* fix format

* add comment
2023-09-18 17:27:01 +08:00
feifei-111 9bcb71ab81
Fix resnext model, split main logic with guard (#2956)
* update

* update

* update
2023-09-18 17:25:57 +08:00
zhangyubo0722 ef796960c0 support http pretrained 2023-09-12 17:45:39 +08:00
zhenming lin 98935e0bb7
Add ML-Decoder Support (#2957)
* Add ML-Decoder and make it controllable under the Arch scope

* Add MultiLabelAsymmetricLoss

* Add the MultiLabelMAP metric, computing it once per epoch instead of within the epoch to save time

* Add a COCO2017 dataset format conversion script

* Add the OneCycleLR learning rate schedule

* Add ResNet50_ml_decoder

* Add ResNet_ml_decoder

* Add ResNet_ml_decoder

* Add ResNet_ml_decoder preprocess

* Add ResNet_ml_decoder

* Add ResNet_ml_decoder

* Pull the class_num parameter directly from the model

* fix message error

* fix message error

* Add documentation for prediction based on the inference model

* Extend Cutout so it can randomly generate fill values

* Correct variable names

* Update configs

* Update README

* fix bugs

* fix bugs

* update

* update

* update
2023-09-08 16:17:35 +08:00
zhangyubo0722 cdecc0b437
fix hgnet kwargs (#2949) 2023-09-01 23:36:58 +08:00
zhangyubo0722 ed67436647
del head_init_scale (#2947) 2023-09-01 20:15:54 +08:00
zhangyubo0722 d7a7d3e5e1
fix topk bug (#2945) 2023-09-01 19:28:17 +08:00
zhangyubo0722 74a33b7f50
fix gbk (#2941) 2023-09-01 17:49:33 +08:00
gaotingquan b3fcc98610 to be compatible with training and evaluation 2023-08-29 16:11:16 +08:00
zhangyubo0722 f3b2b2f4ad
[uapi] Save predict result (#2926)
* save predict result
2023-08-29 14:32:07 +08:00
gaotingquan ae96c979eb always log 'topk=1' when k < output_dims to ensure consistent log formatting 2023-08-28 17:01:49 +08:00
Tingquan Gao eddba911b1
modify some default hyperparams to adapt to fine-tune downstream tasks (#2921)
1. unset EMA because most downstream datasets are relatively small;
2. use the mean and std of ImageNet (IMN).
2023-08-25 22:02:21 +08:00
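Purely as an illustration of the two changes above (not code from the commit), the corresponding preprocessing and EMA choice could look roughly like this in Paddle; the transform pipeline is an assumption:

```python
# Hypothetical illustration of the two changes above; not code from the commit.
import paddle.vision.transforms as T

# 2. ImageNet (IMN) channel statistics, the usual choice when fine-tuning on
#    downstream tasks.
IMN_MEAN = [0.485, 0.456, 0.406]
IMN_STD = [0.229, 0.224, 0.225]

transforms = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=IMN_MEAN, std=IMN_STD),
])

# 1. EMA left unset: with small downstream datasets the averaged weights rarely help.
use_ema = False
```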
Tingquan Gao 1f8a830b58
fix model name (#2908)
* fix model name

* fix: bug when distillation
2023-08-23 14:55:17 +08:00
Tingquan Gao 0675971d0b
rm fluid api (#2907) 2023-08-11 15:51:42 +08:00
Tingquan Gao 962c540ca4
debug: when using Piecewise.learning_rate while total epochs < 30 (#2901) 2023-08-07 15:26:49 +08:00
Tingquan Gao 907e5f7832
support Piecewise.learning_rate (#2898) 2023-08-04 12:14:19 +08:00
wuyefeilin 83adff2acc
add dataset alias for PaddleX (#2893) 2023-08-02 10:41:25 +08:00
Tingquan Gao 607a07cb28
compatible with the AMP.use_amp field in config (#2889) 2023-07-28 17:48:52 +08:00
Tingquan Gao bb1596b8c3
compatibility with python 3.11 (#2878) 2023-07-20 10:50:52 +08:00
cuicheng01 4da6f35bf2 support inference on images in nested directories 2023-06-29 19:43:40 +08:00
baocheny 75a5bb17ba add 2 more custom devices intel_gpu and apple mps 2023-06-29 19:42:38 +08:00
baocheny 3d0c0eb59d add 2 more custom devices intel_gpu and apple mps 2023-06-29 19:42:38 +08:00
Bobholamovic bda65e0c87 Remove ClasModels_general_quantization.yaml 2023-06-26 14:20:38 +08:00
Bobholamovic de5c4e1b1c Change vdl dir 2023-06-26 14:20:38 +08:00
Bobholamovic b4f10436cf Rename variable 2023-06-26 14:20:38 +08:00
Bobholamovic d6137854e2 Accommodate UAPI 2023-06-26 14:20:38 +08:00
gaotingquan 07b597f56e increase bs to avoid oom 2023-06-06 11:19:01 +08:00
gaotingquan caa6393cd4 set drop_last to False in train data 2023-06-06 11:19:01 +08:00
gaotingquan cf5d629a64 fix 2023-06-06 11:19:01 +08:00
gaotingquan 4643fdee09 update pretrained url 2023-06-06 11:19:01 +08:00
mmglove dd9b186e82 ppcls/utils/profiler.py 2023-06-05 21:23:35 +08:00
mmglove 54d27a1204 fix profiler 2023-06-05 21:23:35 +08:00
mmglove 259c0ca9de fix profiler 2023-06-05 21:23:35 +08:00
gaotingquan bdfa1feb2f update for amp config refactoring 2023-05-29 19:52:09 +08:00
gaotingquan 09817fe859 complete amp args 2023-05-29 19:52:09 +08:00
gaotingquan b3f7e3b974 unify comments 2023-05-29 19:52:09 +08:00
gaotingquan 8405882f11 debug 2023-05-29 19:52:09 +08:00
gaotingquan 0f86c55576 add amp args, use_amp=False 2023-05-29 19:52:09 +08:00
gaotingquan 2d8346cd3b fix _init_amp when export 2023-05-29 19:52:09 +08:00
gaotingquan f67cfe2c2a fix ema: set_value() -> paddle.assign() 2023-05-26 15:40:48 +08:00
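A rough sketch of what that swap looks like in an EMA update, with variable names that are illustrative rather than the repository's:

```python
# Rough sketch of an EMA update using paddle.assign() in place of Tensor.set_value();
# the function and argument names are assumptions.
import paddle

def ema_update(ema_params, model_params, decay=0.9999):
    for ema_p, p in zip(ema_params, model_params):
        # Copy the new moving average into ema_p in place.
        paddle.assign(decay * ema_p + (1.0 - decay) * p, ema_p)
```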
gaotingquan 2823e48be5 fix head_init_scale 2023-05-26 15:40:48 +08:00
gaotingquan 042d1e7ef8 fix layer key name for dynamic lr in adamwdl optimizer 2023-05-26 15:40:48 +08:00
gaotingquan 80ae9079cd add clip finetune config 2023-05-26 15:40:48 +08:00
gaotingquan 6d924f85ee fix for clip
1. fix bias_attr to False for conv of PatchEmbed;
2. support return_tokens_mean for Head of CLIP;
3. support remove_cls_token_in_forward for CLIP;
4. support head_init_scale argument for ViT backbone;
5. support get_num_layers() and no_weight_decay() for ViT backbone.
2023-05-26 15:40:48 +08:00
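Items 4 and 5 in the commit above are easiest to picture with a small sketch; the class below is a hypothetical stand-in for the ViT backbone, not the code added by the commit:

```python
# Hypothetical stand-in for items 4 and 5 above (head_init_scale, get_num_layers,
# no_weight_decay on a ViT backbone); not the code added by the commit.
import paddle
import paddle.nn as nn

class TinyViT(nn.Layer):
    def __init__(self, embed_dim=768, class_num=1000, depth=12, head_init_scale=0.001):
        super().__init__()
        self.depth = depth
        self.head = nn.Linear(embed_dim, class_num)
        # Shrink the classification head at init so fine-tuning starts from a near-zero head.
        self.head.weight.set_value(self.head.weight * head_init_scale)
        self.head.bias.set_value(self.head.bias * head_init_scale)

    def get_num_layers(self):
        # Lets a layer-wise lr decay optimizer (e.g. AdamWDL) map parameters to depths.
        return self.depth

    def no_weight_decay(self):
        # Parameter names that should be excluded from weight decay.
        return {"pos_embed", "cls_token"}
```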
gaotingquan 74e6c8aa33 add fp32 and ampo2 ultra configs 2023-05-25 16:57:16 +08:00
gaotingquan f469dfe8d2 decrease bs 2023-05-25 16:57:16 +08:00
gaotingquan 53ac4675ad warmup 5 epochs 2023-05-25 16:57:16 +08:00
gaotingquan c2802b90aa increase bs, num_workers to speed up 2023-05-25 16:57:16 +08:00
gaotingquan b2ba6994a0 add ultra configs 2023-05-25 16:57:16 +08:00
gaotingquan 14d06fb6bd support AMP.use_amp arg 2023-05-25 16:16:02 +08:00
gaotingquan b0877289f4 disable promote kernel for amp training
compatible with paddle 2.5 and older versions.
ref: https://github.com/PaddlePaddle/PaddleClas/pull/2798
2023-05-25 11:58:05 +08:00
gaotingquan 162f013ebe fix: minimize() doesn't support parameter_list of type dict
there are diffs between step()+update() and minimize().
this will be fixed in https://github.com/PaddlePaddle/Paddle/pull/53773.
2023-05-25 11:58:05 +08:00
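For context, a generic Paddle AMP step showing the two finishing patterns the note compares; this is a minimal sketch, not the repository's engine code:

```python
# Generic AMP step in Paddle (not the repository's engine code) showing the two ways
# to finish a scaled step.
import paddle

model = paddle.nn.Linear(8, 2)
optimizer = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())
scaler = paddle.amp.GradScaler()

with paddle.amp.auto_cast(level='O1'):
    loss = model(paddle.randn([4, 8])).mean()
scaled = scaler.scale(loss)
scaled.backward()

# One-call form: scaler.minimize(optimizer, scaled)
# Explicit form preferred after this fix:
scaler.step(optimizer)   # unscale gradients, then run the optimizer step
scaler.update()          # adjust the dynamic loss-scaling factor
optimizer.clear_grad()
```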
gaotingquan 8b218b01ac refactor amp auto_cast context manager & loss scaler 2023-05-25 11:58:05 +08:00
gaotingquan f884f28853 refactor amp 2023-05-25 11:58:05 +08:00
gaotingquan b3678234fe fix bug when update_freq > iter_per_epoch 2023-05-17 15:19:13 +08:00
gaotingquan 377950865c getargspec -> getfullargspec
getargspec doesn't support param annotations
2023-05-17 15:19:13 +08:00
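A tiny standalone reproduction of why the swap is needed; the annotated function here is illustrative only:

```python
# inspect.getargspec() raises ValueError for functions with annotations, and it was
# removed altogether in Python 3.11 (see the compatibility commit above), hence the
# move to inspect.getfullargspec().
import inspect

def resize(img, size: int = 224):
    return img

print(inspect.getfullargspec(resize).args)   # ['img', 'size'], works with annotations
# inspect.getargspec(resize) would raise ValueError on this function.
```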
gaotingquan bb831c3baa code style 2023-05-17 15:19:13 +08:00
gaotingquan a3e9e99fa0 revert: fix bs 2023-05-17 15:19:13 +08:00
gaotingquan f42d6b6204 fix name: w24 -> W24 2023-05-17 15:19:13 +08:00
gaotingquan 07b9162bc0 fix pretrained url 2023-05-17 15:19:13 +08:00
gaotingquan a1fa19cd29 rename: v3 -> V3 2023-05-17 15:19:13 +08:00
gaotingquan 2091a59ff5 fix reference url 2023-05-17 15:19:13 +08:00
gaotingquan 890f77411a fix bs and unset update_freq to adapt to 8 gpus 2023-05-17 15:19:13 +08:00
gaotingquan fc9c59c4b1 update pretrained url 2023-05-17 15:19:13 +08:00
Yang Nie b66ee6384b fix RMSProp one_dim_param_no_weight_decay 2023-05-06 19:04:37 +08:00
Yang Nie c351dac67e add tinynet 2023-05-06 19:04:37 +08:00
zhangting2020 e7bef51f9e fix data dtype for amp training 2023-04-26 18:40:20 +08:00
kangguangli 731006f1fc set seed by configs 2023-04-25 17:39:55 +08:00
kangguangli 293a216a0b fix random seed 2023-04-25 17:39:55 +08:00
zh-hike d7bd275379 update foundation_vit from EVA_vit_huge to EVA_vit_giant 2023-04-23 10:16:08 +08:00
Yang Nie 0af4680f86 set num_workers 8 2023-04-19 21:21:06 +08:00
Yang Nie cdd3c3a05c clear type hint 2023-04-19 21:21:06 +08:00
Yang Nie 692204eee6 fix code style 2023-04-19 21:21:06 +08:00
Yang Nie e7ad3909c8 update configs for 8gpus 2023-04-19 21:21:06 +08:00
Yang Nie deb8e98779 rename v2 to V2 2023-04-19 21:21:06 +08:00
Yang Nie be6a22be18 add MobileViTv2 2023-04-19 21:21:06 +08:00
gaotingquan 9f621279b8 fix infer output 2023-04-17 20:28:40 +08:00
gaotingquan 73f4d8e4ce to avoid causing issues for models without no_weight_decay set.
there seems to be a diff in the optimizer between passing [] and [{"params":}, {"params":}] as params
2023-04-12 20:55:38 +08:00
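A hedged sketch of the distinction being pointed at: a flat parameter list versus a list of group dicts; the grouping rule below is an assumption, not the repository's logic:

```python
# Illustrative only: a flat parameter list versus dict-style parameter groups.
import paddle

model = paddle.nn.Linear(8, 2)

# Flat list: every parameter uses the optimizer-level weight_decay.
opt_flat = paddle.optimizer.AdamW(
    learning_rate=1e-3, weight_decay=0.05, parameters=model.parameters())

# Group dicts: per-group overrides, e.g. no decay for 1-D (bias-like) parameters,
# the kind of split a model's no_weight_decay() hook would normally request.
decay, no_decay = [], []
for _, p in model.named_parameters():
    (no_decay if p.ndim == 1 else decay).append(p)
opt_groups = paddle.optimizer.AdamW(
    learning_rate=1e-3, weight_decay=0.05,
    parameters=[{"params": decay}, {"params": no_decay, "weight_decay": 0.0}])
```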
gaotingquan 31ea33c884 revert the cutmix, mixup, fmix fixes
because this change (commit: df31d808fc) would cause other issues, such as a change in the QA monitoring values, revert it temporarily.
2023-04-12 20:55:38 +08:00
parap1uie-s 52f16cc85d Update engine.py 2023-04-11 19:23:57 +08:00
parap1uie-s 6e6586f59b Fixed the incorrect infer outputs 2023-04-11 19:23:57 +08:00
Yang Nie f36ffbc492 fix 2023-04-10 15:02:54 +08:00
Yang Nie a69bc945bf modified batch_size and update_freq & add more tipc_test configs 2023-04-06 15:33:30 +08:00
Yang Nie e135e2cd37 modified batch_size and update_freq
modified batch_size per gpu and update_freq in MobileViTv3_S.yaml for training with 4 gpus
2023-04-06 15:33:30 +08:00
Yang Nie b8a1589377 update data augment and init method for MobileViTv3-v2 2023-04-06 15:33:30 +08:00
Yang Nie c32e2b098a Revert "Speedup EMA"
This reverts commit 35fc732dadac4761852b18512b5c5df8785e36df.
2023-04-06 15:33:30 +08:00
Yang Nie 001cdb0955 update MobileViTv3-v2 configs 2023-04-06 15:33:30 +08:00
Yang Nie 400de7844f update RandAugmentV3 2023-04-06 15:33:30 +08:00
Yang Nie 5f2eaa7cb1 bugfix: set_epoch after resume 2023-04-06 15:33:30 +08:00
Yang Nie ee40e1fc76 bugfix: make the `epoch` in MultiScaleSampler self-incrementing 2023-04-06 15:33:30 +08:00
Yang Nie de4129baa6 update 2023-04-06 15:33:30 +08:00
Yang Nie dc4fdba0ab add MobileViTv3 2023-04-06 15:33:30 +08:00
Yang Nie df31d808fc bugfix: MixupOperator, CutmixOperator, FmixOperator 2023-04-06 15:33:30 +08:00
Yang Nie cabdc251fe Speedup EMA 2023-04-06 15:33:30 +08:00