Commit Graph

22 Commits (542b3200086c7163805ebee9f8f85b6c85ce8d8c)

Author SHA1 Message Date
cuicheng01 de0f57521d
update CLIP configs for PP-ShiTuV2-rec (#3239) 2024-09-05 10:49:04 +08:00
Tingquan Gao 91e8eb3632
update to be compatible with V100 (#3178) 2024-07-05 14:52:32 +08:00
Tingquan Gao b1ee8f911b
update to be compatible with V100 (#3177) 2024-07-04 16:56:04 +08:00
gaotingquan 0bfed92cb2 perf:
1. use nn.GELU instead of QuickGELU
2. support FusedLinear
2024-05-29 11:24:48 +08:00
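The perf commit above replaces QuickGELU with the exact nn.GELU. As a hedged aside (not part of the repository, and assuming the standard definitions): QuickGELU is the sigmoid-based approximation used by the original CLIP code, and it tracks the exact GELU closely. A minimal pure-Python sketch of both curves, with no Paddle dependency:

```python
import math

def gelu(x):
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2))), as computed by nn.GELU.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def quick_gelu(x):
    # QuickGELU approximation from the original CLIP code: x * sigmoid(1.702 * x).
    return x * (1.0 / (1.0 + math.exp(-1.702 * x)))

# The two activations differ by well under 0.02 over typical input ranges,
# which is why swapping them is treated as a perf change, not an accuracy one.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  gelu={gelu(x):+.6f}  quick_gelu={quick_gelu(x):+.6f}")
```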
gaotingquan 40042f89fa dbg: support fused attn 2024-05-24 14:27:29 +08:00
Tingquan Gao e3aaa3cefb
support fused attn (#3131) 2024-05-16 13:33:46 +08:00
wanghuancoder 80abf9f789
use tensor.shape but not paddle.shape(tensor) (#3120) 2024-04-12 15:34:44 +08:00
sky 276e90d9a1
Bigmodel (#3032)
* fix the resolution problem for the clip-vision transformer part and swin transformer

* Revert "Revert "fix resolution problem for swin transformer and clip vit  (#3021)""

This reverts commit 174db431a82fb168c01b0be03fbb1d822314bbb1.

Update foundation_vit.py

Update foundation_vit.py

Revert "fix resolution problem for swin transformer and clip vit  (#3021)"

This reverts commit 61f748de67.

* add backbone function

* fix static graph problem

* remove text encoder framework and add classifier head directly

* fix bug in clip when using classifier head

* updated

* support embedding

Note: embedding support is only for checking the model, since there is no related text encoder

* compatible with param transfer

* update setting
2024-02-06 21:46:25 +08:00
sky 61f748de67
fix resolution problem for swin transformer and clip vit (#3021)
* Update foundation_vit.py

Update .gitignore

fix time cost problem

Update swin_transformer.py

fix the speed and memory problem

reduce the unnecessary calculation when patch matches resolution

fix conflict

remove check resolution function

Revert "fix conflict"

This reverts commit d7a7dade71.

fix conflict

remove the conflict checkpoint function

[Hackathon 5th No.69] Classification large model -- human vision task SOLIDER (#2995)

* add_solider

update doc about PPHGNetV2 (#3002)

fix clip patch embedding resolution problem

support non-224 resolutions

integrate the padding functions into one

adjust function name

fix the resolution problem for the clip-vision transformer part and swin transformer

* fix cache problem

use the Hugging Face approach and drop the cache

* Revert "fix cache problem"

This reverts commit 8f7ab55c75.

* fix resolution problem

* update big model backbone

* Revert "update big model backbone"

This reverts commit 04a39f701b.
2023-10-31 10:11:46 +08:00
zhangyubo0722 aae1e9543f
del load pretrained from url for resnet (#2997)
* del load pretrained from url for resnet

* del load_dygraph_pretrain_from_url

* del load_dygraph_pretrain_from_url

* modify save_load
2023-10-30 13:44:16 +08:00
sky e1a7840816
[Feature] fix the resolution problem for clip-vision transformer part and swin … (#3001)
* fix the resolution problem for the clip-vision transformer part and swin transformer

* adjust function name

* integrate the padding functions into one

* support non-224 resolutions

* fix clip patch embedding resolution problem

* fix conflict

remove the conflict checkpoint function

* Revert "fix conflict"

This reverts commit d7a7dade71.

* fix conflict

remove check resolution function
2023-10-18 20:55:37 +08:00
zhangyubo0722 ed67436647
del head_init_scale (#2947) 2023-09-01 20:15:54 +08:00
gaotingquan 2823e48be5 fix head_init_scale 2023-05-26 15:40:48 +08:00
gaotingquan 6d924f85ee fix for clip
1. fix bias_attr to False for conv of PatchEmbed;
2. support return_tokens_mean for Head of CLIP;
3. support remove_cls_token_in_forward for CLIP;
4. support head_init_scale argument for ViT backbone;
5. support get_num_layers() and no_weight_decay() for ViT backbone.
2023-05-26 15:40:48 +08:00
zh-hike d7bd275379 update foundation_vit from EVA_vit_huge to EVA_vit_giant 2023-04-23 10:16:08 +08:00
zengshao0622 adb9930317 fix name 2023-02-28 14:28:23 +08:00
zengshao0622 5604a13fac fix name 2023-02-28 14:28:23 +08:00
gaotingquan 811b483e30 fix: set dtype in paddle.to_tensor() 2023-02-10 15:51:53 +08:00
zh-hike 23f5af9f2a add field vit to foundationvit's name 2023-02-07 17:38:50 +08:00
zh-hike e8bf35d1b4 fix foundationvit load fail 2023-02-07 10:46:25 +08:00
zhangyubo0722 79cbd7350a Aesthetic 2023-02-01 10:43:11 +08:00
zengshao0622 5544dbaf8a add ViT model for foundation models forward 2023-01-19 17:42:45 +08:00