Cheng-Ling Lai
db06b56d34
Save computation in get_intermediate_layers() by skipping unused blocks
2024-03-17 21:34:06 +08:00
Cheng-Ling Lai
4731e4efc4
Modified ViT get_intermediate_layers() to support dynamic image size
2024-03-16 23:07:21 +08:00
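The two commits above touch the feature-extraction path. A minimal sketch of how it is typically called (model name and arguments are illustrative; the exact signature may vary across timm versions):

```python
import torch
import timm

# dynamic_img_size lets pos embeds be resized per input (second commit above)
model = timm.create_model('vit_small_patch16_224', dynamic_img_size=True)

# Requesting specific block indices; with the first commit above, blocks past
# the deepest requested index are no longer run at all.
x = torch.randn(1, 3, 256, 256)
feats = model.get_intermediate_layers(x, n=(2, 5, 8), reshape=True, norm=True)
print([f.shape for f in feats])
```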
Ross Wightman
ac1b08deb6
fix_init on vit & relpos vit
2024-02-10 20:15:37 -08:00
Ross Wightman
87fec3dc14
Update experimental vit model configs
2024-02-10 16:05:58 -08:00
Ross Wightman
ada145b016
Literal use w/ python < 3.8 requires typing_extensions, catch instead of check sys ver
2023-11-21 09:48:03 -08:00
Laureηt
21647c0a0c
Add types to vision_transformers.py
2023-11-17 16:06:06 -08:00
Ross Wightman
7c685a4ef3
Fix openai quickgelu loading, add missing orig_in21k vit weights, and remove zero'd classifier w/ matching hub update
2023-11-16 19:16:28 -08:00
Ross Wightman
dcfdba1f5f
Make quickgelu models appear in listing
2023-11-03 11:01:41 -07:00
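A quick illustration of what the listing change above means in practice (output names are whatever the installed timm version registers):

```python
import timm

# After this change, quickgelu variants are returned by the model registry.
print(timm.list_models('*quickgelu*')[:5])
```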
Ross Wightman
96bd162ddb
Add cc-by-nc-4.0 license for metaclip, make note in quickgelu model def about pretrained_cfg mapping
2023-11-03 11:01:41 -07:00
Ross Wightman
6894ec7edc
Forgot about datcomp b32 models
2023-11-03 11:01:41 -07:00
Ross Wightman
a2e4a4c148
Add quickgelu vit clip variants, simplify get_norm_layer and allow string args in vit norm/act. Add metaclip CLIP weights
2023-11-03 11:01:41 -07:00
Ross Wightman
c55bc41a42
DFN CLIP ViT support
2023-10-31 12:16:21 -07:00
Ross Wightman
68a121402f
Added hub weights for dinov2 register models
2023-10-29 23:03:48 -07:00
Ross Wightman
3f02392488
Add DINOv2 models with register tokens. Convert pos embed to non-overlapping for consistency.
2023-10-29 23:03:48 -07:00
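A hedged sketch of loading one of the register-token DINOv2 backbones added here; the model name is an assumption, so check the registry for the exact registered names and pretrained tags:

```python
import timm

# Name per the hub weights referenced above; verify with
# timm.list_models('*reg*dinov2*') on your timm version.
model = timm.create_model('vit_base_patch14_reg4_dinov2.lvd142m', pretrained=True)
```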
Patrick Labatut
97450d618a
Update DINOv2 license to Apache 2.0
2023-10-27 09:12:51 -07:00
Ross Wightman
d3ebdcfd93
Disable strict load when siglip vit pooling removed
2023-10-19 12:03:40 -07:00
Ross Wightman
e728f3efdb
Clean up ijepa models; they're just gap (global-avg-pool) models w/o heads. The fc-norm conversion was wrong, and gigantic should have been giant
2023-10-17 15:44:46 -07:00
Ross Wightman
49a459e8f1
Merge remote-tracking branch 'upstream/main' into vit_siglip_and_reg
2023-10-17 09:36:48 -07:00
Ross Wightman
59b622233b
Change ijepa names, add pretrain cfg for reg experiments
2023-10-17 07:16:17 -07:00
Ross Wightman
71365165a2
Add SigLIP weights
2023-10-16 23:26:08 -07:00
Ross Wightman
42daa3b497
Add full set of SigLIP models
2023-10-10 22:15:45 -07:00
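A minimal sketch of using one of the SigLIP image towers added above (model name is an assumption; check the registry for the full set):

```python
import timm

# num_classes=0 keeps the image tower headless, as is typical for CLIP/SigLIP use.
model = timm.create_model('vit_base_patch16_siglip_224', pretrained=True, num_classes=0)
```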
Yassine
884ef88818
fix all SDPA dropouts
2023-10-05 08:58:41 -07:00
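The SDPA dropout fix addresses a subtle train/eval bug. A sketch of the corrected pattern, with the class and names simplified for illustration:

```python
import torch.nn as nn
import torch.nn.functional as F

class Attn(nn.Module):
    def __init__(self, attn_drop: float = 0.1):
        super().__init__()
        self.attn_drop = nn.Dropout(attn_drop)

    def forward(self, q, k, v):
        # F.scaled_dot_product_attention has no notion of train/eval mode, so
        # the dropout rate must be explicitly zeroed outside of training.
        return F.scaled_dot_product_attention(
            q, k, v, dropout_p=self.attn_drop.p if self.training else 0.)
```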
Ross Wightman
b9dde58076
Fixup attention pooling in siglip vit support
2023-10-02 11:44:12 -07:00
Ross Wightman
99cfd6702f
Use global pool arg to select attention pooling in head
2023-09-30 16:16:21 -07:00
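A hedged example of the global_pool selection described above; 'map' is the value recent timm versions use for the latent attention-pool head, alongside '', 'avg', and 'token':

```python
import timm

# Selects the attention-pooling head via the global_pool arg (value names
# per recent timm; may differ by version).
model = timm.create_model('vit_base_patch16_siglip_224', global_pool='map')
```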
Ross Wightman
82cc53237e
Working on support for siglip (w/ attn pool) vit backbone, and adding registers (reg tokens)
2023-09-30 16:03:01 -07:00
Ross Wightman
fc5d705b83
dynamic_size -> dynamic_img_size, add dynamic_img_pad for padding option
2023-08-27 15:58:35 -07:00
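A short sketch of the two flags introduced by the rename above:

```python
import torch
import timm

model = timm.create_model(
    'vit_base_patch16_224',
    dynamic_img_size=True,   # interpolate pos embeds to match each input
    dynamic_img_pad=True,    # pad inputs whose sides aren't a patch multiple
)
out = model(torch.randn(1, 3, 300, 260))  # neither 224 nor a multiple of 16
```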
Ross Wightman
ea3519a5f0
Fix dynamic_resize for deit models (distilled or no_embed_cls) and vit w/o class tokens
2023-08-27 15:58:35 -07:00
Ross Wightman
4d8ecde6cc
Fix torchscript for vit-hybrid dynamic_resize
2023-08-27 15:58:35 -07:00
Ross Wightman
fdd8c7c2da
Initial impl of dynamic resize for existing vit models (incl vit-resnet hybrids)
2023-08-27 15:58:35 -07:00
Ross Wightman
a9d0615f42
Fix ijepa vit issue with 448 model, minor formatting fixes
2023-07-26 20:46:27 -07:00
SeeFun
c3f24a5ae5
Add ViT weights from I-JEPA pretraining
2023-06-14 22:30:31 +08:00
Lengyue
c308dbc6f2
update dinov2 layerscale init values
2023-05-24 12:20:17 -04:00
Ross Wightman
c5d3ee47f3
Add B/16 datacompxl CLIP weights
2023-05-16 11:27:20 -07:00
Ross Wightman
627b6315ba
Add typing to dinov2 entrypt fns, use hf hub for mae & dinov2 weights
2023-05-09 20:42:11 -07:00
Ross Wightman
a01d8f86f4
Tweak DinoV2 add, add MAE ViT weights, add initial intermediate layer getter experiment
2023-05-09 17:59:22 -07:00
Ross Wightman
59bea4c306
Merge branch 'main' into dot_nine_cleanup
2023-05-09 12:27:32 -07:00
Leng Yue
5cc87e6485
Add dinov2 pretrained models (#1797)
...
* add dinov2 small, base, and large
* fix input size
* fix swiglu & dinov2 vit giant
* use SwiGLUPacked to replace GluMlp
* clean up & add ffn_layer placeholder for ParallelScalingBlock
2023-05-09 12:24:47 -07:00
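A minimal sketch of the SwiGLUPacked layer the PR above swapped in (dimensions are illustrative):

```python
import torch
from timm.layers import SwiGLUPacked

# GluMlp-style block with both SwiGLU branches packed into one fc1 projection
# (hidden_features is the packed width, so each branch sees half of it).
mlp = SwiGLUPacked(in_features=384, hidden_features=1024)
y = mlp(torch.randn(2, 16, 384))  # -> (2, 16, 384)
```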
Ross Wightman
e4e43190ce
Add typing to all model entrypoint fns, add old cache check env var to builder
2023-05-08 08:52:38 -07:00
Ross Wightman
8fa86a28a8
Add datacomp L/14 (79.2 zs) image tower weights
2023-05-01 10:24:08 -07:00
Ross Wightman
5e64777804
0.8.21dev0
2023-04-28 13:46:59 -07:00
Ross Wightman
965d0a2d36
fast_attn -> fused_attn, implement global config to enable/disable fused_attn, add to more models. Add vit clip openai 336 weights.
2023-04-10 12:04:33 -07:00
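A sketch of the global toggle added here, assuming the helpers exported from timm.layers:

```python
from timm.layers import set_fused_attn, use_fused_attn

set_fused_attn(False)     # force the manual attention path globally
print(use_fused_attn())   # False
```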
Ross Wightman
4d135421a3
Implement patch dropout for eva / vision_transformer, refactor / improve consistency of dropout args across all vit based models
2023-04-07 20:27:23 -07:00
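A hedged example of the patch dropout arg introduced above:

```python
import timm

# patch_drop_rate randomly drops a fraction of patch tokens during training
# (no-op at eval); drop_rate / attn_drop_rate / proj_drop_rate follow the
# same consolidated naming across the vit-based models.
model = timm.create_model('vit_base_patch16_224', patch_drop_rate=0.5)
```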
Ross Wightman
1bb3989b61
Improve kwarg passthrough for swin, vit, deit, beit, eva
2023-04-05 21:37:16 -07:00
Ross Wightman
9eaab795c2
Add some vit model deprecations
2023-04-05 17:21:03 -07:00
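A sketch of what the deprecations mean for callers; the specific mapping shown is an assumption:

```python
import timm

# A deprecated name resolves to its replacement (with a deprecation warning)
# rather than raising; the target name below is illustrative.
model = timm.create_model('vit_base_patch16_224_in21k')  # -> vit_base_patch16_224.augreg_in21k
```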
Ross Wightman
9aa1133bd2
Fix #1750, uncomment weight that exists on HF hub, add FIXME to 3 others that are still on local storage
2023-03-31 14:49:30 -07:00
Ross Wightman
0737bd3ec8
eva02 non-CLIP weights on HF hub, add initial eva02 clip model configs w/ postnorm variant & attn LN
2023-03-30 23:43:59 -07:00
Ross Wightman
572f05096a
Swin and FocalNet weights on HF hub. Add model deprecation functionality w/ some registry tweaks.
2023-03-18 14:55:09 -07:00
Ross Wightman
2e38d53dca
Remove dead line
2023-02-16 16:57:42 -08:00
Ross Wightman
f77c04ff36
Torchscript fixes/hacks for rms_norm, refactor ParallelScalingBlock with manual combination of input projections, closer paper match
2023-02-16 16:57:42 -08:00
Ross Wightman
122621daef
Add Final annotation to fast_attn to avoid symbol lookup of new scaled_dot_product_attn fn on old PyTorch in jit
2023-02-16 16:57:42 -08:00
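A simplified sketch of the annotation pattern this commit describes:

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    # Final tells torchscript the flag is a constant, so the branch that would
    # reference F.scaled_dot_product_attention can be pruned on PyTorch
    # versions lacking the symbol. (fast_attn was later renamed fused_attn,
    # see the 2023-04-10 commit above.)
    fast_attn: torch.jit.Final[bool]

    def __init__(self):
        super().__init__()
        self.fast_attn = False
```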