Commit Graph

119 Commits (f9a24fa19f3dd10722d61f7b0bb149e2d5a8bdf2)

Author SHA1 Message Date
SeeFun c3f24a5ae5 add ViT weight from I-JEPA pretrain 2023-06-14 22:30:31 +08:00
Lengyue c308dbc6f2 update dinov2 layerscale init values 2023-05-24 12:20:17 -04:00
Ross Wightman c5d3ee47f3 Add B/16 datacompxl CLIP weights 2023-05-16 11:27:20 -07:00
Ross Wightman 627b6315ba Add typing to dinov2 entrypoint fns, use hf hub for mae & dinov2 weights 2023-05-09 20:42:11 -07:00
Ross Wightman a01d8f86f4 Tweak DinoV2 addition, add MAE ViT weights, add initial intermediate layer getter experiment 2023-05-09 17:59:22 -07:00
Ross Wightman 59bea4c306 Merge branch 'main' into dot_nine_cleanup 2023-05-09 12:27:32 -07:00
Leng Yue 5cc87e6485 Add dinov2 pretrained models (#1797)
* add dinov2 small, base, and large
* fix input size
* fix swiglu & dinov2 vit giant
* use SwiGLUPacked to replace GluMlp
* clean up & add ffn_layer placeholder for ParallelScalingBlock
2023-05-09 12:24:47 -07:00
Ross Wightman e4e43190ce Add typing to all model entrypoint fns, add old cache check env var to builder 2023-05-08 08:52:38 -07:00
Ross Wightman 8fa86a28a8 Add datacomp L/14 (79.2 zs) image tower weights 2023-05-01 10:24:08 -07:00
Ross Wightman 5e64777804 0.8.21dev0 2023-04-28 13:46:59 -07:00
Ross Wightman 965d0a2d36 fast_attn -> fused_attn, implement global config to enable/disable fused_attn, add to more models. Add vit clip openai 336 weights. 2023-04-10 12:04:33 -07:00
Ross Wightman 4d135421a3 Implement patch dropout for eva / vision_transformer, refactor / improve consistency of dropout args across all vit based models 2023-04-07 20:27:23 -07:00
Ross Wightman 1bb3989b61 Improve kwarg passthrough for swin, vit, deit, beit, eva 2023-04-05 21:37:16 -07:00
Ross Wightman 9eaab795c2 Add some vit model deprecations 2023-04-05 17:21:03 -07:00
Ross Wightman 9aa1133bd2 Fix #1750, uncomment weight that exists on HF hub, add FIXME to 3 others that are still on local storage 2023-03-31 14:49:30 -07:00
Ross Wightman 0737bd3ec8 eva02 non-CLIP weights on HF hub, add initial eva02 clip model configs w/ postnorm variant & attn LN 2023-03-30 23:43:59 -07:00
Ross Wightman 572f05096a Swin and FocalNet weights on HF hub. Add model deprecation functionality w/ some registry tweaks. 2023-03-18 14:55:09 -07:00
Ross Wightman 2e38d53dca Remove dead line 2023-02-16 16:57:42 -08:00
Ross Wightman f77c04ff36 Torchscript fixes/hacks for rms_norm, refactor ParallelScalingBlock with manual combination of input projections, closer paper match 2023-02-16 16:57:42 -08:00
Ross Wightman 122621daef Add Final annotation to attn_fas to avoid symbol lookup of new scaled_dot_product_attn fn on old PyTorch in jit 2023-02-16 16:57:42 -08:00
Ross Wightman 621e1b2182 Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet. 2023-02-16 16:57:42 -08:00
Ross Wightman 64667bfa0e Add 'gigantic' vit clip variant for feature extraction and future fine-tuning 2023-01-25 18:02:10 -08:00
Ross Wightman 60ebb6cefa Re-order vit pretrained entries for more sensible default weights (no .tag specified) 2023-01-06 16:12:33 -08:00
Ross Wightman e861b74cf8 Pass through --model-kwargs (and --opt-kwargs for train) from command line through to model __init__. Update some models to improve arg overlay. Cleanup along the way. 2023-01-06 16:12:33 -08:00
Ross Wightman 8ece53e194 Switch BEiT to HF hub weights 2022-12-22 21:43:04 -08:00
Ross Wightman 9a51e4ea2e Add FlexiViT models and weights, refactoring, push more weights
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2022-12-22 17:23:09 -08:00
Ross Wightman 6a01101905 Update efficientnet.py and convnext.py to multi-weight, add ImageNet-12k pretrained EfficientNet-B5 and ConvNeXt-Nano. 2022-12-14 20:33:23 -08:00
Ross Wightman d5e7d6b27e Merge remote-tracking branch 'origin/main' into refactor-imports 2022-12-09 14:49:44 -08:00
Ross Wightman 7c4ed4d5a4 Add EVA-large models 2022-12-08 16:21:30 -08:00
Ross Wightman 927f031293 Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models 2022-12-06 15:00:06 -08:00
Ross Wightman 3785c234d7 Remove clip vit models that won't be ft and comment two that aren't uploaded yet 2022-12-05 10:21:34 -08:00
Ross Wightman 755570e2d6 Rename _pretrained.py -> pretrained.py, not feasible to change the other files to same scheme without breaking uses 2022-12-05 10:21:34 -08:00
Ross Wightman 72cfa57761 Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout. 2022-12-05 10:21:34 -08:00
Ross Wightman 4d5c395160 MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to use new pretrained cfgs
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2022-12-05 10:21:34 -08:00
Ross Wightman b2b6285af7 Add two more FT clip weights 2022-12-05 10:21:34 -08:00
Ross Wightman 5895056dc4 Add openai b32 ft 2022-12-05 10:21:34 -08:00
Ross Wightman 9dea5143d5 Adding more clip ft variants 2022-12-05 10:21:34 -08:00
Ross Wightman 444dcba4ad CLIP B16 12k weights added 2022-12-05 10:21:34 -08:00
Ross Wightman dff4717cbf Add clip b16 384x384 finetunes 2022-12-05 10:21:34 -08:00
Ross Wightman 883fa2eeaa Add fine-tuned B/16 224x224 in1k clip models 2022-12-05 10:21:34 -08:00
Ross Wightman 9a3d2ac2d5 Add latest CLIP ViT fine-tune pretrained configs / model entrypt updates 2022-12-05 10:21:34 -08:00
Ross Wightman def68befa7 Updating vit model defs for multi-weight support trial (vit first). Prepping for CLIP (laion2b and openai) fine-tuned weights. 2022-12-05 10:21:34 -08:00
Ross Wightman 0dadb4a6e9 Initial multi-weight support, handled so old pretrained config handling co-exists with new tags. 2022-12-05 10:21:34 -08:00
Mohamed Rashad 8fda68aff6 Fix repo id bug
This fixes issue #1482
2022-10-05 16:26:06 +02:00
Ross Wightman 1199c5a1a4 clip_laion2b models need 1e-5 eps for LayerNorm 2022-09-25 10:36:54 -07:00
Ross Wightman e069249a2d Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 2022-09-16 21:39:05 -07:00
Ross Wightman 9d65557be3 Fix errant import 2022-09-15 17:47:23 -07:00
Ross Wightman 9709dbaaa9 Adding support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP 2022-09-15 17:25:59 -07:00
Ross Wightman e11efa872d Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TResNet-V2-L weights. 2022-09-13 16:35:26 -07:00
Ceshine Lee 0b64117592 Take `no_emb_class` into account when calling `resize_pos_embed` 2022-07-24 19:11:45 +08:00