Ross Wightman
c719f7eb86
More forward_intermediates() updates
* add convnext, resnet, efficientformer, levit support
* remove keyword-only args from the fn signature so that torchscript isn't broken for all models :(
* use reset_classifier() consistently in prune
2024-05-03 16:22:32 -07:00
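A minimal sketch of the forward_intermediates() interface these commits extend; argument names reflect the timm 1.0 feature interface and may vary by version:

```python
import torch
import timm

# Hedged sketch of the forward_intermediates() API referenced above; exact
# argument names are assumptions based on the timm 1.0 feature interface.
model = timm.create_model('resnet50', pretrained=False)
x = torch.randn(1, 3, 224, 224)

# Default: returns the final features plus a list of intermediate feature maps.
final, intermediates = model.forward_intermediates(x)
print([t.shape for t in intermediates])

# Pruning now uses reset_classifier() consistently; 0 removes the head.
model.reset_classifier(0)
```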
Ross Wightman
67332fce24
Add forward_intermediates() support to coatnet, maxvit, swin* models. Refine feature interface. Start prep of new vit weights.
2024-04-30 16:56:33 -07:00
Ross Wightman
4b2565e4cb
More forward_intermediates() / FeatureGetterNet work
* include relpos vit
* refactor reduction / size calcs so hybrid vits work and dynamic_img_size works
* fix negative feature indices when pruning
* fix mvitv2 w/ class token
* refine naming
* add tests
2024-04-10 15:11:34 -07:00
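The FeatureGetterNet path and the negative-index fix above can be exercised via features_only; a hedged sketch (names taken from timm's feature API, availability depends on version):

```python
import torch
import timm

# features_only on a plain ViT routes through FeatureGetterNet, which wraps
# forward_intermediates(); negative out_indices count from the last block
# (the negative-index fix above). Names assumed from timm's feature API.
model = timm.create_model(
    'vit_base_patch16_224',
    features_only=True,
    out_indices=(-2, -1),
)
feats = model(torch.randn(1, 3, 224, 224))
print([f.shape for f in feats])
```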
Ross Wightman
59239d9df5
Cleanup imports for vit relpos
2024-02-10 21:40:57 -08:00
Ross Wightman
ac1b08deb6
fix_init on vit & relpos vit
2024-02-10 20:15:37 -08:00
Yassine
884ef88818
fix all SDPA dropouts
2023-10-05 08:58:41 -07:00
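The SDPA dropout fix follows one pattern across models: pass dropout_p only while training, since F.scaled_dot_product_attention applies it unconditionally. An illustrative module, not timm's verbatim code:

```python
import torch
import torch.nn.functional as F

class Attn(torch.nn.Module):
    """Illustrative attention module showing the dropout gating fix."""
    def __init__(self, attn_drop: float = 0.1):
        super().__init__()
        self.attn_drop = torch.nn.Dropout(attn_drop)

    def forward(self, q, k, v):
        # dropout_p must be zero at eval time; SDPA has no self.training check.
        return F.scaled_dot_product_attention(
            q, k, v, dropout_p=self.attn_drop.p if self.training else 0.)
```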
Ross Wightman
e4e43190ce
Add typing to all model entrypoint fns, add old cache check env var to builder
2023-05-08 08:52:38 -07:00
Ross Wightman
3386af8c86
Final push to get remaining models using multi-weight pretrained configs, almost all weights on HF hub
2023-04-26 15:52:13 -07:00
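With multi-weight configs, a pretrained tag rides on the model name and resolves from the HF hub; for example (tag availability depends on timm version):

```python
import timm

# '.openai' selects one pretrained tag among several for this architecture.
model = timm.create_model('vit_base_patch16_clip_224.openai', pretrained=True)
print(model.pretrained_cfg)  # the resolved multi-weight config
```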
Ross Wightman
2f25f73b90
Missed a fused_attn update in relpos vit
2023-04-10 23:30:50 -07:00
Ross Wightman
965d0a2d36
Rename fast_attn -> fused_attn, implement a global config to enable/disable fused_attn, add it to more models. Add vit clip openai 336 weights.
2023-04-10 12:04:33 -07:00
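A hedged sketch of the global fused-attention toggle this commit adds; set_fused_attn() / use_fused_attn() match the names in timm.layers, but check your version:

```python
from timm.layers import set_fused_attn, use_fused_attn

# Disable fused attention globally; models then fall back to the manual
# softmax(q @ k^T) attention path.
set_fused_attn(False)
print(use_fused_attn())  # -> False
```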
Ross Wightman
4d135421a3
Implement patch dropout for eva / vision_transformer, refactor / improve consistency of dropout args across all vit based models
2023-04-07 20:27:23 -07:00
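Patch dropout is exposed as a model arg after this refactor; a hedged sketch (the arg name patch_drop_rate is assumed from the unified dropout args):

```python
import torch
import timm

# patch_drop_rate randomly drops that fraction of patch tokens during
# training only; the arg name is assumed from this commit's dropout refactor.
model = timm.create_model('vit_base_patch16_224', patch_drop_rate=0.5)
model.train()
out = model(torch.randn(2, 3, 224, 224))
```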
Ross Wightman
1bb3989b61
Improve kwarg passthrough for swin, vit, deit, beit, eva
2023-04-05 21:37:16 -07:00
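Kwarg passthrough means model-specific args survive create_model(); for example (which kwargs a given model accepts varies by architecture):

```python
import timm

# img_size / in_chans are forwarded to the vit constructor; treat this
# particular pairing as an example only.
model = timm.create_model('vit_base_patch16_224', img_size=384, in_chans=1)
```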
Ross Wightman
572f05096a
Swin and FocalNet weights on HF hub. Add model deprecation functionality w/ some registry tweaks.
2023-03-18 14:55:09 -07:00
Ross Wightman
122621daef
Add Final annotation to fast_attn to avoid symbol lookup of the new scaled_dot_product_attention fn on old PyTorch in jit
2023-02-16 16:57:42 -08:00
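The Final annotation trick: a torch.jit.Final[bool] is treated as a constant when scripting, so the untaken branch (and its reference to the new SDPA symbol) is eliminated on old PyTorch. Illustrative module, not timm's exact code:

```python
import torch
import torch.nn.functional as F
from torch.jit import Final

class Attention(torch.nn.Module):
    fast_attn: Final[bool]  # scripted as a constant -> dead-branch elimination

    def __init__(self, dim: int = 64):
        super().__init__()
        self.scale = dim ** -0.5
        self.fast_attn = hasattr(F, 'scaled_dot_product_attention')

    def forward(self, q, k, v):
        if self.fast_attn:
            # never compiled on old PyTorch, so the symbol lookup is skipped
            return F.scaled_dot_product_attention(q, k, v)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return attn @ v
```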
Ross Wightman
621e1b2182
Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet.
2023-02-16 16:57:42 -08:00
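Some of the 22B-paper ideas surface as vit args; a hedged example (qk_norm is the assumed arg name for query/key normalization, check your version):

```python
import timm

# qk_norm adds normalization on queries/keys as in the ViT-22B paper; the
# arg name is an assumption based on timm's vision_transformer kwargs.
model = timm.create_model('vit_base_patch16_224', qk_norm=True)
```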
Ross Wightman
9a51e4ea2e
Add FlexiViT models and weights, refactoring, push more weights
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2022-12-22 17:23:09 -08:00
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2022-12-06 15:00:06 -08:00
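The restructure changes import paths; the old location remains as a deprecation shim:

```python
# New canonical location after the restructure:
from timm.layers import DropPath, Mlp, trunc_normal_

# Old path still works via a deprecation shim but warns:
# from timm.models.layers import DropPath
```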
Ross Wightman
ce65a7b29f
Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log coord experiments.
2022-07-07 21:33:25 -07:00
Ross Wightman
7d657d2ef4
Improve resolve_pretrained_cfg behaviour when no cfg exists: warn instead of crash. Improves usability, e.g. #1311
2022-06-24 14:55:25 -07:00
Ross Wightman
4b30bae67b
Add updated vit_relpos weights, and impl w/ support for official swin-v2 differences for relpos. Add bias control support for MLP layers
2022-05-13 13:53:57 -07:00
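Bias control for MLP layers lands as a per-layer flag; a hedged sketch of timm's Mlp (the 2-tuple form is an assumption, check your version):

```python
from timm.layers import Mlp

# bias may be a single bool or (assumed) a 2-tuple controlling the two
# linear layers separately, as needed for the swin-v2 style relpos variants.
mlp = Mlp(in_features=768, hidden_features=3072, bias=(True, False))
```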
Ross Wightman
f5ca4141f7
Adjust arg order for recent vit model args, add a few comments
2022-05-02 22:41:38 -07:00
Ross Wightman
41dc49a337
Vision Transformer refactoring and Rel Pos impl
2022-05-02 15:37:39 -07:00