Ross Wightman
6a8bb03330
Initial MobileNetV4 pass
2024-05-23 10:49:18 -07:00
Ross Wightman
2bfa5e5d74
Remove JIT activations and take jit out of ME activations. Remove other instances of torch.jit.script, which breaks torch.compile and is much less performant. Remove SpaceToDepthModule.
2024-05-06 16:32:49 -07:00
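For context, the change pattern here is removing @torch.jit.script decorators from activation functions; a minimal, illustrative sketch (not the actual diff) using timm's hard_mish formula:

```python
import torch

# Scripted activations lose the decorator: torch.jit.script graphs break torch.compile
# and are slower than plain eager / compiled code.
# @torch.jit.script   # <- removed
def hard_mish(x: torch.Tensor) -> torch.Tensor:
    # hard_mish: 0.5 * x * clamp(x + 2, 0, 2)
    return 0.5 * x * (x + 2).clamp(min=0., max=2.)
```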
Ross Wightman
88889de923
Fix meshgrid deprecation warnings and keep backward compat via explicit 'ndgrid' and 'meshgrid' fns that take no indexing arg
2024-01-27 13:48:33 -08:00
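A minimal sketch of the backward-compat idea behind this commit, assuming a wrapper named `ndgrid` as the message suggests (not necessarily the exact timm implementation):

```python
import torch

def ndgrid(*tensors):
    # Pin the legacy 'ij' behavior explicitly so callers avoid the deprecation warning.
    try:
        return torch.meshgrid(*tensors, indexing='ij')
    except TypeError:
        # Older torch without the `indexing` arg: the original call already behaves as 'ij'.
        return torch.meshgrid(*tensors)
```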
a-r-r-o-w
5f14bdd564
Include typing suggestions by @rwightman
2023-10-30 13:47:54 -07:00
Ross Wightman
a58f9162d7
Add __init__.py update missed when adding the attention pooling layer
2023-10-17 09:28:21 -07:00
Ross Wightman
9caf32b93f
Move levit style pos bias resize with other rel pos bias utils
2023-09-01 11:05:56 -07:00
方曦
170a5b6e27
Add TinyViT
2023-09-01 11:05:56 -07:00
Ross Wightman
c153cd4a3e
Add a more advanced interpolation method from BEiT and support non-square window & image size adaptation for:
* beit/beit-v2
* maxxvit/coatnet
* swin transformer
And non-square windows for swin-v2
2023-08-08 16:41:16 -07:00
Ross Wightman
e9373b1b92
Cleanup before samvit merge. Resize abs posembed on the fly, undo some line-wraps, remove redundant unbind, fix HF hub weight load
2023-05-18 16:43:48 -07:00
Ross Wightman
a01d8f86f4
Tweak the DinoV2 addition, add MAE ViT weights, add initial intermediate layer getter experiment
2023-05-09 17:59:22 -07:00
Ross Wightman
af48246a9a
Add SwiGLUPacked to layers __init__
2023-05-08 13:52:34 -07:00
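With the export in place, the layer is importable from the package namespace; an illustrative usage (requires a timm release that includes this change):

```python
import torch
from timm.layers import SwiGLUPacked

# SwiGLU MLP with a packed gate/hidden projection, usable as a drop-in MLP block.
mlp = SwiGLUPacked(in_features=768, hidden_features=2048)
out = mlp(torch.randn(2, 197, 768))  # -> (2, 197, 768)
```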
Ross Wightman
a08e5aed1d
Add multi-weight support to more models, moving to the HF hub. Remove inplace_abn from all models including TResNet
2023-04-20 22:44:49 -07:00
Ross Wightman
965d0a2d36
Rename fast_attn -> fused_attn, implement global config to enable/disable fused_attn, add it to more models. Add vit clip openai 336 weights.
2023-04-10 12:04:33 -07:00
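A hedged usage sketch of the global toggle introduced here; the helper name (`set_fused_attn`) is taken from the commit context and may differ across timm versions:

```python
import timm
from timm import layers

# Globally disable the fused attention path so supporting models fall back to the
# explicit matmul + softmax implementation (useful for debugging or older PyTorch).
layers.set_fused_attn(False)  # name assumed; check your timm version

model = timm.create_model('vit_base_patch16_224', pretrained=False)
```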
Ross Wightman
4d135421a3
Implement patch dropout for eva / vision_transformer, refactor / improve consistency of dropout args across all vit based models
2023-04-07 20:27:23 -07:00
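An illustrative way to exercise the patch dropout added here; the constructor arg name (`patch_drop_rate`) is assumed and may vary by model family/version:

```python
import timm

# Patch dropout randomly drops a fraction of patch tokens during training
# (an MAE/FLIP-style speed and regularization trick).
model = timm.create_model(
    'vit_base_patch16_224',
    pretrained=False,
    patch_drop_rate=0.25,  # arg name assumed; only active in training mode
)
```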
Ross Wightman
3863d63516
Add EVA02 weights and model defs, move BEiT-based eva_giant to the same eva.py file. Clean up rotary pos, add lang-oriented freq bands for compat with the EVA design choice. Fix #1738
2023-03-27 17:16:07 -07:00
Ross Wightman
acfd85ad68
All swin models support spatial output, add output_fmt to v1/v2 and use ClassifierHead.
* update ClassifierHead to allow different input format
* add output format support to patch embed
* fix some flatten issues for a few conv head models
* add Format enum and helpers for tensor format (layout) choices
2023-03-15 23:21:51 -07:00
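A hedged sketch of what the spatial output support looks like from the user side; the model name and the NHWC default are assumptions, not guaranteed by this log:

```python
import torch
import timm

# With spatial output support, forward_features on a Swin model keeps the 2D token grid
# rather than returning only pooled features.
model = timm.create_model('swinv2_tiny_window8_256', pretrained=False)
feats = model.forward_features(torch.randn(1, 3, 256, 256))
print(feats.shape)  # spatial feature map (e.g. NHWC layout after this change)
```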
Ross Wightman
621e1b2182
Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet.
2023-02-16 16:57:42 -08:00
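For reference, the PyTorch 2.0 fused attention call being tested here is the standard torch API; a minimal standalone example:

```python
import torch
import torch.nn.functional as F

# Fused attention over (batch, heads, tokens, head_dim) tensors; PyTorch dispatches to
# flash / memory-efficient kernels where available.
q = torch.randn(2, 12, 197, 64)
k = torch.randn(2, 12, 197, 64)
v = torch.randn(2, 12, 197, 64)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0)
print(out.shape)  # torch.Size([2, 12, 197, 64])
```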
Ross Wightman
6f28b562c6
Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments
2023-01-27 14:57:01 -08:00
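A hedged sketch (not the timm source) of the head pattern being factored out here: global average pool, then norm, then an optional pre-logits MLP, then the final fc, matching the gap->norm->fc ordering also noted in the DaViT commit below:

```python
import torch
import torch.nn as nn

class NormMlpHeadSketch(nn.Module):
    def __init__(self, in_chs, num_classes, hidden_size=None):
        super().__init__()
        self.norm = nn.LayerNorm(in_chs)
        self.pre_logits = (
            nn.Sequential(nn.Linear(in_chs, hidden_size), nn.Tanh())
            if hidden_size else nn.Identity()
        )
        self.fc = nn.Linear(hidden_size or in_chs, num_classes)

    def forward(self, x):
        x = x.mean(dim=(2, 3))   # global average pool over H, W of an NCHW feature map
        x = self.norm(x)         # layer norm on the pooled channels
        return self.fc(self.pre_logits(x))

head = NormMlpHeadSketch(768, 1000, hidden_size=512)
logits = head(torch.randn(2, 768, 7, 7))  # -> (2, 1000)
```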
Fredo Guan
81ca323751
DaViT: update formatting and fix grad checkpointing (#7)
Fixed head to gap->norm->fc as per ConvNeXt, along with an option for norm->gap->fc.
Tests failed due to CLIP ConvNeXt models; DaViT tests passed.
2023-01-15 14:34:56 -08:00
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2022-12-06 15:00:06 -08:00
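The practical effect of this restructure on downstream code: layer imports move up a level, with the old path kept working via a deprecation shim for a time.

```python
# Before the restructure (later kept alive only as a deprecation shim):
# from timm.models.layers import DropPath, trunc_normal_

# After: layers live directly under timm.layers
from timm.layers import DropPath, trunc_normal_

drop = DropPath(drop_prob=0.1)
```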