22 Commits

Author SHA1 Message Date
Ross Wightman
bed350f5e5 Push all MaxxViT weights to HF hub, cleanup impl, add feature map extraction support and promote to 'std' architecture. Fix norm head for proper embedding / feat map output. Add new in12k + ft 1k weights. 2023-01-20 14:45:25 -08:00
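
The feature map extraction added here goes through timm's features_only interface; a minimal sketch, with the model name, input size, and pretrained=False chosen for illustration:

    import timm
    import torch

    # Build a MaxxViT-family model as a backbone; features_only=True returns a
    # list of intermediate feature maps (one per stage) instead of logits.
    model = timm.create_model('maxvit_tiny_rw_224', pretrained=False, features_only=True)
    feats = model(torch.randn(1, 3, 224, 224))
    for f in feats:
        print(f.shape)
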
Ross Wightman
1825b5e314 maxxvit type 2023-01-09 08:57:31 -08:00
Ross Wightman
5078b28f8a More kwarg handling tweaks, maxvit_base_rw def added 2023-01-09 08:57:31 -08:00
Ross Wightman
c0d7388a1b Improving kwarg merging in more models 2023-01-09 08:57:31 -08:00
Ross Wightman
9a51e4ea2e Add FlexiViT models and weights, refactoring, push more weights
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers (see the sketch below)
* tweak hf hub helpers to aid bulk uploading and updating
2022-12-22 17:23:09 -08:00
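
Among the pos embed modules moved to timm.layers is an absolute position embedding resampler; a hedged sketch of one call (the shapes, grid sizes, and num_prefix_tokens=0 are assumptions for illustration):

    import torch
    from timm.layers import resample_abs_pos_embed

    # Resample a 14x14 grid of absolute position embeddings to 16x16, as when
    # loading FlexiViT-style weights at a different patch grid size.
    pos_embed = torch.randn(1, 14 * 14, 384)  # assumes no prefix (class) tokens
    resized = resample_abs_pos_embed(pos_embed, new_size=(16, 16), num_prefix_tokens=0)
    print(resized.shape)  # torch.Size([1, 256, 384])
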
Ross Wightman
927f031293 Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models 2022-12-06 15:00:06 -08:00
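
A small illustration of the path change above; the imported symbol (DropPath) is picked only as an example:

    # Before this restructure:
    #   from timm.models.layers import DropPath
    # After timm.models.layers -> timm.layers:
    from timm.layers import DropPath
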
Ross Wightman
755570e2d6 Rename _pretrained.py -> pretrained.py, not feasible to change the other files to same scheme without breaking uses 2022-12-05 10:21:34 -08:00
Ross Wightman
72cfa57761 Add ported TensorFlow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout. 2022-12-05 10:21:34 -08:00
Ross Wightman
4d5c395160 MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to new pretrained cfgs use
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize (see the sketch below)
2022-12-05 10:21:34 -08:00
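
The squash crop/resize in the last bullet maps to timm's eval-transform crop_mode; a minimal sketch, assuming illustrative input_size and crop_pct values:

    from timm.data import create_transform

    # crop_mode='squash' resizes both edges straight to the target size
    # (no aspect-preserving center crop), as used for these high-res models.
    transform = create_transform(input_size=384, crop_pct=1.0, crop_mode='squash')
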
Ross Wightman
9914f744dc Add more maxxvit weights including ConvNeXt conv block based experiments. 2022-10-10 21:49:18 -07:00
Ross Wightman
fa8c84eede Update maxvit_tiny_256 weight to a better training iteration, add coatnet / maxvit / maxxvit model defs for future runs 2022-09-07 12:37:37 -07:00
Ross Wightman
c1b3cea19d Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 2022-09-07 10:27:11 -07:00
Ross Wightman
dc90816f26 Add maxvit_tiny_rw_224 weights 83.5 @ 224 and maxvit_rmlp_pico_rw_256 relpos weights, 80.5 @ 256, 81.3 @ 320 2022-09-06 16:14:41 -07:00
Ross Wightman
7f1b223c02 Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default 2022-08-29 15:49:32 -07:00
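
Since this commit, window/grid partition sizes follow img_size by default, so a def trained at one resolution can be built at another; a hedged sketch (the 320 img_size is an assumption):

    import timm

    # Window/grid sizes are derived from img_size, so the 256-trained def can
    # be instantiated for 320x320 input without manual partition overrides.
    model = timm.create_model('maxvit_rmlp_nano_rw_256', pretrained=False, img_size=320)
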
Ross Wightman
f1d2160d85 Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks 2022-08-26 12:53:49 -07:00
Ross Wightman
eca6f0a25c Fix syntax error (extra dataclass comma) in maxxvit.py 2022-08-26 11:29:09 -07:00
Ross Wightman
7c2660576d Tweak init for convnext block used in maxxvit/coatnext. 2022-08-25 15:30:59 -07:00
Ross Wightman
527f9a4cb2 Updated to correct maxvit_nano weights... 2022-08-24 12:42:11 -07:00
Ross Wightman
b2e8426fca Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'. 2022-08-24 11:01:20 -07:00
Ross Wightman
cac0a4570a More test fixes, pool size for 256x256 maxvit models 2022-08-23 13:38:26 -07:00
Ross Wightman
e939ed19b9 Rename internal creation fn for maxvit; it has not been just coatnet for a while... 2022-08-22 17:44:51 -07:00
Ross Wightman
ffaf97f813 MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies... 2022-08-22 17:42:10 -07:00