Commit Graph

1183 Commits (9613c7684408c4ca0c4a1448d0972b7ecb3564db)

Author SHA1 Message Date
Ross Wightman 9613c76844 Add mobilenet edgetpu defs for exp, add old mobilenet v1 back for completeness / comparison 2024-06-13 17:33:04 -07:00
Ross Wightman 22de845add
Prepping for final MobileCLIP weight locations (#2199)
* Prepping for final MobileCLIP weight locations

* Update weight locations to coreml-projects

* Update mobileclip weight locations with final apple org location
2024-06-13 16:55:49 -07:00
Ross Wightman 575978ba55 Add mnv4_conv_large 384x384 weight location 2024-06-13 12:58:04 -07:00
Ross Wightman e42e453128 Fix mnv4 conv_large weight link, reorder mnv4 pretrained cfg for proper precedence 2024-06-12 11:16:49 -07:00
Ross Wightman 7b0a5321cb
Merge pull request #2198 from huggingface/openai_clip_resnet
Mapping OpenAI CLIP Modified ResNet weights -> ByobNet.
2024-06-12 09:33:30 -07:00
Ross Wightman 57adc1acc8 Fix rotary embed version of attn pool. Bit of cleanup/naming 2024-06-11 23:49:17 -07:00
Ross Wightman cdc7bcea69 Make 2d attention pool modules compatible with head interface. Use attention pool in CLIP ResNets as head. Make separate set of GAP models w/ avg pool instead of attn pool. 2024-06-11 21:32:07 -07:00
Ross Wightman c63da1405c Pretrained cfg name mismatch 2024-06-11 21:16:54 -07:00
Ross Wightman 88efca1be2 First set of MobileNetV4 weights trained in timm 2024-06-11 18:53:01 -07:00
Ross Wightman 30ffa152de Fix load of larger ResNet CLIP models, experimenting with making AttentionPool *the* head, seems to fine-tune better, one less layer. 2024-06-10 12:07:14 -07:00
Ross Wightman 5e9ff5798f Adding pos embed resize fns to FX autowrap exceptions 2024-06-10 12:06:47 -07:00
Ross Wightman f0fb471b26 Remove separate ConvNormActAa class, merge with ConvNormAct 2024-06-10 12:05:35 -07:00
Ross Wightman 5efa15b2a2 Mapping OpenAI CLIP Modified ResNet weights -> ByobNet. Improve AttentionPool2d layers. Fix #1731 2024-06-09 16:54:48 -07:00
Ross Wightman 7702d9afa1 ViTamin in_chans !=3 weight load fix 2024-06-07 20:39:23 -07:00
Ross Wightman 5ee06760dc Fix classifier input dim for mnv3 after last changes 2024-06-07 13:53:13 -07:00
Ross Wightman a5a2ad2e48 Fix consistency, testing for forward_head w/ pre_logits, reset_classifier, models with pre_logits size != unpooled feature size
* add test that model supports forward_head(x, pre_logits=True)
* add head_hidden_size attr to all models and set differently from num_features attr when head has hidden layers
* test forward_features() feat dim == model.num_features and pre_logits feat dim == self.head_hidden_size
* more consistency in reset_classifier signature, add typing
* asserts in some heads where pooling cannot be disabled
Fix #2194
2024-06-07 13:53:00 -07:00
Ross Wightman 4535a5412a Change default serialization for push_to_hf_hub to 'both' 2024-06-07 13:40:31 -07:00
Ross Wightman 7ccb10ebff Disable efficient_builder debug flag 2024-06-06 21:50:27 -07:00
Ross Wightman ad026e6e33 Fix in_chans switching on create 2024-06-06 17:56:14 -07:00
Ross Wightman fc1b66a51d Fix first conv name for mci vit-b 2024-06-06 13:42:26 -07:00
Ross Wightman 88a1006e02 checkpoint filter fns with consistent name, add mobileclip-b pretrained cfgs 2024-06-06 12:38:52 -07:00
Ross Wightman 7d4ada6d16 Update ViTamin model defs 2024-06-06 09:16:43 -07:00
Ross Wightman cc8a03daac Add ConvStem and MobileCLIP hybrid model for B variant. Add full norm disable support to ConvNormAct layers 2024-06-06 09:15:27 -07:00
Ross Wightman 3c9d8e5b33 Merge remote-tracking branch 'origin/efficientnet_x' into fastvit_mobileclip 2024-06-05 17:35:15 -07:00
Ross Wightman 5756a81c55 Merge remote-tracking branch 'origin/Beckschen-vitamin' into fastvit_mobileclip 2024-06-05 15:20:54 -07:00
Ross Wightman 58591a97f7 Enable features_only properly 2024-06-04 16:57:16 -07:00
Ross Wightman 1b66ec7cf3 Fixup ViTamin, add hub weight reference 2024-06-03 17:14:03 -07:00
Ross Wightman b2c0aeb0ec Merge branch 'main' of https://github.com/Beckschen/pytorch-image-models into Beckschen-vitamin 2024-06-02 14:16:30 -07:00
Ross Wightman 7f96538052 Add missing lkc act for mobileclip fastvits 2024-05-31 11:59:51 -07:00
Ross Wightman a503639bcc Add mobileclip fastvit model defs, support extra SE. Add forward_intermediates API to fastvit 2024-05-30 10:17:38 -07:00
Ross Wightman 5fa6efa158 Add anti-aliasing support to mobilenetv3 and efficientnet family models. Update MobileNetV4 model defs, resolutions. Fix #599
* create_aa helper function centralized for all timm uses (resnet, convbnact helper)
* allow BlurPool w/ pre-defined channels (expand)
* mobilenetv4 UIB block using ConvNormAct layers for improved clarity, esp with AA added
* improve more mobilenetv3 and efficientnet related type annotations
2024-05-27 22:06:22 -07:00
Ross Wightman 5dce710101 Add vit_little in12k + in12k-ft-in1k weights 2024-05-27 14:56:03 -07:00
Ross Wightman 3c0283f9ef Fix reparameterize for NextViT. Fix #2187 2024-05-27 14:48:58 -07:00
Ross Wightman 4ff7c25766 Pass layer_scale_init_value to Mnv3Features module 2024-05-24 16:44:50 -07:00
Ross Wightman a12b72b5c4 Fix missing head_norm arg pop for feature model 2024-05-24 15:50:34 -07:00
Ross Wightman 7fe96e7a92 More MobileNet-v4 fixes
* missed final norm after post pooling 1x1 PW head conv
* improve repr of model by flipping a few modules to None when not used, nn.Sequential for MultiQueryAttention query/key/value/output
* allow layer scaling to be enabled/disabled at model variant level, conv variants don't use it
2024-05-24 15:09:29 -07:00
Ross Wightman 28d76a97db Mixed up kernel size for last blocks in mnv4-conv-small 2024-05-24 11:50:42 -07:00
Ross Wightman 0c6a69e7ef Add comments to MNV4 model defs with block variants 2024-05-23 15:54:05 -07:00
Ross Wightman cb33956b20 Fix some mistakes in mnv4 model defs 2024-05-23 14:24:32 -07:00
Ross Wightman cee79dada0 Merge remote-tracking branch 'origin/main' into efficientnet_x 2024-05-23 11:01:39 -07:00
Ross Wightman 6a8bb03330 Initial MobileNetV4 pass 2024-05-23 10:49:18 -07:00
Ross Wightman e748805be3 Add regex matching support to AttentionExtract. Add return_dict support to graph extractors and use returned output in AttentionExtractor 2024-05-22 14:33:39 -07:00
Ross Wightman 84cb225ecb Add in12k + 12k_ft_in1k vit_medium weights 2024-05-20 15:52:46 -07:00
Beckschen 7a2ad6bce1 Add link to model weights on Hugging Face 2024-05-17 06:51:35 -04:00
Beckschen 530fb49e7e Add link to model weights on Hugging Face 2024-05-17 06:48:59 -04:00
Ross Wightman cd0e7b11ff
Merge pull request #2180 from yvonwin/main
Remove a duplicate function in mobilenetv3.py
2024-05-15 07:54:17 -07:00
Ross Wightman 83aee5c28c Add explicit GAP (avg pool) variants of other SigLIP models. 2024-05-15 07:53:19 -07:00
yvonwin 58f2f79b04 Remove a duplicate function in mobilenetv3.py: `_gen_lcnet` is repeated in mobilenetv3.py. Remove the duplicate code. 2024-05-15 17:59:34 +08:00
Ross Wightman 7b3b11b63f Support loading of paligemma weights into GAP variants of SigLIP ViT. Minor tweak to npz loading for packed transformer weights. 2024-05-14 15:44:37 -07:00
Beckschen df304ffbf2 the dataclass init needs to use the default factory pattern, according to Ross 2024-05-14 15:10:05 -04:00
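The default-factory pattern referenced in the last commit can be sketched with the standard library alone. A minimal illustration (the `VitaminCfg` name and its field are hypothetical, not the actual timm dataclass): a mutable default like a list must be created per-instance via `field(default_factory=...)`, otherwise every instance would share one list.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VitaminCfg:
    # default_factory builds a fresh list for each instance;
    # a plain `depths: List[int] = [2, 4, 14, 2]` would be rejected
    # by dataclasses precisely because it would be shared state.
    depths: List[int] = field(default_factory=lambda: [2, 4, 14, 2])

a = VitaminCfg()
b = VitaminCfg()
a.depths.append(99)   # mutating one instance...
print(b.depths)       # ...leaves the other untouched: [2, 4, 14, 2]
```

Without the factory, mutating `a.depths` would also change `b.depths`, which is the class of bug this pattern avoids in pretrained-config definitions.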