Ross Wightman
b751da692d
Add latest 'ix' (Xavier init for MQA) hybrid medium & large weights for MobileNetV4
2024-06-24 13:49:55 -07:00
Ross Wightman
d4d4d84fda
Dev version 1.0.8.dev0
2024-06-24 11:34:13 -07:00
Ross Wightman
f8342a045a
Merge pull request #2213 from huggingface/florence2
Fix #2212 map florence2 image tower to davit with a few changes
2024-06-24 11:01:08 -07:00
Ross Wightman
e7b4ab6a8d
Merge pull request #2214 from Sejik/main
Fix typo
2024-06-24 09:23:25 -07:00
Sejik
c33a001397
Fix typo
2024-06-24 21:54:38 +09:00
Ross Wightman
02d0f27721
cleanup davit padding
2024-06-22 12:06:46 -07:00
Ross Wightman
c715c724e7
Fix tracing by removing float cast, should end up float anyway
2024-06-22 08:35:30 -07:00
Ross Wightman
fb58a73033
Fix #2212 map florence2 image tower to davit with a few changes
2024-06-21 15:31:29 -07:00
Ross Wightman
b28945ff05
Version 1.0.7, prep for release
2024-06-18 16:19:43 -07:00
Ross Wightman
fb13e6385e
Merge pull request #2203 from huggingface/more_mobile
Add mobilenet edgetpu defs for exp, add ol mobilenet v1 back for comp…
2024-06-18 15:20:01 -07:00
Ross Wightman
427b3e46bd
Update README.md
2024-06-17 11:09:55 -07:00
Ross Wightman
16e082e1c2
Add mobilenetv4 hybrid-large weights
2024-06-17 11:08:31 -07:00
Ross Wightman
e41125cc83
Merge pull request #2209 from huggingface/fcossio-vit-maxpool
ViT pooling refactor
2024-06-17 07:51:12 -07:00
Ross Wightman
6254dfaece
Add numpy<2.0 to requirements until tests are sorted out for pytorch 2.3 vs older
2024-06-16 11:24:45 -07:00
Ross Wightman
a22466852d
Add 2400 epoch mobilenetv4 small weights, almost at paper, rounds to 73.8
2024-06-16 10:51:00 -07:00
Ross Wightman
b1a6f4a946
Some missed reset_classifier() type annotations
2024-06-16 10:39:27 -07:00
Ross Wightman
71101ebba0
Refactor vit pooling to add more reduction options, separately callable
2024-06-14 23:16:58 -07:00
Ross Wightman
a0bb5b4a44
Missing stem_kernel_size argument in EfficientNetFeatures
2024-06-14 13:39:31 -07:00
Fernando Cossio
9567cf6d84
Feature: add option global_pool='max' to VisionTransformer
Most CNNs have a max global pooling option. I would like to extend ViT to have this option as well.
2024-06-14 15:24:54 +02:00
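For context, a minimal sketch of how the new pooling option would be used through the standard timm.create_model entry point (the model name, input size, and class count below are placeholders, not part of the change):

```python
import timm
import torch

# Build a ViT whose global pooling reduces patch tokens with max
# instead of the default class-token / mean pooling.
model = timm.create_model(
    'vit_base_patch16_224',   # example model name
    pretrained=False,
    global_pool='max',        # the option added by this PR
    num_classes=10,
)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)         # shape (1, 10)
```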
Ross Wightman
9613c76844
Add mobilenet edgetpu defs for exp, add ol mobilenet v1 back for completeness / comparison
2024-06-13 17:33:04 -07:00
Ross Wightman
22de845add
Prepping for final MobileCLIP weight locations (#2199)
* Prepping for final MobileCLIP weight locations
* Update weight locations to coreml-projects
* Update mobileclip weight locations with final apple org location
2024-06-13 16:55:49 -07:00
Ross Wightman
575978ba55
Add mnv4_conv_large 384x384 weight location
2024-06-13 12:58:04 -07:00
Ross Wightman
832d3618a5
Update README.md
2024-06-12 23:26:05 -07:00
Ross Wightman
7b5f17d1bd
Update README.md, bump dev version 1.0.6
2024-06-12 12:35:44 -07:00
Ross Wightman
e42e453128
Fix mnv4 conv_large weight link, reorder mnv4 pretrained cfg for proper precedence
2024-06-12 11:16:49 -07:00
Ross Wightman
7b0a5321cb
Merge pull request #2198 from huggingface/openai_clip_resnet
Mapping OpenAI CLIP Modified ResNet weights -> ByobNet.
2024-06-12 09:33:30 -07:00
Ross Wightman
57adc1acc8
Fix rotary embed version of attn pool. Bit of cleanup/naming
2024-06-11 23:49:17 -07:00
Ross Wightman
5aa49d56bf
Merge pull request #2202 from huggingface/mnv4_first_weights
First set of MobileNetV4 weights trained in timm
2024-06-11 23:06:22 -07:00
Ross Wightman
cdc7bcea69
Make 2d attention pool modules compatible with head interface. Use attention pool in CLIP ResNets as head. Make separate set of GAP models w/ avg pool instead of attn pool.
2024-06-11 21:32:07 -07:00
Ross Wightman
c63da1405c
Pretrained cfg name mismatch
2024-06-11 21:16:54 -07:00
Ross Wightman
88efca1be2
First set of MobileNetV4 weights trained in timm
2024-06-11 18:53:01 -07:00
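A rough sketch of pulling one of these weights from the hub; the exact weight tag is an assumption and may differ from what was published:

```python
import timm
import torch

# Weight tag below is illustrative; timm.list_models('mobilenetv4*', pretrained=True)
# lists the names that were actually released.
model = timm.create_model('mobilenetv4_conv_small.e2400_r224_in1k', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    top5 = model(x).softmax(-1).topk(5)   # (values, indices) of the top-5 classes
```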
Ross Wightman
30ffa152de
Fix load of larger ResNet CLIP models, experimenting with making AttentionPool *the* head, seems to fine-tune better, one less layer.
2024-06-10 12:07:14 -07:00
Ross Wightman
5e9ff5798f
Adding pos embed resize fns to FX autowrap exceptions
2024-06-10 12:06:47 -07:00
Ross Wightman
f0fb471b26
Remove separate ConvNormActAa class, merge with ConvNormAct
2024-06-10 12:05:35 -07:00
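A sketch of what the merge implies for callers, assuming ConvNormAct now accepts the anti-aliasing layer directly via an aa_layer argument (previously the role of the separate ConvNormActAa class):

```python
import torch
from timm.layers import ConvNormAct, BlurPool2d

# Anti-aliased, stride-2 conv block: the conv runs at stride 1 and the
# blur-pool layer performs the downsampling.
block = ConvNormAct(32, 64, kernel_size=3, stride=2, aa_layer=BlurPool2d)

x = torch.randn(1, 32, 56, 56)
y = block(x)   # expected shape (1, 64, 28, 28)
```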
Ross Wightman
2673693897
Merge pull request #2200 from McPatate/feat/add_trufflehog_ci
feat(ci): add trufflehog secrets detection
2024-06-10 07:58:18 -07:00
Luc Georges
af7eef4aba
fix(ci): remove unnecessary permissions
2024-06-10 10:52:57 +02:00
Luc Georges
2585028524
feat(ci): add trufflehog secrets detection
2024-06-10 09:54:43 +02:00
Ross Wightman
5efa15b2a2
Mapping OpenAI CLIP Modified ResNet weights -> ByobNet. Improve AttentionPool2d layers. Fix #1731
2024-06-09 16:54:48 -07:00
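For reference, a sketch of using one of the remapped CLIP image towers as a feature/embedding extractor; the model and weight naming is a guess, so check timm.list_models('*clip*', pretrained=True) for the real names:

```python
import timm
import torch

# num_classes=0 drops the classifier so the attention-pooled embedding is returned.
model = timm.create_model('resnet50_clip.openai', pretrained=True, num_classes=0)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    embedding = model(x)   # pooled image embedding from the AttentionPool2d head
```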
Ross Wightman
60d35735ee
Update README.md
2024-06-08 22:01:53 -07:00
Ross Wightman
7702d9afa1
ViTamin in_chans != 3 weight load fix
2024-06-07 20:39:23 -07:00
Ross Wightman
66a0eb4673
Experimenting with tiny test models: how small can they go while still being useful for regression tests?
2024-06-07 16:09:25 -07:00
Ross Wightman
5517b054dd
Merge pull request #2195 from huggingface/refactor_pre_logits
Fix consistency, testing for forward_head w/ pre_logits, reset_classifier, models with pre_logits size != unpooled feature size
2024-06-07 15:30:24 -07:00
Ross Wightman
5ee06760dc
Fix classifier input dim for mnv3 after the last changes
2024-06-07 13:53:13 -07:00
Ross Wightman
a5a2ad2e48
Fix consistency, testing for forward_head w/ pre_logits, reset_classifier, models with pre_logits size != unpooled feature size
* add test that model supports forward_head(x, pre_logits=True)
* add head_hidden_size attr to all models and set differently from num_features attr when head has hidden layers
* test forward_features() feat dim == model.num_features and pre_logits feat dim == self.head_hidden_size
* more consistency in reset_classifier signature, add typing
* asserts in some heads where pooling cannot be disabled
Fix #2194
2024-06-07 13:53:00 -07:00
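A rough sketch of the consistency being tested here, using the usual timm model interface (the model name is illustrative):

```python
import timm
import torch

model = timm.create_model('resnet18', pretrained=False)
x = torch.randn(1, 3, 224, 224)

feats = model.forward_features(x)                 # unpooled features
assert feats.shape[1] == model.num_features

pre_logits = model.forward_head(feats, pre_logits=True)
# head_hidden_size matches num_features unless the head has hidden layers
assert pre_logits.shape[-1] == model.head_hidden_size

model.reset_classifier(num_classes=0)             # drop the classifier
pooled = model(x)                                 # pooled pre-logit features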
Ross Wightman
4535a5412a
Change default serialization for push_to_hf_hub to 'both'
2024-06-07 13:40:31 -07:00
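A minimal sketch of what the new default means for hub uploads, assuming the safe_serialization keyword on push_to_hf_hub and a logged-in HF token; the repo id is a placeholder:

```python
import timm
from timm.models import push_to_hf_hub

model = timm.create_model('resnet18', pretrained=False)

# 'both' writes a .safetensors file alongside the legacy .bin checkpoint;
# with this change it should be the default, so passing it is optional.
push_to_hf_hub(
    model,
    'your-username/resnet18-demo',   # placeholder repo id
    commit_message='Add model',
    safe_serialization='both',
)
```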
Ross Wightman
5cce2185e1
Update version.py
2024-06-07 13:13:23 -07:00
Ross Wightman
52659842cc
Merge pull request #2196 from huggingface/mega_merge
Mega merge
2024-06-07 13:12:36 -07:00
Ross Wightman
7ccb10ebff
Disable efficient_builder debug flag
2024-06-06 21:50:27 -07:00
Ross Wightman
ad026e6e33
Fix in_chans switching on create
2024-06-06 17:56:14 -07:00
Ross Wightman
fc1b66a51d
Fix first conv name for mci vit-b
2024-06-06 13:42:26 -07:00