Ross Wightman
2b3f1a4633
Make channels for classic resnet configurable
2024-07-22 10:47:40 -07:00
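A minimal usage sketch for the configurable channels, assuming the new argument is exposed as `channels` on the ResNet constructor and forwarded through `create_model` kwargs (argument name and defaults are assumptions, not confirmed by this log):

```python
import timm

# Assumed API: 'channels' overrides the classic per-stage widths,
# which default to (64, 128, 256, 512) for resnet50.
model = timm.create_model('resnet50', channels=(32, 64, 128, 256), num_classes=10)
```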
Ross Wightman
9b2b8014e8
Add weights for test models
2024-07-22 10:08:57 -07:00
Ross Wightman
1a05ed29a1
Add two 'abswin' hiera models for train trials
2024-07-19 11:05:31 -07:00
Ross Wightman
0cbf4fa586
_orig_mod still causing issues even though I thought it was fixed in PyTorch; add unwrap / clean helpers
2024-07-19 11:03:45 -07:00
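For context: `torch.compile` wraps a module and stores the original as `_orig_mod`, which also prefixes checkpoint keys. A minimal sketch of what such unwrap / clean helpers typically do (names here are hypothetical, not the actual timm helpers):

```python
import torch

def unwrap_compiled(model: torch.nn.Module) -> torch.nn.Module:
    # torch.compile keeps the original module on the wrapper as `_orig_mod`
    return getattr(model, '_orig_mod', model)

def clean_state_dict(state_dict: dict) -> dict:
    # strip the `_orig_mod.` prefix torch.compile adds to state-dict keys
    return {k.removeprefix('_orig_mod.'): v for k, v in state_dict.items()}
```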
Ross Wightman
3a8a965891
Implement absolute+window pos embed for hiera, resizable but needs new weights
2024-07-18 21:43:37 -07:00
Ross Wightman
392b78aee7
set_input_size initial impl for vit & swin v1. Move HybridEmbed to own location in timm/layers
2024-07-17 15:25:48 -07:00
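A usage sketch for `set_input_size`, assuming it accepts an `img_size` argument and adapts the pos embed grid in place (the signature is an assumption):

```python
import timm
import torch

model = timm.create_model('vit_base_patch16_224')
model.set_input_size(img_size=384)  # assumed: resizes pos embed for the new resolution
out = model(torch.randn(1, 3, 384, 384))
```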
Ross Wightman
f920119f3b
Fixing tests
2024-07-09 14:53:20 -07:00
Ross Wightman
644abf9588
Fix default_cfg test for mobilenet_100
2024-07-09 12:52:24 -07:00
Ross Wightman
d5afe106dc
Merge remote-tracking branch 'origin/tiny_test_models' into small_things
2024-07-09 12:49:57 -07:00
Ross Wightman
55101028bb
Rename test_tiny* -> test*. Fix ByobNet BasicBlock attn location and add test_byobnet model.
2024-07-09 11:53:11 -07:00
Ross Wightman
1334598462
Add support back to EfficientNet to disable head_conv / bn2 so mobilenetv1 can be implemented properly
2024-07-08 13:51:26 -07:00
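With head_conv / bn2 disabled, MobileNet-V1 can be expressed by the EfficientNet builder; per the `mobilenet_100` name appearing elsewhere in this log, usage would be roughly:

```python
import timm

# classic MobileNet-V1, built on the EfficientNet backbone code
model = timm.create_model('mobilenet_100', pretrained=False)
```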
Ross Wightman
800405d941
Add conv_large mobilenetv3 aa/blur model defs
2024-07-08 13:50:05 -07:00
Ross Wightman
f81b094aaa
Add 'qkv_bias_separate' flag for EVA/beit/swinv2 attn modules to allow an override for easy quantization wrappers. Fix #2098
2024-07-08 13:48:38 -07:00
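The motivation: quantization wrappers often can't intercept a fused qkv projection whose bias is assembled at forward time. A simplified sketch of the two code paths (illustrative only, not the actual module code):

```python
import torch
import torch.nn.functional as F

def qkv_proj(x, qkv_weight, q_bias, k_bias, v_bias, qkv_bias_separate: bool):
    bias = torch.cat((q_bias, k_bias, v_bias))
    if qkv_bias_separate:
        # bias-free linear plus explicit add: easy for quantization wrappers to hook
        return F.linear(x, qkv_weight) + bias
    # fused path: bias passed into a single functional linear call
    return F.linear(x, qkv_weight, bias)
```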
Steffen Schneider
c01a47c9e7
Fix typo in type annotations in timm.models.hrnet
2024-07-08 00:53:16 +02:00
Daniel Suess
197c10463b
Fix jit.script breaking with features_fx
2024-06-28 03:58:51 +00:00
Ross Wightman
b751da692d
Add latest ix (xavier init for mqa) hybrid medium & large weights for MobileNetV4
2024-06-24 13:49:55 -07:00
Ross Wightman
f8342a045a
Merge pull request #2213 from huggingface/florence2
Fix #2212 map florence2 image tower to davit with a few changes
2024-06-24 11:01:08 -07:00
Sejik
c33a001397
Fix typo
2024-06-24 21:54:38 +09:00
Ross Wightman
02d0f27721
Cleanup davit padding
2024-06-22 12:06:46 -07:00
Ross Wightman
c715c724e7
Fix tracing by removing float cast; should end up float anyway
2024-06-22 08:35:30 -07:00
Ross Wightman
fb58a73033
Fix #2212 map florence2 image tower to davit with a few changes
2024-06-21 15:31:29 -07:00
Ross Wightman
fb13e6385e
Merge pull request #2203 from huggingface/more_mobile
Add mobilenet edgetpu defs for exp, add ol mobilenet v1 back for completeness / comparison
2024-06-18 15:20:01 -07:00
Ross Wightman
16e082e1c2
Add mobilenetv4 hybrid-large weights
2024-06-17 11:08:31 -07:00
Ross Wightman
e41125cc83
Merge pull request #2209 from huggingface/fcossio-vit-maxpool
ViT pooling refactor
2024-06-17 07:51:12 -07:00
Ross Wightman
a22466852d
Add 2400 epoch mobilenetv4 small weights, almost at paper accuracy, rounds to 73.8 top-1
2024-06-16 10:51:00 -07:00
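Loading these weights once published would follow the usual pattern (the exact pretrained tag isn't given here, so the default is assumed):

```python
import timm

# assumes a default pretrained tag exists for this architecture
model = timm.create_model('mobilenetv4_conv_small', pretrained=True)
```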
Ross Wightman
b1a6f4a946
Some missed reset_classifier() type annotations
2024-06-16 10:39:27 -07:00
Ross Wightman
71101ebba0
Refactor vit pooling to add more reduction options, separately callable
2024-06-14 23:16:58 -07:00
Ross Wightman
a0bb5b4a44
Missing stem_kernel_size argument in EfficientNetFeatures
2024-06-14 13:39:31 -07:00
Fernando Cossio
9567cf6d84
Feature: add option global_pool='max' to VisionTransformer
Most of the CNNs have a max global pooling option. I would like to extend ViT to have this option.
2024-06-14 15:24:54 +02:00
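Usage of the new option, per the commit description:

```python
import timm

# max pooling over patch tokens instead of class token / mean pooling
model = timm.create_model('vit_base_patch16_224', global_pool='max')
```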
Ross Wightman
9613c76844
Add mobilenet edgetpu defs for exp, add ol mobilenet v1 back for completeness / comparison
2024-06-13 17:33:04 -07:00
Ross Wightman
22de845add
Prepping for final MobileCLIP weight locations ( #2199 )
* Prepping for final MobileCLIP weight locations
* Update weight locations to coreml-projects
* Update mobileclip weight locations with final apple org location
2024-06-13 16:55:49 -07:00
Ross Wightman
575978ba55
Add mnv4_conv_large 384x384 weight location
2024-06-13 12:58:04 -07:00
Ross Wightman
e42e453128
Fix mnv4 conv_large weight link, reorder mnv4 pretrained cfg for proper precedence
2024-06-12 11:16:49 -07:00
Ross Wightman
7b0a5321cb
Merge pull request #2198 from huggingface/openai_clip_resnet
Mapping OpenAI CLIP Modified ResNet weights -> ByobNet.
2024-06-12 09:33:30 -07:00
Ross Wightman
57adc1acc8
Fix rotary embed version of attn pool. Bit of cleanup/naming
2024-06-11 23:49:17 -07:00
Ross Wightman
cdc7bcea69
Make 2d attention pool modules compatible with head interface. Use attention pool in CLIP ResNets as head. Make separate set of GAP models w/ avg pool instead of attn pool.
2024-06-11 21:32:07 -07:00
Ross Wightman
c63da1405c
Pretrained cfg name mismatch
2024-06-11 21:16:54 -07:00
Ross Wightman
88efca1be2
First set of MobileNetV4 weights trained in timm
2024-06-11 18:53:01 -07:00
Ross Wightman
30ffa152de
Fix load of larger ResNet CLIP models; experimenting with making AttentionPool *the* head, which seems to fine-tune better with one less layer.
2024-06-10 12:07:14 -07:00
Ross Wightman
5e9ff5798f
Adding pos embed resize fns to FX autowrap exceptions
2024-06-10 12:06:47 -07:00
Ross Wightman
f0fb471b26
Remove separate ConvNormActAa class, merge with ConvNormAct
2024-06-10 12:05:35 -07:00
Ross Wightman
5efa15b2a2
Mapping OpenAI CLIP Modified ResNet weights -> ByobNet. Improve AttentionPool2d layers. Fix #1731
2024-06-09 16:54:48 -07:00
Ross Wightman
7702d9afa1
ViTamin in_chans !=3 weight load fix
2024-06-07 20:39:23 -07:00
Ross Wightman
66a0eb4673
Experimenting with tiny test models: how small can they go and still be useful for regression tests?
2024-06-07 16:09:25 -07:00
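Going by the `test_byobnet` name above, a tiny test model is instantiated like any other, with the input size read from its pretrained cfg:

```python
import timm
import torch

model = timm.create_model('test_byobnet')  # tiny model, cheap to build in CI
_, h, w = model.default_cfg['input_size']
out = model(torch.randn(1, 3, h, w))
```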
Ross Wightman
5ee06760dc
Fix classifier input dim for mnv3 after last changes
2024-06-07 13:53:13 -07:00
Ross Wightman
a5a2ad2e48
Fix consistency, testing for forward_head w/ pre_logits, reset_classifier, models with pre_logits size != unpooled feature size
* add test that model supports forward_head(x, pre_logits=True)
* add head_hidden_size attr to all models and set differently from num_features attr when head has hidden layers
* test forward_features() feat dim == model.num_features and pre_logits feat dim == self.head_hidden_size
* more consistency in reset_classifier signature, add typing
* asserts in some heads where pooling cannot be disabled
Fix #2194
2024-06-07 13:53:00 -07:00
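The consistency contract described in this commit can be checked directly; a sketch grounded in the bullet points above:

```python
import timm
import torch

model = timm.create_model('resnet18')
x = torch.randn(1, 3, 224, 224)

feats = model.forward_features(x)
assert feats.shape[1] == model.num_features

# pre_logits output matches head_hidden_size
# (== num_features when the head has no hidden layer)
pre = model.forward_head(feats, pre_logits=True)
assert pre.shape[-1] == model.head_hidden_size
```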
Ross Wightman
4535a5412a
Change default serialization for push_to_hf_hub to 'both'
2024-06-07 13:40:31 -07:00
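A hedged sketch of the new default, assuming the switch is the `safe_serialization` parameter and that 'both' writes both a .bin and a .safetensors checkpoint (parameter name and repo id are assumptions / placeholders):

```python
import timm
from timm.models import push_to_hf_hub

model = timm.create_model('resnet18')
# requires a logged-in HF token; 'both' shown explicitly though it is now the default
push_to_hf_hub(model, 'my-user/my-model', safe_serialization='both')
```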
Ross Wightman
7ccb10ebff
Disable efficient_builder debug flag
2024-06-06 21:50:27 -07:00
Ross Wightman
ad026e6e33
Fix in_chans switching on create
2024-06-06 17:56:14 -07:00
Ross Wightman
fc1b66a51d
Fix first conv name for mci vit-b
2024-06-06 13:42:26 -07:00