Ross Wightman
a5a2ad2e48
Fix consistency, testing for forward_head w/ pre_logits, reset_classifier, models with pre_logits size != unpooled feature size
...
* add test that model supports forward_head(x, pre_logits=True)
* add head_hidden_size attr to all models and set differently from num_features attr when head has hidden layers
* test forward_features() feat dim == model.num_features and pre_logits feat dim == self.head_hidden_size
* more consistency in reset_classifier signature, add typing
* asserts in some heads where pooling cannot be disabled
Fix #2194
2024-06-07 13:53:00 -07:00
Ross Wightman
c838c4233f
Add typing to reset_classifier() on other models
2024-05-12 11:12:00 -07:00
Ross Wightman
0d124ffd4f
Update README. Fine-grained layer-wise lr decay working for tiny_vit and both efficientvits. Minor fixes.
2023-09-01 15:05:29 -07:00
方曦
170a5b6e27
add tinyvit
2023-09-01 11:05:56 -07:00
Ross Wightman
5242ba6edc
MobileOne and FastViT weights on HF hub, more code cleanup and tweaks, features_only working. Add reparam flag to validate and benchmark, support reparam of all models with fuse(), reparameterize() or switch_to_deploy() methods on modules
2023-08-23 22:50:37 -07:00
Ross Wightman
7d7589e8da
Fixing efficient_vit torchscript, fx, default_cfg issues
2023-08-18 23:23:11 -07:00
Ross Wightman
58ea1c02c4
Add fixed_input_size flag to msra efficient_vit
2023-08-18 16:48:17 -07:00
Ross Wightman
c28324a150
Update efficient_vit (msra), hf hub weights
2023-08-18 16:45:37 -07:00
方曦
00f670fa69
fix CI bug for efficientvits
2023-08-17 14:40:17 +08:00
方曦
a56e2bbf19
fix efficientvit_msra pretrained load
2023-08-03 18:44:38 +08:00
方曦
e94c60b546
efficientvit_msra refactor
2023-08-03 17:45:50 +08:00
方曦
e8fb866ccf
fix efficientvit_msra pool
2023-08-02 14:40:01 +08:00
方曦
43443f64eb
fix efficientvits
2023-08-02 14:12:37 +08:00
方曦
82d1e99e1a
add efficientvit(msra)
2023-08-01 18:51:08 +08:00