Commit Graph

1066 Commits (711c5dee6db9fa98ed0e69abb498a43e5348a042)

Author SHA1 Message Date
Ross Wightman 9a53c3f727 Finalize DaViT: some formatting and modelling simplifications (separate PatchEmbed into Stem + Downsample), weights on HF hub. 2023-01-27 13:54:04 -08:00
Fredo Guan fb717056da Merge remote-tracking branch 'upstream/main' 2023-01-26 10:49:15 -08:00
nateraw 14b84e8895 📝 update docstrings 2023-01-26 00:49:44 -05:00
nateraw f0dc8a8267 📝 update docstrings for create_model 2023-01-25 21:10:41 -05:00
Ross Wightman 64667bfa0e Add 'gigantic' vit clip variant for feature extraction and future fine-tuning 2023-01-25 18:02:10 -08:00
Ross Wightman 36989cfae4 Factor out readme generation in hub helper, add more readme fields 2023-01-20 14:49:40 -08:00
Ross Wightman 32f252381d Change order of checkpoint filtering fn application in builder, try dict, model variant first 2023-01-20 14:48:54 -08:00
Ross Wightman bed350f5e5 Push all MaxxViT weights to HF hub, cleanup impl, add feature map extraction support and promote to 'std' architecture. Fix norm head for proper embedding / feat map output. Add new in12k + ft 1k weights. 2023-01-20 14:45:25 -08:00
Ross Wightman ca38e1e73f Update ClassifierHead module, add reset() method, update in_chs -> in_features for consistency 2023-01-20 14:44:05 -08:00
Ross Wightman 8ab573cd26 Add convnext_tiny and convnext_small 384x384 fine-tunes of in12k weights, fix pool size for laion CLIP convnext weights 2023-01-20 14:40:16 -08:00
Fredo Guan 81ca323751
Davit update formatting and fix grad checkpointing (#7)
fixed head to gap->norm->fc as per ConvNeXt, along with option for norm->gap->fc
failed tests were due to CLIP ConvNeXt models; DaViT tests passed
2023-01-15 14:34:56 -08:00
Ross Wightman e9aac412de Correct mean/std for CLIP convnexts 2023-01-14 22:53:56 -08:00
Ross Wightman 42bd8f7bcb Add convnext_base CLIP image tower weights for fine-tuning / features 2023-01-14 21:16:29 -08:00
Ross Wightman a2c14c2064 Add tiny/small in12k pretrained and fine-tuned ConvNeXt models 2023-01-11 14:50:39 -08:00
Ross Wightman 01fdf44438 Initial focalnet import, more refactoring needed for timm. 2023-01-09 16:18:19 -08:00
Ross Wightman 2e83bba142 Revert head norm changes to ConvNeXt as it broke some downstream use, alternate workaround for fcmae weights 2023-01-09 13:37:40 -08:00
Ross Wightman 1825b5e314 maxxvit type 2023-01-09 08:57:31 -08:00
Ross Wightman 5078b28f8a More kwarg handling tweaks, maxvit_base_rw def added 2023-01-09 08:57:31 -08:00
Ross Wightman c0d7388a1b Improving kwarg merging in more models 2023-01-09 08:57:31 -08:00
Ross Wightman 60ebb6cefa Re-order vit pretrained entries for more sensible default weights (no .tag specified) 2023-01-06 16:12:33 -08:00
Ross Wightman e861b74cf8 Pass --model-kwargs (and --opt-kwargs for train) through from the command line to model __init__. Update some models to improve arg overlay. Cleanup along the way. 2023-01-06 16:12:33 -08:00
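The kwargs overlay above merges extra keyword args into the model's __init__; the train script's --model-kwargs does the same from the command line (e.g. `train.py --model resnet50 --model-kwargs drop_path_rate=0.05`). A minimal sketch, assuming `drop_path_rate` is accepted by the chosen architecture:

```python
import timm

# extra kwargs to create_model are overlaid onto the model's __init__ args
model = timm.create_model('resnet50', pretrained=False, drop_path_rate=0.05)
```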
Ross Wightman add3fb864e Working on improved model card template for push_to_hf_hub 2023-01-06 16:12:33 -08:00
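A hedged sketch of the hub push helper referenced above; the import path and keyword names are assumptions that vary by timm version, and the repo id is hypothetical:

```python
import timm
from timm.models import push_to_hf_hub  # may live under timm.models._hub in some versions

model = timm.create_model('resnet18', pretrained=True)
push_to_hf_hub(model, 'your-username/resnet18-demo', private=True)  # hypothetical repo id
```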
Ross Wightman 6e5553da5f
Add ConvNeXt-V2 support (model additions and weights) (#1614)
* Add ConvNeXt-V2 support (model additions and weights)

* ConvNeXt-V2 weights on HF Hub, tweaking some tests

* Update README, fixing convnextv2 tests
2023-01-05 07:53:32 -08:00
Ross Wightman 6902c48a5f Fix ResNet based models to work w/ norm layers w/o affine params. Reformat long arg lists into vertical form. 2022-12-29 16:32:26 -08:00
Ross Wightman 8ece53e194 Switch BEiT to HF hub weights 2022-12-22 21:43:04 -08:00
Ross Wightman 9a51e4ea2e Add FlexiViT models and weights, refactoring, push more weights
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2022-12-22 17:23:09 -08:00
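With the pretrained tags finalized above, a specific weight set is selected as 'model_name.tag'. A minimal sketch, assuming the `flexivit_base.1200ep_in1k` tag exists in the version in use:

```python
import timm

# '.1200ep_in1k' picks one pretrained tag among several for the architecture
model = timm.create_model('flexivit_base.1200ep_in1k', pretrained=True)
```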
Fredo Guan 10b3f696b4
Davit std (#6)
Separate patch_embed module
2022-12-16 21:50:28 -08:00
Ross Wightman 656e1776de Convert mobilenetv3 to multi-weight, tweak PretrainedCfg metadata 2022-12-16 09:29:13 -08:00
Ross Wightman 6a01101905 Update efficientnet.py and convnext.py to multi-weight, add ImageNet-12k pretrained EfficientNet-B5 and ConvNeXt-Nano. 2022-12-14 20:33:23 -08:00
Fredo Guan 84178fca60
Merge branch 'rwightman:main' into main 2022-12-12 23:13:58 -08:00
Fredo Guan c43340ddd4
Davit std (#5)
* starting point
* Davit revised (#4)
* clean up
* many repeated iterative updates to davit.py and test_models.py
2022-12-11 03:03:22 -08:00
Ross Wightman d5e7d6b27e Merge remote-tracking branch 'origin/main' into refactor-imports 2022-12-09 14:49:44 -08:00
Ross Wightman cda39b35bd Add a deprecation phase to module re-org 2022-12-09 14:39:45 -08:00
Fredo Guan edea013dd1
Davit std (#3)
Davit with all features working
2022-12-09 02:53:21 -08:00
Ross Wightman 7c4ed4d5a4 Add EVA-large models 2022-12-08 16:21:30 -08:00
Fredo Guan 434a03937d
Merge branch 'rwightman:main' into main 2022-12-08 08:05:16 -08:00
Ross Wightman 98047ef5e3 Add EVA FT results, hopefully fix BEiT test failures 2022-12-07 08:54:06 -08:00
Ross Wightman 3cc4d7a894 Fix missing register for 224 eva model 2022-12-07 08:54:06 -08:00
Ross Wightman eba07b0de7 Add eva models to beit.py 2022-12-07 08:54:06 -08:00
Fredo Guan 3bd96609c8
Davit (#1)
Implement the davit model from https://arxiv.org/abs/2204.03645 and https://github.com/dingmyu/davit
2022-12-06 17:19:25 -08:00
Ross Wightman 927f031293 Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models 2022-12-06 15:00:06 -08:00
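After this restructure, layers import from timm.layers; per the deprecation-phase commit above, the old path kept working for a while. A minimal sketch:

```python
from timm.layers import DropPath, trunc_normal_  # new location
# from timm.models.layers import DropPath        # old location, deprecated
```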
Ross Wightman 3785c234d7 Remove clip vit models that won't be ft and comment two that aren't uploaded yet 2022-12-05 10:21:34 -08:00
Ross Wightman 755570e2d6 Rename _pretrained.py -> pretrained.py, not feasible to change the other files to same scheme without breaking uses 2022-12-05 10:21:34 -08:00
Ross Wightman 72cfa57761 Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout. 2022-12-05 10:21:34 -08:00
Ross Wightman 4d5c395160 MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to new pretrained cfgs use
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2022-12-05 10:21:34 -08:00
Ross Wightman 9da7e3a799 Add crop_mode for pretrained config / image transforms. Add support for dynamo compilation to benchmark/train/validate 2022-12-05 10:21:34 -08:00
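A hedged sketch of the crop_mode option above; the keyword on create_transform is an assumption for this timm version, with 'squash' resizing to the target size without a center crop:

```python
from timm.data import create_transform

# crop_pct=1.0 + crop_mode='squash' stretches the full image to the target size
tfm = create_transform(input_size=480, crop_pct=1.0, crop_mode='squash')
```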
Ross Wightman b2b6285af7 Add two more FT clip weights 2022-12-05 10:21:34 -08:00
Ross Wightman 5895056dc4 Add openai b32 ft 2022-12-05 10:21:34 -08:00
Ross Wightman 9dea5143d5 Adding more clip ft variants 2022-12-05 10:21:34 -08:00
Ross Wightman 444dcba4ad CLIP B16 12k weights added 2022-12-05 10:21:34 -08:00
Ross Wightman dff4717cbf Add clip b16 384x384 finetunes 2022-12-05 10:21:34 -08:00
Ross Wightman 883fa2eeaa Add fine-tuned B/16 224x224 in1k clip models 2022-12-05 10:21:34 -08:00
Ross Wightman 9a3d2ac2d5 Add latest CLIP ViT fine-tune pretrained configs / model entrypt updates 2022-12-05 10:21:34 -08:00
Ross Wightman 42bbbddee9 Add missing model config 2022-12-05 10:21:34 -08:00
Ross Wightman def68befa7 Updating vit model defs for multi-weight support trial (vit first). Prepping for CLIP (laion2b and openai) fine-tuned weights. 2022-12-05 10:21:34 -08:00
Ross Wightman 0dadb4a6e9 Initial multi-weight support, handled so old pretraing config handling co-exists with new tags. 2022-12-05 10:21:34 -08:00
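Under the multi-weight scheme introduced here, one architecture carries several pretrained tags. A minimal sketch, assuming the list_pretrained helper and the augreg tag name exist in the version in use:

```python
import timm

print(timm.list_pretrained('vit_base_patch16_224*')[:5])  # enumerate 'model.tag' names
model = timm.create_model('vit_base_patch16_224.augreg_in21k_ft_in1k', pretrained=True)
```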
Wauplin 9b114754db refactor push_to_hub helper 2022-11-16 12:03:34 +01:00
Wauplin ae0a0db7de Create repo before cloning with Repository.clone_from 2022-11-15 15:17:20 +01:00
Ross Wightman 803254bb40 Fix spacing misalignment for fast norm path in LayerNorm modules 2022-10-24 21:43:49 -07:00
Ross Wightman 6635bc3f7d
Merge pull request #1479 from rwightman/script_cleanup
Train / val script enhancements, non-GPU (ie CPU) device support, HF datasets support, TFDS/WDS dataloading improvements
2022-10-15 09:29:39 -07:00
Ross Wightman 0e6023f032
Merge pull request #1381 from ChristophReich1996/master
Fix typo in PositionalEncodingFourier
2022-10-14 18:34:33 -07:00
Ross Wightman 66f4af7090 Merge remote-tracking branch 'origin/master' into script_cleanup 2022-10-14 15:54:00 -07:00
Ross Wightman 9914f744dc Add more maxxvit weights including ConvNeXt conv block based experiments. 2022-10-10 21:49:18 -07:00
Mohamed Rashad 8fda68aff6
Fix repo id bug
This is to fix issue #1482
2022-10-05 16:26:06 +02:00
Ross Wightman 1199c5a1a4 clip_laion2b models need 1e-5 eps for LayerNorm 2022-09-25 10:36:54 -07:00
Ross Wightman e858912e0c Add brute-force checkpoint remapping option 2022-09-23 16:07:03 -07:00
Ross Wightman b293dfa595 Add CL SE module 2022-09-23 16:06:09 -07:00
Ross Wightman a383ef99f5 Make huggingface_hub necessary if it's the only source for a pretrained weight 2022-09-23 13:54:21 -07:00
Ross Wightman e069249a2d Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 2022-09-16 21:39:05 -07:00
Ross Wightman 9d65557be3 Fix errant import 2022-09-15 17:47:23 -07:00
Ross Wightman 9709dbaaa9 Adding support for fine-tune CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP 2022-09-15 17:25:59 -07:00
Ross Wightman a520da9b49 Update tresnet features_info for v2 2022-09-13 20:54:54 -07:00
Ross Wightman c8ab747bf4 BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter 2022-09-13 17:56:49 -07:00
Ross Wightman 73049dc2aa Fix typo in dla weight update 2022-09-13 17:52:45 -07:00
Ross Wightman e11efa872d Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights. 2022-09-13 16:35:26 -07:00
Ross Wightman fa8c84eede Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs 2022-09-07 12:37:37 -07:00
Ross Wightman c1b3cea19d Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 2022-09-07 10:27:11 -07:00
Ross Wightman 914544fc81 Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2 2022-09-06 20:25:18 -07:00
Ross Wightman dc90816f26 Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320 2022-09-06 16:14:41 -07:00
Ross Wightman f489f02ad1 Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original. 2022-09-06 16:14:00 -07:00
Ross Wightman 7f1b223c02 Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default 2022-08-29 15:49:32 -07:00
Ross Wightman e6a4361306 pretrained_cfg entry for mvitv2_small_cls 2022-08-28 15:27:01 -07:00
Ross Wightman f66e5f0e35 Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443 2022-08-28 15:24:04 -07:00
Ross Wightman f1d2160d85 Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks 2022-08-26 12:53:49 -07:00
Ross Wightman eca6f0a25c Fix syntax error (extra dataclass comma) in maxxvit.py 2022-08-26 11:29:09 -07:00
Ross Wightman ff6a919cf5 Add --fast-norm arg to benchmark.py, train.py, validate.py 2022-08-25 17:20:46 -07:00
Ross Wightman 769ab4b98a Clean up no_grad for trunc normal weight inits 2022-08-25 16:29:52 -07:00
Ross Wightman 48e1df8b37 Add norm/norm_act header comments 2022-08-25 16:29:34 -07:00
Ross Wightman 7c2660576d Tweak init for convnext block using maxxvit/coatnext. 2022-08-25 15:30:59 -07:00
Ross Wightman 1d8d6f6072 Fix two default args in DenseNet blocks... fix #1427 2022-08-25 15:00:35 -07:00
Ross Wightman 527f9a4cb2 Updated to correct maxvit_nano weights... 2022-08-24 12:42:11 -07:00
Ross Wightman b2e8426fca Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'. 2022-08-24 11:01:20 -07:00
Ross Wightman 837c68263b For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode 2022-08-23 15:17:12 -07:00
Ross Wightman cac0a4570a More test fixes, pool size for 256x256 maxvit models 2022-08-23 13:38:26 -07:00
Ross Wightman e939ed19b9 Rename internal creation fn for maxvit, has not been just coatnet for a while... 2022-08-22 17:44:51 -07:00
Ross Wightman ffaf97f813 MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies.. 2022-08-22 17:42:10 -07:00
Ross Wightman 8c9696c9df More model and test fixes 2022-08-22 17:40:31 -07:00
Ross Wightman ca52108c2b Fix some model support functions 2022-08-19 10:20:51 -07:00
Ross Wightman f332fc2db7 Fix some test failures, torchscript issues 2022-08-18 16:19:46 -07:00
Ross Wightman 6e559e9b5f Add MViT (Multi-Scale) V2 2022-08-17 15:12:31 -07:00
Ross Wightman 43aa84e861 Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit) 2022-08-17 14:32:58 -07:00
Ross Wightman c486aa71f8 Add GCViT 2022-08-17 14:29:18 -07:00
Ross Wightman fba6ecd39b Add EfficientFormer 2022-08-17 14:08:53 -07:00
Ross Wightman ff4a38e2c3 Add PyramidVisionTransformerV2 2022-08-17 12:06:05 -07:00
Ross Wightman 1d8ada359a Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size 2022-08-15 17:56:08 -07:00
Ross Wightman 2544d3b80f ConvNeXt pico, femto, and nano, pico, femto ols (overlapping stem) weights and model defs 2022-08-05 17:05:50 -07:00
Ross Wightman 13565aad50 Add edgenext_base model def & weight link, update to improve ONNX export #1385 2022-08-05 16:58:34 -07:00
Ross Wightman 8ad4bdfa06 Allow ntuple to be used with string values 2022-07-28 16:18:18 -07:00
Christoph Reich faae93e62d
Fix typo in PositionalEncodingFourier 2022-07-28 19:08:08 -04:00
Ross Wightman ec6a28830f Add DeiT-III 'medium' model defs and weights 2022-07-28 15:03:20 -07:00
Ross Wightman 6f103a442b Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288 2022-07-26 16:40:27 -07:00
Ross Wightman 4042a94f8f Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks. 2022-07-26 16:40:27 -07:00
Ross Wightman c8f69e04a9
Merge pull request #1365 from veritable-tech/fix-resize-pos-embed
Take `no_emb_class` into account when calling `resize_pos_embed`
2022-07-24 21:03:01 -07:00
Ceshine Lee 0b64117592 Take `no_emb_class` into account when calling `resize_pos_embed` 2022-07-24 19:11:45 +08:00
Jasha10 56c3a84db3
Update type hint for `register_notrace_module`
register_notrace_module is used to decorate types (i.e. subclasses of nn.Module).
It is not called on module instances.
2022-07-22 16:59:55 -05:00
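A hedged sketch of the decorator usage described above; the import path is an assumption (it has moved between timm.models.fx_features and timm.models._features_fx across versions):

```python
import torch.nn as nn
from timm.models.fx_features import register_notrace_module

@register_notrace_module  # applied to the class, never to an instance
class MyDynamicBlock(nn.Module):
    def forward(self, x):
        # imagine shape-dependent control flow here that FX symbolic tracing
        # cannot handle; registering the class makes FX treat it as a leaf
        return x
```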
Ross Wightman 1b278136c3 Change models with mean 0,0,0 std 1,1,1 from int to float for consistency as mentioned in #1355 2022-07-21 17:36:15 -07:00
Ross Wightman 909705e7ff Remove some redundant requires_grad=True from nn.Parameter in third party code 2022-07-20 12:37:41 -07:00
Ross Wightman c5e0d1c700 Add dilation support to convnext, allows output_stride=8 and 16 use. Fix #1341 2022-07-19 17:52:10 -07:00
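With the dilation support added here, ConvNeXt can serve as a dense-prediction backbone at reduced output stride. A minimal sketch:

```python
import timm
import torch

backbone = timm.create_model('convnext_tiny', features_only=True, output_stride=8)
feats = backbone(torch.randn(1, 3, 224, 224))
print([f.shape for f in feats])  # deepest map at stride 8 -> 28x28 for 224px input
```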
Ross Wightman dc376e3676 Ensure all model entrypoint fn default to `pretrained=False` (a few didn't) 2022-07-19 13:58:41 -07:00
Ross Wightman 23b102064a Add cs3sedarknet_x weights w/ 82.65 @ 288 top1. Add 2 cs3 edgenet models (w/ 3x3-1x1 block), remove aa from cspnet blocks (not needed) 2022-07-19 13:56:44 -07:00
Ross Wightman 05313940e2 Add cs3darknet_x, cs3sedarknet_l, and darknetaa53 weights from TPU sessions. Move SE btwn conv1 & conv2 in DarkBlock. Improve SE/attn handling in Csp/DarkNet. Fix leaky_relu bug on older csp models. 2022-07-15 16:55:16 -07:00
nateraw 51cca82aa1 👽 use hf_hub_download instead of cached_download 2022-07-14 16:41:45 -04:00
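A hedged sketch of the swap from the deprecated cached_download to hf_hub_download; the repo and file names are assumptions for illustration:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id='timm/resnet50.a1_in1k', filename='pytorch_model.bin')
```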
Ross Wightman a45b4bce9a x and xx small edgenext models do benefit from larger test input size 2022-07-08 10:53:27 -07:00
Ross Wightman a8e34051c1 Unbreak gamma remap impacting beit checkpoint load, version bump to 0.6.4 2022-07-07 23:07:43 -07:00
Ross Wightman a1cb25066e Add edgenext_small_rw weights trained with swin like recipe. Better than original 'small' but not the recent 'USI' distilled weights. 2022-07-07 22:02:57 -07:00
Ross Wightman 7c7ecd2492 Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues). 2022-07-07 22:01:24 -07:00
Ross Wightman ce65a7b29f Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log coord experiments. 2022-07-07 21:33:25 -07:00
Ross Wightman 58621723bd Add CrossStage3 DarkNet (cs3) weights 2022-07-07 17:43:38 -07:00
Ross Wightman db0cee9910 Refactor cspnet configuration using dataclasses, update feature extraction for new cs3 variants. 2022-07-07 14:43:27 -07:00
Ross Wightman eca09b8642 Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep ol custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference. 2022-07-07 14:41:01 -07:00
Ross Wightman 06307b8b41 Remove experimental downsample in block support in ConvNeXt. Experiment further before keeping it in. 2022-07-07 14:37:58 -07:00
Ross Wightman 7d4b3807d5 Support DeiT-3 (Revenge of the ViT) checkpoints. Add non-overlapping (w/ class token) pos-embed support to vit. 2022-07-04 22:25:22 -07:00
Ross Wightman d0c5bd5722 Rename cs2->cs3 for darknets. Fix features_only for cs3 darknets. 2022-07-03 08:32:41 -07:00
Ross Wightman d765305821 Remove first_conv for resnetaa50 def 2022-07-02 15:56:17 -07:00
Ross Wightman dd9b8f57c4 Add feature_info to edgenext for features_only support, hopefully fix some fx / test errors 2022-07-02 15:20:45 -07:00
Ross Wightman 377e9bfa21 Add TPU trained darknet53 weights. Add missing pretrain_cfg for some csp/darknet models. 2022-07-02 15:18:52 -07:00
Ross Wightman c170ba3173 Add weights for resnet10t, resnet14t, and resnetaa50 models. Fix #1314 2022-07-02 15:18:06 -07:00
Ross Wightman 188c194b0f Left some experiment stem code in convnext by mistake 2022-07-02 15:17:28 -07:00
Ross Wightman 6064d16a2d Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320
* edgenext refactored for torchscript compat, stage base organization
* slight refactor of ConvNeXt to match some EdgeNeXt additions
* remove use of funky LayerNorm layer in ConvNeXt and just use nn.LayerNorm and LayerNorm2d (permute)
2022-07-01 15:18:42 -07:00
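The feature_info plumbing added for EdgeNeXt is what features_only extraction relies on. A minimal sketch:

```python
import timm

m = timm.create_model('edgenext_small', features_only=True)
print(m.feature_info.channels())   # per-stage channel counts
print(m.feature_info.reduction())  # per-stage output strides
```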
Ross Wightman 7a9c6811c9 Add eps arg to LayerNorm2d, add 'tf' (tensorflow) variant of trunc_normal_ that applies scale/shift after sampling (instead of needing to move a/b) 2022-07-01 15:15:39 -07:00
Ross Wightman 82c311d082 Add more experimental darknet and 'cs2' darknet variants (different cross stage setup, closer to newer YOLO backbones) for train trials. 2022-07-01 15:14:01 -07:00
Ross Wightman a050fde5cd Add resnet10t (basic block) and resnet14t (bottleneck) with 1,1,1,1 repeats 2022-07-01 15:03:28 -07:00
Ross Wightman e6d7df40ec No longer any point using kwargs for pretrain_cfg resolve, just pass explicit arg 2022-06-24 21:36:23 -07:00
Ross Wightman 07d0c4ae96 Improve repr for DropPath module 2022-06-24 14:58:15 -07:00
Ross Wightman e27c16b8a0 Remove unnecessary code for sync-bn guard 2022-06-24 14:57:42 -07:00
Ross Wightman 0da3c9ebbf Remove SiLU layer in default args that breaks import on old old PyTorch 2022-06-24 14:56:58 -07:00
Ross Wightman 7d657d2ef4 Improve resolve_pretrained_cfg behaviour when no cfg exists, warn instead of crash. Improve usability ex #1311 2022-06-24 14:55:25 -07:00
Ross Wightman 879df47c0a Support BatchNormAct2d for sync-bn use. Fix #1254 2022-06-24 14:51:26 -07:00
Ross Wightman 4b30bae67b Add updated vit_relpos weights, and impl w/ support for official swin-v2 differences for relpos. Add bias control support for MLP layers 2022-05-13 13:53:57 -07:00
Ross Wightman d4c0588012 Remove persistent buffers from Swin-V2. Change SwinV2Cr cos attn + tau/logit_scale to match official, add ckpt convert, init_value zeros resid LN weight by default 2022-05-13 10:50:59 -07:00
Ross Wightman 27c42f0830 Fix torchscript use for official Swin-V2, add support for non-square window/shift to WindowAttn/Block 2022-05-13 09:29:33 -07:00
Ross Wightman c0211b0bf7 Swin-V2 test fixes, typo 2022-05-12 22:31:55 -07:00
Ross Wightman 9a86b900fa Official SwinV2 models 2022-05-12 15:05:10 -07:00
Ross Wightman d07d015173
Merge pull request #1249 from okojoalg/sequencer
Add Sequencer
2022-05-09 20:42:43 -07:00
Ross Wightman 39b725e1c9 Fix tests for rank-4 output where feature channels dim is -1 (3) and not 1 2022-05-09 15:20:24 -07:00
Ross Wightman 78a32655fa Fix poolformer group_matcher to merge proj downsample with previous block, support coarse 2022-05-09 12:20:04 -07:00
Ross Wightman d79f3d9d1e Fix torchscript use for sequencer, add group_matcher, forward_head support, minor formatting 2022-05-09 12:09:39 -07:00
Ross Wightman 37b6920df3 Fix group_matcher regex for regnet.py 2022-05-09 10:40:40 -07:00
okojoalg 93a79a3dd9 Fix num_features in Sequencer 2022-05-06 23:16:32 +09:00
okojoalg 578d52e752 Add Sequencer 2022-05-06 00:36:01 +09:00
Ross Wightman f5ca4141f7 Adjust arg order for recent vit model args, add a few comments 2022-05-02 22:41:38 -07:00
Ross Wightman 41dc49a337 Vision Transformer refactoring and Rel Pos impl 2022-05-02 15:37:39 -07:00
Ross Wightman b7cb8d0337 Add Swin-V2 Small-NS weights (83.5 @ 224). Add layer scale like 'init_values' via post-norm LN weight scaling 2022-04-26 17:32:49 -07:00
jjsjann123 f88c606fcf fixing channels_last on cond_conv2d; update nvfuser debug env variable 2022-04-25 12:41:46 -07:00
Li Dong 09e9f3defb
migrate azure blob for beit checkpoints
## Motivation

We are going to use a new blob account to store the checkpoints.

## Modification

Modify the azure blob storage URLs for BEiT checkpoints.
2022-04-23 13:02:29 +08:00
Ross Wightman 52ac881402 Missed first_conv in latest seresnext 'D' default_cfgs 2022-04-22 20:55:52 -07:00
Ross Wightman 7629d8264d Add two new SE-ResNeXt101-D 32x8d weights, one anti-aliased and one not. Reshuffle default_cfgs vs model entrypoints for resnet.py so they are better aligned. 2022-04-22 16:54:53 -07:00
SeeFun 8f0bc0591e fix convnext args 2022-04-05 20:00:57 +08:00
Ross Wightman c5a8e929fb Add initial swinv2 tiny / small weights 2022-04-03 15:22:55 -07:00
Ross Wightman f670d98cb8 Make a few more layers symbolically traceable (remove from FX leaf modules)
* remove dtype kwarg from .to() calls in EvoNorm as it messed up script + trace combo
* BatchNormAct2d always uses custom forward (cut & paste from original) instead of super().forward. Fixes #1176
* BlurPool groups==channels, no need to use input.dim[1]
2022-03-24 21:43:56 -07:00
SeeFun ec4e9aa5a0
Add ConvNeXt tiny and small pretrain in22k
Add ConvNeXt tiny and small in22k pretrained weights from the ConvNeXt repo:
06f7b05f92
2022-03-24 15:18:08 +08:00
Ross Wightman 575924ed60 Update test crop for new RegNet-V weights to match Y 2022-03-23 21:40:53 -07:00
Ross Wightman 1618527098 Add layer scale and parallel blocks to vision_transformer 2022-03-23 16:09:07 -07:00
Ross Wightman c42be74621 Add attrib / comments about Swin-S3 (AutoFormerV2) weights 2022-03-23 16:07:09 -07:00
Ross Wightman 474ac906a2 Add 'head norm first' convnext_tiny_hnf weights 2022-03-23 16:06:00 -07:00
Ross Wightman dc51334cdc Fix pruned adapt for EfficientNet models that are now using BatchNormAct layers 2022-03-22 20:33:01 -07:00
Ross Wightman 024fc4d9ab version 0.6.1 for master 2022-03-21 22:03:13 -07:00
Ross Wightman e1e037ba52 Fix bad tuple typing fix that was on XLA branch but missed on master merge 2022-03-21 22:00:33 -07:00
Ross Wightman fe457c1996 Update SwinTransformerV2Cr post-merge, update with grad checkpointing / grad matcher
* weight compat break, activate norm3 for final block of final stage (equivalent to pre-head norm, but while still in BLC shape)
* remove fold/unfold for TPU compat, add commented out roll code for TPU
* add option for end of stage norm in all stages
* allow weight_init to be selected between pytorch default inits and xavier / moco style vit variant
2022-03-21 14:50:28 -07:00
Ross Wightman b049a5c5c6 Merge remote-tracking branch 'origin/master' into norm_norm_norm 2022-03-21 13:41:43 -07:00
Ross Wightman 9440a50c95 Merge branch 'mrT23-master' 2022-03-21 12:30:02 -07:00
Ross Wightman d98aa47d12 Revert ml-decoder changes to model factory and train script 2022-03-21 12:29:02 -07:00
Ross Wightman b20665d379
Merge pull request #1007 from qwertyforce/patch-1
update arxiv link
2022-03-21 12:12:58 -07:00
Ross Wightman 61d3493f87 Fix hf-hub handling when hf-hub is config source 2022-03-21 11:12:55 -07:00
Ross Wightman 5f47518f27 Fix pit implementation to be closer to deit/levit re: distillation head handling 2022-03-21 11:12:14 -07:00
Ross Wightman 0862e6ebae Fix correctness of some group matching regex (no impact on result), some formatting, missed forward_head for resnet 2022-03-19 14:58:54 -07:00
Ross Wightman 94bcdebd73 Add latest weights trained on TPU-v3 VM instances 2022-03-18 21:35:41 -07:00
Ross Wightman 0557c8257d Fix bug introduced in non layer_decay weight_decay application. Remove debug print, fix arg desc. 2022-02-28 17:06:32 -08:00
Ross Wightman 372ad5fa0d Significant model refactor and additions:
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
2022-02-28 13:56:23 -08:00
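A minimal sketch of the two-stage interface described in this refactor:

```python
import timm
import torch

model = timm.create_model('resnet50', pretrained=False).eval()
x = torch.randn(2, 3, 224, 224)
features = model.forward_features(x)     # unpooled feature maps / token sequence
logits = model.forward_head(features)    # pooling + classifier head
assert torch.allclose(logits, model(x))  # full forward == the two stages chained
```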
Ross Wightman 1420c118df Missed committing outstanding changes to default_cfg keys and test exclusions for swin v2 2022-02-23 19:50:26 -08:00
Ross Wightman c6e4b7895a Swin V2 CR impl refactor.
* reformat and change some naming so closer to existing timm vision transformers
* remove typing that wasn't adding clarity (or causing torchscript issues)
* support non-square windows
* auto window size adjust from image size
* post-norm + main-branch no
2022-02-23 17:28:52 -08:00
Christoph Reich 67d140446b Fix bug in classification head 2022-02-20 22:28:05 +01:00
Christoph Reich 29add820ac Refactor (back to relative imports) 2022-02-20 00:46:48 +01:00
Christoph Reich 74a04e0016 Add parameter to change normalization type 2022-02-20 00:46:00 +01:00
Christoph Reich 2a4f6c13dd Create model functions 2022-02-20 00:40:22 +01:00
Christoph Reich 87b4d7a29a Add get and reset classifier method 2022-02-19 22:47:02 +01:00
Christoph Reich ff5f6bcd6c Check input resolution 2022-02-19 22:42:02 +01:00
Christoph Reich 81bf0b4033 Change parameter names to match Swin V1 2022-02-19 22:37:22 +01:00
Christoph Reich f227b88831 Add initials (CR) to model and file 2022-02-19 22:14:38 +01:00
Christoph Reich 90dc74c450 Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm 2022-02-19 22:12:11 +01:00
Ross Wightman 2c3870e107 semobilevit_s for good measure 2022-01-31 22:36:09 -08:00
Ross Wightman 58ba49c8ef Add MobileViT models (w/ ByobNet base). Close #1038. 2022-01-31 15:39:34 -08:00
Ross Wightman 5f81d4de23 Move DeiT to own file, vit getting crowded. Working towards fixing #1029, make pooling interface for transformers and mlp closer to convnets. Still working through some details... 2022-01-26 22:53:57 -08:00
Ross Wightman 95cfc9b3e8 Merge remote-tracking branch 'origin/master' into norm_norm_norm 2022-01-25 22:20:45 -08:00
Ross Wightman abc9ba2544 Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks. 2022-01-25 21:54:13 -08:00
Ross Wightman 07379c6d5d Add vit_base2_patch32_256 for a model between base_patch16 and patch32 with a slightly larger img size and width 2022-01-24 14:46:47 -08:00
Ross Wightman 83b40c5a58 Last batch of small model weights (for now). mobilenetv3_small 050/075/100 and updated mnasnet_small with lambc/lamb optimizer. 2022-01-19 10:02:02 -08:00
Ross Wightman 1aa617cb3b Add AvgPool2d anti-aliasing support to ResNet arch (as per OpenAI CLIP models), add a few blur aa models as well 2022-01-18 21:57:24 -08:00
Ross Wightman 010b486590 Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg) 2022-01-17 12:20:02 -08:00
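A hedged sketch of headless use: num_classes=0 removes the classifier, so models with no-head pretrained weights (like DINO here) return pooled embeddings; the exact model name is an assumption for this timm version:

```python
import timm
import torch

model = timm.create_model('vit_small_patch16_224_dino', pretrained=True, num_classes=0)
emb = model(torch.randn(1, 3, 224, 224))  # (1, embed_dim) pooled features
```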
Ross Wightman 738a9cd635 unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090 2022-01-17 09:25:06 -08:00
Ross Wightman e0c4eec4b6 Default conv_mlp to False across the board for ConvNeXt, causing issues on more setups than it's improving right now... 2022-01-16 14:20:08 -08:00
Ross Wightman b669f4a588 Add ConvNeXt 22k->1k fine-tuned and 384 22k-1k fine-tuned weights after testing 2022-01-15 15:44:36 -08:00
Ross Wightman e967c72875 Update README.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper 2022-01-14 16:28:27 -08:00
Ross Wightman 9ca3437178 Add some more small model weights lcnet, mnas, mnv2 2022-01-14 16:28:27 -08:00
Ross Wightman fa81164378 Fix stem width for really small mobilenetv3 arch defs 2022-01-14 16:28:27 -08:00
Ross Wightman edd3d73695 Add missing dropout for head reset in ConvNeXt default head 2022-01-14 16:28:27 -08:00
Ross Wightman b093dcb46d Some convnext cleanup, remove in place mul_ for gamma, breaking symbolic trace, cleanup head a bit... 2022-01-14 16:28:27 -08:00
Ross Wightman 18934debc5 Add initial ConvNeXt impl (mods of official code) 2022-01-14 16:28:27 -08:00
Ross Wightman 656757d26b Fix MobileNetV2 head conv size for multiplier < 1.0. Add some missing modification copyrights, fix starting date of some old ones. 2022-01-14 16:28:27 -08:00
Ross Wightman ccfeb06936 Fix out_indices handling breakage, should have left as per vgg approach. 2022-01-07 19:30:51 -08:00
Ross Wightman a9f91483a6 Fix #1078, DarkNet has 6 feature maps. Make vgg and darknet out_indices handling/comments equivalent 2022-01-07 15:08:32 -08:00
Ross Wightman c21b21660d visformer supports spatial feat map, update pool_size in pretrained cfg to match 2022-01-07 14:31:43 -08:00
Ross Wightman 9c11dfd9cb Fix fbnetv3 pretrained cfg changes 2022-01-07 14:09:50 -08:00
Ross Wightman 1406cddc2e FBNetV3 timm trained weights added for b/d/g variants. Update version to 0.5.2 for pypi release. 2022-01-07 12:05:08 -08:00
Ross Wightman 4df51f3932 Add lcnet_100 and mnasnet_small weights 2022-01-06 22:21:05 -08:00
Ross Wightman 5ccf682a8f Remove deprecated bn-tf train arg and create_model handler. Add evos/evob models back into fx test filter until norm_norm_norm branch merged. 2022-01-06 18:08:39 -08:00
Ross Wightman b9a715c86a Add more small model defs for MobileNetV3/V2/LCNet 2022-01-06 16:06:43 -08:00
Ross Wightman b27c21b09a Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster 2022-01-06 16:04:58 -08:00
Ross Wightman 214c84a235 Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed 2022-01-06 16:01:51 -08:00
Ross Wightman 72b57163d1 Merge branch 'master' of https://github.com/mrT23/pytorch-image-models into mrT23-master 2022-01-06 13:57:16 -08:00
Ross Wightman de5fa791c6 Merge branch 'master' into norm_norm_norm 2022-01-03 11:37:00 -08:00
Ross Wightman 26ff57f953 Add more small model defs for MobileNetV3/V2/LCNet 2022-01-03 11:30:54 -08:00
Ross Wightman 450ac6a0f5 Post merge tinynet fixes for pool_size, feature extraction 2021-12-21 23:51:54 -08:00
Ross Wightman a04164cd75 Merge branch 'tinynet' of https://github.com/rsomani95/pytorch-image-models into rsomani95-tinynet 2021-12-21 22:45:56 -08:00
Ross Wightman 8a93ce6ee3 Fix regnetv/w tests, refactor regnet generator code a bit 2021-12-16 17:08:25 -08:00
Ross Wightman 4dec8c8087 Fix skip path regression for updated EfficientNet and RegNet def. Add Pre-Act RegNet support (experimental). Remove BN-TF flag. Add efficientnet_b0_g8_gn model. 2021-12-16 14:53:57 -08:00
Ross Wightman a52a614475 Remove layer experiment which should not have been added 2021-12-14 14:29:32 -08:00
Ross Wightman ab49d275de Significant norm update
* ConvBnAct layer renamed -> ConvNormAct and ConvNormActAa for anti-aliased
* Significant update to EfficientNet and MobileNetV3 arch to support NormAct layers and grouped conv (as alternative to depthwise)
* Update RegNet to add Z variant
* Add Pre variant of XceptionAligned that works with NormAct layers
* EvoNorm matches bits_and_tpu branch for merge
2021-12-14 13:48:30 -08:00
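A hedged sketch of the renamed layer; at this point in history it lived under timm.models.layers (later timm.layers):

```python
import torch
from timm.models.layers import ConvNormAct

layer = ConvNormAct(3, 64, kernel_size=3, stride=2)  # conv + BatchNorm + ReLU
out = layer(torch.randn(1, 3, 224, 224))             # -> (1, 64, 112, 112)
```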
Rahul Somani 31bcd36e46 add tinynet models 2021-12-14 19:34:04 +05:30
KAI ZHAO b4b8d1ec18 fix hard-coded strides 2021-12-14 17:22:54 +08:00
Ross Wightman d04f2f1377 Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster 2021-12-05 15:36:56 -08:00
Ross Wightman 834a9ec721 Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed 2021-12-01 14:58:09 -08:00
Ross Wightman 78912b6375 Updated EvoNorm implementations with some experimentation. Add FilterResponseNorm. Updated RegnetZ and ResNetV2 model defs for trials. 2021-12-01 12:09:01 -08:00
talrid c11f4c3218 support CNNs 2021-11-30 08:48:08 +02:00
mrT23 d6701d8a81
Merge branch 'rwightman:master' into master 2021-11-30 08:07:44 +02:00
qwertyforce ccb3815360
update arxiv link 2021-11-29 21:41:00 +03:00
Ross Wightman 3dc71695bf
Merge pull request #989 from martinsbruveris/feat/resmlp-dino
Added DINO pretrained ResMLP models.
2021-11-24 09:26:07 -08:00
Ross Wightman 480c676ffa Fix FX breaking assert in evonorm 2021-11-24 09:24:47 -08:00
Martins Bruveris 85c5ff26d7 Added DINO pretrained ResMLP models. 2021-11-24 15:02:46 +02:00
Ross Wightman d633a014e6 Post merge cleanup. Fix potential security issue passing kwargs directly through to serialized web data. 2021-11-23 16:54:01 -08:00
Nathan Raw b18c9e323b
Update helpers.py 2021-11-22 23:43:44 -05:00
Nathan Raw 308d0b9554
Merge branch 'master' into hf-save-and-push 2021-11-22 23:39:27 -05:00
talrid 41559247e9 use_ml_decoder_head 2021-11-22 17:50:39 +02:00
Ross Wightman 1f53db2ece Updated lamhalobotnet weights, 81.5 top-1 2021-11-21 19:49:51 -08:00
Ross Wightman 15ef108eb4 Add better halo2botnet50ts weights, 82 top-1 @ 256 2021-11-21 14:09:12 -08:00
Ross Wightman 734b2244fe Add RegNetZ-D8 (83.5 @ 256, 84 @ 320) and RegNetZ-E8 (84.5 @ 256, 85 @ 320) weights. Update names of existing RegZ models to include group size. 2021-11-20 15:52:04 -08:00
Ross Wightman 93cc08fdc5 Make evonorm variables 1d to match other PyTorch norm layers, will break weight compat for any existing use (likely minimal, easy to fix). 2021-11-20 15:50:51 -08:00
Ross Wightman af607b75cc Prep a set of ResNetV2 models with GroupNorm, EvoNormB0, EvoNormS0 for BN free model experiments on TPU and IPU 2021-11-19 17:37:00 -08:00
Ross Wightman c976a410d9 Add ResNet-50 w/ GN (resnet50_gn) and SEBotNet-33-TS (sebotnet33ts_256) model defs and weights. Update halonet50ts weights w/ slightly better variant in1k val, more robust to test sets. 2021-11-19 14:24:43 -08:00
Ross Wightman f2006b2437 Cleanup qkv_bias cat in beit model so it can be traced 2021-11-18 21:25:00 -08:00
Ross Wightman 1076a65df1 Minor post FX merge cleanup 2021-11-18 19:47:07 -08:00
Ross Wightman 32c9937dec Merge branch 'fx-feature-extract-new' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-fx-feature-extract-new 2021-11-18 16:31:29 -08:00
Alexander Soare 65d827c7a6 rename notrace registration and standardize trace_utils imports 2021-11-15 21:03:21 +00:00
Ross Wightman 9b2daf2a35 Add ResNeXt-50 weights 81.1 top-1 @ 224, 82 @ 288 with A1 'high aug' recipe 2021-11-14 13:17:27 -08:00
Martins Bruveris 5220711d87 Added B/8 models to ViT. 2021-11-14 11:01:48 +00:00
Alexander Soare 0262a0e8e1 fx ready for review 2021-11-13 00:06:33 +00:00
Alexander Soare d2994016e9 Add try/except guards 2021-11-12 21:16:53 +00:00
Alexander Soare b25ff96768 wip - pre-rebase 2021-11-12 20:45:05 +00:00
Alexander Soare e051dce354 Make all models FX traceable 2021-11-12 20:45:05 +00:00
Alexander Soare cf4561ca72 Add FX based FeatureGraphNet capability 2021-11-12 20:45:05 +00:00
Alexander Soare 0149ec30d7 wip - attempting to rebase 2021-11-12 20:45:05 +00:00
Alexander Soare 02c3a75a45 wip - make it possible to use fx graph in train and eval mode 2021-11-12 20:45:05 +00:00
Alexander Soare bc3d4eb403 wip -rebase 2021-11-12 20:45:05 +00:00
Alexander Soare ab3ac3f25b Add FX based FeatureGraphNet capability 2021-11-12 20:45:05 +00:00
Ross Wightman ddc29da974 Add ResNet101 and ResNet152 weights from higher aug RSB recipes. 81.93 and 82.82 top-1 at 224x224. 2021-11-02 17:59:16 -07:00
Ross Wightman b328e56f49 Update eca_halonext26ts weights to a better set 2021-11-02 16:52:53 -07:00
Ross Wightman 2ddef942b9 Better fix for #954 that doesn't break torchscript, pull torch._assert into timm namespace when it exists 2021-11-02 11:22:33 -07:00
Ross Wightman 4f0f9cb348 Fix #954 by bringing traceable _assert into timm to allow compat w/ PyTorch < 1.8 2021-11-02 09:21:40 -07:00
Ross Wightman ae72d009fa Add weights for lambda_resnet50ts, halo2botnet50ts, lamhalobotnet50ts, updated halonet50ts 2021-10-27 22:08:54 -07:00
Ross Wightman b745d30a3e Fix formatting of last commit 2021-10-25 15:15:14 -07:00
Ross Wightman 3478f1d7f1 Traceability fix for vit models for some experiments 2021-10-25 15:13:08 -07:00
Ross Wightman f658a72e72 Cleanup re-use of Dropout modules in Mlp modules after some twitter feedback :p 2021-10-25 00:40:59 -07:00
Thomas Viehmann f805ba86d9 use .unbind instead of explicitly listing the indices 2021-10-24 21:08:47 +02:00
Ross Wightman 0fe4fd3f1f add d8 and e8 regnetz models with group size 8 2021-10-23 20:34:21 -07:00
Ross Wightman 25e7c8c5e5 Update broken resnetv2_50 weight url, add resnetv1_101 a1h recipe weights for 224x224 train 2021-10-20 22:14:12 -07:00
Ross Wightman b6caa356d2 Fixed eca_botnext26ts_256 weights added, 79.27 2021-10-19 12:44:28 -07:00
Ross Wightman c02334d9fa Add weights for regnetz_d and haloregnetz_c, update regnetz_c weights. Add commented PyTorch XLA code for halo attention 2021-10-19 12:32:09 -07:00
Ross Wightman 02daf2ab94 Add option to include relative pos embedding in the attention scaling as per references. See discussion #912 2021-10-12 15:37:01 -07:00
Ross Wightman cd34913278 Remove some outdated comments, botnet networks working great now. 2021-10-11 22:43:41 -07:00
Ross Wightman 6ed4cdccca Update lambda_resnet26t weights with better set 2021-10-10 16:32:54 -07:00
ICLR Author 44d6d51668 Add ConvMixer 2021-10-09 21:09:51 -04:00
Ross Wightman a85df34993 Update lambda_resnet26rpt weights to 78.9, add better halonet26t weights at 79.1 with tweak to attention dim 2021-10-08 17:44:13 -07:00
Ross Wightman b544ad4d3f regnetz model default cfg tweaks 2021-10-06 21:14:59 -07:00
Ross Wightman e2b8d44ff0 Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
* align interfaces of halo, bottleneck attn and lambda layer
* add qk_ratio to all of above, control q/k dim relative to output dim
* add experimental haloregnetz, and trionet (lambda + halo + bottle) models
2021-10-06 16:32:48 -07:00
Ross Wightman fbf59c04ee Change crop ratio on correct resnet50 variant. 2021-10-04 22:31:08 -07:00
Ross Wightman ae1ff5792f Clean a1/a2/3 rsb _0 checkpoints properly, fix v2 loading. 2021-10-04 16:46:00 -07:00
Ross Wightman da0d39bedd Update default crop_pct for byoanet 2021-10-03 17:33:16 -07:00
Ross Wightman cc9bedf373 Add initial ResNet Strikes Back weights for ResNet50 and ResNetV2-50 models 2021-10-03 17:32:02 -07:00
Ross Wightman 64495505b7 Add updated lambda resnet26 and botnet26 checkpoints with fixes applied 2021-10-03 17:31:39 -07:00
Ross Wightman b2094f4ee8 support bits checkpoints in avg/load 2021-10-03 17:31:22 -07:00