Commit Graph

574 Commits (a52a614475b0a869b9b30c0ed629043407da4dd2)

Author SHA1 Message Date
Ross Wightman b544ad4d3f regnetz model default cfg tweaks 2021-10-06 21:14:59 -07:00
Ross Wightman e2b8d44ff0 Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
* align interfaces of halo, bottleneck attn and lambda layer
* add qk_ratio to all of above, control q/k dim relative to output dim
* add experimental haloregnetz, and trionet (lambda + halo + bottle) models
2021-10-06 16:32:48 -07:00
Ross Wightman fbf59c04ee Change crop ratio on correct resnet50 variant. 2021-10-04 22:31:08 -07:00
Ross Wightman ae1ff5792f Clean a1/a2/3 rsb _0 checkpoints properly, fix v2 loading. 2021-10-04 16:46:00 -07:00
Ross Wightman da0d39bedd Update default crop_pct for byoanet 2021-10-03 17:33:16 -07:00
Ross Wightman cc9bedf373 Add initial ResNet Strikes Back weights for ResNet50 and ResNetV2-50 models 2021-10-03 17:32:02 -07:00
Ross Wightman 64495505b7 Add updated lambda resnet26 and botnet26 checkpoints with fixes applied 2021-10-03 17:31:39 -07:00
Ross Wightman b2094f4ee8 support bits checkpoints in avg/load 2021-10-03 17:31:22 -07:00
Ross Wightman 007bc39323 Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios 2021-10-02 15:51:42 -07:00
Ross Wightman b1c2e3eb92 Match rel_pos_indices attr rename in conv branch 2021-09-30 23:19:05 -07:00
Ross Wightman b49630a138 Add relative pos embed option to LambdaLayer, fix last transpose/reshape. 2021-09-30 22:45:09 -07:00
Ross Wightman d657e2cc0b Remove dead code line from efficientnet 2021-09-30 21:54:42 -07:00
Ross Wightman 0ca687f224 Make 'regnetz' model experiments closer to actual RegNetZ, bottleneck expansion, expand from in_chs, no shortcut on stride 2, tweak model sizes 2021-09-30 21:49:38 -07:00
Ross Wightman b81e79aae9 Fix bottleneck attn transpose typo, hopefully these train better now.. 2021-09-28 16:38:41 -07:00
Ross Wightman 6478bcd02c Fix regnetz_d conv layer name, use inception mean/std 2021-09-26 14:54:17 -07:00
Ross Wightman 515121cca1 Use reshape instead of view in std_conv; view was causing issues with channels_last in recent PyTorch 2021-09-23 15:43:48 -07:00
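The commit above hinges on the difference between `Tensor.view` and `Tensor.reshape`: `view` never copies and therefore requires a stride-compatible layout, which a `channels_last` tensor generally does not have, while `reshape` silently falls back to a copy when needed. A minimal sketch of the failure mode (illustrative only, not the repository's std_conv code):

```python
import torch

x = torch.randn(2, 8, 4, 4).to(memory_format=torch.channels_last)  # NHWC strides

# view() never copies, so it needs a stride-compatible layout and may raise
# "view size is not compatible with input tensor's size and stride ..."
try:
    flat = x.view(2, -1)
except RuntimeError as e:
    print("view failed:", e)

# reshape() returns a view when possible and copies otherwise
flat = x.reshape(2, -1)  # always succeeds
print(flat.shape)  # torch.Size([2, 128])
```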
Ross Wightman da06cc61d4 ResNetV2 seems to work best without zero_init residual 2021-09-23 15:43:22 -07:00
Ross Wightman 8e11da0ce3 Add experimental RegNetZ(ish) models for training / perf trials. 2021-09-23 15:42:57 -07:00
Alexander Soare 6bbc50beb4 make it possible to provide norm_layer via create_model 2021-09-21 10:19:04 +01:00
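`timm.create_model` forwards extra keyword arguments to the model entrypoint, so models whose constructor accepts a `norm_layer` argument can have it overridden at creation time. A hedged sketch; the model name and norm choice here are only examples:

```python
from functools import partial

import torch.nn as nn
import timm

# Extra kwargs are passed through create_model to the model constructor,
# so norm_layer can be swapped for architectures that expose it.
model = timm.create_model(
    'vit_base_patch16_224',              # example architecture
    pretrained=False,
    norm_layer=partial(nn.LayerNorm, eps=1e-5),
)
```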
nateraw adcb74f87f 🎨 Import load_state_dict_from_url directly 2021-09-14 01:11:40 -04:00
nateraw e65a2cba3d 🎨 cleanup and add a couple comments 2021-09-14 01:07:04 -04:00
nateraw 2b6ade24b3 🎨 write model card to enable inference 2021-09-13 23:31:28 -04:00
Ross Wightman cf5ac2800c BotNet models were still off, remove weights for bad configs. Add good SE-HaloNet33-TS weights. 2021-09-13 17:18:59 -07:00
Ross Wightman 24720abe3b Merge branch 'master' into attn_update 2021-09-13 16:51:10 -07:00
Ross Wightman 1c9284c640 Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now 2021-09-13 16:38:23 -07:00
Ross Wightman f8a215cfe6 A few more crossvit tweaks, fix training w/ no_weight_decay names, add crop option for scaling, adjust default crop_pct for large img size to 1.0 for better results 2021-09-13 14:17:34 -07:00
Ross Wightman 7ab2491ab7 Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes 2021-09-13 13:01:05 -07:00
Ross Wightman f1808e0970 Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests 2021-09-13 11:49:54 -07:00
Ross Wightman 4027412757 Add resnet33ts weights, update resnext26ts baseline weights 2021-09-09 14:46:41 -07:00
Richard Chen 9fe5798bee Fix bug in reset_classifier and fix dimension validation 2021-09-08 21:58:17 -04:00
Richard Chen 3718c5a5bd fix loading pretrained model 2021-09-08 11:53:05 -04:00
Richard Chen bb50b69a57 fix for torch script 2021-09-08 11:20:59 -04:00
nateraw abf9d51bc3 🚧 wip 2021-09-07 18:39:26 -06:00
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related 2021-09-05 15:34:05 -07:00
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low... 2021-09-05 15:17:19 -07:00
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test. 2021-09-05 12:41:14 -07:00
Ross Wightman 76881d207b Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4 2021-09-04 14:52:54 -07:00
Ross Wightman 484e61648d Adding the attn series weights, tweaking model names, comments... 2021-09-03 18:09:42 -07:00
nateraw 28d2841acf 💄 apply isort 2021-09-01 18:15:08 -06:00
Ross Wightman 492c0a4e20 Update HaloAttn comment 2021-09-01 17:14:31 -07:00
nateraw e72c989973 add ability to push to hf hub 2021-09-01 18:14:28 -06:00
Richard Chen 7ab9d4555c add crossvit 2021-09-01 17:13:12 -04:00
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn, fast like as_strided but with more clarity 2021-08-27 12:45:53 -07:00
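`Tensor.unfold(dim, size, step)` slides a window along one dimension; chaining it over H and W extracts the overlapping (haloed) blocks that halo attention gathers keys/values from, without resorting to `as_strided`. A rough sketch of the pattern (block and halo sizes are illustrative, not the model's actual config):

```python
import torch

B, C, H, W = 2, 64, 16, 16
block, halo = 4, 2                      # query block size and halo width (example values)
win = block + 2 * halo                  # full key/value window size

x = torch.randn(B, C, H, W)
# Pad so every block can gather a halo of context around it
x_pad = torch.nn.functional.pad(x, (halo, halo, halo, halo))

# unfold over H then W: (B, C, H+2*halo, W+2*halo) -> (B, C, nH, nW, win, win)
kv = x_pad.unfold(2, win, block).unfold(3, win, block)
print(kv.shape)  # torch.Size([2, 64, 4, 4, 8, 8])
```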
Ross Wightman 78933122c9 Fix silly typo 2021-08-27 09:22:20 -07:00
Ross Wightman 2568ffc5ef Merge branch 'master' into attn_update 2021-08-27 09:21:22 -07:00
Ross Wightman 708d87a813 Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test. 2021-08-27 09:20:13 -07:00
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc). 2021-08-26 21:56:44 -07:00
Ross Wightman a8b65695f1 Add resnet26ts and resnext26ts models for non-attn baselines 2021-08-21 12:42:10 -07:00
Ross Wightman a5a542f17d Fix typo 2021-08-20 17:47:23 -07:00
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments:
* remove dud attention, involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
2021-08-20 16:13:11 -07:00
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. 2021-08-07 16:45:29 -07:00
Ross Wightman d3f7440650 Add EfficientNetV2 XL model defs 2021-07-22 13:15:24 -07:00
Ross Wightman 72b227dcf5
Merge pull request #750 from drjinying/master
Specify "interpolation" mode in vision_transformer's resize_pos_embed
2021-07-13 11:01:20 -07:00
Ross Wightman 748ab852ca Allow act_layer switch for xcit, fix in_chans for some variants 2021-07-12 13:27:29 -07:00
Ying Jin 20b2d4b69d Use bicubic interpolation in resize_pos_embed() 2021-07-12 10:38:31 -07:00
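Resizing ViT position embeddings amounts to reshaping the grid tokens to 2D, interpolating, and flattening back; the commit above switches the interpolation mode to bicubic. A minimal sketch under assumed shapes (the helper name, class-token handling, and grid sizes are illustrative, not timm's resize_pos_embed itself):

```python
import math
import torch
import torch.nn.functional as F

def resize_grid_pos_embed(pos_embed, new_hw, num_prefix_tokens=1):
    """Interpolate (1, prefix + H*W, D) position embeddings to a new grid size."""
    prefix = pos_embed[:, :num_prefix_tokens]                      # e.g. class token embedding
    grid = pos_embed[:, num_prefix_tokens:]                        # (1, H*W, D)
    old = int(math.sqrt(grid.shape[1]))
    grid = grid.reshape(1, old, old, -1).permute(0, 3, 1, 2)       # (1, D, H, W)
    grid = F.interpolate(grid, size=new_hw, mode='bicubic', align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_hw[0] * new_hw[1], -1)
    return torch.cat([prefix, grid], dim=1)

# e.g. adapt a 14x14 grid (224px / patch16) to 24x24 (384px / patch16)
pe = torch.randn(1, 1 + 14 * 14, 768)
print(resize_grid_pos_embed(pe, (24, 24)).shape)                   # torch.Size([1, 577, 768])
```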
Ross Wightman d3255adf8e Merge branch 'xcit' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-xcit 2021-07-12 08:30:30 -07:00
Ross Wightman f8039c7492 Fix gc effv2 model cfg name 2021-07-11 12:14:31 -07:00
Alexander Soare 3a55a30ed1 add notes from author 2021-07-11 14:25:58 +01:00
Alexander Soare 899cf84ccc bug fix - missing _dist postfix for many of the 224_dist models 2021-07-11 12:41:51 +01:00
Alexander Soare 623e8b8eb8 wip xcit 2021-07-11 09:39:38 +01:00
Ross Wightman 392368e210 Add efficientnetv2_rw_t defs w/ weights, and gc variant, as well as gcresnet26ts for experiments. Version 0.4.13 2021-07-09 16:46:52 -07:00
Ross Wightman 6d8272e92c Add SAM pretrained model defs/weights for ViT B16 and B32 models. 2021-07-08 11:51:12 -07:00
Ross Wightman ee4d8fc69a Remove unnecessary line from nest post refactor 2021-07-05 21:22:46 -07:00
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed. 2021-07-05 18:21:34 -07:00
Ross Wightman 81cd6863c8 Move aggregation (convpool) for nest into NestLevel, cleanup and enable features_only use. Finalize weight url. 2021-07-05 18:20:49 -07:00
Ross Wightman 6ae0ac6420 Merge branch 'nested_transformer' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-nested_transformer 2021-07-03 12:45:26 -07:00
Alexander Soare 7b8a0017f1 wip to review 2021-07-03 12:10:12 +01:00
Alexander Soare b11d949a06 wip checkpoint with some feature extraction work 2021-07-03 11:45:19 +01:00
Alexander Soare 23bb72ce5e nested_transformer wip 2021-07-02 20:12:29 +01:00
Ross Wightman 766b4d3262 Fix features for resnetv2_50t 2021-06-28 15:56:24 -07:00
Ross Wightman e8045e712f Fix BatchNorm for ResNetV2 non GN models, add more ResNetV2 model defs for future experimentation, fix zero_init of last residual for pre-act. 2021-06-28 10:52:45 -07:00
Ross Wightman 20a2be14c3 Add gMLP-S weights, 79.6 top-1 2021-06-23 10:40:30 -07:00
Ross Wightman 85f894e03d Fix ViT in21k representation (pre_logits) layer handling across old and new npz checkpoints 2021-06-23 10:38:34 -07:00
Ross Wightman b41cffaa93 Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713 2021-06-22 23:16:05 -07:00
Ross Wightman 9c9755a808 AugReg release 2021-06-20 17:46:06 -07:00
Ross Wightman 381b279785 Add hybrid model fwds back 2021-06-19 22:28:44 -07:00
Ross Wightman 26f04a8e3e Fix a weight link 2021-06-19 16:39:36 -07:00
Ross Wightman 8f4a0222ed Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA 2021-06-18 16:49:28 -07:00
Ross Wightman b319eb5b5d Update ViT weights, more details to be added before merge. 2021-06-18 16:16:49 -07:00
Ross Wightman 8257b86550 Fix up resnetv2 bit/bitm model default res 2021-06-18 16:16:06 -07:00
Ross Wightman 1228f5a3d8 Add BiT distilled 50x1 and teacher 152x2 models from 'A good teacher is patient and consistent' paper. 2021-06-18 11:40:33 -07:00
Ross Wightman 511a8e8c96 Add official ResMLP weights. 2021-06-14 17:03:16 -07:00
Ross Wightman b9cfb64412 Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load. 2021-06-14 12:31:44 -07:00
Ross Wightman 8319e0c373 Add file docstring to std_conv.py 2021-06-13 12:31:06 -07:00
Ross Wightman 4d96165989 Merge branch 'master' into cleanup_xla_model_fixes 2021-06-12 23:19:25 -07:00
Ross Wightman 8880f696b6 Refactoring, cleanup, improved test coverage.
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (i.e. transformer / mlp) models have classifier / default_cfg test added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
2021-06-12 16:40:02 -07:00
Ross Wightman ba2ca4b464 One codepath for stdconv, switch layernorm to batchnorm so gain included. Tweak epsilon values for nfnet, resnetv2, vit hybrid. 2021-06-12 12:27:43 -07:00
Ross Wightman b7a568f065 Fix torchscript issue in bat 2021-06-08 23:19:51 -07:00
Ross Wightman d17b374f0f Minimum input_size needed to be higher 2021-06-08 21:31:39 -07:00
Ross Wightman b3b90d944d Add min_input_size to bat_resnext to prevent test breakage. 2021-06-08 17:32:08 -07:00
Ross Wightman d413eef1bf Add ResMLP-24 model weights that I trained in PyTorch XLA on TPU-VM. 79.2 top-1. 2021-06-08 14:22:05 -07:00
Ross Wightman 10d8fa4620 Add gc and bat attention resnext26ts variants to byob for test. 2021-06-08 14:21:07 -07:00
Ross Wightman 2f5ed2dec1 Update `init_values` const for 24 and 36 layer ResMLP models 2021-06-07 17:15:04 -07:00
Ross Wightman 8e4ac3549f All ScaledStdConv and StdConv uses default to using F.layernorm so that they work with PyTorch XLA. eps value tweaking is a WIP. 2021-06-07 17:14:19 -07:00
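Weight standardization can be expressed as a layer norm over each output filter's weights, which keeps the op XLA-friendly; the eps value is what the commit notes as still being tuned. A sketch of the idea under that assumption (not the library's actual ScaledStdConv2d):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with weight standardization via F.layer_norm (illustrative sketch)."""

    def __init__(self, *args, eps=1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps
        # learnable per-filter gain, as in scaled weight standardization
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))

    def forward(self, x):
        # normalize each output filter to zero mean / unit variance
        w = F.layer_norm(
            self.weight.reshape(1, self.out_channels, -1),
            normalized_shape=(self.weight[0].numel(),),
            eps=self.eps,
        ).reshape_as(self.weight)
        return F.conv2d(x, self.gain * w, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

x = torch.randn(1, 16, 8, 8)
print(WSConv2d(16, 32, 3, padding=1)(x).shape)  # torch.Size([1, 32, 8, 8])
```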
Ross Wightman 2a63d0246b Post merge cleanup 2021-06-07 14:38:30 -07:00
Ross Wightman 45dec179e5
Merge pull request #681 from lmk123568/master
Update convit.py
2021-06-07 14:10:53 -07:00
Dongyoon Han ded1671483 Fix stochastic depth working only with a shortcut 2021-06-07 23:08:55 +09:00
Mike b87d98b238
Update convit.py
Cut out the duplicates
2021-06-06 17:58:31 +08:00
Ross Wightman bda8ab015a Remove min channels for SelectiveKernel, divisor should cover cases well enough. 2021-05-31 15:38:56 -07:00
Ross Wightman a27f4aec4a Missed args for skresnext w/ refactoring. 2021-05-31 14:06:34 -07:00