Commit Graph

983 Commits (79927baaecb6cdd1a25eed7f0f8c122b99712c72)

Author SHA1 Message Date
Ross Wightman 79927baaec
Merge pull request #702 from rwightman/cleanup_xla_model_fixes
AugReg Vision Transformers, XLA model compat for ResNetV2-BiT / NFNet, ECA-NFNet-L2, GMixer-24 weights, ResMLP official weights, and cleanup
2021-06-20 17:49:14 -07:00
Ross Wightman 9c9755a808 AugReg release 2021-06-20 17:46:06 -07:00
Ross Wightman 381b279785 Add hybrid model fwds back 2021-06-19 22:28:44 -07:00
Ross Wightman 26f04a8e3e Fix a weight link 2021-06-19 16:39:36 -07:00
Ross Wightman 8f4a0222ed Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA 2021-06-18 16:49:28 -07:00
Ross Wightman 4c09a2f169 Bump version 0.4.12 2021-06-18 16:17:34 -07:00
Ross Wightman b319eb5b5d Update ViT weights, more details to be added before merge. 2021-06-18 16:16:49 -07:00
Ross Wightman 8257b86550 Fix up resnetv2 bit/bitm model default res 2021-06-18 16:16:06 -07:00
Ross Wightman 1228f5a3d8 Add BiT distilled 50x1 and teacher 152x2 models from 'A good teacher is patient and consistent' paper. 2021-06-18 11:40:33 -07:00
Ross Wightman 511a8e8c96 Add official ResMLP weights. 2021-06-14 17:03:16 -07:00
Ross Wightman b9cfb64412 Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load. 2021-06-14 12:31:44 -07:00
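The "posembed rescale" here is what lets npz checkpoints trained at one resolution load at another: the position embedding grid is interpolated to the new token grid. A minimal sketch of the idea (the function name, bilinear mode, and square-grid assumption are mine, not the exact timm code):

```python
import math
import torch
import torch.nn.functional as F

def resize_pos_embed_sketch(posemb, new_grid, num_prefix_tokens=1):
    # Split off class/prefix tokens, reshape the grid portion to 2D,
    # interpolate to the new grid, then flatten back. Assumes the
    # original grid is square.
    prefix, grid = posemb[:, :num_prefix_tokens], posemb[:, num_prefix_tokens:]
    old_size = int(math.sqrt(grid.shape[1]))
    grid = grid.reshape(1, old_size, old_size, -1).permute(0, 3, 1, 2)  # (1, D, H, W)
    grid = F.interpolate(grid, size=new_grid, mode='bilinear', align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_grid[0] * new_grid[1], -1)
    return torch.cat([prefix, grid], dim=1)
```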
Ross Wightman 8319e0c373 Add file docstring to std_conv.py 2021-06-13 12:31:06 -07:00
Ross Wightman 0020268d9b Try lower max size for non_std default_cfg test 2021-06-12 23:31:24 -07:00
Ross Wightman 4d96165989 Merge branch 'master' into cleanup_xla_model_fixes 2021-06-12 23:19:25 -07:00
Ross Wightman 8880f696b6 Refactoring, cleanup, improved test coverage.
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (i.e. transformer / mlp) models have classifier / default_cfg tests added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models (usage sketch below)
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
2021-06-12 16:40:02 -07:00
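A short usage sketch of the consistency the #694 fix targets: removing the head at creation time (num_classes=0) and removing it afterwards via reset_classifier(0) should both yield pooled features of width num_features ('resnet50' is just a stand-in model name):

```python
import timm
import torch

x = torch.randn(1, 3, 224, 224)

# Head removed at creation time: forward() returns pooled features.
m1 = timm.create_model('resnet50', num_classes=0)
feats = m1(x)
assert feats.shape == (1, m1.num_features)

# Head removed after creation: should behave identically.
m2 = timm.create_model('resnet50')
m2.reset_classifier(0)
assert m2(x).shape == feats.shape
```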
Ross Wightman ba2ca4b464 One codepath for stdconv, switch layernorm to batchnorm so the gain is included. Tweak epsilon values for nfnet, resnetv2, vit hybrid. 2021-06-12 12:27:43 -07:00
Ross Wightman 07fb05cc3d Update results csv files 2021-06-09 22:33:29 -07:00
Ross Wightman b79dfd4fc2
Merge pull request #693 from SamuelGabriel/patch-1
Let only the _globally_ 0th rank write checkpoints in `train.py`
2021-06-09 14:30:05 -07:00
SamuelGabriel 7c19c35d9f
Global instead of local rank. 2021-06-09 19:11:58 +02:00
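The rationale for #693: in multi-node DDP every node has a process with local_rank 0, so gating checkpoint writes on the local rank produces one writer per node, racing on shared storage. A minimal sketch of the global-rank guard (names and path are illustrative, not the exact train.py code):

```python
import torch
import torch.distributed as dist

model = torch.nn.Linear(8, 8)  # stand-in for the real model

# Global rank is unique across the whole job; local rank is only
# unique within a single node.
global_rank = dist.get_rank() if dist.is_initialized() else 0

if global_rank == 0:
    torch.save(model.state_dict(), 'checkpoint.pth')  # illustrative path
```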
Ross Wightman b7a568f065 Fix torchscript issue in BAT attn module 2021-06-08 23:19:51 -07:00
Ross Wightman d17b374f0f Minimum input_size needed to be higher 2021-06-08 21:31:39 -07:00
Ross Wightman b3b90d944d Add min_input_size to bat_resnext to prevent test breakage. 2021-06-08 17:32:08 -07:00
Ross Wightman 758c4438a7 Update README.md 2021-06-08 15:19:11 -07:00
Ross Wightman d413eef1bf Add ResMLP-24 model weights that I trained in PyTorch XLA on TPU-VM. 79.2 top-1. 2021-06-08 14:22:05 -07:00
Ross Wightman 10d8fa4620 Add gc and bat attention resnext26ts variants to byob for test. 2021-06-08 14:21:07 -07:00
Ross Wightman 2f5ed2dec1 Update `init_values` const for 24 and 36 layer ResMLP models 2021-06-07 17:15:04 -07:00
Ross Wightman 8e4ac3549f All ScaledStdConv and StdConv uses now default to F.layer_norm so they work with PyTorch XLA. eps value tweaking is a WIP. 2021-06-07 17:14:19 -07:00
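Expressing weight standardization through F.layer_norm keeps the normalization in a single op that PyTorch XLA can lower cleanly, rather than hand-rolled mean/var arithmetic. A rough sketch of the technique (class name and eps default are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class StdConv2dSketch(nn.Conv2d):
    """Conv2d whose weights are standardized per output filter at forward time."""

    def __init__(self, *args, eps=1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps

    def forward(self, x):
        # layer_norm over each flattened filter gives zero mean / unit variance.
        w = F.layer_norm(
            self.weight.view(self.out_channels, -1),
            (self.weight[0].numel(),),
            eps=self.eps,
        ).reshape_as(self.weight)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```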
Ross Wightman 2a63d0246b Post merge cleanup 2021-06-07 14:38:30 -07:00
Ross Wightman 45dec179e5
Merge pull request #681 from lmk123568/master
Update convit.py
2021-06-07 14:10:53 -07:00
Ross Wightman 4907f8f70d
Merge pull request #685 from dyhan0920/master
Update rexnet.py
2021-06-07 14:08:45 -07:00
Dongyoon Han ded1671483 Fix stochastic depth working only with a shortcut 2021-06-07 23:08:55 +09:00
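The fix follows from how stochastic depth works: the residual branch is randomly zeroed per sample, so an identity shortcut must exist to carry the signal when the branch is dropped; without one, the block's output would simply vanish. A generic sketch, not the rexnet code itself:

```python
import torch
import torch.nn as nn

def drop_path(x, drop_prob: float, training: bool):
    # Randomly zero whole residual branches, one decision per sample.
    if drop_prob == 0. or not training:
        return x
    keep = 1. - drop_prob
    mask = x.new_empty(x.shape[0], *([1] * (x.ndim - 1))).bernoulli_(keep)
    return x * mask / keep  # rescale to preserve expectation

class BlockSketch(nn.Module):
    def __init__(self, dim, drop_prob=0.1):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)
        self.drop_prob = drop_prob

    def forward(self, x):
        # Stochastic depth is only safe with a shortcut: when the branch is
        # dropped, the identity path still carries the signal through.
        return x + drop_path(self.conv(x), self.drop_prob, self.training)
```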
Mike b87d98b238
Update convit.py
Cut out the duplicates
2021-06-06 17:58:31 +08:00
Ross Wightman 54a6cca27a
Merge pull request #668 from rwightman/more_attn
Add Gather-Excite, Global Context, BAT, and Non-Local attn modules; refactor all attn modules and the factory for improved consistency. EfficientNet / MobileNetV3 backbones can now use a wider variety of attention modules.
2021-05-31 15:52:24 -07:00
Ross Wightman 02320c3e3d Bump version to 0.4.11 2021-05-31 15:41:51 -07:00
Ross Wightman bda8ab015a Remove min channels for SelectiveKernel, divisor should cover cases well enough. 2021-05-31 15:38:56 -07:00
Ross Wightman a27f4aec4a Missed args for skresnext w/ refactoring. 2021-05-31 14:06:34 -07:00
Ross Wightman 307a935b79 Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. 2021-05-31 13:18:11 -07:00
Ross Wightman 17dc47c8e6 Missed comma in test filters. 2021-05-30 22:00:43 -07:00
Ross Wightman 34522097b1 See if we can use tcmalloc in test runner 2021-05-30 21:12:10 -07:00
Ross Wightman 8bf63b6c6c Able to use other attn layers in EfficientNet now. Create test ECA + GC B0 configs. Make ECA more configurable. 2021-05-30 12:47:02 -07:00
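ECA, one of the attn layers that can now be swapped into EfficientNet, gates channels with a cheap 1D conv over the pooled channel descriptor instead of an SE-style bottleneck MLP. A minimal sketch (the kernel-size default is an assumption):

```python
import torch.nn as nn

class EcaSketch(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # (B, C, H, W) -> pooled (B, 1, C): treat channels as a 1D sequence.
        y = x.mean(dim=(2, 3)).unsqueeze(1)
        y = self.conv(y).sigmoid()                  # local cross-channel interaction
        return x * y.transpose(1, 2).unsqueeze(-1)  # broadcast gate as (B, C, 1, 1)
```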
Ross Wightman bcec14d3b5 Bring EfficientNet SE layer in line with others, pull se_ratio outside of blocks. Allows swapping w/ other attn layers. 2021-05-29 23:41:38 -07:00
Ross Wightman 9611458e19 Throw in some FBNetV3 code I had lying around, some refactoring of SE reduction channel calcs for all EffNet archs. 2021-05-28 20:47:24 -07:00
Ross Wightman 01b9108619 Merge branch 'master' into more_attn 2021-05-28 11:09:37 -07:00
Ross Wightman d7bab8a6c5 Fix strict flag change for checkpoint load. 2021-05-28 09:54:50 -07:00
Ross Wightman 02f9d4bc34 Add weights for resnet51q model, add 61q def. 2021-05-28 09:53:16 -07:00
Ross Wightman f615474be3 Fix broken test, repvgg block doesn't have attn_last attr. 2021-05-27 18:12:22 -07:00
Ross Wightman 742c2d5247 Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. 2021-05-27 18:03:29 -07:00
Ross Wightman 9c78de8c02 Fix #661, move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, mish if present. 2021-05-26 15:28:42 -07:00
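"Native torch support ... if present" reads as feature detection: prefer the built-in functional ops (hardswish landed around torch 1.6) and fall back to a hand-written equivalent otherwise. A hedged sketch of that pattern for hardswish:

```python
import torch.nn.functional as F

_HAS_HARDSWISH = hasattr(F, 'hardswish')  # present in torch >= 1.6

def hard_swish(x, inplace: bool = False):
    if _HAS_HARDSWISH:
        return F.hardswish(x, inplace=inplace)
    # Fallback for older torch: x * relu6(x + 3) / 6
    return x.mul_(F.relu6(x + 3.) / 6.) if inplace else x * F.relu6(x + 3.) / 6.
```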
Ross Wightman 07d952c7a7
Merge pull request #637 from rwightman/levit_visformer_rednet
LeVit, Visformer, RedNet/Involution models and layers
2021-05-25 14:27:06 -07:00
Ross Wightman 7f368782b7
Merge pull request #660 from petervandenabeele/readme_fix_typos
README: fix simple typos
2021-05-25 14:26:51 -07:00