Ross Wightman
492c0a4e20
Update HaloAttn comment
2021-09-01 17:14:31 -07:00
Ross Wightman
3b9032ea48
Use Tensor.unfold().unfold() for HaloAttn, as fast as as_strided but with more clarity
2021-08-27 12:45:53 -07:00
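A minimal sketch of the overlapping-block extraction that unfold().unfold() enables (shapes and block/halo sizes are illustrative, not the actual timm HaloAttn code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 32, 16, 16)          # B, C, H, W
block, halo = 4, 1
win = block + 2 * halo                  # window size including halo
x = F.pad(x, (halo, halo, halo, halo))
# unfold H then W: windows of size `win` taken every `block` pixels,
# so neighbouring windows overlap by the halo on each side
blocks = x.unfold(2, win, block).unfold(3, win, block)
print(blocks.shape)                     # torch.Size([2, 32, 4, 4, 6, 6])
```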
Ross Wightman
8449ba210c
Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
2021-08-26 21:56:44 -07:00
Ross Wightman
925e102982
Update attention / self-attn based models from a series of experiments:
* remove dud attention; involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
2021-08-20 16:13:11 -07:00
Ross Wightman
01cb46a9a5
Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
2021-08-07 16:45:29 -07:00
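A hedged sketch of the scale (mul) vs add fusion distinction for a global-context style module (the function name and sigmoid gate are illustrative assumptions, not the timm GlobalContext code):

```python
import torch

def fuse_context(x, context, mode='scale'):
    # x: (B, C, H, W); context: (B, C, 1, 1) global-context term
    if mode == 'scale':              # multiplicative gating (new default)
        return x * context.sigmoid()
    return x + context               # additive fusion
```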
Ross Wightman
8165cacd82
Realized LayerNorm2d won't work in all cases as is, fixed.
2021-07-05 18:21:34 -07:00
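A minimal sketch of a LayerNorm2d that handles NCHW input by normalizing in channels-last order (illustrative, not necessarily the exact fix):

```python
import torch.nn as nn
import torch.nn.functional as F

class LayerNorm2d(nn.LayerNorm):
    """LayerNorm for NCHW tensors: permute to NHWC, normalize over C, permute back."""
    def forward(self, x):
        x = x.permute(0, 2, 3, 1)
        x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
        return x.permute(0, 3, 1, 2)
```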
Ross Wightman
b9cfb64412
Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load.
2021-06-14 12:31:44 -07:00
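A sketch of what a posembed rescale involves (function name, prefix-token handling, and bicubic mode are assumptions, not the exact timm routine):

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(posemb, new_hw, num_prefix=1):
    # split off class/prefix tokens, reshape grid tokens to 2D,
    # resize to the new grid, then flatten back
    prefix, grid = posemb[:, :num_prefix], posemb[:, num_prefix:]
    old = int(grid.shape[1] ** 0.5)
    grid = grid.reshape(1, old, old, -1).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=new_hw, mode='bicubic', align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_hw[0] * new_hw[1], -1)
    return torch.cat([prefix, grid], dim=1)
```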
Ross Wightman
8319e0c373
Add file docstring to std_conv.py
2021-06-13 12:31:06 -07:00
Ross Wightman
4d96165989
Merge branch 'master' into cleanup_xla_model_fixes
2021-06-12 23:19:25 -07:00
Ross Wightman
8880f696b6
Refactoring, cleanup, improved test coverage.
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (i.e. transformer / mlp) models have classifier / default_cfg tests added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
2021-06-12 16:40:02 -07:00
Ross Wightman
ba2ca4b464
One codepath for stdconv; switch layernorm to batchnorm so the gain is included in the norm op. Tweak epsilon values for nfnet, resnetv2, vit hybrid.
2021-06-12 12:27:43 -07:00
Ross Wightman
b7a568f065
Fix torchscript issue in BAT attention
2021-06-08 23:19:51 -07:00
Ross Wightman
8e4ac3549f
All ScaledStdConv and StdConv uses now default to F.layer_norm so that they work with PyTorch XLA. eps value tweaking is a WIP.
2021-06-07 17:14:19 -07:00
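A sketch of weight standardization expressed through F.layer_norm, which lowers to XLA-friendly ops; the class name, eps value, and exact reshape are illustrative assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class StdConv2dSketch(nn.Conv2d):
    # standardize each output filter via F.layer_norm instead of
    # manual mean/var math, then run the usual convolution
    def forward(self, x):
        w = F.layer_norm(self.weight.view(self.out_channels, -1),
                         (self.weight[0].numel(),), eps=1e-5)
        return F.conv2d(x, w.view_as(self.weight), self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```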
Ross Wightman
bda8ab015a
Remove min channels for SelectiveKernel, divisor should cover cases well enough.
2021-05-31 15:38:56 -07:00
Ross Wightman
a27f4aec4a
Missed args for skresnext w/ refactoring.
2021-05-31 14:06:34 -07:00
Ross Wightman
307a935b79
Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA.
2021-05-31 13:18:11 -07:00
Ross Wightman
8bf63b6c6c
Other attn layers can now be used in EfficientNet. Create test ECA + GC B0 configs. Make ECA more configurable.
2021-05-30 12:47:02 -07:00
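For reference, a minimal ECA-style gate (kernel size fixed here; the configurability this commit adds would expose such details — class name illustrative):

```python
import torch.nn as nn

class EcaSketch(nn.Module):
    # ECA: global-average pool, 1D conv across channels, sigmoid gate
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        y = x.mean((2, 3)).unsqueeze(1)           # (B, 1, C)
        y = self.conv(y).sigmoid().unsqueeze(-1)  # (B, 1, C, 1)
        return x * y.transpose(1, 2)              # gate broadcast as (B, C, 1, 1)
```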
Ross Wightman
9611458e19
Throw in some FBNetV3 code I had lying around, some refactoring of SE reduction channel calcs for all EffNet archs.
2021-05-28 20:47:24 -07:00
Ross Wightman
f615474be3
Fix broken test, repvgg block doesn't have attn_last attr.
2021-05-27 18:12:22 -07:00
Ross Wightman
742c2d5247
Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy.
2021-05-27 18:03:29 -07:00
Ross Wightman
9c78de8c02
Fix #661, move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, and mish if present.
2021-05-26 15:28:42 -07:00
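A sketch of the version-gated selection this enables (the fallback definition is illustrative):

```python
import torch.nn as nn
import torch.nn.functional as F

# prefer the native op when this torch build provides it, else fall back
if hasattr(nn, 'Hardswish'):
    HardSwish = nn.Hardswish
else:
    class HardSwish(nn.Module):
        def forward(self, x):
            return x * F.relu6(x + 3.0) / 6.0
```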
Ross Wightman
f45de37690
Merge branch 'master' into levit_visformer_rednet
2021-05-22 16:34:31 -07:00
Ross Wightman
d5af752117
Add preliminary gMLP and ResMLP impls to MLP-Mixer
2021-05-19 09:55:05 -07:00
Ross Wightman
3bffc701f1
Merge branch 'master' into levit_visformer_rednet
2021-05-14 23:02:12 -07:00
Ross Wightman
ecc7552c5c
Add levit, levit_c, and visformer model defs. Largely untested, cleanup not finished.
2021-05-14 17:16:34 -07:00
Ross Wightman
165fb354b2
Add initial RedNet model / Involution layer impl for testing
2021-05-14 17:16:34 -07:00
Ross Wightman
c4f482a08b
EfficientNetV2 official impl w/ weights ported from TF. Cleanup/refactor of related EfficientNet classes and models.
2021-05-14 15:50:00 -07:00
Ross Wightman
715519a5ef
Rethink name of patch embed grid info
2021-05-06 14:08:20 -07:00
Ross Wightman
b2c305c2aa
Move Mlp and PatchEmbed modules into layers. Being used in lots of models now...
2021-05-06 14:03:23 -07:00
Ross Wightman
0721559511
Improved (hopefully) init for SA/SA-like layers used in ByoaNets
2021-05-04 21:40:39 -07:00
Ross Wightman
0d87650fea
Remove filter hack from BlurPool w/ non-persistent buffer. Use BlurPool2d instead of AntiAliasing.. for TResNet. Breaks PyTorch < 1.6.
2021-05-04 16:56:28 -07:00
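The non-persistent buffer pattern, sketched (filter coefficients and class name illustrative); `persistent=False` on register_buffer is the PyTorch >= 1.6 feature the message refers to:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2dSketch(nn.Module):
    # fixed binomial filter kept in a non-persistent buffer so it
    # never lands in the state_dict (requires PyTorch >= 1.6)
    def __init__(self, channels, stride=2):
        super().__init__()
        self.channels, self.stride = channels, stride
        coeffs = torch.tensor([1., 2., 1.])
        filt = coeffs[:, None] * coeffs[None, :]
        filt = filt / filt.sum()
        self.register_buffer('filt', filt[None, None].repeat(channels, 1, 1, 1),
                             persistent=False)

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode='reflect')
        return F.conv2d(x, self.filt, stride=self.stride, groups=self.channels)
```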
Ross Wightman
9cc7dda6e5
Fixup byoanet configs to pass unit tests. Add swin_attn and swinnet26t model for testing.
2021-04-29 21:08:37 -07:00
Ross Wightman
e15c3886ba
Default lambda r=7. Define '26t' stage 4/5 256x256 variants for all of bot/halo/lambda nets for experiments. Add resnet50t for experiments. Fix a few comments.
2021-04-29 10:58:49 -07:00
Ross Wightman
4e4b863b15
Missed norm.py
2021-04-12 09:57:56 -07:00
Ross Wightman
ce62f96d4d
ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments
2021-04-12 09:38:02 -07:00
Ross Wightman
a5310a3451
Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update
2021-04-01 12:15:34 -07:00
Ross Wightman
cf5fec5047
Clean up experimental vit weight init a bit
2021-03-20 09:44:24 -07:00
Ross Wightman
740f32c96a
Add ECA-NFNet-L0 weights and update model name. Update README and bump version to 0.4.6
2021-03-17 13:55:32 -07:00
Ross Wightman
f57db99101
Update README, fix iabn pip version print.
2021-03-07 16:17:06 -08:00
Ross Wightman
8563609b28
Update notes in ScaledStdConv impl
2021-02-18 12:44:08 -08:00
Ross Wightman
678ba4e0a2
Add NFNet-F model weights ported from DeepMind Haiku impl and new set of models w/ compatible config.
2021-02-18 12:28:46 -08:00
Ross Wightman
d8e69206be
Merge pull request #419 from rwightman/byob_vgg_models
More models, GPU-Efficient Nets, RepVGG, classic VGG, and flexible Byob backbone.
2021-02-10 15:44:09 -08:00
Reuben
94ca140b67
Update collections.abc import
2021-02-10 23:54:35 +11:00
Ross Wightman
1bcc69e0ad
Use in_channels for depthwise groups, allows using `out_channels=N * in_channels` (does not impact existing models). Fix #354 .
2021-02-09 16:22:52 -08:00
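Illustration of the fix: with groups=in_channels, a depthwise conv can widen, since out_channels only needs to be a multiple of in_channels (values illustrative):

```python
import torch
import torch.nn as nn

in_ch, mult = 32, 2
# groups=in_channels (not out_channels) permits a channel multiplier
dw = nn.Conv2d(in_ch, mult * in_ch, kernel_size=3, padding=1, groups=in_ch)
out = dw(torch.randn(1, in_ch, 8, 8))  # -> (1, 64, 8, 8)
```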
Ross Wightman
9811e229f7
Fix regression in models with 1001 class pretrained weights. Improve batchnorm arg and BatchNormAct layer handling in several models.
2021-02-09 16:22:52 -08:00
Ross Wightman
a39c3ee216
Merge branch 'master' into eca-weights
2021-02-08 11:52:31 -08:00
Ross Wightman
68a4144882
Add new weights for ecaresnet26t/50t/269d models. Remove distinction between 't' and 'tn' (tiered models), tn is now t. Add test time img size spec to default cfg.
2021-02-06 16:30:02 -08:00
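Roughly the kind of default cfg entry this adds — key names and values here are assumptions about the cfg schema, not copied from timm:

```python
default_cfg = dict(
    input_size=(3, 256, 256),       # train-time input size
    test_input_size=(3, 320, 320),  # separate test-time size
    crop_pct=0.95,
)
```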
Ross Wightman
b9843f954b
Merge pull request #282 from tigert1998/patch-1
Add symbolic for SwishJitAutoFn to support ONNX
2021-02-04 12:18:40 -08:00
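The pattern, sketched: a `symbolic` staticmethod on an autograd Function tells torch.onnx how to express it in standard ONNX ops (class name illustrative; the memory-efficient jit forward is elided):

```python
import torch

class SwishJitSketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * torch.sigmoid(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        sig = torch.sigmoid(x)
        # d/dx [x * sigmoid(x)] = sigmoid(x) * (1 + x * (1 - sigmoid(x)))
        return grad_output * (sig * (1.0 + x * (1.0 - sig)))

    @staticmethod
    def symbolic(g, x):
        # expressed with standard ONNX ops so export succeeds
        return g.op('Mul', x, g.op('Sigmoid', x))
```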
hwangdeyu
7a4be5c035
Add ONNX export symbolic for the HardSwishJitAutoFn operator
2021-02-03 09:06:53 +08:00
Ross Wightman
90980de4a9
Fix up a few details in NFResNet models, achieved stable training. Add support for gamma gain to be applied in activation or ScaledStdConv. Some tweaks to ScaledStdConv.
2021-01-30 16:32:07 -08:00