Commit Graph

181 Commits (dc18cda2e74962d28df4c16c667745df03ffa0b5)

Author SHA1 Message Date
Ross Wightman b745d30a3e Fix formatting of last commit 2021-10-25 15:15:14 -07:00
Ross Wightman 3478f1d7f1 Traceability fix for vit models used in some experiments 2021-10-25 15:13:08 -07:00
Ross Wightman f658a72e72 Cleanup re-use of Dropout modules in Mlp modules after some twitter feedback :p 2021-10-25 00:40:59 -07:00
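A minimal sketch of the pattern behind this cleanup, assuming the usual ViT-style Mlp layout; names and defaults are illustrative, not the exact timm code:

```python
import torch.nn as nn

# Sketch only: give each dropout site its own nn.Dropout module rather than
# re-using a single instance, so the two sites can be configured and
# inspected independently.
class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features, drop=0.):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = nn.GELU()
        self.drop1 = nn.Dropout(drop)  # after activation
        self.fc2 = nn.Linear(hidden_features, in_features)
        self.drop2 = nn.Dropout(drop)  # after output projection

    def forward(self, x):
        x = self.drop1(self.act(self.fc1(x)))
        return self.drop2(self.fc2(x))
```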
Ross Wightman c02334d9fa Add weights for regnetz_d and haloregnetz_c, update regnetz_c weights. Add commented PyTorch XLA code for halo attention 2021-10-19 12:32:09 -07:00
Ross Wightman 02daf2ab94 Add option to include relative pos embedding in the attention scaling as per references. See discussion #912 2021-10-12 15:37:01 -07:00
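To see why the placement is worth exposing as an option, an illustrative comparison of the two orderings; shapes and names are assumptions, not the code referenced in #912:

```python
import torch

B, heads, tokens, head_dim = 1, 4, 16, 32
q = torch.randn(B, heads, tokens, head_dim)
k = torch.randn(B, heads, tokens, head_dim)
rel_pos = torch.randn(tokens, tokens)  # relative position logits
scale = head_dim ** -0.5

# rel pos included in the scaling vs. added after it; the results differ,
# which is why matching the reference implementations matters.
attn_scaled_inside = (q @ k.transpose(-2, -1) + rel_pos) * scale
attn_scaled_outside = q @ k.transpose(-2, -1) * scale + rel_pos
```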
Ross Wightman e2b8d44ff0 Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
* align interfaces of halo, bottleneck attn and lambda layer
* add qk_ratio to all of the above, to control q/k dim relative to output dim (see the sketch after this entry)
* add experimental haloregnetz, and trionet (lambda + halo + bottle) models
2021-10-06 16:32:48 -07:00
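A hypothetical illustration of the qk_ratio calculation mentioned above; variable names and numbers are invented for clarity:

```python
# q/k width is derived from the output width via qk_ratio, while values
# keep the full output width (numbers are just an example).
dim_out = 256
qk_ratio = 0.5
num_heads = 4
dim_qk = int(dim_out * qk_ratio)     # 128 total for q and k
dim_head_qk = dim_qk // num_heads    # 32 per head for q/k
dim_head_v = dim_out // num_heads    # 64 per head for v
```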
Ross Wightman 007bc39323 Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios 2021-10-02 15:51:42 -07:00
Ross Wightman b1c2e3eb92 Match rel_pos_indices attr rename in conv branch 2021-09-30 23:19:05 -07:00
Ross Wightman b49630a138 Add relative pos embed option to LambdaLayer, fix last transpose/reshape. 2021-09-30 22:45:09 -07:00
Ross Wightman b81e79aae9 Fix bottleneck attn transpose typo, hopefully these train better now.. 2021-09-28 16:38:41 -07:00
Ross Wightman 515121cca1 Use reshape instead of view in std_conv, causing issues in recent PyTorch in channels_last 2021-09-23 15:43:48 -07:00
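The reason for the swap, in a self-contained example (the std_conv internals are elided; this just shows the failure mode):

```python
import torch

# A channels_last tensor is not contiguous in the default memory format,
# and Tensor.view requires stride-compatible shapes; reshape copies only
# when it has to.
x = torch.randn(2, 8, 4, 4).to(memory_format=torch.channels_last)
flat = x.reshape(2, -1)  # always works
# flat = x.view(2, -1)   # raises: view size is not compatible with input
```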
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related 2021-09-05 15:34:05 -07:00
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low... 2021-09-05 15:17:19 -07:00
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix a comment; still trying to improve reliability of the sgd test. 2021-09-05 12:41:14 -07:00
Ross Wightman 492c0a4e20 Update HaloAttn comment 2021-09-01 17:14:31 -07:00
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn, as fast as as_strided but clearer 2021-08-27 12:45:53 -07:00
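A simplified sketch of the block extraction (the real HaloAttn does this inside the attention computation; sizes here are illustrative):

```python
import torch
import torch.nn.functional as F

block, halo = 4, 1
win = block + 2 * halo
x = torch.randn(1, 32, 16, 16)          # B, C, H, W
x = F.pad(x, (halo, halo, halo, halo))
# Two chained unfolds slice the padded map into overlapping
# block + 2*halo windows without copying data.
windows = x.unfold(2, win, block).unfold(3, win, block)
print(windows.shape)  # torch.Size([1, 32, 4, 4, 6, 6])
```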
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc). 2021-08-26 21:56:44 -07:00
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments:
* remove dud attention, involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
2021-08-20 16:13:11 -07:00
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. 2021-08-07 16:45:29 -07:00
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed. 2021-07-05 18:21:34 -07:00
Ross Wightman b9cfb64412 Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load. 2021-06-14 12:31:44 -07:00
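A hedged sketch of the posembed rescale idea, assuming a ViT-style layout with a leading class token; the helper name and signature are illustrative, not timm's exact API:

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, old_grid, new_grid):
    # pos_embed: (1, 1 + old_grid**2, dim), class token first
    cls_tok, grid_tok = pos_embed[:, :1], pos_embed[:, 1:]
    dim = grid_tok.shape[-1]
    grid_tok = grid_tok.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    grid_tok = F.interpolate(grid_tok, size=(new_grid, new_grid),
                             mode='bilinear', align_corners=False)
    grid_tok = grid_tok.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_tok, grid_tok], dim=1)
```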
Ross Wightman 8319e0c373 Add file docstring to std_conv.py 2021-06-13 12:31:06 -07:00
Ross Wightman 4d96165989 Merge branch 'master' into cleanup_xla_model_fixes 2021-06-12 23:19:25 -07:00
Ross Wightman 8880f696b6 Refactoring, cleanup, improved test coverage.
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (ie transformer / mlp) models have classifier / default_cfg test added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
2021-06-12 16:40:02 -07:00
Ross Wightman ba2ca4b464 One codepath for stdconv, switch layernorm to batchnorm so gain is included. Tweak epsilon values for nfnet, resnetv2, vit hybrid. 2021-06-12 12:27:43 -07:00
Ross Wightman b7a568f065 Fix torchscript issue in bat 2021-06-08 23:19:51 -07:00
Ross Wightman 8e4ac3549f All ScaledStdConv and StdConv uses now default to F.layer_norm so that they work with PyTorch XLA. eps value tweaking is a WIP. 2021-06-07 17:14:19 -07:00
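The gist of the F.layer_norm approach, as a minimal sketch (eps is illustrative; the actual timm classes carry more options):

```python
import torch.nn as nn
import torch.nn.functional as F

class StdConv2d(nn.Conv2d):
    def forward(self, x):
        # Standardize each output filter over its (in, kh, kw) dims via
        # F.layer_norm, which lowers cleanly on XLA.
        w = F.layer_norm(self.weight, self.weight.shape[1:], eps=1e-5)
        return F.conv2d(x, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)
```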
Ross Wightman bda8ab015a Remove min channels for SelectiveKernel, divisor should cover cases well enough. 2021-05-31 15:38:56 -07:00
Ross Wightman a27f4aec4a Missed args for skresnext w/ refactoring. 2021-05-31 14:06:34 -07:00
Ross Wightman 307a935b79 Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. 2021-05-31 13:18:11 -07:00
Ross Wightman 8bf63b6c6c Able to use other attn layer in EfficientNet now. Create test ECA + GC B0 configs. Make ECA more configurable. 2021-05-30 12:47:02 -07:00
Ross Wightman 9611458e19 Throw in some FBNetV3 code I had lying around, some refactoring of SE reduction channel calcs for all EffNet archs. 2021-05-28 20:47:24 -07:00
Ross Wightman f615474be3 Fix broken test, repvgg block doesn't have attn_last attr. 2021-05-27 18:12:22 -07:00
Ross Wightman 742c2d5247 Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. 2021-05-27 18:03:29 -07:00
Ross Wightman 9c78de8c02 Fix #661, move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, mish if present. 2021-05-26 15:28:42 -07:00
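The detection pattern, sketched as a simple module-level check; function names here are illustrative, not timm's exact factory:

```python
import torch.nn.functional as F

_has_hardswish = hasattr(F, 'hardswish')  # native op present in this build?

def hard_swish(x, inplace: bool = False):
    if _has_hardswish:
        return F.hardswish(x, inplace=inplace)  # optimized native path
    return x * F.relu6(x + 3.) / 6.             # manual fallback
```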
Ross Wightman f45de37690 Merge branch 'master' into levit_visformer_rednet 2021-05-22 16:34:31 -07:00
Ross Wightman d5af752117 Add preliminary gMLP and ResMLP impl to Mlp-Mixer 2021-05-19 09:55:05 -07:00
Ross Wightman 3bffc701f1 Merge branch 'master' into levit_visformer_rednet 2021-05-14 23:02:12 -07:00
Ross Wightman ecc7552c5c Add levit, levit_c, and visformer model defs. Largely untested and not finished cleanup. 2021-05-14 17:16:34 -07:00
Ross Wightman 165fb354b2 Add initial RedNet model / Involution layer impl for testing 2021-05-14 17:16:34 -07:00
Ross Wightman c4f482a08b EfficientNetV2 official impl w/ weights ported from TF. Cleanup/refactor of related EfficientNet classes and models. 2021-05-14 15:50:00 -07:00
Ross Wightman 715519a5ef Rethink name of patch embed grid info 2021-05-06 14:08:20 -07:00
Ross Wightman b2c305c2aa Move Mlp and PatchEmbed modules into layers. Being used in lots of models now... 2021-05-06 14:03:23 -07:00
Ross Wightman 0721559511 Improved (hopefully) init for SA/SA-like layers used in ByoaNets 2021-05-04 21:40:39 -07:00
Ross Wightman 0d87650fea Remove filter hack from BlurPool w/ non-persistent buffer. Use BlurPool2d instead of AntiAliasing for TResNet. Breaks PyTorch < 1.6. 2021-05-04 16:56:28 -07:00
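A minimal 3-tap sketch of the non-persistent-buffer approach (the `persistent=False` kwarg is exactly the part that requires PyTorch >= 1.6); the rest of the module is simplified:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    def __init__(self, channels, stride=2):
        super().__init__()
        coeffs = torch.tensor([1., 2., 1.])            # binomial filter taps
        filt = coeffs[:, None] * coeffs[None, :]
        filt = (filt / filt.sum())[None, None].repeat(channels, 1, 1, 1)
        # persistent=False: derived from config, so keep it out of state_dict
        self.register_buffer('filt', filt, persistent=False)
        self.stride = stride

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode='reflect')
        return F.conv2d(x, self.filt, stride=self.stride, groups=x.shape[1])
```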
Ross Wightman 9cc7dda6e5 Fixup byoanet configs to pass unit tests. Add swin_attn and swinnet26t model for testing. 2021-04-29 21:08:37 -07:00
Ross Wightman e15c3886ba Default lambda r=7. Define '26t' stage 4/5 256x256 variants for all of bot/halo/lambda nets for experiments. Add resnet50t for exp. Fix a few comments. 2021-04-29 10:58:49 -07:00
Ross Wightman 4e4b863b15 Missed norm.py 2021-04-12 09:57:56 -07:00
Ross Wightman ce62f96d4d ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments 2021-04-12 09:38:02 -07:00
Ross Wightman a5310a3451 Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update 2021-04-01 12:15:34 -07:00
Ross Wightman cf5fec5047 Cleanup experimental vit weight init a bit 2021-03-20 09:44:24 -07:00
Ross Wightman 740f32c96a Add ECA-NFNet-L0 weights and update model name. Update README and bump version to 0.4.6 2021-03-17 13:55:32 -07:00
Ross Wightman f57db99101 Update README, fix iabn pip version print. 2021-03-07 16:17:06 -08:00
Ross Wightman 8563609b28 Update notes in ScaledStdConv impl 2021-02-18 12:44:08 -08:00
Ross Wightman 678ba4e0a2 Add NFNet-F model weights ported from DeepMind Haiku impl and new set of models w/ compatible config. 2021-02-18 12:28:46 -08:00
Ross Wightman d8e69206be Merge pull request #419 from rwightman/byob_vgg_models
More models, GPU-Efficient Nets, RepVGG, classic VGG, and flexible Byob backbone.
2021-02-10 15:44:09 -08:00
Reuben 94ca140b67 update collections.abc import 2021-02-10 23:54:35 +11:00
Ross Wightman 1bcc69e0ad Use in_channels for depthwise groups, allows using `out_channels=N * in_channels` (does not impact existing models). Fix #354. 2021-02-09 16:22:52 -08:00
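The distinction in one line: a depthwise conv needs groups == in_channels, and keying groups off in_channels (not out_channels) is what permits depth multipliers:

```python
import torch.nn as nn

in_ch, mult = 32, 2
# out_channels = N * in_channels works because groups is tied to in_channels
dw = nn.Conv2d(in_ch, mult * in_ch, kernel_size=3, padding=1, groups=in_ch)
```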
Ross Wightman 9811e229f7 Fix regression in models with 1001 class pretrained weights. Improve batchnorm arg and BatchNormAct layer handling in several models. 2021-02-09 16:22:52 -08:00
Ross Wightman a39c3ee216 Merge branch 'master' into eca-weights 2021-02-08 11:52:31 -08:00
Ross Wightman 68a4144882 Add new weights for ecaresnet26t/50t/269d models. Remove distinction between 't' and 'tn' (tiered models), tn is now t. Add test time img size spec to default cfg. 2021-02-06 16:30:02 -08:00
Ross Wightman b9843f954b Merge pull request #282 from tigert1998/patch-1
Add symbolic for SwishJitAutoFn to support onnx
2021-02-04 12:18:40 -08:00
hwangdeyu 7a4be5c035 add onnx export support for the HardSwishJitAutoFn operator 2021-02-03 09:06:53 +08:00
Ross Wightman 90980de4a9 Fix up a few details in NFResNet models, achieved stable training. Add support for gamma gain to be applied in activation or ScaledStdConv. Some tweaks to ScaledStdConv. 2021-01-30 16:32:07 -08:00
Ross Wightman 5a8e1e643e Initial Normalizer-Free Reg/ResNet impl. A bit of related layer refactoring. 2021-01-27 22:06:57 -08:00
Ross Wightman 231d04e91a ResNetV2 pre-act and non-preact model, w/ BiT pretrained weights and support for ViT R50 model. Tweaks for in21k num_classes passing. More to do... tests failing. 2020-12-28 16:59:15 -08:00
Ross Wightman 2ed8f24715 A few more changes for 0.3.2 maint release. Linear layer change for mobilenetv3 and inception_v3, support no bias for linear wrapper. 2020-11-30 16:19:52 -08:00
Ross Wightman 460eba7f24 Work around casting issue with combination of native torch AMP and torchscript for Linear layers 2020-11-30 13:30:51 -08:00
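A sketch of the workaround pattern, assuming the usual symptom (fp16 inputs meeting fp32 params under scripted autocast); illustrative rather than the exact timm layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Linear(nn.Linear):
    def forward(self, x):
        if torch.jit.is_scripting():
            # scripted code misses autocast's implicit casts, so cast
            # parameters to the input dtype explicitly
            bias = self.bias.to(dtype=x.dtype) if self.bias is not None else None
            return F.linear(x, self.weight.to(dtype=x.dtype), bias)
        return F.linear(x, self.weight, self.bias)
```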
Ross Wightman 5f4b6076d8 Fix inplace arg compat for GELU and PReLU via activation factory 2020-11-30 13:27:40 -08:00
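One plausible shape for the compat fix (a hypothetical factory, not timm's exact helper): only forward inplace to constructors that accept it, since nn.GELU and nn.PReLU do not:

```python
import torch.nn as nn

_NO_INPLACE = {nn.GELU, nn.PReLU}

def create_act(act_layer, inplace=False, **kwargs):
    if act_layer in _NO_INPLACE:
        return act_layer(**kwargs)          # no inplace kwarg supported
    return act_layer(inplace=inplace, **kwargs)
```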
Ross Wightman fd962c4b4a Native SiLU (Swish) op doesn't export to ONNX 2020-11-29 21:56:55 -08:00
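The practical consequence: for export, swish has to be composed from ops the ONNX exporter understands, e.g. a sketch like:

```python
import torch
import torch.nn as nn

class SwishForExport(nn.Module):
    def forward(self, x):
        # mathematically identical to F.silu(x), but traces to Mul+Sigmoid,
        # which ONNX opsets handle
        return x * torch.sigmoid(x)
```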
tigertang 43f2500c26 Add symbolic for SwishJitAutoFn to support onnx 2020-11-18 14:36:12 +08:00
Ross Wightman e90edce438 Support native silu activation (aka swish). An optimized version is available in PyTorch 1.7. 2020-10-29 15:45:17 -07:00
Ross Wightman f31933cb37 Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. 2020-10-13 13:33:44 -07:00
Ross Wightman fcb6258877 Add missing leaky_relu layer factory defn, update Apex/Native loss scaler interfaces to support unscaled grad clipping. Bump ver to 0.2.2 for pending release. 2020-10-02 16:19:39 -07:00
Ross Wightman e8ca45854c More models in sotabench, more control over sotabench run, dataset filename extraction consistency 2020-09-24 15:56:57 -07:00
Ross Wightman 80c9d9cc72 Add 'fast' global pool option, remove redundant SEModule from tresnet, normal one is now 'fast' 2020-09-02 09:11:48 -07:00
Ross Wightman 110a7c4982 AdaptiveAvgPool2d -> mean((2,3)) for all SE/attn layers to avoid NaN with AMP + channels_last layout. See https://github.com/pytorch/pytorch/issues/43992 2020-09-01 16:05:32 -07:00
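The swap in miniature; mean over the spatial dims is numerically the same as AdaptiveAvgPool2d(1) for global pooling:

```python
import torch

x = torch.randn(8, 64, 7, 7)
pooled = x.mean((2, 3), keepdim=True)  # (8, 64, 1, 1)
assert torch.allclose(pooled, torch.nn.AdaptiveAvgPool2d(1)(x), atol=1e-6)
```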
Ross Wightman fc8b8afb6f Fix a silly bug in the Sample version of EvoNorm missing the x* part of swish, update EvoNormBatch to accumulate unbiased variance. 2020-08-13 18:25:01 -07:00
Ross Wightman b1f1a54de9 More uniform treatment of classifiers across all models, reduce code duplication. 2020-08-03 22:18:24 -07:00
Ross Wightman 7995295968 Merge branch 'logger' into features. Change 'logger' to '_logger'. 2020-07-27 18:00:46 -07:00
Ross Wightman 1998bd3180 Merge branch 'feature/AB/logger' of https://github.com/antoinebrl/pytorch-image-models into logger 2020-07-27 16:06:01 -07:00
Ross Wightman 6c17d57a2c Fix some attributions, add copyrights to some file docstrings 2020-07-27 13:44:56 -07:00
Ross Wightman 7ba5a384d3 Add ReXNet w/ remapped weights, feature support 2020-07-23 10:28:57 -07:00
Ross Wightman 298fba09ac Back out some activation hacks trialing upcoming pytorch changes 2020-07-17 18:41:37 -07:00
Ross Wightman 3b9004bef9 Lots of changes to model creation helpers, close to finalizing feature extraction / interfaces 2020-07-17 17:54:26 -07:00
Ross Wightman 3b6cce4c95 Add initial impl of CrossStagePartial networks, yet to be trained, not quite the same as darknet cfgs. 2020-07-13 15:01:06 -07:00
Ross Wightman 3aebc2f06c Switch DPN to use BnAct layer, train a new DPN 68b model with RA to 79.21 2020-07-12 11:13:06 -07:00
Ross Wightman f122f0274b Significant ResNet refactor:
* stage creation + make_layer moved to separate fn with more sensible dilation/output_stride calc
* drop path rate decay easy to impl with refactored block creation loops
* fix dilation + blur pool combo
2020-07-05 00:48:12 -07:00
Ross Wightman d0113f9cdb Fix a few issues that came up in tests 2020-06-29 21:13:21 -07:00
Ross Wightman 151679c2f1 Add custom grad tests, fix cut & paste error with hard_mish ME, add a few more pytorch act fns to factory 2020-06-11 14:49:23 -07:00
Antoine Broyelle 78fa0772cc Leverage Python's hierarchical logger
With this update one can tune the kind of logs generated by timm, while
training and inference traces are unchanged
2020-06-09 18:28:48 +01:00
Ross Wightman 4ddde1d3a4 Fix two regressions 2020-06-05 11:04:51 -07:00
Ross Wightman 88129b2569 Add set_layer_config contextmgr to adjust all layer configs at once, use in create_model with new args. Remove a few old warning causing constant annotations for jit. 2020-06-02 21:06:10 -07:00
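The shape of the idea, sketched as a generic context manager; the keys and defaults are assumptions, not timm's exact config set:

```python
from contextlib import contextmanager

_layer_config = {'scriptable': False, 'exportable': False}

@contextmanager
def set_layer_config(**overrides):
    # swap module-level flags for the duration of the with-block so every
    # layer created inside it sees the same config
    prev = dict(_layer_config)
    _layer_config.update(overrides)
    try:
        yield
    finally:
        _layer_config.clear()
        _layer_config.update(prev)
```

Usage would then look like `with set_layer_config(scriptable=True): model = build(...)`, restoring the previous flags on exit even if building raises.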
Ross Wightman f28170df3f Fix an untested change, remove a debug print 2020-06-01 17:26:42 -07:00
Ross Wightman eb7653614f Monster commit, activation refactor, VoVNet, norm_act improvements, more
* refactor activations into basic PyTorch, jit scripted, and memory-efficient custom autograd variants
* implement hard-mish, better grad for hard-swish
* add initial VovNet V1/V2 impl, fix #151
* VovNet and DenseNet first models to use NormAct layers (support BatchNormAct2d, EvoNorm, InplaceIABN)
* Wrap IABN for any models that use it
* make more models torchscript compatible (DPN, PNasNet, Res2Net, SelecSLS) and add tests
2020-06-01 17:16:52 -07:00
Ross Wightman 0ea53cecc3 Merge branch 'master' into densenet_update_and_more 2020-05-22 16:18:10 -07:00
Ross Wightman 50658b9a67 Add RegNet models and weights 2020-05-18 00:08:52 -07:00
Ross Wightman 7df83258c9 Merge branch 'master' into densenet_update_and_more 2020-05-13 23:34:44 -07:00
Ross Wightman 1904ed8fec Improve dropblock impl, add fast variant, and better AMP speed, inplace, batchwise... few ResNeSt cleanups 2020-05-13 15:17:08 -07:00
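A heavily simplified DropBlock-style sketch for orientation (timm's fast variant differs; the gamma formula here drops the feature-size correction):

```python
import torch
import torch.nn.functional as F

def drop_block(x, block_size=3, drop_prob=0.1):
    # training-time only: sample sparse block centers, dilate them into
    # block_size x block_size squares with max_pool2d, mask and renormalize
    gamma = drop_prob / block_size ** 2
    centers = torch.bernoulli(torch.full_like(x, gamma))
    mask = 1. - F.max_pool2d(centers, block_size, stride=1,
                             padding=block_size // 2)
    return x * mask * (mask.numel() / mask.sum().clamp(min=1.))
```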
Ross Wightman 17270c69b9 Remove annoying InceptionV3 dependency on scipy and insanely slow trunc_norm init. Bring InceptionV3 code into this codebase and use upcoming torch trunc_normal_ init. 2020-05-12 21:59:34 -07:00