1218 Commits

Author SHA1 Message Date
Ross Wightman
93cc08fdc5 Make evonorm variables 1d to match other PyTorch norm layers, will break weight compat for any existing use (likely minimal, easy to fix). 2021-11-20 15:50:51 -08:00
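A minimal sketch of the resulting layout (class name and math are illustrative, not the actual timm EvoNorm code): affine variables are registered with shape (C,), like nn.BatchNorm2d, and reshaped for broadcasting at use time, so older (1, C, 1, 1) checkpoints only need a reshape to load.

```python
import torch
import torch.nn as nn

class EvoNormLike(nn.Module):
    # Illustrative only: 1d affine parameters of shape (C,), matching other
    # PyTorch norm layers; the actual normalization math is omitted here.
    def __init__(self, num_features):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):  # x: (N, C, H, W)
        # older (1, C, 1, 1) weights can be adapted with a simple .reshape(-1)
        return x * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
```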
Ross Wightman
af607b75cc Prep a set of ResNetV2 models with GroupNorm, EvoNormB0, EvoNormS0 for BN free model experiments on TPU and IPU 2021-11-19 17:37:00 -08:00
Ross Wightman
c976a410d9 Add ResNet-50 w/ GN (resnet50_gn) and SEBotNet-33-TS (sebotnet33ts_256) model defs and weights. Update halonet50ts weights w/ slightly better variant in1k val, more robust to test sets. 2021-11-19 14:24:43 -08:00
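The new defs should be reachable through the usual factory (model names taken from the commit message above; pretrained weight download assumed available):

```python
import timm

resnet_gn = timm.create_model('resnet50_gn', pretrained=True)
sebotnet = timm.create_model('sebotnet33ts_256', pretrained=True)
```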
Ross Wightman
f2006b2437 Cleanup qkv_bias cat in beit model so it can be traced 2021-11-18 21:25:00 -08:00
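One plausible shape of the traceable pattern (names here are illustrative, not the exact BEiT code): q and v carry learnable biases, k gets a zero bias held as a registered buffer, so the concatenation operates on module state rather than on a tensor created dynamically in forward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QkvWithBias(nn.Module):
    # Illustrative: separate q/v biases, non-trainable zero k bias kept as a
    # buffer so the torch.cat in forward() stays FX-traceable.
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.q_bias = nn.Parameter(torch.zeros(dim))
        self.v_bias = nn.Parameter(torch.zeros(dim))
        self.register_buffer('k_bias', torch.zeros(dim), persistent=False)

    def forward(self, x):  # x: (B, N, dim)
        qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias))
        return F.linear(x, self.qkv.weight, qkv_bias)
```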
Ross Wightman
1076a65df1 Minor post FX merge cleanup 2021-11-18 19:47:07 -08:00
Ross Wightman
32c9937dec Merge branch 'fx-feature-extract-new' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-fx-feature-extract-new 2021-11-18 16:31:29 -08:00
Ross Wightman
78b36bf46c Places365 doesn't exist in some still-used torchvision versions 2021-11-18 14:59:51 -08:00
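A hedged sketch of the kind of guard this implies (the fallback handling is an assumption):

```python
import torchvision.datasets as tvd

# Places365 only exists in newer torchvision releases, so look it up
# defensively instead of importing it unconditionally.
if hasattr(tvd, 'Places365'):
    dataset_cls = tvd.Places365
else:
    dataset_cls = None  # dataset factory can raise a friendly error instead
```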
Alexander Soare
65d827c7a6 rename notrace registration and standardize trace_utils imports 2021-11-15 21:03:21 +00:00
Ross Wightman
9b2daf2a35 Add ResNeXt-50 weights 81.1 top-1 @ 224, 82 @ 288 with A1 'high aug' recipe 2021-11-14 13:17:27 -08:00
Ross Wightman
9b5d6dc7e2 Merge branch 'add-vit-b8' of https://github.com/martinsbruveris/pytorch-image-models into martinsbruveris-add-vit-b8 2021-11-14 12:54:28 -08:00
Ross Wightman
cfa414cad2 Matching two bits_and_tpu changes for TFDS wrapper
* change 'samples' -> 'examples' for tfds wrapper to match tfds naming
* add class_to_idx for image classification datasets in tfds wrapper
2021-11-14 12:52:19 -08:00
Martins Bruveris
5220711d87 Added B/8 models to ViT. 2021-11-14 11:01:48 +00:00
Alexander Soare
0262a0e8e1 fx ready for review 2021-11-13 00:06:33 +00:00
Alexander Soare
d2994016e9 Add try/except guards 2021-11-12 21:16:53 +00:00
Alexander Soare
b25ff96768 wip - pre-rebase 2021-11-12 20:45:05 +00:00
Alexander Soare
e051dce354 Make all models FX traceable 2021-11-12 20:45:05 +00:00
Alexander Soare
cf4561ca72 Add FX based FeatureGraphNet capability 2021-11-12 20:45:05 +00:00
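The general idea of FX-based feature extraction, sketched here with torchvision's helpers (requires torchvision >= 0.11) rather than timm's own FeatureGraphNet wrapper; node names below are illustrative and vary by architecture:

```python
import timm
import torch
from torchvision.models.feature_extraction import (
    create_feature_extractor, get_graph_node_names)

model = timm.create_model('resnet50', pretrained=False)
train_nodes, eval_nodes = get_graph_node_names(model)  # inspect available node names

# return a dict of intermediate feature maps keyed by user-chosen names
extractor = create_feature_extractor(
    model, return_nodes={'layer2': 'feat2', 'layer3': 'feat3'})
features = extractor(torch.randn(1, 3, 224, 224))
```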
Alexander Soare
0149ec30d7 wip - attempting to rebase 2021-11-12 20:45:05 +00:00
Alexander Soare
02c3a75a45 wip - make it possible to use fx graph in train and eval mode 2021-11-12 20:45:05 +00:00
Alexander Soare
bc3d4eb403 wip - rebase 2021-11-12 20:45:05 +00:00
Alexander Soare
ab3ac3f25b Add FX based FeatureGraphNet capability 2021-11-12 20:45:05 +00:00
Ross Wightman
9ec3210c2d More TFDS parser cleanup, support improved TFDS even_split impl (on tfds-nightly only currently). 2021-11-10 15:52:09 -08:00
Ross Wightman
ba65dfe2c6 Dataset work
* support some torchvision datasets
* improvements to TFDS wrapper for subsplit handling (fix #942), shuffle seed
* add class-map support to train (fix #957)
2021-11-09 22:34:15 -08:00
Ross Wightman
ddc29da974 Add ResNet101 and ResNet152 weights from higher aug RSB recipes. 81.93 and 82.82 top-1 at 224x224. 2021-11-02 17:59:16 -07:00
Ross Wightman
b328e56f49 Update eca_halonext26ts weights to a better set 2021-11-02 16:52:53 -07:00
Ross Wightman
2ddef942b9 Better fix for #954 that doesn't break torchscript, pull torch._assert into timm namespace when it exists 2021-11-02 11:22:33 -07:00
Ross Wightman
4f0f9cb348 Fix #954 by bringing traceable _assert into timm to allow compat w/ PyTorch < 1.8 2021-11-02 09:21:40 -07:00
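A minimal compat shim in the spirit of these two fixes (placement and module naming are assumptions): use torch._assert where it exists (PyTorch >= 1.8), otherwise fall back to a plain wrapper.

```python
try:
    # traceable assert available in PyTorch >= 1.8
    from torch import _assert
except ImportError:
    def _assert(condition: bool, message: str):
        assert condition, message
```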
Ross Wightman
a41de1f666 Add interpolation mode handling to transforms. Removes InterpolationMode warning. Works for torchvision versions w/ and w/o InterpolationMode enum. Fix #738. 2021-10-28 17:35:01 -07:00
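The compat pattern implied here, sketched (mapping dict and helper name are illustrative): prefer the InterpolationMode enum when torchvision provides it, otherwise keep the legacy PIL constants.

```python
try:
    from torchvision.transforms import InterpolationMode
    _str_to_interp = {
        'nearest': InterpolationMode.NEAREST,
        'bilinear': InterpolationMode.BILINEAR,
        'bicubic': InterpolationMode.BICUBIC,
    }
except ImportError:
    from PIL import Image
    _str_to_interp = {
        'nearest': Image.NEAREST,
        'bilinear': Image.BILINEAR,
        'bicubic': Image.BICUBIC,
    }

def str_to_interp_mode(mode: str):
    # accept plain strings like 'bicubic' and return whatever the installed
    # torchvision version expects for its interpolation argument
    return _str_to_interp[mode]
```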
Ross Wightman
ed41d32637 Add repr to auto_augment and random_erasing impl 2021-10-28 17:33:36 -07:00
Ross Wightman
ae72d009fa Add weights for lambda_resnet50ts, halo2botnet50ts, lamhalobotnet50ts, updated halonet50ts 2021-10-27 22:08:54 -07:00
Ross Wightman
b745d30a3e Fix formatting of last commit 2021-10-25 15:15:14 -07:00
Ross Wightman
3478f1d7f1 Traceability fix for vit models for some experiments 2021-10-25 15:13:08 -07:00
Ross Wightman
f658a72e72 Cleanup re-use of Dropout modules in Mlp modules after some twitter feedback :p 2021-10-25 00:40:59 -07:00
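A sketch of the cleaned-up pattern (argument handling trimmed): two distinct Dropout modules rather than one instance reused after both linear layers, which keeps per-layer drop rates and the module repr unambiguous.

```python
import torch.nn as nn

class Mlp(nn.Module):
    # Simplified sketch of a timm-style MLP block with separate dropouts.
    def __init__(self, in_features, hidden_features, drop=0.):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = nn.GELU()
        self.drop1 = nn.Dropout(drop)
        self.fc2 = nn.Linear(hidden_features, in_features)
        self.drop2 = nn.Dropout(drop)

    def forward(self, x):
        return self.drop2(self.fc2(self.drop1(self.act(self.fc1(x)))))
```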
Thomas Viehmann
f805ba86d9 use .unbind instead of explicitly listing the indices 2021-10-24 21:08:47 +02:00
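The idiom in question, with illustrative shapes:

```python
import torch

B, num_heads, N, head_dim = 2, 4, 16, 32
qkv = torch.randn(3, B, num_heads, N, head_dim)

# Tensor.unbind replaces the explicit indexing q, k, v = qkv[0], qkv[1], qkv[2]
q, k, v = qkv.unbind(0)
```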
Ross Wightman
57992509f9 Fix some formatting in utils/model.py 2021-10-23 20:35:36 -07:00
Ross Wightman
0fe4fd3f1f add d8 and e8 regnetz models with group size 8 2021-10-23 20:34:21 -07:00
Ross Wightman
25e7c8c5e5 Update broken resnetv2_50 weight url, add resnetv2_101 a1h recipe weights for 224x224 train 2021-10-20 22:14:12 -07:00
Ross Wightman
b6caa356d2 Fixed eca_botnext26ts_256 weights added, 79.27 2021-10-19 12:44:28 -07:00
Ross Wightman
c02334d9fa Add weights for regnetz_d and haloregnetz_c, update regnetz_c weights. Add commented PyTorch XLA code for halo attention 2021-10-19 12:32:09 -07:00
Ross Wightman
02daf2ab94 Add option to include relative pos embedding in the attention scaling as per references. See discussion #912 2021-10-12 15:37:01 -07:00
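Roughly the distinction being made optional (purely illustrative shapes; see discussion #912 for the referenced papers): whether only the content logits are scaled, or the relative-position term shares the same scaling.

```python
import torch

B, heads, N, d = 2, 4, 16, 32
q = torch.randn(B, heads, N, d)
k = torch.randn(B, heads, N, d)
rel_pos_bias = torch.randn(heads, N, N)  # illustrative relative-position logits
scale = d ** -0.5

attn_default = (q @ k.transpose(-2, -1)) * scale + rel_pos_bias   # rel pos added unscaled
attn_option = (q @ k.transpose(-2, -1) + rel_pos_bias) * scale    # rel pos inside the scaling
```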
masafumi
047a5ec05f Fix bug where Mixup does not work with device=cpu 2021-10-12 23:51:46 +09:00
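For reference, a hedged usage sketch on CPU tensors (argument values illustrative):

```python
import torch
from timm.data import Mixup

mixup = Mixup(mixup_alpha=0.8, cutmix_alpha=1.0, label_smoothing=0.1, num_classes=10)

images = torch.randn(8, 3, 224, 224)      # plain CPU tensors, no .cuda() required
targets = torch.randint(0, 10, (8,))
mixed_images, soft_targets = mixup(images, targets)
```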
Ross Wightman
cd34913278 Remove some outdated comments, botnet networks working great now. 2021-10-11 22:43:41 -07:00
Ross Wightman
6ed4cdccca Update lambda_resnet26t weights with better set 2021-10-10 16:32:54 -07:00
ICLR Author
44d6d51668 Add ConvMixer 2021-10-09 21:09:51 -04:00
Ross Wightman
a85df34993 Update lambda_resnet26rpt weights to 78.9, add better halonet26t weights at 79.1 with tweak to attention dim 2021-10-08 17:44:13 -07:00
Ross Wightman
b544ad4d3f regnetz model default cfg tweaks 2021-10-06 21:14:59 -07:00
Ross Wightman
e5da481073 Small post-merge tweak for freeze/unfreeze, add to __init__ for utils 2021-10-06 17:00:27 -07:00
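Assuming the helpers are exported as described (exact signatures are an assumption, submodule names are illustrative), usage would look roughly like:

```python
import timm
from timm.utils import freeze, unfreeze  # exposed via utils __init__ per the commit above

model = timm.create_model('resnet50', pretrained=False)
freeze(model, ['layer1', 'layer2'])   # freeze selected submodules (incl. their BN stats)
unfreeze(model, ['layer2'])           # selectively unfreeze again
```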
Ross Wightman
5ca72dcc75 Merge branch 'freeze-functionality' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-freeze-functionality 2021-10-06 16:51:03 -07:00
Ross Wightman
e2b8d44ff0 Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
* align interfaces of halo, bottleneck attn and lambda layer
* add qk_ratio to all of above, control q/k dim relative to output dim
* add experimental haloregnetz, and trionet (lambda + halo + bottle) models
2021-10-06 16:32:48 -07:00
Alexander Soare
431e60c83f Add acknowledgements for freeze_batch_norm inspiration 2021-10-06 14:28:49 +01:00