Ross Wightman
02daf2ab94
Add option to include relative pos embedding in the attention scaling as per references. See discussion #912
2021-10-12 15:37:01 -07:00
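The scaling option referenced in the commit above can be illustrated with a small sketch. This is a hedged toy example, not the timm implementation; `attn_logits` and `scale_bias` are hypothetical names. The difference is whether the relative position bias is added before or after the dot product is scaled.

```python
import torch

def attn_logits(q, k, rel_bias, scale_bias=False):
    """Toy scaled dot-product attention logits with a relative position bias.

    scale_bias=True folds the bias into the attention scaling, as some
    reference implementations do; False adds the bias after scaling.
    Hypothetical sketch, not timm's code.
    """
    scale = q.shape[-1] ** -0.5
    if scale_bias:
        # bias included in the scaling
        return (q @ k.transpose(-2, -1) + rel_bias) * scale
    # bias added after the scaled dot product
    return q @ k.transpose(-2, -1) * scale + rel_bias
```

The two variants differ only in whether the bias term is multiplied by the `1/sqrt(d)` factor, which can matter for matching reference checkpoints.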
Ross Wightman
cd34913278
Remove some outdated comments, botnet networks working great now.
2021-10-11 22:43:41 -07:00
Ross Wightman
6ed4cdccca
Update lambda_resnet26t weights with better set
2021-10-10 16:32:54 -07:00
ICLR Author
44d6d51668
Add ConvMixer
2021-10-09 21:09:51 -04:00
Ross Wightman
a85df34993
Update lambda_resnet26rpt weights to 78.9, add better halonet26t weights at 79.1 with tweak to attention dim
2021-10-08 17:44:13 -07:00
Ross Wightman
b544ad4d3f
regnetz model default cfg tweaks
2021-10-06 21:14:59 -07:00
Ross Wightman
e2b8d44ff0
Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
...
* align interfaces of halo, bottleneck attn and lambda layer
* add qk_ratio to all of above, control q/k dim relative to output dim
* add experimental haloregnetz, and trionet (lambda + halo + bottle) models
2021-10-06 16:32:48 -07:00
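The qk_ratio knob described in the commit above can be sketched as a small helper. `qk_head_dim` is a hypothetical name, not the timm API; the idea is simply to size the query/key dim as a fraction of the output (value) dim.

```python
def qk_head_dim(dim_out, num_heads, qk_ratio=1.0):
    """Per-head query/key dim as a fraction of the output dim.

    qk_ratio < 1.0 shrinks q/k relative to the output, making the
    attention matmuls cheaper. Hypothetical sketch, not timm's code.
    """
    dim_qk = int(dim_out * qk_ratio)
    assert dim_qk % num_heads == 0, 'q/k dim must divide evenly across heads'
    return dim_qk // num_heads
```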
Ross Wightman
fbf59c04ee
Change crop ratio on correct resnet50 variant.
2021-10-04 22:31:08 -07:00
Ross Wightman
ae1ff5792f
Clean a1/a2/a3 RSB _0 checkpoints properly, fix v2 loading.
2021-10-04 16:46:00 -07:00
Ross Wightman
da0d39bedd
Update default crop_pct for byoanet
2021-10-03 17:33:16 -07:00
Ross Wightman
cc9bedf373
Add initial ResNet Strikes Back weights for ResNet50 and ResNetV2-50 models
2021-10-03 17:32:02 -07:00
Ross Wightman
64495505b7
Add updated lambda resnet26 and botnet26 checkpoints with fixes applied
2021-10-03 17:31:39 -07:00
Ross Wightman
b2094f4ee8
Support bits checkpoints in avg/load
2021-10-03 17:31:22 -07:00
Ross Wightman
007bc39323
Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios
2021-10-02 15:51:42 -07:00
Ross Wightman
b1c2e3eb92
Match rel_pos_indices attr rename in conv branch
2021-09-30 23:19:05 -07:00
Ross Wightman
b49630a138
Add relative pos embed option to LambdaLayer, fix last transpose/reshape.
2021-09-30 22:45:09 -07:00
Ross Wightman
d657e2cc0b
Remove dead code line from efficientnet
2021-09-30 21:54:42 -07:00
Ross Wightman
0ca687f224
Make 'regnetz' model experiments closer to actual RegNetZ, bottleneck expansion, expand from in_chs, no shortcut on stride 2, tweak model sizes
2021-09-30 21:49:38 -07:00
Ross Wightman
b81e79aae9
Fix bottleneck attn transpose typo, hopefully these train better now...
2021-09-28 16:38:41 -07:00
Ross Wightman
6478bcd02c
Fix regnetz_d conv layer name, use inception mean/std
2021-09-26 14:54:17 -07:00
Ross Wightman
515121cca1
Use reshape instead of view in std_conv, causing issues in recent PyTorch in channels_last
2021-09-23 15:43:48 -07:00
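The view-vs-reshape fix above comes down to contiguity: `Tensor.view` requires strides compatible with a contiguous layout, while `Tensor.reshape` falls back to a copy when needed. A minimal illustration with generic tensors (not the actual std_conv code):

```python
import torch

x = torch.randn(2, 3, 4, 4)
y = x.transpose(1, 2)        # non-contiguous strided view of x

view_failed = False
try:
    y.view(-1)               # view cannot flatten incompatible strides
except RuntimeError:
    view_failed = True

z = y.reshape(-1)            # reshape copies when a pure view is impossible
```

Tensors in channels_last memory format hit the same class of stride incompatibility, which is why reshape is the safer choice there.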
Ross Wightman
da06cc61d4
ResNetV2 seems to work best without zero_init residual
2021-09-23 15:43:22 -07:00
Ross Wightman
8e11da0ce3
Add experimental RegNetZ(ish) models for training / perf trials.
2021-09-23 15:42:57 -07:00
Alexander Soare
6bbc50beb4
Make it possible to provide norm_layer via create_model
2021-09-21 10:19:04 +01:00
Ross Wightman
cf5ac2800c
BotNet models were still off, remove weights for bad configs. Add good SE-HaloNet33-TS weights.
2021-09-13 17:18:59 -07:00
Ross Wightman
24720abe3b
Merge branch 'master' into attn_update
2021-09-13 16:51:10 -07:00
Ross Wightman
1c9284c640
Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now
2021-09-13 16:38:23 -07:00
Ross Wightman
f8a215cfe6
A few more crossvit tweaks, fix training w/ no_weight_decay names, add crop option for scaling, adjust default crop_pct for large img size to 1.0 for better results
2021-09-13 14:17:34 -07:00
Ross Wightman
7ab2491ab7
Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes
2021-09-13 13:01:05 -07:00
Ross Wightman
f1808e0970
Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests
2021-09-13 11:49:54 -07:00
Ross Wightman
4027412757
Add resnet33ts weights, update resnext26ts baseline weights
2021-09-09 14:46:41 -07:00
Richard Chen
9fe5798bee
Fix bug in reset_classifier and fix dimension validation
2021-09-08 21:58:17 -04:00
Richard Chen
3718c5a5bd
Fix loading of pretrained model
2021-09-08 11:53:05 -04:00
Richard Chen
bb50b69a57
Fix for TorchScript
2021-09-08 11:20:59 -04:00
Ross Wightman
5bd04714e4
Cleanup weight init for byob/byoanet and related
2021-09-05 15:34:05 -07:00
Ross Wightman
8642401e88
Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...
2021-09-05 15:17:19 -07:00
Ross Wightman
5f12de4875
Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test.
2021-09-05 12:41:14 -07:00
Ross Wightman
76881d207b
Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4
2021-09-04 14:52:54 -07:00
Ross Wightman
484e61648d
Adding the attn series weights, tweaking model names, comments...
2021-09-03 18:09:42 -07:00
Ross Wightman
492c0a4e20
Update HaloAttn comment
2021-09-01 17:14:31 -07:00
Richard Chen
7ab9d4555c
Add CrossViT
2021-09-01 17:13:12 -04:00
Ross Wightman
3b9032ea48
Use Tensor.unfold().unfold() for HaloAttn; as fast as as_strided but with more clarity
2021-08-27 12:45:53 -07:00
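The unfold pattern above extracts spatial blocks without as_strided's raw stride arithmetic. A small sketch with a hypothetical block size (non-overlapping 4x4 blocks from an 8x8 feature map):

```python
import torch

x = torch.arange(64.).reshape(1, 1, 8, 8)   # (B, C, H, W)
block = 4
# unfold along H then W: result is (B, C, H//block, W//block, block, block)
patches = x.unfold(2, block, block).unfold(3, block, block)
```

Chaining `unfold` along each spatial dim yields an indexable grid of blocks, and halo-style overlapping windows fall out of the same call with a step smaller than the window size.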
Ross Wightman
78933122c9
Fix silly typo
2021-08-27 09:22:20 -07:00
Ross Wightman
2568ffc5ef
Merge branch 'master' into attn_update
2021-08-27 09:21:22 -07:00
Ross Wightman
708d87a813
Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test.
2021-08-27 09:20:13 -07:00
Ross Wightman
8449ba210c
Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
2021-08-26 21:56:44 -07:00
Ross Wightman
a8b65695f1
Add resnet26ts and resnext26ts models for non-attn baselines
2021-08-21 12:42:10 -07:00
Ross Wightman
a5a542f17d
Fix typo
2021-08-20 17:47:23 -07:00
Ross Wightman
925e102982
Update attention / self-attn based models from a series of experiments:
...
* remove dud attention, involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
2021-08-20 16:13:11 -07:00
Ross Wightman
01cb46a9a5
Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
2021-08-07 16:45:29 -07:00