Commit Graph

624 Commits (5bd04714e46bf5db5dfb2137eec3b86134deb2fa)

Author SHA1 Message Date
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related 2021-09-05 15:34:05 -07:00
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in the arch def; now figuring out why they were so low... 2021-09-05 15:17:19 -07:00
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment; still trying to improve reliability of the SGD test. 2021-09-05 12:41:14 -07:00
Ross Wightman 76881d207b Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4 2021-09-04 14:52:54 -07:00
Ross Wightman 484e61648d Adding the attn series weights, tweaking model names, comments... 2021-09-03 18:09:42 -07:00
Ross Wightman fb94350896 Update training script and loader factory to allow use of scheduler updates, repeat augment, and bce loss 2021-09-01 17:46:40 -07:00
Ross Wightman f262137ff2 Add RepeatAugSampler as per DeiT RASampler impl, showing promise for current (distributed) training experiments. 2021-09-01 17:40:53 -07:00
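
The repeated-augmentation idea behind RepeatAugSampler: each image index appears several times in an epoch's index stream, and the stream is sharded across distributed ranks, so replicas see different augmentations of the same images. A minimal sketch of the idea, with illustrative names, not the RepeatAugSampler API:

```python
import torch

def repeat_aug_indices(dataset_len, num_repeats=3, rank=0, world_size=1, seed=0):
    # Shuffle once, repeat each index num_repeats times, then shard
    # across ranks so replicas draw different augmentations of the
    # same underlying images.
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(dataset_len, generator=g)
    indices = perm.repeat_interleave(num_repeats)
    return indices[rank::world_size].tolist()
```
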
Ross Wightman ba9c1108a1 Add a BCE loss impl that converts dense targets to sparse w/ smoothing as an alternative to CE w/ smoothing. For training experiments. 2021-09-01 17:39:28 -07:00
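
The BCE-with-smoothing alternative above can be sketched as below; this is a minimal illustration of the idea, not timm's exact BinaryCrossEntropy class:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BCEWithSmoothing(nn.Module):
    """Sketch: BCE over one-hot-encoded targets w/ label smoothing."""
    def __init__(self, smoothing: float = 0.1):
        super().__init__()
        self.smoothing = smoothing

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        num_classes = logits.shape[-1]
        if target.ndim == 1:  # class-index targets -> smoothed one-hot
            off = self.smoothing / num_classes
            on = 1.0 - self.smoothing + off
            target = torch.full_like(logits, off).scatter_(1, target.unsqueeze(1), on)
        return F.binary_cross_entropy_with_logits(logits, target)
```
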
Ross Wightman 29a37e23ee LR scheduler update:
* add polynomial decay 'poly'
* cleanup cycle specific args for cosine, poly, and tanh sched, t_mul -> cycle_mul, decay -> cycle_decay, default cycle_limit to 1 in each opt
* add k-decay for cosine and poly sched as per https://arxiv.org/abs/2004.05909
* change default tanh ub/lb to push inflection to later epochs
2021-09-01 17:33:11 -07:00
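
For reference, polynomial decay with the k-decay modification from arXiv:2004.05909 can be written as below; k=1 recovers plain poly decay, and the parameter names are illustrative rather than the timm scheduler signature:

```python
def poly_lr(t, t_max, lr_max, lr_min=0.0, power=0.5, k=1.0):
    # k-decay warps the time axis (t^k / T^k) before applying the
    # polynomial decay curve, per the paper's polynomial form.
    frac = (t ** k) / (t_max ** k)
    return lr_min + (lr_max - lr_min) * (1.0 - frac) ** power
```
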
Ross Wightman 492c0a4e20 Update HaloAttn comment 2021-09-01 17:14:31 -07:00
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn; as fast as as_strided but clearer 2021-08-27 12:45:53 -07:00
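
The double-unfold trick extracts overlapping (block + 2*halo) windows using plain strided views. A minimal sketch of the window extraction only, not HaloAttn itself:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 32, 16, 16)          # (B, C, H, W)
block, halo = 4, 1
win = block + 2 * halo                   # overlapping window size

# Pad so each block can gather its halo, then extract overlapping
# (win x win) patches with two strided unfolds (dim H, then dim W).
xp = F.pad(x, (halo, halo, halo, halo))
patches = xp.unfold(2, win, block).unfold(3, win, block)
print(patches.shape)                     # (2, 32, 4, 4, 6, 6)
```
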
Ross Wightman 2568ffc5ef Merge branch 'master' into attn_update 2021-08-27 09:21:22 -07:00
Ross Wightman 708d87a813 Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test. 2021-08-27 09:20:13 -07:00
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc). 2021-08-26 21:56:44 -07:00
Ross Wightman a8b65695f1 Add resnet26ts and resnext26ts models for non-attn baselines 2021-08-21 12:42:10 -07:00
Ross Wightman a5a542f17d Fix typo 2021-08-20 17:47:23 -07:00
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments:
* remove dud attention variants; involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
2021-08-20 16:13:11 -07:00
Ross Wightman d667351eac Tweak accuracy topk safety. Fix #807 2021-08-19 14:18:53 -07:00
Yohann Lereclus 35c9740826 Fix accuracy when topk > num_classes 2021-08-19 11:58:59 +02:00
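
The topk safety fix in the two commits above amounts to clamping k by the number of classes, so topk=(1, 5) no longer crashes on a head with fewer than 5 classes. A sketch close to the fix:

```python
import torch

def accuracy(output, target, topk=(1, 5)):
    # Clamp k so e.g. top-5 accuracy works on a 2-class head.
    maxk = min(max(topk), output.size(1))
    _, pred = output.topk(maxk, dim=1)
    correct = pred.t().eq(target.reshape(1, -1))
    return [correct[:min(k, maxk)].reshape(-1).float().sum(0) * 100. / target.size(0)
            for k in topk]
```
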
Ross Wightman a16a753852 Add lamb/lars to optim init imports, remove stray comment 2021-08-18 22:55:02 -07:00
Ross Wightman c207e02782 MOAR optimizer changes. Woo! 2021-08-18 22:20:35 -07:00
Ross Wightman a426511c95 More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars. 2021-08-18 17:21:56 -07:00
Ross Wightman 9541f4963b One more scalar -> tensor fix for lamb optimizer 2021-08-18 11:20:25 -07:00
Ross Wightman 8f68193c91 Update lamb.py comment 2021-08-18 09:27:40 -07:00
Ross Wightman 4d284017b8 Merge pull request #813 from rwightman/opt_cleanup
Optimizer cleanup and additions
2021-08-18 09:12:00 -07:00
Ross Wightman a6af48be64 add madgradw optimizer 2021-08-17 22:19:27 -07:00
Ross Wightman 55fb5eedf6 Remove experiment from lamb impl 2021-08-17 21:48:26 -07:00
Ross Wightman 8a9eca5157 A few optimizer comments, dead import, missing import 2021-08-17 18:01:33 -07:00
Ross Wightman ac469b50da Optimizer improvements, additions, cleanup
* Add MADGRAD code
* Fix Lamb (non-fused variant) to work w/ PyTorch XLA
* Tweak optimizer factory args (lr/learning_rate and opt/optimizer_name), may break compat
* Use newer fn signatures for all add, addcdiv, addcmul in optimizers
* Use upcoming PyTorch native Nadam if it's available
* Cleanup lookahead opt
* Add optimizer tests
* Remove novograd.py impl as it was messy, keep nvnovograd
* Make AdamP/SGDP work in channels_last layout
* Add rectified adabelief mode (radabelief)
* Support a few more PyTorch optim, adamax, adagrad
2021-08-17 17:51:20 -07:00
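
The `.data` removal and the newer add/addcdiv/addcmul signatures mentioned above look roughly like this; an illustrative Adam-style update step, not any one timm optimizer:

```python
import torch

p = torch.randn(4, requires_grad=True)
grad = torch.randn(4)
exp_avg, exp_avg_sq = torch.zeros(4), torch.ones(4)
lr, eps = 1e-3, 1e-8

# Old style, deprecated positional alpha/value args and .data access:
#   p.data.addcdiv_(-lr, exp_avg, exp_avg_sq.sqrt().add_(eps))
# Newer keyword signatures, no .data, update under no_grad:
with torch.no_grad():
    exp_avg.mul_(0.9).add_(grad, alpha=0.1)
    p.addcdiv_(exp_avg, exp_avg_sq.sqrt().add_(eps), value=-lr)
```
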
Sepehr Sameni abf3e044bb Update scheduler_factory.py
remove duplicate code from create_scheduler()
2021-08-14 22:53:17 +02:00
Ross Wightman 3cdaf5ed56 Add `mmax` config key to auto_augment for increasing the upper bound of RandAugment magnitude beyond 10. Make AugMix uniform sampling the default without overriding an explicit config setting. 2021-08-12 15:39:05 -07:00
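
Usage sketch for the new `mmax` key, assuming it is parsed from the auto-augment config string like the other magnitude keys:

```python
from timm.data import create_transform

# 'mmax20' raises the RandAugment magnitude ceiling above the default
# of 10, making 'm15' meaningful; key semantics per the commit above.
train_tfms = create_transform(
    input_size=224,
    is_training=True,
    auto_augment='rand-m15-mmax20-mstd0.5',
)
```
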
Ross Wightman 1042b8a146 Add non fused LAMB optimizer option 2021-08-09 13:13:43 -07:00
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. 2021-08-07 16:45:29 -07:00
Ross Wightman d3f7440650 Add EfficientNetV2 XL model defs 2021-07-22 13:15:24 -07:00
Ross Wightman 72b227dcf5 Merge pull request #750 from drjinying/master
Specify "interpolation" mode in vision_transformer's resize_pos_embed
2021-07-13 11:01:20 -07:00
Ross Wightman 2907c1f967 Merge pull request #746 from samarth4149/master
Adding a Multi Step LR Scheduler
2021-07-13 10:55:54 -07:00
Ross Wightman 748ab852ca Allow act_layer switch for xcit, fix in_chans for some variants 2021-07-12 13:27:29 -07:00
Ying Jin 20b2d4b69d Use bicubic interpolation in resize_pos_embed() 2021-07-12 10:38:31 -07:00
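
The position-embedding resize boils down to reshaping the token sequence into its 2D grid and interpolating it. A sketch of the grid resize only; timm's resize_pos_embed also handles the class token:

```python
import torch
import torch.nn.functional as F

posemb = torch.randn(1, 14 * 14, 768)     # ViT pos embeddings, old 14x14 grid
gs_old, gs_new = 14, 24

grid = posemb.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
grid = F.interpolate(grid, size=(gs_new, gs_new),
                     mode='bicubic', align_corners=False)
posemb_new = grid.permute(0, 2, 3, 1).reshape(1, gs_new * gs_new, -1)
```
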
Ross Wightman d3255adf8e Merge branch 'xcit' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-xcit 2021-07-12 08:30:30 -07:00
Ross Wightman f8039c7492 Fix gc effv2 model cfg name 2021-07-11 12:14:31 -07:00
Alexander Soare 3a55a30ed1 add notes from author 2021-07-11 14:25:58 +01:00
Alexander Soare 899cf84ccc bug fix - missing _dist postfix for many of the 224_dist models 2021-07-11 12:41:51 +01:00
Alexander Soare 623e8b8eb8 wip xcit 2021-07-11 09:39:38 +01:00
Ross Wightman 392368e210 Add efficientnetv2_rw_t defs w/ weights, and gc variant, as well as gcresnet26ts for experiments. Version 0.4.13 2021-07-09 16:46:52 -07:00
samarth daab57a6d9 1. Added a simple multi step LR scheduler 2021-07-09 16:18:27 -04:00
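
Multi-step decay drops the LR by a factor gamma at each milestone epoch. A one-liner sketch with illustrative names, not the MultiStepLRScheduler signature:

```python
import bisect

def multistep_lr(epoch, base_lr, milestones=(30, 60, 90), gamma=0.1):
    # Count how many milestones have passed and decay accordingly.
    return base_lr * gamma ** bisect.bisect_right(sorted(milestones), epoch)
```
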
Ross Wightman 6d8272e92c Add SAM pretrained model defs/weights for ViT B16 and B32 models. 2021-07-08 11:51:12 -07:00
Ross Wightman ee4d8fc69a Remove unnecessary line from nest post refactor 2021-07-05 21:22:46 -07:00
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed. 2021-07-05 18:21:34 -07:00
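
A common formulation for LayerNorm over NCHW feature maps is to normalize in channels-last order and permute back; a sketch of that pattern, assumed close to the fix rather than copied from it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerNorm2d(nn.LayerNorm):
    """LayerNorm for NCHW tensors: move channels last, normalize, move back."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.permute(0, 2, 3, 1)
        x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
        return x.permute(0, 3, 1, 2)
```
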
Ross Wightman 81cd6863c8 Move aggregation (convpool) for nest into NestLevel, cleanup and enable features_only use. Finalize weight url. 2021-07-05 18:20:49 -07:00
Ross Wightman 6ae0ac6420 Merge branch 'nested_transformer' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-nested_transformer 2021-07-03 12:45:26 -07:00