Commit Graph

1048 Commits (adcb74f87f09cb9db75b62a8690b53ae1552dda5)

Author SHA1 Message Date
nateraw adcb74f87f 🎨 Import load_state_dict_from_url directly 2021-09-14 01:11:40 -04:00
nateraw e65a2cba3d 🎨 cleanup and add a couple comments 2021-09-14 01:07:04 -04:00
nateraw 2b6ade24b3 🎨 write model card to enable inference 2021-09-13 23:31:28 -04:00
nateraw abf9d51bc3 🚧 wip 2021-09-07 18:39:26 -06:00
nateraw 28d2841acf 💄 apply isort 2021-09-01 18:15:08 -06:00
nateraw e72c989973 add ability to push to hf hub 2021-09-01 18:14:28 -06:00
Ross Wightman 78933122c9 Fix silly typo 2021-08-27 09:22:20 -07:00
Ross Wightman 708d87a813 Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test. 2021-08-27 09:20:13 -07:00
Ross Wightman acd6c687fd Merge branch 'yohann84L-fix_accuracy' 2021-08-19 14:26:23 -07:00
Ross Wightman d667351eac Tweak accuracy topk safety. Fix #807 2021-08-19 14:18:53 -07:00
Yohann Lereclus 35c9740826 Fix accuracy when topk > num_classes 2021-08-19 11:58:59 +02:00
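The two commits above (35c9740826 and the follow-up safety tweak d667351eac) address top-k accuracy failing when `topk` exceeds the number of classes. A pure-Python sketch of the clamping idea — not timm's actual torch-based `accuracy()` helper — looks like this:

```python
def topk_accuracy(scores, targets, topk=(1, 5)):
    """Sketch of top-k accuracy with the safety clamp from the fix:
    cap k at num_classes so asking for top-5 on a 3-class output can't fail."""
    num_classes = len(scores[0])
    maxk = min(max(topk), num_classes)  # the clamp: never rank more than num_classes
    # rank class indices by score, highest first, keeping only the top maxk
    ranked = [sorted(range(num_classes), key=s.__getitem__, reverse=True)[:maxk]
              for s in scores]
    # for each requested k, count samples whose target is within the top min(k, maxk)
    return [sum(t in r[:min(k, maxk)] for r, t in zip(ranked, targets)) / len(targets)
            for k in topk]
```

With the clamp, top-5 accuracy on a 3-class problem degrades gracefully to top-3 instead of raising an error.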
Ross Wightman a16a753852 Add lamb/lars to optim init imports, remove stray comment 2021-08-18 22:55:02 -07:00
Ross Wightman c207e02782 MOAR optimizer changes. Woo! 2021-08-18 22:20:35 -07:00
Ross Wightman 42c1f0cf6c Fix lars tests 2021-08-18 21:05:34 -07:00
Ross Wightman a426511c95 More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars. 2021-08-18 17:21:56 -07:00
Ross Wightman 9541f4963b One more scalar -> tensor fix for lamb optimizer 2021-08-18 11:20:25 -07:00
Ross Wightman 8f68193c91 Update lamb.py comment 2021-08-18 09:27:40 -07:00
Ross Wightman 4d284017b8 Merge pull request #813 from rwightman/opt_cleanup: Optimizer cleanup and additions 2021-08-18 09:12:00 -07:00
Ross Wightman a6af48be64 add madgradw optimizer 2021-08-17 22:19:27 -07:00
Ross Wightman 55fb5eedf6 Remove experiment from lamb impl 2021-08-17 21:48:26 -07:00
Ross Wightman 8a9eca5157 A few optimizer comments, dead import, missing import 2021-08-17 18:01:33 -07:00
Ross Wightman 959eaff121 Add optimizer tests and update testing to pytorch 1.9 2021-08-17 17:59:15 -07:00
Ross Wightman ac469b50da Optimizer improvements, additions, cleanup
* Add MADGRAD code
* Fix Lamb (non-fused variant) to work w/ PyTorch XLA
* Tweak optimizer factory args (lr/learning_rate and opt/optimizer_name), may break compat
* Use newer fn signatures for all add, addcdiv, addcmul in optimizers
* Use upcoming PyTorch native Nadam if it's available
* Cleanup lookahead opt
* Add optimizer tests
* Remove novograd.py impl as it was messy, keep nvnovograd
* Make AdamP/SGDP work in channels_last layout
* Add rectified adabelief mode (radabelief)
* Support a few more PyTorch optim, adamax, adagrad
2021-08-17 17:51:20 -07:00
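One item in commit ac469b50da is a compat-breaking tweak to optimizer factory args (lr/learning_rate, opt/optimizer_name). A minimal, hypothetical sketch of such name aliasing — illustrative only, not timm's exact factory signature — might look like:

```python
def resolve_optimizer_args(**kwargs):
    """Accept both the old and new argument names, preferring the newer ones.
    Names and defaults here are hypothetical, not timm's exact API."""
    lr = kwargs.get('learning_rate', kwargs.get('lr', None))
    opt = kwargs.get('optimizer_name', kwargs.get('opt', 'sgd'))
    return {'opt': opt, 'lr': lr}
```

Callers using either naming convention keep working, which is the usual way to soften a rename that "may break compat".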
Ross Wightman 368211d19a Merge pull request #805 from Separius/patch-1: Remove duplicate code in create_scheduler 2021-08-15 12:51:43 -07:00
Sepehr Sameni abf3e044bb Update scheduler_factory.py: remove duplicate code from create_scheduler() 2021-08-14 22:53:17 +02:00
Ross Wightman 3cdaf5ed56 Add `mmax` config key to auto_augment for increasing upper bound of RandAugment magnitude beyond 10. Make AugMix uniform sampling default not override config setting. 2021-08-12 15:39:05 -07:00
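Commit 3cdaf5ed56 adds an `mmax` key to the AutoAugment/RandAugment config string so magnitude can exceed the classic upper bound of 10. A simplified parser sketch of that config-string convention (timm's real parsing lives in its auto_augment module; this is an illustration, not the full grammar):

```python
def parse_rand_augment_cfg(cfg_str):
    """Parse a timm-style RandAugment config string, e.g. 'rand-m9-mstd0.5-mmax20'.
    Simplified sketch: only the magnitude-related keys are handled."""
    parts = cfg_str.split('-')
    assert parts[0] == 'rand', 'config string must start with "rand"'
    cfg = {'magnitude': 9, 'magnitude_max': 10}  # classic ceiling of 10 by default
    for p in parts[1:]:
        if p.startswith('mstd'):            # std-dev of noise added to magnitude
            cfg['magnitude_std'] = float(p[len('mstd'):])
        elif p.startswith('mmax'):          # the new key: raise the magnitude ceiling
            cfg['magnitude_max'] = int(p[len('mmax'):])
        elif p.startswith('m'):             # base magnitude
            cfg['magnitude'] = int(p[1:])
    return cfg
```

Note the prefix checks test `mstd`/`mmax` before the bare `m`, since all three share the same leading letter.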
Ross Wightman 1042b8a146 Add non-fused LAMB optimizer option 2021-08-09 13:13:43 -07:00
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. 2021-08-07 16:45:29 -07:00
Ross Wightman bd56946676 Update README.md 2021-07-28 09:00:48 -07:00
Ross Wightman d3f7440650 Add EfficientNetV2 XL model defs 2021-07-22 13:15:24 -07:00
Ross Wightman ef1e2e12be Attempt to fix xcit test failures on github runner by filtering out the largest models 2021-07-13 16:33:55 -07:00
Ross Wightman 72b227dcf5 Merge pull request #750 from drjinying/master: Specify "interpolation" mode in vision_transformer's resize_pos_embed 2021-07-13 11:01:20 -07:00
Ross Wightman 2907c1f967 Merge pull request #746 from samarth4149/master: Adding a Multi Step LR Scheduler 2021-07-13 10:55:54 -07:00
Ross Wightman 5aca7c01e5 Update README.md 2021-07-12 13:33:02 -07:00
Ross Wightman 763329f23f Merge branch 'alexander-soare-xcit' 2021-07-12 13:28:15 -07:00
Ross Wightman 748ab852ca Allow act_layer switch for xcit, fix in_chans for some variants 2021-07-12 13:27:29 -07:00
Ying Jin 20b2d4b69d Use bicubic interpolation in resize_pos_embed() 2021-07-12 10:38:31 -07:00
Ross Wightman d3255adf8e Merge branch 'xcit' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-xcit 2021-07-12 08:30:30 -07:00
Ross Wightman f8039c7492 Fix gc effv2 model cfg name 2021-07-11 12:14:31 -07:00
Alexander Soare 3a55a30ed1 add notes from author 2021-07-11 14:25:58 +01:00
Alexander Soare 899cf84ccc bug fix - missing _dist postfix for many of the 224_dist models 2021-07-11 12:41:51 +01:00
Alexander Soare 623e8b8eb8 wip xcit 2021-07-11 09:39:38 +01:00
Ross Wightman 392368e210 Add efficientnetv2_rw_t defs w/ weights, and gc variant, as well as gcresnet26ts for experiments. Version 0.4.13 2021-07-09 16:46:52 -07:00
samarth daab57a6d9 Added a simple multi-step LR scheduler 2021-07-09 16:18:27 -04:00
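The multi-step scheduler added in daab57a6d9 (merged via PR #746) decays the learning rate by a constant factor at fixed milestone epochs. A torch-free sketch of the schedule's math — not the exact timm `MultiStepLRScheduler` implementation:

```python
import bisect

def multistep_lr(base_lr, milestones, gamma, epoch):
    """Multi-step LR schedule: multiply base_lr by gamma once per milestone
    already reached. bisect_right counts milestones <= epoch, so the decay
    takes effect at the milestone epoch itself. Sketch only."""
    return base_lr * gamma ** bisect.bisect_right(milestones, epoch)
```

For example, with `base_lr=0.8`, milestones `[30, 60]`, and `gamma=0.5`, the LR is 0.8 for epochs 0-29, 0.4 for 30-59, and 0.2 from epoch 60 on.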
Ross Wightman 6d8272e92c Add SAM pretrained model defs/weights for ViT B16 and B32 models. 2021-07-08 11:51:12 -07:00
Ross Wightman ee4d8fc69a Remove unnecessary line from nest post refactor 2021-07-05 21:22:46 -07:00
Ross Wightman c8ec1ffcb9 Merge branch 'alexander-soare-nested_transformer' 2021-07-05 18:22:50 -07:00
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed. 2021-07-05 18:21:34 -07:00
Ross Wightman 81cd6863c8 Move aggregation (convpool) for nest into NestLevel, cleanup and enable features_only use. Finalize weight url. 2021-07-05 18:20:49 -07:00
Ross Wightman 6ae0ac6420 Merge branch 'nested_transformer' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-nested_transformer 2021-07-03 12:45:26 -07:00