Commit Graph

119 Commits (de5fa791c623eaa08760672b51e605781342195f)

Author SHA1 Message Date
Ross Wightman cd059cbe9c Add FX backward tests back 2021-12-01 14:58:56 -08:00
Ross Wightman 58ffa2bfb7 Update pytest for GitHub runner to use --forked with xdist, hopefully eliminate memory buildup 2021-12-01 12:09:23 -08:00
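The memory-buildup fix in the commit above amounts to running the suite with per-test process isolation. A plausible invocation (flag names are real pytest-xdist / pytest-forked options, but the worker count and test path are assumptions, not taken from the actual CI config):

```shell
# Run tests across parallel workers (pytest-xdist's -n) while forking a
# fresh subprocess per test (pytest-forked's --forked), so each test's
# memory is reclaimed by the OS when its process exits.
pytest -n 4 --forked tests/test_models.py
```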
Ross Wightman f7d210d759 Remove evonorm models from FX tests 2021-11-24 13:21:24 -08:00
Ross Wightman f83b0b01e3 Would like to pass GitHub tests again, so disabling both FX feature-extract backward and torchscript tests 2021-11-23 22:24:58 -08:00
Ross Wightman 147e1059a8 Remove FX backward test from GitHub actions runs for now. 2021-11-23 14:32:32 -08:00
Ross Wightman 878bee1d5e Add patch8 vit model to FX exclusion filter 2021-11-22 14:00:27 -08:00
Ross Wightman ce76a810c2 New FX test strategy, filter based on param count 2021-11-22 11:48:40 -08:00
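The param-count filtering strategy referenced above can be sketched as follows. This is a minimal stand-in, not timm's actual test code: the model names, counts, and the 100M threshold are all invented for illustration.

```python
# Hypothetical sketch: skip models whose parameter count exceeds a budget,
# so memory-constrained CI runners (e.g. GitHub Actions) are not OOM-killed.
MAX_PARAMS = 100_000_000  # assumed threshold, not the value actually used

def models_to_test(param_counts):
    """Given {model_name: n_params}, return the names small enough for CI."""
    return sorted(name for name, n in param_counts.items() if n <= MAX_PARAMS)

# Illustrative counts only; not exact figures for these architectures.
counts = {"vit_base": 86_000_000, "beit_large": 304_000_000, "nfnet_f2": 194_000_000}
print(models_to_test(counts))  # ['vit_base']
```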
Ross Wightman 1e51c2d02e More FX test tweaks 2021-11-22 09:46:43 -08:00
Ross Wightman 90448031ea Filter more large models from FX tests 2021-11-21 21:26:44 -08:00
Ross Wightman 8dc269c303 Filter more models for FX tests 2021-11-21 19:49:33 -08:00
Ross Wightman 2482652027 Add nfnet_f2 to FX test exclusion 2021-11-21 14:08:53 -08:00
Ross Wightman 05092e2fbe Add more models to FX filter 2021-11-20 15:51:48 -08:00
Ross Wightman 3819bef93e Add FX test exclusion since it uses more ram and barfs on GitHub actions. Will take a few iterations to include needed models :( 2021-11-19 17:35:41 -08:00
Ross Wightman 9b3519545d Attempt to reduce memory footprint of FX tests for GitHub actions runs 2021-11-19 14:24:12 -08:00
Ross Wightman bdd3dff0ca beit_large models killing GitHub actions test, filter out 2021-11-19 08:39:48 -08:00
Ross Wightman f2006b2437 Cleanup qkv_bias cat in beit model so it can be traced 2021-11-18 21:25:00 -08:00
Ross Wightman 1076a65df1 Minor post FX merge cleanup 2021-11-18 19:47:07 -08:00
Alexander Soare 0262a0e8e1 fx ready for review 2021-11-13 00:06:33 +00:00
Alexander Soare d2994016e9 Add try/except guards 2021-11-12 21:16:53 +00:00
Alexander Soare b25ff96768 wip - pre-rebase 2021-11-12 20:45:05 +00:00
Alexander Soare a6c24b936b Tests to enforce all models FX traceable 2021-11-12 20:45:05 +00:00
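The traceability requirement these tests enforce boils down to `torch.fx.symbolic_trace` succeeding on every model. A minimal illustration with a toy module (not a timm model; the point is that a purely structural forward, free of data-dependent control flow, traces cleanly):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    # Toy stand-in for a timm model. Data-dependent Python control flow in
    # forward() would break symbolic tracing, so none is used here.
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(8, 4)

    def forward(self, x):
        return self.proj(x).relu()

traced = torch.fx.symbolic_trace(TinyModel())  # raises if the model is untraceable
out = traced(torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 4])
```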
Alexander Soare 6d2acec1bb Fix ordering of tests 2021-10-02 16:10:11 +01:00
Alexander Soare 65c3d78b96 Freeze unfreeze functionality finalized. Tests added 2021-10-02 15:55:08 +01:00
Ross Wightman 24720abe3b Merge branch 'master' into attn_update 2021-09-13 16:51:10 -07:00
Ross Wightman 1c9284c640 Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now 2021-09-13 16:38:23 -07:00
Ross Wightman 7ab2491ab7 Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes 2021-09-13 13:01:05 -07:00
Ross Wightman f1808e0970 Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests 2021-09-13 11:49:54 -07:00
Ross Wightman a897e0ebcc Merge branch 'feature/crossvit' of https://github.com/chunfuchen/pytorch-image-models into chunfuchen-feature/crossvit 2021-09-10 17:38:37 -07:00
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low... 2021-09-05 15:17:19 -07:00
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test. 2021-09-05 12:41:14 -07:00
Ross Wightman 54e90e82a5 Another attempt at sgd momentum test passing... 2021-09-03 20:50:26 -07:00
Richard Chen 7ab9d4555c add crossvit 2021-09-01 17:13:12 -04:00
Ross Wightman fc894c375c Another attempt at sgd momentum test passing... 2021-08-27 10:39:31 -07:00
Ross Wightman 708d87a813 Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test. 2021-08-27 09:20:13 -07:00
Ross Wightman c207e02782 MOAR optimizer changes. Woo! 2021-08-18 22:20:35 -07:00
Ross Wightman 42c1f0cf6c Fix lars tests 2021-08-18 21:05:34 -07:00
Ross Wightman a426511c95 More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars. 2021-08-18 17:21:56 -07:00
Ross Wightman a6af48be64 add madgradw optimizer 2021-08-17 22:19:27 -07:00
Ross Wightman 55fb5eedf6 Remove experiment from lamb impl 2021-08-17 21:48:26 -07:00
Ross Wightman 959eaff121 Add optimizer tests and update testing to pytorch 1.9 2021-08-17 17:59:15 -07:00
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. 2021-08-07 16:45:29 -07:00
Ross Wightman ef1e2e12be Attempt to fix xcit test failures on github runner by filtering out the largest models 2021-07-13 16:33:55 -07:00
Alexander Soare 623e8b8eb8 wip xcit 2021-07-11 09:39:38 +01:00
Alexander Soare 7b8a0017f1 wip to review 2021-07-03 12:10:12 +01:00
Ross Wightman b41cffaa93 Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713 2021-06-22 23:16:05 -07:00
Ross Wightman 381b279785 Add hybrid model fwds back 2021-06-19 22:28:44 -07:00
Ross Wightman 0020268d9b Try lower max size for non_std default_cfg test 2021-06-12 23:31:24 -07:00
Ross Wightman 8880f696b6 Refactoring, cleanup, improved test coverage.
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (ie transformer / mlp) models have classifier / default_cfg test added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
2021-06-12 16:40:02 -07:00
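The direct-npz-loading item in the commit above amounts to mapping arrays in a `.npz` checkpoint onto model parameters by name. A schematic round-trip with plain NumPy (the key names here are invented for illustration; the real JAX/Flax checkpoint layout and timm's loading logic are not shown):

```python
import io
import numpy as np

# Hypothetical sketch: save a couple of arrays under flat key names, then
# read them back the way an npz checkpoint loader would before copying
# each ndarray into the matching model parameter.
buf = io.BytesIO()
np.savez(buf, **{"embedding/kernel": np.ones((4, 8)), "head/bias": np.zeros(10)})
buf.seek(0)

ckpt = np.load(buf)
state = {key: ckpt[key] for key in ckpt.files}  # name -> ndarray
print(sorted(state))  # ['embedding/kernel', 'head/bias']
```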
Ross Wightman 17dc47c8e6 Missed comma in test filters. 2021-05-30 22:00:43 -07:00
Ross Wightman 8bf63b6c6c Able to use other attn layer in EfficientNet now. Create test ECA + GC B0 configs. Make ECA more configurable. 2021-05-30 12:47:02 -07:00