Ross Wightman
a897e0ebcc
Merge branch 'feature/crossvit' of https://github.com/chunfuchen/pytorch-image-models into chunfuchen-feature/crossvit
2021-09-10 17:38:37 -07:00
Ross Wightman
4027412757
Add resnet33ts weights, update resnext26ts baseline weights
2021-09-09 14:46:41 -07:00
Richard Chen
9fe5798bee
fix bug in reset_classifier and fix dimension validation
2021-09-08 21:58:17 -04:00
Richard Chen
3718c5a5bd
fix loading pretrained model
2021-09-08 11:53:05 -04:00
Richard Chen
bb50b69a57
fix for TorchScript
2021-09-08 11:20:59 -04:00
nateraw
abf9d51bc3
🚧 wip
2021-09-07 18:39:26 -06:00
Ross Wightman
5bd04714e4
Cleanup weight init for byob/byoanet and related
2021-09-05 15:34:05 -07:00
Ross Wightman
8642401e88
Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...
2021-09-05 15:17:19 -07:00
Ross Wightman
5f12de4875
Add initial AttentionPool2d that's being trialed. Fix a comment; still trying to improve reliability of the SGD test.
2021-09-05 12:41:14 -07:00
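For context, a minimal sketch of attention-style 2d pooling in the spirit of the AttentionPool2d being trialed here — a mean query token attending over flattened spatial tokens. Shapes and the class name are illustrative, not timm's actual implementation.

```python
import torch
import torch.nn as nn

class SimpleAttentionPool2d(nn.Module):
    """Toy attention pooling: mean token queries all spatial tokens."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map -> (B, H*W, C) token sequence
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)
        # use the mean token as a single query over all spatial tokens
        query = tokens.mean(dim=1, keepdim=True)
        pooled, _ = self.attn(query, tokens, tokens)
        return pooled.squeeze(1)  # (B, C)

pool = SimpleAttentionPool2d(dim=64)
print(pool(torch.randn(2, 64, 7, 7)).shape)  # torch.Size([2, 64])
```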
Ross Wightman
76881d207b
Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4
2021-09-04 14:52:54 -07:00
Ross Wightman
54e90e82a5
Another attempt at getting the SGD momentum test to pass...
2021-09-03 20:50:26 -07:00
Ross Wightman
484e61648d
Adding the attn series weights, tweaking model names, comments...
2021-09-03 18:09:42 -07:00
Ross Wightman
0639d9a591
Fix updated validation_batch_size fallback
2021-09-02 14:44:53 -07:00
Ross Wightman
5db057dca0
Fix misnamed arg, tweak other train script args for better defaults.
2021-09-02 14:15:49 -07:00
Ross Wightman
fb94350896
Update training script and loader factory to allow use of scheduler updates, repeat augment, and bce loss
2021-09-01 17:46:40 -07:00
Ross Wightman
f262137ff2
Add RepeatAugSampler as per DeiT RASampler impl, showing promise for current (distributed) training experiments.
2021-09-01 17:40:53 -07:00
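The idea behind repeated augmentation sampling, as in DeiT's RASampler: each selected index is drawn several times per epoch so the same image is seen under multiple independent augmentations. A simplified single-process sketch (names are illustrative, not timm's RepeatAugSampler API, which also handles distributed sharding):

```python
import torch
from torch.utils.data import Sampler

class SimpleRepeatAugSampler(Sampler):
    def __init__(self, dataset_len: int, num_repeats: int = 3):
        self.dataset_len = dataset_len
        self.num_repeats = num_repeats

    def __iter__(self):
        perm = torch.randperm(self.dataset_len).tolist()
        # repeat each shuffled index num_repeats times, preserving order
        repeated = [i for i in perm for _ in range(self.num_repeats)]
        # truncate so an epoch still yields dataset_len samples
        return iter(repeated[: self.dataset_len])

    def __len__(self):
        return self.dataset_len

sampler = SimpleRepeatAugSampler(dataset_len=10, num_repeats=3)
print(list(sampler))  # e.g. [7, 7, 7, 2, 2, 2, 5, ...]
```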
Ross Wightman
ba9c1108a1
Add a BCE loss impl that converts dense targets to sparse w/ smoothing as an alternate to CE w/ smoothing. For training experiments.
2021-09-01 17:39:28 -07:00
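A sketch of the conversion this loss performs: dense class indices are expanded to smoothed one-hot targets, then scored with BCE-with-logits instead of smoothed cross-entropy. Simplified relative to timm's actual BinaryCrossEntropy loss:

```python
import torch
import torch.nn.functional as F

def smoothed_bce(logits: torch.Tensor, target: torch.Tensor,
                 smoothing: float = 0.1) -> torch.Tensor:
    num_classes = logits.shape[-1]
    off_value = smoothing / num_classes
    on_value = 1.0 - smoothing + off_value
    # dense (B,) indices -> smoothed one-hot (B, num_classes) targets
    dense = torch.full_like(logits, off_value)
    dense.scatter_(1, target.unsqueeze(1), on_value)
    return F.binary_cross_entropy_with_logits(logits, dense)

loss = smoothed_bce(torch.randn(4, 10), torch.randint(0, 10, (4,)))
```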
Ross Wightman
29a37e23ee
LR scheduler update:
...
* add polynomial decay 'poly'
* clean up cycle-specific args for cosine, poly, and tanh sched: t_mul -> cycle_mul, decay -> cycle_decay, default cycle_limit to 1 in each sched
* add k-decay for cosine and poly sched as per https://arxiv.org/abs/2004.05909 (see the sketch after this entry)
* change default tanh ub/lb to push inflection to later epochs
2021-09-01 17:33:11 -07:00
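The schedule shapes added above, roughly: polynomial decay, plus the k-decay modification from https://arxiv.org/abs/2004.05909, which replaces the linear progress t/T with t^k / T^k. Illustrative math only, not timm's scheduler classes:

```python
import math

def poly_lr(t: int, T: int, lr_max: float, lr_min: float = 0.0,
            power: float = 0.5, k: float = 1.0) -> float:
    frac = (t ** k) / (T ** k)  # k=1.0 recovers plain poly decay
    return (lr_max - lr_min) * (1 - frac) ** power + lr_min

def cosine_lr(t: int, T: int, lr_max: float, lr_min: float = 0.0,
              k: float = 1.0) -> float:
    frac = (t ** k) / (T ** k)  # k > 1 pushes the decay toward later epochs
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * frac))

print([round(poly_lr(t, 100, 0.1, k=1.5), 4) for t in (0, 50, 100)])
```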
nateraw
28d2841acf
💄 apply isort
2021-09-01 18:15:08 -06:00
Ross Wightman
492c0a4e20
Update HaloAttn comment
2021-09-01 17:14:31 -07:00
nateraw
e72c989973
✨ add ability to push to hf hub
2021-09-01 18:14:28 -06:00
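A rough equivalent of what pushing weights to the Hugging Face Hub involves, using plain huggingface_hub calls; the repo id and filename below are placeholders, and this is not the exact helper added in this commit:

```python
import torch
from huggingface_hub import HfApi

def push_weights(model: torch.nn.Module, repo_id: str):
    # serialize weights locally, then upload to the hub repo
    torch.save(model.state_dict(), "pytorch_model.bin")
    api = HfApi()
    api.create_repo(repo_id=repo_id, exist_ok=True)
    api.upload_file(
        path_or_fileobj="pytorch_model.bin",
        path_in_repo="pytorch_model.bin",
        repo_id=repo_id,
    )

# push_weights(my_model, "your-username/my-timm-model")  # requires HF login
```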
Richard Chen
7ab9d4555c
add crossvit
2021-09-01 17:13:12 -04:00
Ross Wightman
3b9032ea48
Use Tensor.unfold().unfold() for HaloAttn; as fast as as_strided but with more clarity
2021-08-27 12:45:53 -07:00
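What the double unfold does: extract overlapping (block + halo) windows from a feature map without as_strided's opaque stride math. Toy shapes below; the real HaloAttn pads and reshapes around this:

```python
import torch

x = torch.arange(36.0).reshape(1, 1, 6, 6)  # (B, C, H, W)
win, stride = 4, 2                           # window size, step
# unfold H then W -> (B, C, nH, nW, win, win) overlapping patches
patches = x.unfold(2, win, stride).unfold(3, win, stride)
print(patches.shape)        # torch.Size([1, 1, 2, 2, 4, 4])
print(patches[0, 0, 0, 0])  # top-left 4x4 window of the 6x6 map
```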
Ross Wightman
fc894c375c
Another attempt at getting the SGD momentum test to pass...
2021-08-27 10:39:31 -07:00
Ross Wightman
78933122c9
Fix silly typo
2021-08-27 09:22:20 -07:00
Ross Wightman
2568ffc5ef
Merge branch 'master' into attn_update
2021-08-27 09:21:22 -07:00
Ross Wightman
708d87a813
Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test.
2021-08-27 09:20:13 -07:00
Ross Wightman
8449ba210c
Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
2021-08-26 21:56:44 -07:00
Ross Wightman
a8b65695f1
Add resnet26ts and resnext26ts models for non-attn baselines
2021-08-21 12:42:10 -07:00
Ross Wightman
a5a542f17d
Fix typo
2021-08-20 17:47:23 -07:00
Ross Wightman
925e102982
Update attention / self-attn based models from a series of experiments:
...
* remove dud attention; involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
2021-08-20 16:13:11 -07:00
Ross Wightman
acd6c687fd
Merge branch 'yohann84L-fix_accuracy'
2021-08-19 14:26:23 -07:00
Ross Wightman
d667351eac
Tweak accuracy topk safety. Fix #807
2021-08-19 14:18:53 -07:00
Yohann Lereclus
35c9740826
Fix accuracy when topk > num_classes
2021-08-19 11:58:59 +02:00
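The guard behind these two commits, sketched: clamp the requested top-k to the number of classes so accuracy() doesn't crash when, say, top-5 is asked of a 2-class head. Simplified relative to timm's utils.accuracy:

```python
import torch

def accuracy(output: torch.Tensor, target: torch.Tensor, topk=(1, 5)):
    maxk = min(max(topk), output.size(1))  # the safety clamp
    _, pred = output.topk(maxk, dim=1)
    correct = pred.t().eq(target.reshape(1, -1))
    return [correct[:min(k, maxk)].flatten().float().sum()
            .mul_(100.0 / target.size(0)) for k in topk]

out = torch.randn(8, 2)                          # only 2 classes
print(accuracy(out, torch.randint(0, 2, (8,))))  # top-5 degrades to top-2
```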
Ross Wightman
a16a753852
Add lamb/lars to optim init imports, remove stray comment
2021-08-18 22:55:02 -07:00
Ross Wightman
c207e02782
MOAR optimizer changes. Woo!
2021-08-18 22:20:35 -07:00
Ross Wightman
42c1f0cf6c
Fix lars tests
2021-08-18 21:05:34 -07:00
Ross Wightman
a426511c95
More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars.
2021-08-18 17:21:56 -07:00
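The pattern behind "no longer use .data": wrap the update in torch.no_grad() and mutate parameters in place, which plays better with (b)float16 and XLA than the old p.data idiom. A toy SGD-style step, not any timm optimizer's actual code:

```python
import torch
from torch.optim import Optimizer

class TinySGD(Optimizer):
    def __init__(self, params, lr=0.1):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()  # replaces ad-hoc .data access
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is not None:
                    p.add_(p.grad, alpha=-group['lr'])  # in-place update

w = torch.nn.Parameter(torch.ones(3))
opt = TinySGD([w])
w.sum().backward()
opt.step()
print(w)  # tensor([0.9000, 0.9000, 0.9000], requires_grad=True)
```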
Ross Wightman
9541f4963b
One more scalar -> tensor fix for lamb optimizer
2021-08-18 11:20:25 -07:00
Ross Wightman
8f68193c91
Update lamb.py comment
2021-08-18 09:27:40 -07:00
Ross Wightman
4d284017b8
Merge pull request #813 from rwightman/opt_cleanup
...
Optimizer cleanup and additions
2021-08-18 09:12:00 -07:00
Ross Wightman
a6af48be64
add madgradw optimizer
2021-08-17 22:19:27 -07:00
Ross Wightman
55fb5eedf6
Remove experiment from lamb impl
2021-08-17 21:48:26 -07:00
Ross Wightman
8a9eca5157
A few optimizer comments, dead import, missing import
2021-08-17 18:01:33 -07:00
Ross Wightman
959eaff121
Add optimizer tests and update testing to pytorch 1.9
2021-08-17 17:59:15 -07:00
Ross Wightman
ac469b50da
Optimizer improvements, additions, cleanup
...
* Add MADGRAD code
* Fix Lamb (non-fused variant) to work w/ PyTorch XLA
* Tweak optimizer factory args (lr/learning_rate and opt/optimizer_name), may break compat
* Use newer fn signatures for all add, addcdiv, addcmul in optimizers (see the sketch after this entry)
* Use upcoming PyTorch native Nadam if it's available
* Cleanup lookahead opt
* Add optimizer tests
* Remove novograd.py impl as it was messy, keep nvnovograd
* Make AdamP/SGDP work in channels_last layout
* Add rectified adabelief mode (radabelief)
* Support a few more PyTorch optims: adamax, adagrad
2021-08-17 17:51:20 -07:00
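The "newer fn signatures" item above refers to passing scalars as keyword arguments (value= / alpha=) rather than positionally, as newer PyTorch expects. A minimal illustration of an Adam-style update step, not any one optimizer's exact code:

```python
import torch

p = torch.zeros(3)
exp_avg = torch.ones(3)
denom = torch.full((3,), 2.0)
step_size = 0.1

# old style (deprecated): p.addcdiv_(-step_size, exp_avg, denom)
p.addcdiv_(exp_avg, denom, value=-step_size)  # new signature
p.add_(exp_avg, alpha=0.5)                    # same idea for add_
print(p)  # tensor([0.4500, 0.4500, 0.4500])
```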
Ross Wightman
368211d19a
Merge pull request #805 from Separius/patch-1
...
Remove duplicate code in create_scheduler
2021-08-15 12:51:43 -07:00
Sepehr Sameni
abf3e044bb
Update scheduler_factory.py
...
remove duplicate code from create_scheduler()
2021-08-14 22:53:17 +02:00
Ross Wightman
3cdaf5ed56
Add `mmax` config key to auto_augment for increasing the upper bound of RandAugment magnitude beyond 10. Make the AugMix uniform-sampling default not override the config setting.
2021-08-12 15:39:05 -07:00
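A hedged example of the new `mmax` key: RandAugment magnitude is normally capped at 10, so raising the cap lets m exceed that range. The config string below follows timm's auto_augment convention but treat the exact `mmax20` spelling as illustrative:

```python
from timm.data.auto_augment import rand_augment_transform

# magnitude 15 on a 0..20 scale, with magnitude-std noise of 0.5
tfm = rand_augment_transform(
    'rand-m15-mmax20-mstd0.5',
    hparams=dict(translate_const=100, img_mean=(124, 116, 104)),
)
```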
Yonghye Kwon
09a45ab592
fix a typo in ### Select specific feature levels or limit the stride
...
There are to additional creation arguments impacting the output features.
->
There are two additional creation arguments impacting the output features.
2021-08-13 01:36:48 +09:00