pytorch-image-models/timm/optim
Latest commit: 82e8677690 by Ross Wightman, "Make LaProp weight decay match typical PyTorch 'decoupled' behaviour where it's scaled by LR" (2024-11-29 16:44:43 -08:00)
| File | Last commit message | Last commit date |
|------|---------------------|------------------|
| __init__.py | Add guard around 'somewhat' newer torch RAdam / NAdam imports | 2024-11-26 15:10:15 -08:00 |
| _optim_factory.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
| _param_groups.py | | |
| _types.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
| adabelief.py | | |
| adafactor.py | Update README, missed small discrep in adafactor min dim update | 2024-11-29 10:57:47 -08:00 |
| adafactor_bv.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
| adahessian.py | | |
| adamp.py | | |
| adamw.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
| adan.py | Update Adan with newer impl (from original source) that includes multi-tensor fn | 2024-11-26 15:10:15 -08:00 |
| adopt.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
| lamb.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
| laprop.py | Make LaProp weight decay match typical PyTorch 'decoupled' behaviour where it's scaled by LR | 2024-11-29 16:44:43 -08:00 |
| lars.py | | |
| lion.py | To be technically correct, need to check the in-place _ ver of op | 2024-11-28 15:11:58 -08:00 |
| lookahead.py | | |
| madgrad.py | | |
| mars.py | | |
| nadam.py | Replace radam & nadam impl with torch.optim ver, rename legacy adamw, nadam, radam impl in timm. Update optim factory & tests. | 2024-11-26 15:10:15 -08:00 |
| nadamw.py | To be technically correct, need to check the in-place _ ver of op | 2024-11-28 15:11:58 -08:00 |
| nvnovograd.py | | |
| optim_factory.py | | |
| radam.py | Replace radam & nadam impl with torch.optim ver, rename legacy adamw, nadam, radam impl in timm. Update optim factory & tests. | 2024-11-26 15:10:15 -08:00 |
| rmsprop_tf.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
| sgdp.py | | |
| sgdw.py | Cautious optimizer impl plus some typing cleanup. | 2024-11-28 15:11:58 -08:00 |
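The optimizers above are normally constructed through the factory modules (_optim_factory.py, with optim_factory.py kept as a legacy import path) rather than instantiated directly. A minimal usage sketch follows; it assumes a recent timm where create_optimizer_v2 is exported from timm.optim and where the 'laprop' name is registered by the factory, so treat the exact names and argument handling as assumptions rather than the definitive API.

```python
import torch
import timm
from timm.optim import create_optimizer_v2  # assumed export in recent timm versions

# Build any backbone; resnet18 is used only as a small example model.
model = timm.create_model('resnet18', pretrained=False)

# Assumed factory behaviour: the optimizer is selected by name (e.g. 'adamw',
# 'lamb', 'adopt', 'laprop') and timm applies its usual parameter-group
# handling (e.g. excluding norm/bias params from weight decay) before
# constructing the optimizer.
optimizer = create_optimizer_v2(
    model,
    opt='laprop',        # optimizer name as registered by the factory (assumption)
    lr=1e-3,
    weight_decay=0.05,   # decoupled weight decay, scaled by LR per the laprop.py change above
)

# Standard PyTorch training step with the created optimizer.
loss = model(torch.randn(2, 3, 224, 224)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```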