Mirror of https://github.com/huggingface/pytorch-image-models.git (synced 2025-06-03 15:01:08 +08:00)
Update README.md
commit 361fd0fc40 (parent 9de2ec5e44)
@@ -7,7 +7,7 @@
 * AGC w/ default clipping factor `--clip-grad .01 --clip-mode agc`
 * PyTorch global norm of 1.0 (old behaviour, always norm), `--clip-grad 1.0`
 * PyTorch value clipping of 10, `--clip-grad 10. --clip-mode value`
-* AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in paper. So far I've found .001-.005 is necessary for stable RMSProp training.
+* AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.
 
 ### Feb 12, 2021
 * Update Normalization-Free nets to include new NFNet-F (https://arxiv.org/abs/2102.06171) model defs
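The AGC mode referenced by `--clip-grad .01 --clip-mode agc` is the adaptive gradient clipping scheme from the NFNet paper (arXiv 2102.06171): a gradient is rescaled wherever its unit-wise norm exceeds `clip_factor` times the corresponding parameter norm, rather than being clipped against a fixed threshold. A minimal NumPy sketch of that rule, assuming a 2-D weight with one "unit" per row (the function name `agc_clip` and its `eps` defaults are illustrative, not timm's actual API):

```python
import numpy as np

def agc_clip(weight, grad, clip_factor=0.01, eps=1e-3):
    """Sketch of adaptive gradient clipping (AGC) for a 2-D parameter.

    Rescales each row of `grad` whose norm exceeds clip_factor times the
    norm of the matching row of `weight` (with a small floor `eps` so
    near-zero weights are not clipped to nothing).
    """
    # Unit-wise (per-row) parameter norms, floored at eps.
    w_norm = np.maximum(np.linalg.norm(weight, axis=-1, keepdims=True), eps)
    g_norm = np.linalg.norm(grad, axis=-1, keepdims=True)
    max_norm = clip_factor * w_norm
    # Rescale only rows whose gradient/parameter norm ratio exceeds clip_factor.
    scale = np.where(g_norm > max_norm, max_norm / np.maximum(g_norm, 1e-6), 1.0)
    return grad * scale
```

Unlike `--clip-mode value` or the global-norm default, the effective threshold here scales with each parameter's magnitude, which is why the usable `clip_factor` range (.001-.005 for RMSProp above) is so much smaller than a typical global-norm value like 1.0.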