Mirror of https://github.com/huggingface/pytorch-image-models.git
Update README.md
parent 1245b83924
commit 52595a9641

README.md: 12 additions

@@ -12,6 +12,18 @@
## What's New
## Dec 31, 2024
* Add AIM-v2 encoders from https://github.com/apple/ml-aim, see on Hub: https://huggingface.co/models?search=timm%20aimv2
* Add PaliGemma2 encoders from https://github.com/google-research/big_vision to existing PaliGemma, see on Hub: https://huggingface.co/models?search=timm%20pali2
* Add missing L/14 DFN2B 39B CLIP ViT, `vit_large_patch14_clip_224.dfn2b_s39b` (loading sketch after this list)
* Fix existing `RmsNorm` layer & fn to match standard formulation, use PT 2.5 impl when possible. Move old impl to `SimpleNorm` layer, it's LN w/o centering or bias (see the sketch after this list). There were only two `timm` models using it, and they have been updated.
* Allow override of `cache_dir` arg for model creation
* Pass through `trust_remote_code` for HF datasets wrapper (usage sketch for both args after this list)
* `inception_next_atto` model added by creator
* Adan optimizer caution, and Lamb decoupled weight decay options
* Some feature_info metadata fixed by https://github.com/brianhou0208
* All OpenCLIP and JAX (CLIP, SigLIP, Pali, etc) model weights that used load time remapping were given their own HF Hub instances so that they work with `hf-hub:` based loading, and thus will work with the new Transformers `TimmWrapperModel` (see the sketch after this list)
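The new encoders above load like any other `timm` model once you have a model name from the Hub search links. A minimal sketch using the DFN2B CLIP ViT named in the bullet above (the transform and forward-pass boilerplate is just standard `timm` usage, not part of this release):

```python
import torch
import timm

# Load the newly added L/14 DFN2B 39B CLIP ViT by its timm model name.
model = timm.create_model('vit_large_patch14_clip_224.dfn2b_s39b', pretrained=True)
model.eval()

# Resolve the preprocessing that matches the pretrained config.
cfg = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**cfg)

# Dummy forward pass; the AIM-v2 and PaliGemma2 encoders above load the
# same way with names found via the Hub search links.
with torch.no_grad():
    out = model(torch.randn(1, *cfg['input_size']))
print(out.shape)
```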
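To make the norm fix concrete: standard RMSNorm divides by the root mean square, while the old implementation (now `SimpleNorm`) behaves like LayerNorm without centering or bias, i.e. it divides by the standard deviation. A minimal sketch of the distinction, not the actual `timm` layer code:

```python
import torch

def rms_norm(x, weight, eps=1e-6):
    # Standard RMSNorm: x / sqrt(mean(x^2) + eps).
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps) * weight

def simple_norm(x, weight, eps=1e-6):
    # LN w/o centering or bias: x / sqrt(var(x) + eps); the variance is
    # taken around the mean, but the mean is never subtracted from x.
    return x * torch.rsqrt(x.var(dim=-1, keepdim=True, unbiased=False) + eps) * weight

x = torch.randn(2, 8)
w = torch.ones(8)
print(torch.allclose(rms_norm(x, w), simple_norm(x, w)))  # generally False
x0 = x - x.mean(dim=-1, keepdim=True)
print(torch.allclose(rms_norm(x0, w), simple_norm(x0, w), atol=1e-5))  # True: they agree on zero-mean input
```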
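Both `cache_dir` and `trust_remote_code` are pass-through arguments; a usage sketch under the assumption that they are simply forwarded as kwargs (the dataset id below is a hypothetical placeholder):

```python
import timm
from timm.data import create_dataset

# Override where downloaded pretrained weights are cached.
model = timm.create_model(
    'resnet50.a1_in1k',
    pretrained=True,
    cache_dir='./weights',  # assumption: forwarded to the Hub download cache
)

# 'hfds/' routes to the HF datasets wrapper; trust_remote_code is passed
# through for datasets that ship loading scripts.
ds = create_dataset(
    'hfds/some-org/some-dataset',  # hypothetical placeholder id
    root=None,
    split='train',
    trust_remote_code=True,
)
```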
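The point of the Hub re-uploads in the last bullet is that both loading paths below now resolve the same checkpoint without load-time remapping. A sketch with an illustrative repo id (pick a real one from the Hub; `TimmWrapperModel` requires a recent Transformers release):

```python
import timm

# hf-hub: loading pulls weights straight from the model's own Hub repo.
model = timm.create_model('hf-hub:timm/vit_base_patch16_clip_224.openai', pretrained=True)

# The same repo therefore also works with the new Transformers wrapper.
from transformers import TimmWrapperModel
wrapped = TimmWrapperModel.from_pretrained('timm/vit_base_patch16_clip_224.openai')
```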
## Nov 28, 2024
* More optimizers
  * Add MARS optimizer (https://arxiv.org/abs/2411.10438, https://github.com/AGI-Arena/MARS); see the factory sketch below
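A minimal sketch of trying MARS via the `timm` optimizer factory, assuming the registration name is `'mars'` (hyperparameters are illustrative; see the paper/repo for recommended settings):

```python
import torch.nn as nn
from timm.optim import create_optimizer_v2

model = nn.Linear(10, 2)  # stand-in for a real model

# opt='mars' assumes the factory name matches the optimizer added above.
optimizer = create_optimizer_v2(model, opt='mars', lr=3e-3, weight_decay=0.01)
```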