Update README.md

parent 3bd35c7004
commit 21b1c2f6a1
@@ -23,6 +23,11 @@ And a big thanks to all GitHub sponsors who helped with some of my costs before
* ❗Updates after Oct 10, 2022 are available in 0.8.x pre-releases (`pip install --pre timm`) or by cloning main❗
* Stable releases are 0.6.x and available by normal pip install or by cloning the [0.6.x](https://github.com/rwightman/pytorch-image-models/tree/0.6.x) branch.
### April 12, 2023
* Add ONNX export script, validate script, and helpers that I've had kicking around for a long time. Tweak 'same' padding for better export w/ recent ONNX + PyTorch (a minimal export sketch follows this list).
* Refactor dropout args for vit and vit-like models, separate drop_rate into `drop_rate` (classifier dropout), `proj_drop_rate` (block mlp / out projections), `pos_drop_rate` (position embedding drop), `attn_drop_rate` (attention dropout). Also add patch dropout (FLIP) to vit and eva models. See the usage sketch after this list.
* Add fused `F.scaled_dot_product_attention` support to more vit models, add env var (`TIMM_FUSED_ATTN`) to control, and config interface to enable/disable (see the sketch after this list).
* Add EVA-CLIP backbones w/ image tower weights, all the way up to 4B param 'enormous' model, and 336x336 OpenAI ViT model that was missed (see the lookup sketch after this list).
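
ONNX export sketch: this is not the repo's export script itself, just a minimal illustration of the underlying idea using plain `torch.onnx.export`; the model name, opset version, and dynamic axes below are arbitrary choices, not the script's defaults.

```python
import timm
import torch

# Any timm model works; resnet50 is used here only as an example.
# exportable=True asks for export-friendly layer configs (e.g. 'same' padding handling).
model = timm.create_model('resnet50', pretrained=False, exportable=True)
model.eval()

# Dummy input matching the model's default 3x224x224 input size.
dummy = torch.randn(1, 3, 224, 224)

# Standard torch.onnx export; opset 17 is an arbitrary recent choice.
torch.onnx.export(
    model, dummy, 'resnet50.onnx',
    input_names=['input'], output_names=['output'],
    dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
    opset_version=17,
)
```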
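Dropout args usage sketch: the separated dropout kwargs can be passed through `timm.create_model` to a vit model; the rates below are arbitrary values chosen only to show each knob.

```python
import timm

# Hypothetical rates, only to demonstrate the separated dropout args on a vit model.
model = timm.create_model(
    'vit_base_patch16_224',
    pretrained=False,
    drop_rate=0.1,        # classifier (head) dropout
    proj_drop_rate=0.05,  # block mlp / out projection dropout
    pos_drop_rate=0.05,   # position embedding dropout
    attn_drop_rate=0.0,   # attention dropout
)
```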
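Fused attention control sketch: the env var is read by timm at import time, and the config interface lives in `timm.layers`. The exact helper names (`set_fused_attn` / `use_fused_attn`) are my assumption about that interface, so they are guarded here; verify against your timm version.

```python
import os

# Option 1: environment variable, set before timm is imported.
# Assumed convention: '0' disables fused attention; check the docs for accepted values.
os.environ.setdefault('TIMM_FUSED_ATTN', '0')

import timm
from timm import layers

# Option 2: config interface, assumed names guarded with hasattr/getattr.
if hasattr(layers, 'set_fused_attn'):
    layers.set_fused_attn(enable=False)  # force non-fused attention

print(getattr(layers, 'use_fused_attn', lambda: 'unknown')())
```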
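EVA-CLIP lookup sketch: rather than guessing exact weight tags, list matching model names with `timm.list_models` and build one as an image-tower feature extractor; the `'eva*'` wildcard is an assumption about the naming scheme, adjust it as needed.

```python
import timm

# Names matching an assumed EVA naming pattern that also have pretrained weights.
eva_models = timm.list_models('eva*', pretrained=True)
print(eva_models[:10])

# Build the first match with no classifier head (image tower / feature extractor).
if eva_models:
    model = timm.create_model(eva_models[0], pretrained=False, num_classes=0)
```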
### April 5, 2023
* ALL ResNet models pushed to Hugging Face Hub with multi-weight support
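
A sketch of pulling one of these Hub-hosted weights with the multi-weight `'<architecture>.<weight tag>'` naming; `resnet50.a1_in1k` is a tag I believe exists, but treat it as an example rather than a guaranteed identifier.

```python
import timm

# The tag below is an example; use timm.list_models('resnet50*', pretrained=True)
# to see which weight tags are actually available for your timm version.
model = timm.create_model('resnet50.a1_in1k', pretrained=True)
model.eval()
```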