Ross Wightman
c241081251
Merge pull request #1850 from huggingface/effnet_improve_features_only
Support other features-only modes for EfficientNet. Fix #1848, fix #1849
2023-06-23 22:56:08 -07:00
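For context, a minimal sketch of the features-only extraction these commits extend, assuming a timm build with this change applied; the model name and out_indices values are illustrative:

```python
# Hedged sketch: features_only extraction with negative out_indices, the
# capability this change adds for EfficientNet-family models.
import timm
import torch

model = timm.create_model(
    'efficientnet_b0',
    pretrained=False,        # skip the weight download for this sketch
    features_only=True,
    out_indices=(-2, -1),    # negative indices count back from the deepest stage
)
x = torch.randn(1, 3, 224, 224)
for f in model(x):           # features_only models return a list of feature maps
    print(f.shape)
```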
Ross Wightman
f9a24fa19f
Merge pull request #1846 from seefun/master
Add I-JEPA pretrained weights for ViT
2023-06-15 11:12:53 -07:00
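A minimal sketch of locating the new weights, assuming timm tags them with an 'ijepa' substring (the wildcard pattern is an assumption, not confirmed by the log):

```python
# Hedged sketch: list pretrained ViT variants whose name/tag mentions I-JEPA.
import timm

print(timm.list_models('*ijepa*', pretrained=True))
```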
Ross Wightman
47517dbefd
Clean more feature extract issues
* EfficientNet/MobileNetV3/HRNet features 'cls' and FX modes support negative indices
* MobileNetV3 allows feature_cfg mode to bypass MobileNetV3Features
2023-06-14 14:46:22 -07:00
Ross Wightman
a09c88ed0f
Support other features-only modes for EfficientNet
2023-06-14 12:57:39 -07:00
SeeFun
c3f24a5ae5
Add ViT weights from I-JEPA pretraining
2023-06-14 22:30:31 +08:00
Ross Wightman
2d597b126d
Missed extra nadam algo step for capturable path
2023-06-13 20:51:31 -07:00
Ross Wightman
4790c0fa16
Missed nadamw.py
2023-06-13 20:45:58 -07:00
Ross Wightman
dab0360e00
Add NadamW based on the MLCommons algorithm, add multi-tensor step
2023-06-13 20:45:17 -07:00
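A minimal sketch of building the new optimizer via timm's factory, assuming it is registered under the name 'nadamw':

```python
# Hedged sketch: create NadamW through the optimizer factory; the 'nadamw'
# registration name is assumed from the commit message.
import torch.nn as nn
from timm.optim import create_optimizer_v2

model = nn.Linear(10, 2)
optimizer = create_optimizer_v2(model, opt='nadamw', lr=1e-3, weight_decay=0.01)
print(optimizer)
```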
Ross Wightman
fb4f220c2e
Merge pull request #1841 from mishig25/update-doc-build-actions
[doc build] Use secrets
2023-06-09 07:04:06 -07:00
Mishig
3ebbe172ec
[doc build] Use secrets
2023-06-09 10:47:32 +02:00
Ross Wightman
2d0dbd17e3
Merge pull request #1837 from lorenzbaraldi/fix_help_string
Changed help string of --worker arg
2023-06-02 09:22:32 -07:00
Ross Wightman
700aebcdc4
Fix Pytorch 2.0 breakage for Lookahead optimizer adapter
2023-06-02 08:39:07 -07:00
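For reference, a minimal sketch of the Lookahead adapter this fix concerns; the wrapped optimizer and hyperparameters are illustrative:

```python
# Hedged sketch: Lookahead wraps a base optimizer, periodically interpolating
# slow weights toward the fast (inner) weights.
import torch
import torch.nn as nn
from timm.optim import Lookahead

model = nn.Linear(10, 2)
base = torch.optim.AdamW(model.parameters(), lr=1e-3)
optimizer = Lookahead(base, alpha=0.5, k=6)

loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```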
Lorenzo Baraldi
13d5b21ecd
Changed help_string of --worker
It seems like 4 is the correct default value
2023-06-01 17:27:51 +02:00
Ross Wightman
cd950e6583
Merge pull request #1823 from leng-yue/fix-layer-scale
[Fix] Update dinov2 layerscale init values
2023-05-24 17:40:44 -07:00
Lengyue
c308dbc6f2
update dinov2 layerscale init values
2023-05-24 12:20:17 -04:00
Ross Wightman
049b133253
Add 0.9 imagenet and ood test set results files
2023-05-24 09:02:25 -07:00
Ross Wightman
7cea88e2c4
Pop eps for lion optimizer
2023-05-21 15:20:03 -07:00
Ross Wightman
9fcc01930a
Merge pull request #1812 from seefun/master
add ViT for Segment-Anything Model
2023-05-18 18:46:13 -07:00
Ross Wightman
e9373b1b92
Cleanup before samvit merge. Resize abs posembed on the fly, undo some line-wraps, remove redundant unbind, fix HF hub weight load
2023-05-18 16:43:48 -07:00
方曦
c1c6eeb909
fix loading pretrained weight for samvit
2023-05-18 08:49:29 +08:00
方曦
15de561f2c
fix unit test for samvit
2023-05-17 12:51:12 +08:00
方曦
ea1f52df3e
add ViT for Segment-Anything Model
2023-05-17 11:39:29 +08:00
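A minimal sketch of instantiating the new SAM image encoder, assuming 'samvit_base_patch16' is the registered name and SAM's 1024x1024 input resolution:

```python
# Hedged sketch: the SAM ViT encoder as a feature backbone; num_classes=0
# drops any classifier head so the forward pass returns pooled features.
import timm
import torch

model = timm.create_model('samvit_base_patch16', pretrained=False, num_classes=0)
x = torch.randn(1, 3, 1024, 1024)
print(model(x).shape)
```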
Ross Wightman
960202cfcc
Dev version 0.9.3 for main
2023-05-16 11:28:00 -07:00
Ross Wightman
c5d3ee47f3
Add B/16 datacompxl CLIP weights
2023-05-16 11:27:20 -07:00
Ross Wightman
3d05c0e86f
Version 0.9.2
2023-05-14 08:03:04 -07:00
Ross Wightman
ccb9dc4ec4
Merge pull request #1804 from philipsgithub/patch-1
Update hub.py
2023-05-12 14:15:45 -07:00
Philip Keller
fc77e9ecc5
Update hub.py
fixed import of _hub modules
2023-05-12 21:48:46 +02:00
Ross Wightman
cc77096350
Version 0.9.1
2023-05-12 09:47:47 -07:00
Ross Wightman
f744bda994
use torch.jit.Final instead of Final for beit, eva
2023-05-12 09:12:14 -07:00
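A minimal sketch of the pattern this commit standardizes on, where torch.jit.Final marks a module attribute as a TorchScript constant so branches on it resolve at script time (class and attribute names are illustrative):

```python
# Hedged sketch: annotating with torch.jit.Final (rather than typing.Final)
# lets TorchScript treat the flag as a constant when the module is scripted.
import torch
import torch.nn as nn
from torch.jit import Final

class Gate(nn.Module):
    use_fast_path: Final[bool]

    def __init__(self, use_fast_path: bool = True):
        super().__init__()
        self.use_fast_path = use_fast_path

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.use_fast_path:
            return x * 2.0
        return x + 1.0

scripted = torch.jit.script(Gate())
print(scripted(torch.ones(2)))
```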
Ross Wightman
35b9fc71a1
Update README.md
2023-05-11 15:32:01 -07:00
Ross Wightman
17510809dc
Update README.md
2023-05-11 15:28:42 -07:00
Ross Wightman
2e99bcaedd
Update README, prep for version 0.9.0 release
2023-05-11 15:22:50 -07:00
Ross Wightman
3eaf729f3f
F.sdpa for visformer fails w/o contiguous on qkv, make experimental
2023-05-11 11:37:37 -07:00
Ross Wightman
cf1884bfeb
Add 21k maxvit tf weights
2023-05-10 18:23:32 -07:00
Ross Wightman
6c2edf4d74
Missed hub_id entries for byoanet models
2023-05-10 15:58:55 -07:00
Ross Wightman
cf101b0097
Version 0.8.23dev0 and README update
2023-05-10 14:41:22 -07:00
Ross Wightman
850ab4931f
Missed a few pretrained tags...
2023-05-10 12:16:30 -07:00
Ross Wightman
ff2464e2a0
Throw when pretrained weights not available and pretrained=True (principle of least surprise).
2023-05-10 10:44:34 -07:00
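A minimal sketch of the resulting behavior; the model name below is a placeholder for any architecture without released weights, not a real timm model:

```python
# Hedged sketch: with this change, pretrained=True raises instead of silently
# returning randomly initialized weights when none are available.
import timm

try:
    timm.create_model('placeholder_model_without_weights', pretrained=True)
except RuntimeError as e:
    print('refused:', e)
```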
Ross Wightman
8ce9a2c00a
Merge pull request #1222 from Leoooo333/master
Fix mixup/one_hot device problem
2023-05-10 08:59:15 -07:00
Ross Wightman
fd592ec86c
Fix an issue with FastCollateMixup still using device
2023-05-10 08:55:38 -07:00
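For context, a minimal sketch of the Mixup path these fixes touch; argument values are illustrative, and the soft one-hot target construction is the step whose device handling was at issue:

```python
# Hedged sketch: timm's Mixup blends inputs and emits soft one-hot targets.
import torch
from timm.data import Mixup

mixup = Mixup(mixup_alpha=0.2, cutmix_alpha=0.0, num_classes=10)
x = torch.randn(8, 3, 224, 224)   # Mixup requires an even batch size
y = torch.randint(0, 10, (8,))
x_mixed, y_soft = mixup(x, y)
print(y_soft.shape)               # (8, 10) soft targets
```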
Ross Wightman
e0ec0f7252
Merge pull request #1643 from nateraw/docstrings-update
Update Docstring for create_model
2023-05-09 21:33:20 -07:00
Ross Wightman
627b6315ba
Add typing to dinov2 entrypoint fns, use HF hub for MAE & dinov2 weights
2023-05-09 20:42:11 -07:00
Ross Wightman
c9db4709af
Merge pull request #1799 from huggingface/dot_nine_cleanup
Final cleanup before .9 release
2023-05-09 20:38:45 -07:00
Ross Wightman
b9d43c7dca
Version 0.8.22dev0
2023-05-09 20:38:10 -07:00
Ross Wightman
960a882510
Remove label offsets and old weight URL for 1001-class (background + in1k) TF-origin weights
2023-05-09 18:00:41 -07:00
Ross Wightman
a01d8f86f4
Tweak DinoV2 add, add MAE ViT weights, add initial intermediate layer getter experiment
2023-05-09 17:59:22 -07:00
Ross Wightman
59bea4c306
Merge branch 'main' into dot_nine_cleanup
2023-05-09 12:27:32 -07:00
Leng Yue
5cc87e6485
Add dinov2 pretrained models (#1797)
* add dinov2 small, base, and large
* fix input size
* fix swiglu & dinov2 vit giant
* use SwiGLUPacked to replace GluMlp
* clean up & add ffn_layer placeholder for ParallelScalingBlock
2023-05-09 12:24:47 -07:00
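A minimal sketch of loading one of these variants, assuming the 'vit_base_patch14_dinov2.lvd142m' name and the 518x518 input size of the DINOv2 release:

```python
# Hedged sketch: DINOv2 ViT as a feature extractor; num_classes=0 returns
# pooled features, pretrained=False skips the weight download here.
import timm
import torch

model = timm.create_model('vit_base_patch14_dinov2.lvd142m', pretrained=False, num_classes=0)
x = torch.randn(1, 3, 518, 518)
print(model(x).shape)
```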
Ross Wightman
e3363a7159
Support bitsandbytes optimizers in factory
2023-05-09 11:33:51 -07:00
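A minimal sketch of routing a bitsandbytes optimizer through the factory; the 'bnbadamw8bit' name is an assumed registration (the 'bnb' prefix is not confirmed by the log), and the bitsandbytes package plus a CUDA device are typically required:

```python
# Hedged sketch: the factory resolving a bitsandbytes 8-bit optimizer by name.
import torch.nn as nn
from timm.optim import create_optimizer_v2

model = nn.Linear(10, 2)
optimizer = create_optimizer_v2(model, opt='bnbadamw8bit', lr=1e-3)
```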
Ross Wightman
21e57c0b9e
Add missing beitv2 in1k -> in1k models
2023-05-08 17:03:51 -07:00