1756 Commits

Author SHA1 Message Date
Ross Wightman
5d535d7a2d Version 1.0.14, update README & changelog 2025-01-19 13:53:09 -08:00
Ross Wightman
aa333079da Tweak so150m2 def 2025-01-19 13:40:53 -08:00
Josua Rieder
8d81fdf3d9 Fix typos 2025-01-19 13:39:40 -08:00
Ross Wightman
3677f67902 Add the 256x256 in1k ft of the so150m, add an alternate so150m def 2025-01-18 15:51:57 -08:00
Ross Wightman
2a84d68d02 Add some so150m vit w/ sbb recipe weights, and an ese_vovnet57b model with RA4 recipe 2025-01-18 15:51:57 -08:00
Ross Wightman
9265d54a3a Fix LeViT safetensors load, broken by conversion code that wasn't deactivated 2025-01-16 11:37:00 -08:00
Ross Wightman
21e75a9d25 Update version.py, back to dev version 2025-01-16 11:23:17 -08:00
Adam J. Stewart
6d21eb0d37 VGG ConvMlp: fix layer defaults/types 2025-01-15 12:11:56 +01:00
Adam J. Stewart
f5c4d5cbb7 Add missing imports 2025-01-11 15:13:16 +01:00
Adam J. Stewart
19aaea3c8f Fix nn.Module type hints 2025-01-11 15:09:21 +01:00
Ross Wightman
47811bc05a Update README, bump version to 1.0.13 non-dev 2025-01-09 09:33:59 -08:00
Ross Wightman
deb9895600 Update checkpoint save to fix old hard-link + fuse issue I ran into again... fix #340 2025-01-08 15:36:58 -08:00
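A note on the hard-link issue above (deb9895600): if a rolling checkpoint like last.pth.tar is hard-linked to a named checkpoint, re-saving in place writes through the link and clobbers both files. A minimal sketch of the usual fix, writing to a temp file then renaming; this illustrates the idea, not timm's exact CheckpointSaver code:

```python
import os
import torch

def safe_save(state: dict, path: str):
    """Save to a temp file, then atomically replace the target. os.replace()
    gives the target a fresh inode, so any old hard link keeps the previous
    contents instead of being overwritten in place."""
    tmp = path + '.tmp'
    torch.save(state, tmp)
    os.replace(tmp, path)
```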
Ross Wightman
92f610c982 Add half-precision (bfloat16, float16) support to train & validate scripts. Should push dtype handling into model factory / pretrained load at some point... 2025-01-07 10:25:14 -08:00
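On the half-precision support (92f610c982): the scripts can now run the model itself in bfloat16/float16 rather than relying only on AMP autocast. A rough sketch of pure-bf16 inference; the actual CLI flag wiring in the train/validate scripts may differ:

```python
import torch
import timm

# Pure-bfloat16 evaluation: weights and inputs both in bf16, no autocast.
model = timm.create_model('convnext_nano', pretrained=True)
model = model.to(dtype=torch.bfloat16).eval()

x = torch.randn(1, 3, 224, 224, dtype=torch.bfloat16)
with torch.inference_mode():
    logits = model(x)
```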
Ross Wightman
155f6e7fea Update README, few minor fixups. 2025-01-06 13:09:15 -08:00
Ross Wightman
2b251fb291 Wrap torch checkpoint() fn to default use_reentrant flag to False and allow env var override 2025-01-06 11:28:39 -08:00
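The checkpoint wrapper (2b251fb291) addresses PyTorch's deprecation warning when use_reentrant is left unset. A minimal sketch of such a wrapper; the env var name here is illustrative, not necessarily the one timm uses:

```python
import os
from torch.utils.checkpoint import checkpoint as _torch_checkpoint

# Hypothetical env var name, for illustration only.
_USE_REENTRANT = os.environ.get('TIMM_REENTRANT_CKPT', '0') == '1'

def checkpoint(fn, *args, use_reentrant=None, **kwargs):
    """Wrap torch checkpoint(), defaulting use_reentrant to False
    unless overridden by the caller or the environment."""
    if use_reentrant is None:
        use_reentrant = _USE_REENTRANT
    return _torch_checkpoint(fn, *args, use_reentrant=use_reentrant, **kwargs)
```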
Ross Wightman
131518c15c Add comments to MLP layers re expected layouts 2025-01-02 09:41:35 -08:00
Louis Lac
2d5277e858 Merge branch 'main' into fix-mqa-v2 2025-01-02 00:11:22 +01:00
Louis Lac
2d734d9058 Fixed unfused attn2d scale 2025-01-01 12:34:07 -08:00
Louis Lac
6171e756d3 Fix MQA V2 scale and out shape 2025-01-01 15:37:28 +01:00
Ross Wightman
e846b2cf28 Add 384x384 in12k pretrain and finetune for convnext_nano 2024-12-31 13:16:43 -08:00
Ross Wightman
b0068ba5d0 Switch hf hub entries for new aimv2 / dfn weights to point to timm locations. Undo forced device for SDR linspace, part of another change. 2024-12-30 19:24:21 -08:00
Ross Wightman
1bf84b35c3 Update tests for aimv2 filtering 2024-12-30 19:24:21 -08:00
Ross Wightman
b33418713a Add (almost) full set of aimv2 model instances. Switch back to unpacked SwiGLU. Verify correctness. Add DFN L/14 39B weight. 2024-12-30 19:24:21 -08:00
Ross Wightman
de35fd87f5 Add SimpleNorm to create_norm factory 2024-12-30 19:24:21 -08:00
Ross Wightman
d5375ca769 Use torch F.rms_norm when possible, select fast vs normal paths appropriately and test with torchscript 2024-12-30 19:24:21 -08:00
Ross Wightman
5f12a25114 Add bias arg to Vitamin GeGLU 2024-12-30 19:24:21 -08:00
Ross Wightman
5804d92e4b Switch aimv2 to use packed SwiGLU 2024-12-30 19:24:21 -08:00
Ross Wightman
15406a939e Fix RmsNorm per #2380, an issue also noticed with aimv2 when comparing outputs. Still some work to do: need to look at AMP / fast mode behaviour, dispatch to torch when possible. Add SimpleNorm for 'LayerNorm w/o centering and bias' 2024-12-30 19:24:21 -08:00
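Putting the norm commits together (15406a939e, de35fd87f5, d5375ca769): SimpleNorm is LayerNorm without mean-centering or bias, i.e. RMS-style scaling of the last dim, and the forward can dispatch to torch.nn.functional.rms_norm when available. A sketch of the idea, assuming PyTorch >= 2.4 for the fast path; not timm's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNorm(nn.Module):
    """LayerNorm w/o centering and bias: scale by the RMS of the last dim."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if hasattr(F, 'rms_norm'):
            # fast path: dispatch to torch (available in PyTorch >= 2.4)
            return F.rms_norm(x, (x.shape[-1],), self.weight, self.eps)
        # fallback: upcast, normalize by root-mean-square, no mean subtraction
        v = x.float()
        v = v * torch.rsqrt(v.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return v.to(x.dtype) * self.weight
```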
Ross Wightman
a648a04834 Supporting aimv2 encoders 2024-12-30 19:24:21 -08:00
Ross Wightman
790decc89b Add more pali(2) weights. Switch rest of models adapting open_clip weights to their own weight instances. 2024-12-27 14:00:41 -08:00
Ross Wightman
01cf0f72af Add support for tag, license customization through push_to_hub 2024-12-27 14:00:41 -08:00
Ross Wightman
b12ecbd614 Move siglip timm weights to own repos 2024-12-27 14:00:41 -08:00
Ross Wightman
6fb7aaf37d Switching to timm specific weight instances for open_clip image encoders to facilitate hf-hub: use in timm and the new transformers TimmWrapper 2024-12-27 14:00:41 -08:00
Ross Wightman
364c567dd2 Merge pull request #2357 from huggingface/more_opt_stuff: Add caution to Adan. Add decouple decay option to LAMB. 2024-12-27 12:54:02 -08:00
Ryan
ab0a70dfff fix feature_info.reduction 2024-12-18 21:12:40 +08:00
Ross Wightman
7573096eb8 Make sure trust_remote code only passed to HF datasets. Improve some docstrings. 2024-12-06 11:40:04 -08:00
Ross Wightman
9eee47de52 Back to dev version 2024-12-06 10:44:41 -08:00
Álvaro Justen (@turicas)
9383f2880d Add cache_dir example 2024-12-06 10:39:13 -08:00
Ross Wightman
d1e9a8622a Rename inception_next_atto pretrained str 2024-12-06 10:36:47 -08:00
Weihao Yu
0576175d85 Add inception_next_atto 2024-12-06 10:36:47 -08:00
Ross Wightman
7ab2b938e5 More tweaks to docstrings for hub/builder 2024-12-06 10:25:06 -08:00
Ross Wightman
dc1bb05e8e Punch cache_dir through model factory / builder / pretrain helpers. Improve some annotations in related code. 2024-12-06 10:25:06 -08:00
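With cache_dir punched through the factory (dc1bb05e8e), pretrained weights can be downloaded to an explicit location instead of the default hub cache. A usage sketch; model name and path are just examples:

```python
import timm

# Download pretrained weights into an explicit directory.
model = timm.create_model(
    'convnext_nano.in12k_ft_in1k',
    pretrained=True,
    cache_dir='/data/timm-cache',  # illustrative path
)
```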
Ross Wightman
afdf11d9ae Add caution to Adan. Add decouple decay option to LAMB. 2024-12-05 13:50:30 -08:00
Ross Wightman
553ded5c6b Version 1.0.12 2024-12-03 10:34:52 -08:00
Ross Wightman
464885e135 See if we can avoid some model / layer pickle issues with the aa attr in ConvNormAct 2024-12-03 08:02:55 -08:00
Ross Wightman
5fe5f9d488 Add a different mnv4 conv-small weight 2024-12-02 16:14:37 -08:00
Ross Wightman
303f7691a1 Add cautious mars, improve test reliability by skipping grad diff for first step 2024-12-02 11:29:02 -08:00
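For context on the 'caution'/'cautious' option appearing in several commits here (303f7691a1, afdf11d9ae): it follows the Cautious Optimizers idea of masking out update components whose sign disagrees with the current gradient, then rescaling to preserve the overall update magnitude. A sketch of the masking step, not timm's exact code:

```python
import torch

def apply_caution(update: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
    """Zero update components whose sign disagrees with the gradient,
    then rescale by the surviving fraction (clamped to avoid div-by-zero)."""
    mask = (update * grad > 0).to(grad.dtype)
    mask.div_(mask.mean().clamp_(min=1e-3))
    return update * mask
```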
Ross Wightman
82e8677690 Make LaProp weight decay match typical PyTorch 'decoupled' behaviour where it's scaled by LR 2024-11-29 16:44:43 -08:00
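On the LaProp change (82e8677690): 'decoupled' decay means the decay is applied to the parameter directly and scaled by the learning rate, as in AdamW, instead of being folded into the gradient. A one-function sketch of that step, not the full optimizer:

```python
import torch

@torch.no_grad()
def decoupled_decay_step(p: torch.Tensor, update: torch.Tensor,
                         lr: float, weight_decay: float):
    """Decoupled (AdamW-style) weight decay, scaled by lr."""
    if weight_decay != 0.:
        p.mul_(1. - lr * weight_decay)  # decay the parameter directly
    p.add_(update, alpha=-lr)           # then apply the optimizer's update
```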
Ross Wightman
886eb77938 Update README, missed a small discrepancy in adafactor min dim update 2024-11-29 10:57:47 -08:00
Ross Wightman
e3e434bbc4 To be technically correct, need to check the in-place (_) version of the op 2024-11-28 15:11:58 -08:00