Commit Graph

1369 Commits (6862c9850a4a7577ed7227f850c83153f328a066)

Author SHA1 Message Date
Ross Wightman 4d8ecde6cc Fix torchscript for vit-hybrid dynamic_resize 2023-08-27 15:58:35 -07:00
Ross Wightman fdd8c7c2da Initial impl of dynamic resize for existing vit models (incl vit-resnet hybrids) 2023-08-27 15:58:35 -07:00
Ross Wightman 5d599a6a10 RepViT weights on HF hub 2023-08-25 10:39:02 -07:00
Ross Wightman 56c285445c Wrong pool size for 384x384 inception_next_base 2023-08-24 18:31:44 -07:00
Ross Wightman af9f56f3bf inception_next dilation support, weights on hf hub, classifier reset / global pool / no head fixes 2023-08-24 18:31:44 -07:00
Ross Wightman 2d33b9df6c Add features_only support to inception_next 2023-08-24 18:31:44 -07:00
Ross Wightman 3d8d7450ad InceptionNeXt using timm builder, more cleanup 2023-08-24 18:31:44 -07:00
Ross Wightman f4cf9775c3 Adding InceptionNeXt 2023-08-24 18:31:44 -07:00
Ross Wightman d2e3c09ce1 Update version.py 2023-08-23 22:51:56 -07:00
Ross Wightman d6c348765a Fix first_conv for mobileone and fastvit 2023-08-23 22:50:37 -07:00
Ross Wightman 16334e4bec Fix two fastvit issues 2023-08-23 22:50:37 -07:00
Ross Wightman 5242ba6edc MobileOne and FastViT weights on HF hub, more code cleanup and tweaks, features_only working. Add reparam flag to validate and benchmark, support reparam of all models with fuse(), reparameterize() or switch_to_deploy() methods on modules 2023-08-23 22:50:37 -07:00
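The reparameterization convention this commit describes (any module exposing `fuse()`, `reparameterize()`, or `switch_to_deploy()` gets folded for deployment) can be sketched as below. The hook names come from the commit; the walk itself is an illustrative assumption, not timm's exact implementation:

```python
# Sketch (assumption, not timm's code): walk a model's modules and invoke
# whichever re-parameterization hook each module exposes.
def reparameterize_model(model):
    for module in model.modules():
        for hook_name in ("fuse", "reparameterize", "switch_to_deploy"):
            hook = getattr(module, hook_name, None)
            if callable(hook):
                hook()
                break  # apply at most one hook per module
    return model
```

Anything with a `.modules()` iterator works here, which is why a single flag on validate/benchmark scripts can cover MobileOne, FastViT, RepVGG, and friends.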
Ross Wightman 40dbaafef5 Stagify FastViT w/ downsample to top of stage 2023-08-23 22:50:37 -07:00
Ross Wightman 8470eb1cb5 More fastvit & mobileone updates, ready for weight upload 2023-08-23 22:50:37 -07:00
Ross Wightman 8474508d07 More work on FastViT, use own impl of MobileOne, validation working with remapped weight, more refactor TODO 2023-08-23 22:50:37 -07:00
Ross Wightman c7a20cec13 Begin adding FastViT 2023-08-23 22:50:37 -07:00
Ross Wightman 7fd3674d0d Add mobileone and update repvgg 2023-08-23 22:50:37 -07:00
Ross Wightman 3055411c1b
Fix samvit bug, add F.sdpa support and ROPE option (#1920)
* Fix a bug I introduced in samvit, add F.sdpa support and ROPE option to samvit, neck is LayerNorm if not used and standard classifier used

* Add attn dropout to F.sdpa

* Fix fx trace for sam vit

* Fixing torchscript issues in samvit

* Another torchscript fix

* samvit head fc name fix
2023-08-20 21:22:59 -07:00
Ross Wightman 300f54a96f Another efficientvit (mit) tweak, fix torchscript/fx conflict with autocast disable 2023-08-20 15:07:25 -07:00
Ross Wightman dc18cda2e7 efficientvit (mit) msa attention q/k/v ops need to be in float32 to train w/o NaN 2023-08-20 11:49:36 -07:00
Ross Wightman be4e0d8f76 Update attrib comment to include v2 2023-08-19 23:39:09 -07:00
Ross Wightman 126a58e563 Combine ghostnetv2 with ghostnet, reduce redundancy, add weights to hf hub. 2023-08-19 23:33:43 -07:00
Ross Wightman 3f320a9e57 Merge branch 'Add-GhostNetV2' of github.com:yehuitang/pytorch-image-models into yehuitang-Add-GhostNetV2 2023-08-19 22:07:54 -07:00
Ross Wightman 7c2728c6fe
Merge pull request #1919 from ChengpengChen/main
Add RepGhost models and weights
2023-08-19 16:26:45 -07:00
Ross Wightman 69e0ca2e36 Weights on hf hub, bicubic yields slightly better eval 2023-08-19 16:25:45 -07:00
Ross Wightman b8011565bd
Merge pull request #1894 from seefun/master
add two different EfficientViT models
2023-08-19 09:24:14 -07:00
Ross Wightman 7d7589e8da Fixing efficient_vit torchscript, fx, default_cfg issues 2023-08-18 23:23:11 -07:00
Ross Wightman 58ea1c02c4 Add fixed_input_size flag to msra efficient_vit 2023-08-18 16:48:17 -07:00
Ross Wightman c28324a150 Update efficient_vit (msra), hf hub weights 2023-08-18 16:45:37 -07:00
Ross Wightman e700a32626 Cleanup of efficient_vit (mit), tweak eps for better AMP behaviour, formatting/cleanup, weights on hf hub 2023-08-18 16:06:07 -07:00
方曦 00f670fa69 fix bug in ci for efficientvits 2023-08-17 14:40:17 +08:00
Chengpeng Chen e7f97cb5ce Fix typos RepGhost models 2023-08-16 14:27:45 +08:00
Chengpeng Chen d1d0193615 Add RepGhost models and weights 2023-08-16 11:54:53 +08:00
Minseo Kang 7938f28542 Fix typo in efficientformer_v2 2023-08-16 03:29:01 +09:00
yehuitang b407794e3a Add GhostNetV2 2023-08-13 18:20:27 +08:00
yehuitang fc865282e5 Add ghostnetv2.py 2023-08-13 18:16:26 +08:00
Ross Wightman da75cdd212
Merge pull request #1900 from huggingface/swin_maxvit_resize
Add support for resizing swin transformer, maxvit, coatnet at creation time
2023-08-11 15:05:28 -07:00
Ross Wightman 78a04a0e7d
Merge pull request #1911 from dsuess/1910-fixes-batchnormact-fx
Register norm_act layers as leaf modules
2023-08-11 14:34:16 -07:00
Yonghye Kwon 2048f6f20f set self.num_features to neck_chans if neck_chans > 0 2023-08-11 13:45:06 +09:00
Ross Wightman 3a44e6c602 Fix #1912 CoaT model not loading w/ return_interm_layers 2023-08-10 11:15:58 -07:00
Daniel Suess 986de90360 Register norm_act layers as leaf modules 2023-08-10 15:37:26 +10:00
Ross Wightman c692715388 Some RepVit tweaks
* add head dropout to RepVit as all models have that arg
* default train to non-distilled head output via distilled_training flag (set_distilled_training) so fine-tune works by default w/o distillation script
* camel case naming tweaks to match other models
2023-08-09 12:41:12 -07:00
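The `distilled_training` flag described in this commit can be sketched as follows; `set_distilled_training` is the method named in the commit, while the class and forward logic here are illustrative assumptions:

```python
class DistilledHead:
    """Sketch of a two-head classifier: training defaults to the main
    (non-distilled) head so plain fine-tuning works out of the box;
    averaging both heads is used for inference or when distillation
    training is explicitly enabled."""
    def __init__(self):
        self.training = True
        self.distilled_training = False  # default per the commit above

    def set_distilled_training(self, enable: bool = True):
        self.distilled_training = enable

    def forward(self, x_main, x_dist):
        if self.training and not self.distilled_training:
            return x_main
        return (x_main + x_dist) / 2
```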
Ross Wightman c153cd4a3e Add more advanced interpolation method from BEiT and support non-square window & image size adaptation for
* beit/beit-v2
* maxxvit/coatnet
* swin transformer
And non-square windows for swin-v2
2023-08-08 16:41:16 -07:00
alec.tu bb2b6b5f09 fix num_classes not found 2023-08-07 15:16:03 +08:00
Ross Wightman 1dab536cb1 Fix torch.fx for swin padding change 2023-08-05 13:09:55 -07:00
Ross Wightman 7c0f492dbb Fix type annotation for torchscript 2023-08-04 23:03:52 -07:00
Ross Wightman 7790ea709b Add support for resizing swin transformer img_size and window_size on init and load from pretrained weights. Add support for non-square window_size to both swin v1/v2 2023-08-04 22:10:46 -07:00
Ross Wightman 81089b10a2 Remove unnecessary LongTensor in EfficientFormer. Possibly fixes #1878 2023-08-03 16:38:53 -07:00
Ross Wightman 4224529ebe Version 0.9.5 prep for release. README update 2023-08-03 15:16:46 -07:00
Ross Wightman d138a9bf88 Add gluon hrnet small weights, fix #1895 2023-08-03 12:15:04 -07:00
Ross Wightman 76d166981d Fix missing norm call in Mlp forward (not used by default, but can be enabled for normformer MLP scale). Fix #1851 fix #1852 2023-08-03 11:36:30 -07:00
Ross Wightman 8e4480e4b6 Patch and pos embed resample done in float32 always (cast to float and back). Fix #1811 2023-08-03 11:32:17 -07:00
Ross Wightman 150356c493 Fix unfortunate selecsls case bug caused by aggressive IDE rename 2023-08-03 10:37:06 -07:00
Ross Wightman 6e8c53d0d3 Comment out beit url, no longer valid as now require long query string, leave for reference, must use HF hub now. 2023-08-03 10:00:46 -07:00
方曦 a56e2bbf19 fix efficientvit_msra pretrained load 2023-08-03 18:44:38 +08:00
方曦 e94c60b546 efficientvit_msra refactor 2023-08-03 17:45:50 +08:00
方曦 047bab6ab2 efficientvit_mit stage refactor 2023-08-03 14:59:35 +08:00
方曦 e8fb866ccf fix efficientvit_msra pool 2023-08-02 14:40:01 +08:00
方曦 43443f64eb fix efficientvits 2023-08-02 14:12:37 +08:00
方曦 82d1e99e1a add efficientvit(msra) 2023-08-01 18:51:08 +08:00
方曦 b91a77fab7 add EfficientVit (MIT) 2023-08-01 12:42:21 +08:00
Sepehr Sameni 40a518c194
use float in resample_abs_pos_embed_nhwc
since F.interpolate doesn't always support BFloat16
2023-07-28 16:01:42 -07:00
Ross Wightman 8cb0ddac45 Update README, version 0.9.4dev0 2023-07-27 17:07:31 -07:00
Ross Wightman a9d0615f42 Fix ijepa vit issue with 448 model, minor formatting fixes 2023-07-26 20:46:27 -07:00
alec.tu 942726db31 import lion in __init__.py 2023-07-27 09:26:57 +08:00
Ross Wightman 5874d1bfc7
Merge pull request #1876 from jameslahm/main
Add RepViT models
2023-07-26 14:38:41 -07:00
Ross Wightman b10310cc27 Add proper pool size for new resnexts 2023-07-26 14:36:03 -07:00
Ross Wightman b71d60cdb7 Two small fixes, num_classes in base class, add model tag 2023-07-26 13:18:49 -07:00
Ross Wightman 3561f8e885 Add seresnextaa201d_32x8d 12k and 1k weights 2023-07-26 13:17:05 -07:00
jameslahm 3318e7614d Add RepViT models 2023-07-21 14:56:53 +08:00
Ruslan Baikulov 158bf129c4 Replace deprecated NumPy aliases of builtin types 2023-07-03 22:24:25 +03:00
Ross Wightman c241081251
Merge pull request #1850 from huggingface/effnet_improve_features_only
Support other features only modes for EfficientNet. Fix #1848 fix #1849
2023-06-23 22:56:08 -07:00
Ross Wightman 47517dbefd Clean more feature extract issues
* EfficientNet/MobileNetV3/HRNetFeatures cls and FX mode support -ve index
* MobileNetV3 allows feature_cfg mode to bypass MobileNetV3Features
2023-06-14 14:46:22 -07:00
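Supporting negative feature indices (the "-ve index" above) amounts to normalizing them against the number of feature stages, so callers can ask for "the last two stages" without knowing the stage count. A hypothetical sketch:

```python
def normalize_out_indices(out_indices, num_stages):
    """Map negative feature-stage indices to absolute positions, so
    out_indices=(-2, -1) selects the last two stages. Name and
    signature are illustrative, not timm's API."""
    return tuple(i if i >= 0 else num_stages + i for i in out_indices)
```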
Ross Wightman a09c88ed0f Support other features only modes for EfficientNet 2023-06-14 12:57:39 -07:00
SeeFun c3f24a5ae5 Add ViT weights from I-JEPA pretraining 2023-06-14 22:30:31 +08:00
Ross Wightman 2d597b126d Missed extra nadam algo step for capturable path 2023-06-13 20:51:31 -07:00
Ross Wightman 4790c0fa16 Missed nadamw.py 2023-06-13 20:45:58 -07:00
Ross Wightman dab0360e00 Add NadamW based on mlcommons algorithm, added multi-tensor step 2023-06-13 20:45:17 -07:00
Ross Wightman 700aebcdc4 Fix Pytorch 2.0 breakage for Lookahead optimizer adapter 2023-06-02 08:39:07 -07:00
Lengyue c308dbc6f2 Update dinov2 layerscale init values 2023-05-24 12:20:17 -04:00
Ross Wightman 7cea88e2c4 Pop eps for lion optimizer 2023-05-21 15:20:03 -07:00
Ross Wightman e9373b1b92 Cleanup before samvit merge. Resize abs posembed on the fly, undo some line-wraps, remove redundant unbind, fix HF hub weight load 2023-05-18 16:43:48 -07:00
方曦 c1c6eeb909 fix loading pretrained weight for samvit 2023-05-18 08:49:29 +08:00
方曦 15de561f2c fix unit test for samvit 2023-05-17 12:51:12 +08:00
方曦 ea1f52df3e add ViT for Segment-Anything Model 2023-05-17 11:39:29 +08:00
Ross Wightman 960202cfcc Dev version 0.9.3 for main 2023-05-16 11:28:00 -07:00
Ross Wightman c5d3ee47f3 Add B/16 datacompxl CLIP weights 2023-05-16 11:27:20 -07:00
Ross Wightman 3d05c0e86f Version 0.9.2 2023-05-14 08:03:04 -07:00
Philip Keller fc77e9ecc5
Update hub.py
fixed import of _hub modules
2023-05-12 21:48:46 +02:00
Ross Wightman cc77096350 Version 0.9.1 2023-05-12 09:47:47 -07:00
Ross Wightman f744bda994 use torch.jit.Final instead of Final for beit, eva 2023-05-12 09:12:14 -07:00
Ross Wightman 2e99bcaedd Update README, prep for version 0.9.0 release 2023-05-11 15:22:50 -07:00
Ross Wightman 3eaf729f3f F.sdpa for visformer fails w/o contiguous on qkv, make experimental 2023-05-11 11:37:37 -07:00
Ross Wightman cf1884bfeb Add 21k maxvit tf weights 2023-05-10 18:23:32 -07:00
Ross Wightman 6c2edf4d74 Missed hub_id entries for byoanet models 2023-05-10 15:58:55 -07:00
Ross Wightman cf101b0097 Version 0.8.23dev0 and README update 2023-05-10 14:41:22 -07:00
Ross Wightman 850ab4931f Missed a few pretrained tags... 2023-05-10 12:16:30 -07:00
Ross Wightman ff2464e2a0 Throw when pretrained weights not available and pretrained=True (principle of least surprise). 2023-05-10 10:44:34 -07:00
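The least-surprise rule this commit adds can be sketched like so; the function name and config keys here are assumptions for illustration, not timm's internals:

```python
def resolve_pretrained(pretrained_cfg, pretrained=True):
    """Sketch: raise when pretrained=True but the config lists no weight
    source, rather than silently returning random weights."""
    has_weights = any(pretrained_cfg.get(k) for k in ("url", "hf_hub_id", "file"))
    if pretrained and not has_weights:
        raise RuntimeError(
            "No pretrained weights available; pass pretrained=False "
            "for a randomly initialized model."
        )
    return has_weights
```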
Ross Wightman 8ce9a2c00a
Merge pull request #1222 from Leoooo333/master
Fix mixup/one_hot device problem
2023-05-10 08:59:15 -07:00
Ross Wightman fd592ec86c Fix an issue with FastCollateMixup still using device 2023-05-10 08:55:38 -07:00
Ross Wightman e0ec0f7252
Merge pull request #1643 from nateraw/docstrings-update
Update Docstring for create_model
2023-05-09 21:33:20 -07:00
Ross Wightman 627b6315ba Add typing to dinov2 entrypt fns, use hf hub for mae & dinov2 weights 2023-05-09 20:42:11 -07:00
Ross Wightman b9d43c7dca Version 0.8.22dev0 2023-05-09 20:38:10 -07:00
Ross Wightman 960a882510 Remove label offsets and remove old weight url for 1001 class (background + in1k) TF origin weights 2023-05-09 18:00:41 -07:00
Ross Wightman a01d8f86f4 Tweak DinoV2 add, add MAE ViT weights, add initial intermediate layer getter experiment 2023-05-09 17:59:22 -07:00
Ross Wightman 59bea4c306 Merge branch 'main' into dot_nine_cleanup 2023-05-09 12:27:32 -07:00
Leng Yue 5cc87e6485
Add dinov2 pretrained models (#1797)
* add dinov2 small, base, and large

* fix input size

* fix swiglu & dinov2 vit giant

* use SwiGLUPacked to replace GluMlp

* clean up & add ffn_layer placeholder for ParallelScalingBlock
2023-05-09 12:24:47 -07:00
Ross Wightman e3363a7159 Support bitsandbytes optimizers in factory 2023-05-09 11:33:51 -07:00
Ross Wightman 21e57c0b9e Add missing beitv2 in1k -> in1k models 2023-05-08 17:03:51 -07:00
Ross Wightman 8c6fccb879 Allow passing state_dict directly via pretrained cfg mechanism as an override 2023-05-08 15:15:44 -07:00
Ross Wightman af48246a9a Add SwiGLUPacked to layers __init__ 2023-05-08 13:52:34 -07:00
Ross Wightman 3fdb31de2e Small SwiGLU tweak, remove default LN arg in unpacked variant, add packed alias for GluMLP 2023-05-08 12:28:00 -07:00
Ross Wightman e4e43190ce Add typing to all model entrypoint fns, add old cache check env var to builder 2023-05-08 08:52:38 -07:00
Ross Wightman cb3f9c23bb
Metaformer baselines for vision (final PR with cleanup) (#1793)
* update

* Update metaformers.py

* merge with poolformer, initial version

* Revert "Update metaformers.py"

This reverts commit 2916f37f8d.

* Revert "Update metaformers.py"

This reverts commit 1d882eb494.

* Revert "Update metaformers.py"

This reverts commit 2209d0830e.

* Revert "Update metaformers.py"

This reverts commit 32bede4e27.

* Revert "Update metaformers.py"

This reverts commit 4ed934e000.

* Revert "Update metaformers.py"

This reverts commit 3f0b075367.

* Revert "Update metaformers.py"

This reverts commit 2fef9006d7.

* rename model

* Stem/Downsample rework

* try NHWC


* channels first for whole network

* Channels first

* Use buffer for randformer

* Remove einsum

* don't test randformer for feature extraction

* arbitrary input sizes for randformer

* formatting, cleanup, fix dropout

* fix regression, pass kwargs

* fix poolformerv1 weights, formatting

* some cleanup

* SDPA from ViT, fix imports

* fix head reset

* fast norm bias patch for metaformers

* Metaformer refactor, remove rand/ident models, fix issues, remove old poolformer

* Switch to hub weights

---------

Co-authored-by: Fredo Guan <fredo.guan@hotmail.com>
2023-05-05 11:18:26 -07:00
Ross Wightman 320bf9c469 Remove redundant types, kwargs back in own section (lesser of many evils?) 2023-05-01 14:21:48 -07:00
Ross Wightman 8fa86a28a8 Add datacomp L/14 (79.2 zs) image tower weights 2023-05-01 10:24:08 -07:00
Ross Wightman 5e64777804 0.8.21dev0 2023-04-28 13:46:59 -07:00
Ross Wightman 493c730ffc Fix pit regression 2023-04-26 23:16:06 -07:00
Ross Wightman 437d344e03 Always some torchscript issues 2023-04-26 20:42:34 -07:00
Ross Wightman 528faa0e04 Some fixes 2023-04-26 17:46:20 -07:00
Ross Wightman 3386af8c86 Final push to get remaining models using multi-weight pretrained configs, almost all weights on HF hub 2023-04-26 15:52:13 -07:00
Ross Wightman c0560cbf22 version 0.8.20dev0 2023-04-21 16:57:32 -07:00
Ross Wightman 7ad7ddb7ad DenseNet, DPN, VoVNet, Aligned Xception weights on HF hub. DenseNet grad_checkpointing using timm API 2023-04-21 16:56:44 -07:00
Ross Wightman 864bfd43d0 hardcore nas weights on hf hub 2023-04-21 14:35:10 -07:00
Ross Wightman 6e4529ae35 TResNet weights now on HF hub, modified to remove InplaceABN dependency 2023-04-21 14:20:48 -07:00
Ross Wightman 04dcbc02ec Fix weight remap for tresnet_v2_l 2023-04-21 09:05:04 -07:00
Ross Wightman a08e5aed1d More models w/ multi-weight support, moving to HF hub. Removing inplace_abn from all models including TResNet 2023-04-20 22:44:49 -07:00
Ross Wightman 2aabaef039
Merge pull request #1784 from huggingface/wip-voidbag-accumulate-grad
Accumulate gradients (adding to #1659)
2023-04-20 08:15:28 -07:00
Ross Wightman f4825a09ef
Merge pull request #212 from bryant1410/patch-1
Fix MultiEpochsDataLoader when there's no batching
2023-04-20 07:09:27 -07:00
Ross Wightman 4cd7fb88b2 clip gradients with update 2023-04-19 23:36:20 -07:00
Ross Wightman df81d8d85b Cleanup gradient accumulation, fix a few issues, a few other small cleanups in related code. 2023-04-19 23:11:00 -07:00
Ross Wightman ab7ca62a6e Merge branch 'main' of github.com:rwightman/pytorch-image-models into wip-voidbag-accumulate-grad 2023-04-19 11:08:12 -07:00
Ross Wightman 34df125be6 cait, volo, xvit hub weights 2023-04-14 10:13:13 -07:00
Ross Wightman f6d5767551 cspnet models on HF hub w/ multi-weight support 2023-04-12 14:02:38 -07:00
Ross Wightman aef6e562e4 Add onnx utils and export code, tweak padding and conv2d_same for better dynamic export with recent PyTorch 2023-04-11 17:03:57 -07:00
Ross Wightman 80b247d843 Update swin_v2 attn_mask buffer change in #1790 to apply to updated checkpoints in hub 2023-04-11 14:40:32 -07:00
Ross Wightman 1a1aca0cee
Merge pull request #1761 from huggingface/patch_drop_refactor
Implement patch dropout for eva / vision_transformer, refactor dropout args
2023-04-11 14:37:36 -07:00
Ross Wightman c0670822d2 Small factory handling fix for pretrained tag vs cfg 2023-04-11 07:42:13 -07:00
Ross Wightman 2f25f73b90 Missed a fused_attn update in relpos vit 2023-04-10 23:30:50 -07:00
Ross Wightman 0b65b5c0ac Add finalized eva CLIP weights pointing to remapped timm hub models 2023-04-10 23:13:12 -07:00
Ross Wightman 965d0a2d36 fast_attn -> fused_attn, implement global config for enable/disable fused_attn, add to more models. vit clip openai 336 weights. 2023-04-10 12:04:33 -07:00
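The global enable/disable switch for fused attention mentioned here can be sketched as a process-wide flag that attention modules consult at forward time to choose between `F.scaled_dot_product_attention` and a manual matmul-softmax path; names below are illustrative, not timm's exact API:

```python
# Sketch (assumption, not timm's exact implementation) of a process-wide
# fused-attention switch.
_FUSED_ATTN = {"enabled": True}

def set_fused_attn(enable: bool = True):
    """Globally enable/disable use of fused (SDPA) attention kernels."""
    _FUSED_ATTN["enabled"] = enable

def use_fused_attn() -> bool:
    """Queried by attention modules to pick the fused vs. manual path."""
    return _FUSED_ATTN["enabled"]
```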
Ross Wightman 4d135421a3 Implement patch dropout for eva / vision_transformer, refactor / improve consistency of dropout args across all vit based models 2023-04-07 20:27:23 -07:00
Marco Forte c76818a592
skip attention mask buffers
Allows more flexibility in the resolutions accepted by SwinV2.
2023-04-07 18:50:02 +02:00
Ross Wightman 1bb3989b61 Improve kwarg passthrough for swin, vit, deit, beit, eva 2023-04-05 21:37:16 -07:00
Ross Wightman 35c94b836c Update warning message for deprecated model names 2023-04-05 17:24:17 -07:00
Ross Wightman 9eaab795c2 Add some vit model deprecations 2023-04-05 17:21:03 -07:00
Ross Wightman b17abd35b2 Version 0.8.19dev0 2023-04-05 16:37:16 -07:00
Ross Wightman abff3f12ec Wrong pool_size for 288 ft 2023-04-05 16:07:51 -07:00
Ross Wightman 356309959c ResNet models on HF hub, multi-weight support, add torchvision v2 weights, new 12k pretrained and fine-tuned timm anti-aliased weights 2023-04-05 14:19:42 -07:00
Ross Wightman 7501972cd6 Version 0.8.18dev0 2023-03-31 16:51:26 -07:00