Commit Graph

1525 Commits (71101ebba002c72fb9203a66b2b6682b67a22002)

Author SHA1 Message Date
Ross Wightman be0944edae Significant transforms, dataset, dataloading enhancements. 2024-01-08 09:38:42 -08:00
Ross Wightman b5a4fa9c3b Add pos_weight and support for summing over classes to BCE impl in train scripts 2023-12-30 12:13:06 -08:00
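The commit above adds `pos_weight` and a sum-over-classes option to the BCE loss used by the train scripts. As a pure-Python sketch of those loss semantics only (illustrative, not timm's implementation; the function name and flags here are made up, mirroring `torch.nn.BCEWithLogitsLoss(pos_weight=...)`):

```python
import math

def bce_with_logits(logits, targets, pos_weight=None, sum_over_classes=False):
    """Binary cross-entropy with logits over a vector of per-class scores.

    pos_weight scales the positive-target term, as in
    torch.nn.BCEWithLogitsLoss(pos_weight=...). With sum_over_classes=True
    the per-class losses are summed rather than averaged.
    """
    losses = []
    for x, y in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-x))  # sigmoid of the logit
        w = pos_weight if pos_weight is not None else 1.0
        # standard weighted BCE term: -[w * y * log(p) + (1 - y) * log(1 - p)]
        losses.append(-(w * y * math.log(p) + (1.0 - y) * math.log(1.0 - p)))
    return sum(losses) if sum_over_classes else sum(losses) / len(losses)
```

Summing instead of averaging changes the loss scale by the number of classes, which interacts with the learning rate, hence it being an explicit opt-in flag.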
方曦 9dbea3bef6 fix cls head in hgnet 2023-12-27 21:26:26 +08:00
SeeFun 56ae8b906d fix reset head in hgnet 2023-12-27 20:11:29 +08:00
SeeFun 6862c9850a fix backward in hgnet 2023-12-27 16:49:37 +08:00
SeeFun 6cd28bc5c2 Merge branch 'huggingface:main' into master 2023-12-27 16:43:37 +08:00
Ross Wightman f2fdd97e9f Add parsable json results output for train.py, tweak --pretrained-path to force head adaptation 2023-12-22 11:18:25 -08:00
LR e0079c92da Update eva.py (#2058)
* Update eva.py

When argument class token = False, self.cls_token = None.

Prevents error from attempting trunc_normal_ on None:
AttributeError: 'NoneType' object has no attribute 'uniform_'

* Update eva.py

fix
2023-12-16 15:10:45 -08:00
Li zhuoqun 7da34a999a add type annotations in the code of swin_transformer_v2 2023-12-15 09:31:25 -08:00
Fredo Guan bbe798317f Update EdgeNeXt to use ClassifierHead as per ConvNeXt (#2051)
* Update edgenext.py
2023-12-11 12:17:19 -08:00
Ross Wightman 711c5dee6d Update sgdw for older pytorch 2023-12-11 12:10:29 -08:00
Ross Wightman 60b170b200 Add --pretrained-path arg to train script to allow passing local checkpoint as pretrained. Add missing/unexpected keys log. 2023-12-11 12:10:29 -08:00
Ross Wightman 17a47c0e35 Add SGDW optimizer 2023-12-11 12:10:29 -08:00
Fredo Guan 2597ce2860 Update davit.py 2023-12-11 11:13:04 -08:00
akiyuki ishikawa 2bd043ce5d fix doc position 2023-12-05 12:00:51 -08:00
akiyuki ishikawa 4f2e1bf4cb Add missing docs in SwinTransformerStage 2023-12-05 12:00:51 -08:00
Ross Wightman df7ae11eb2 Add device arg for patch embed resize, fix #2024 2023-12-04 11:42:13 -08:00
Ross Wightman cd8d9d9ff3 Add missing hf hub entries for mvitv2 2023-11-26 21:06:39 -08:00
Ross Wightman 19a8c182cc Version 0.9.13dev0 2023-11-25 10:52:31 -08:00
Ross Wightman b996c1a0f5 A few more missed hf hub entries 2023-11-23 21:48:14 -08:00
Ross Wightman 5fb92d79e7 Version 0.9.12 2023-11-23 17:01:03 -08:00
Ross Wightman 89ec91aece Add missing hf_hub entry for mobilenetv3_rw 2023-11-23 12:44:59 -08:00
Ross Wightman 40d55ab4bc Add `in_chans` to data config helper. Fix #2021 2023-11-23 12:44:59 -08:00
Dillon Laird 63ee54853c fixed intermediate output indices 2023-11-22 16:32:41 -08:00
Ross Wightman fa06f6c481 Merge branch 'seefun-efficientvit' 2023-11-21 14:06:27 -08:00
Ross Wightman c6b0c98963 Upload weights to hub, tweak crop_pct, comment out SAM EfficientViTs for now (v2 weights coming) 2023-11-21 14:05:04 -08:00
Ross Wightman 975203a369 Version 0.9.12dev0 2023-11-21 10:18:45 -08:00
Ross Wightman ada145b016 Literal use w/ python < 3.8 requires typing_extension, catch instead of check sys ver 2023-11-21 09:48:03 -08:00
Ross Wightman dfaab97d20 More consistency in model arg/kwarg merge handling 2023-11-21 09:48:03 -08:00
Ross Wightman 3775e4984f Merge branch 'efficientvit' of github.com:seefun/pytorch-image-models into seefun-efficientvit 2023-11-20 16:21:38 -08:00
Ross Wightman dfb8658100 Fix a few missed model deprecations and one missed pretrained cfg 2023-11-20 12:41:49 -08:00
Ross Wightman c20f5fc385 Version 0.9.11 2023-11-19 17:18:48 -08:00
Ross Wightman a604011935 Add support for passing model args via hf hub config 2023-11-19 15:16:01 -08:00
方曦 c9d093a58e update norm eps for efficientvit large 2023-11-18 17:46:47 +08:00
Laureηt 21647c0a0c Add types to vision_transformers.py 2023-11-17 16:06:06 -08:00
方曦 87ba43a9bc add efficientvit large series 2023-11-17 13:58:46 +08:00
Ross Wightman 7c685a4ef3 Fix openai quickgelu loading and add missing orig_in21k vit weights and remove zero'd classifier w/ matching hub update 2023-11-16 19:16:28 -08:00
LittleNyima ef72c3cd47 Add warnings for duplicate registry names 2023-11-08 10:18:59 -08:00
Ross Wightman 205d8ad37c version 0.9.10 2023-11-04 02:33:04 -07:00
Ross Wightman 9fab8d8f58 Fix break of 2 years old torchvision installs :/ 2023-11-04 02:32:09 -07:00
Ross Wightman d3e83a190f Add in12k fine-tuned convnext_xxlarge 2023-11-03 14:35:01 -07:00
Ross Wightman 855719fca6 Prep for 0.9.9 release 2023-11-03 11:38:09 -07:00
Ross Wightman f7762fee78 Consistent handling of None / empty string inputs to norm / act create fns 2023-11-03 11:01:41 -07:00
Ross Wightman dcfdba1f5f Make quickgelu models appear in listing 2023-11-03 11:01:41 -07:00
Ross Wightman 96bd162ddb Add cc-by-nc-4.0 license for metaclip, make note in quickgelu model def about pretrained_cfg mapping 2023-11-03 11:01:41 -07:00
Ross Wightman 6894ec7edc Forgot about datcomp b32 models 2023-11-03 11:01:41 -07:00
Ross Wightman a2e4a4c148 Add quickgelu vit clip variants, simplify get_norm_layer and allow string args in vit norm/act. Add metaclip CLIP weights 2023-11-03 11:01:41 -07:00
Ross Wightman c55bc41a42 DFN CLIP ViT support 2023-10-31 12:16:21 -07:00
a-r-r-o-w d5f1525334 include suggestions from review
Co-Authored-By: Ross Wightman <rwightman@gmail.com>
2023-10-30 13:47:54 -07:00
a-r-r-o-w 5f14bdd564 include typing suggestions by @rwightman 2023-10-30 13:47:54 -07:00
a-r-r-o-w 05b0aaca51 improvement: add typehints and docs to timm/models/resnet.py 2023-10-30 13:47:54 -07:00
a-r-r-o-w c2fe0a2268 improvement: add typehints and docs to timm/models/mobilenetv3.py 2023-10-30 13:47:54 -07:00
Laureηt d023154bb5 Update swin_transformer.py
make `SwinTransformer`'s `patch_embed` customizable through the constructor
2023-10-30 13:47:14 -07:00
Ross Wightman 68a121402f Added hub weights for dinov2 register models 2023-10-29 23:03:48 -07:00
Ross Wightman 3f02392488 Add DINOv2 models with register tokens. Convert pos embed to non-overlapping for consistency. 2023-10-29 23:03:48 -07:00
Laureηt fe92fd93e5 fix adaptive_avgmax_pool.py
remove extra whitespace in `SelectAdaptivePool2d`'s `__repr__`
2023-10-29 23:03:36 -07:00
Patrick Labatut 97450d618a Update DINOv2 license to Apache 2.0 2023-10-27 09:12:51 -07:00
mjamroz 7a6369156f avoid getting undefined 2023-10-22 21:36:23 -07:00
Tush9905 89ba0da910 Fixed Typos
Fixed the typos in helpers.py and CONTRIBUTING.md
2023-10-21 21:46:31 -07:00
Ross Wightman 9afe0bb78e Update README, prep for 0.9.8 release 2023-10-20 13:57:23 -07:00
pUmpKin-Co 8556462a18 fix doc typo in resnetv2 2023-10-20 11:56:50 -07:00
Ross Wightman 462fb3ec9f Push new repvit weights to hub, tweak tag names 2023-10-20 11:49:29 -07:00
Ross Wightman 5309424d5e Merge branch 'main' of https://github.com/jameslahm/pytorch-image-models into jameslahm-main 2023-10-20 11:08:12 -07:00
Ross Wightman d3ebdcfd93 Disable strict load when siglip vit pooling removed 2023-10-19 12:03:40 -07:00
Ross Wightman e728f3efdb Cleanup ijepa models, they're just gap (global-avg-pool) models w/o heads. fc-norm conversion was wrong, gigantic should have been giant 2023-10-17 15:44:46 -07:00
Ross Wightman 49a459e8f1 Merge remote-tracking branch 'upstream/main' into vit_siglip_and_reg 2023-10-17 09:36:48 -07:00
Ross Wightman a58f9162d7 Missed __init__.py update for attention pooling layer add 2023-10-17 09:28:21 -07:00
Ross Wightman 59b622233b Change ijepa names, add pretrain cfg for reg experiments 2023-10-17 07:16:17 -07:00
Ross Wightman 71365165a2 Add SigLIP weights 2023-10-16 23:26:08 -07:00
Ross Wightman 42daa3b497 Add full set of SigLIP models 2023-10-10 22:15:45 -07:00
方曦 4aa166de9c Add hgnet ssld weights 2023-10-09 19:14:10 +08:00
方曦 159e91605c Add PP-HGNet and PP-HGNetv2 models 2023-10-09 19:04:58 +08:00
lucapericlp 7ce65a83a2 Removing unused self.drop 2023-10-05 11:20:57 -07:00
Yassine 884ef88818 fix all SDPA dropouts 2023-10-05 08:58:41 -07:00
Yassine b500cae4c5 fastvit: don't dropout in eval 2023-10-05 08:58:41 -07:00
Ross Wightman b9dde58076 Fixup attention pooling in siglip vit support 2023-10-02 11:44:12 -07:00
jameslahm f061b539d7 Update RepViT models 2023-10-01 14:00:53 +08:00
Ross Wightman 99cfd6702f Use global pool arg to select attention pooling in head 2023-09-30 16:16:21 -07:00
Ross Wightman 82cc53237e Working on support for siglip (w/ attn pool) vit backbone, and adding registers (reg tokens) 2023-09-30 16:03:01 -07:00
Ross Wightman 054c763fca Bump to dev 0.9.8 version 2023-09-27 10:27:47 -07:00
Ross Wightman 6bae514656 Add pretrained patch embed resizing to swin 2023-09-27 10:27:28 -07:00
Yassine 5c504b4ded flip these two 2023-09-27 10:24:12 -07:00
Yassine 8ba2038e6b fast_vit: propagate act_layer argument 2023-09-27 10:24:12 -07:00
Nguyen Nhat Hoang 95ba90157f Update tiny_vit.py to fix bug 2023-09-23 10:05:52 -07:00
belfner 245ad4f41a Added missing RuntimeError to builder functions of models that do not currently support feature extraction 2023-09-19 08:19:14 -07:00
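The commit above makes builder functions raise a `RuntimeError` for models that don't yet support feature extraction. As a minimal, hypothetical sketch of that fail-loud builder pattern (the function name and return value are invented for illustration, not timm's API):

```python
def build_model(variant, features_only=False):
    """Hypothetical model builder: variants without a feature-extraction
    path should fail loudly rather than silently return a full classifier."""
    if features_only:
        raise RuntimeError(
            f"features_only is not currently supported for {variant} models."
        )
    # placeholder for actual model construction
    return {"variant": variant}
```

An explicit error at build time is preferable to handing callers a model whose outputs don't match the `features_only=True` contract.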
Thorsten Hempel d4c21b95f4 Update repghost.py 2023-09-15 11:41:56 -07:00
Thorsten Hempel 7eb7d13845 Fix in_features for linear layer in reset_classifier. 2023-09-13 09:29:38 -07:00
Ross Wightman 34ae2861f4 Version 0.9.7 2023-09-01 15:06:55 -07:00
Ross Wightman 0d124ffd4f Update README. Fine-grained layer-wise lr decay working for tiny_vit and both efficientvits. Minor fixes. 2023-09-01 15:05:29 -07:00
Ross Wightman 2f0fbb59b3 TinyViT weights on HF hub 2023-09-01 11:05:56 -07:00
Ross Wightman 507cb08acf TinyVitBlock needs adding as leaf for FX now, tweak a few dim names 2023-09-01 11:05:56 -07:00
Ross Wightman 9caf32b93f Move levit style pos bias resize with other rel pos bias utils 2023-09-01 11:05:56 -07:00
Ross Wightman 63417b438f TinyViT adjustments
* keep most of net in BCHW layout, performance appears same, can remove static resolution attribs and features easier to use
* add F.sdpa, decent gains in pt 2.1
* tweak crop pct based on eval
2023-09-01 11:05:56 -07:00
方曦 39aa44b192 Fixing tinyvit trace issue 2023-09-01 11:05:56 -07:00
方曦 aea3b9c854 Fixing tinyvit input_size issue 2023-09-01 11:05:56 -07:00
方曦 fabc4e5bcd Fixing tinyvit torchscript issue 2023-09-01 11:05:56 -07:00
方曦 bae949f830 fix attention_bias_cache in tinyvit 2023-09-01 11:05:56 -07:00
方曦 170a5b6e27 add tinyvit 2023-09-01 11:05:56 -07:00
Ross Wightman 983310d6a2 Fix #1935, torch.amp.autocast -> torch.autocast (namespace issue for 1.10 pt compat) 2023-08-30 15:03:28 -07:00
Ross Wightman f544d4916c Version 0.9.6 for release 2023-08-29 09:14:13 -07:00
Ross Wightman c8b2f28096 Fix a few typos, fix fastvit proj_drop, add code link 2023-08-28 21:26:29 -07:00
Ross Wightman fc5d705b83 dynamic_size -> dynamic_img_size, add dynamic_img_pad for padding option 2023-08-27 15:58:35 -07:00
Ross Wightman 1f4512fca3 Support dynamic_resize in eva.py models 2023-08-27 15:58:35 -07:00
Ross Wightman ea3519a5f0 Fix dynamic_resize for deit models (distilled or no_embed_cls) and vit w/o class tokens 2023-08-27 15:58:35 -07:00
Ross Wightman 4d8ecde6cc Fix torchscript for vit-hybrid dynamic_resize 2023-08-27 15:58:35 -07:00
Ross Wightman fdd8c7c2da Initial impl of dynamic resize for existing vit models (incl vit-resnet hybrids) 2023-08-27 15:58:35 -07:00
Ross Wightman 5d599a6a10 RepViT weights on HF hub 2023-08-25 10:39:02 -07:00
Ross Wightman 56c285445c Wrong pool size for 384x384 inception_next_base 2023-08-24 18:31:44 -07:00
Ross Wightman af9f56f3bf inception_next dilation support, weights on hf hub, classifier reset / global pool / no head fixes 2023-08-24 18:31:44 -07:00
Ross Wightman 2d33b9df6c Add features_only support to inception_next 2023-08-24 18:31:44 -07:00
Ross Wightman 3d8d7450ad InceptionNeXt using timm builder, more cleanup 2023-08-24 18:31:44 -07:00
Ross Wightman f4cf9775c3 Adding InceptionNeXt 2023-08-24 18:31:44 -07:00
Ross Wightman d2e3c09ce1 Update version.py 2023-08-23 22:51:56 -07:00
Ross Wightman d6c348765a Fix first_conv for mobileone and fastvit 2023-08-23 22:50:37 -07:00
Ross Wightman 16334e4bec Fix two fastvit issues 2023-08-23 22:50:37 -07:00
Ross Wightman 5242ba6edc MobileOne and FastViT weights on HF hub, more code cleanup and tweaks, features_only working. Add reparam flag to validate and benchmark, support reparm of all models with fuse(), reparameterize() or switch_to_deploy() methods on modules 2023-08-23 22:50:37 -07:00
Ross Wightman 40dbaafef5 Stagify FastViT w/ downsample to top of stage 2023-08-23 22:50:37 -07:00
Ross Wightman 8470eb1cb5 More fastvit & mobileone updates, ready for weight upload 2023-08-23 22:50:37 -07:00
Ross Wightman 8474508d07 More work on FastViT, use own impl of MobileOne, validation working with remapped weight, more refactor TODO 2023-08-23 22:50:37 -07:00
Ross Wightman c7a20cec13 Begin adding FastViT 2023-08-23 22:50:37 -07:00
Ross Wightman 7fd3674d0d Add mobileone and update repvgg 2023-08-23 22:50:37 -07:00
Ross Wightman 3055411c1b Fix samvit bug, add F.sdpa support and ROPE option (#1920)
* Fix a bug I introduced in samvit, add F.sdpa support and ROPE option to samvit, neck is LayerNorm if not used and standard classifier used

* Add attn dropout to F.sdpa

* Fix fx trace for sam vit

* Fixing torchscript issues in samvit

* Another torchscript fix

* samvit head fc name fix
2023-08-20 21:22:59 -07:00
Ross Wightman 300f54a96f Another efficientvit (mit) tweak, fix torchscript/fx conflict with autocast disable 2023-08-20 15:07:25 -07:00
Ross Wightman dc18cda2e7 efficientvit (mit) msa attention q/k/v ops need to be in float32 to train w/o NaN 2023-08-20 11:49:36 -07:00
Ross Wightman be4e0d8f76 Update attrib comment to include v2 2023-08-19 23:39:09 -07:00
Ross Wightman 126a58e563 Combine ghostnetv2 with ghostnet, reduce redundancy, add weights to hf hub. 2023-08-19 23:33:43 -07:00
Ross Wightman 3f320a9e57 Merge branch 'Add-GhostNetV2' of github.com:yehuitang/pytorch-image-models into yehuitang-Add-GhostNetV2 2023-08-19 22:07:54 -07:00
Ross Wightman 7c2728c6fe Merge pull request #1919 from ChengpengChen/main
Add RepGhost models and weights
2023-08-19 16:26:45 -07:00
Ross Wightman 69e0ca2e36 Weights on hf hub, bicubic yields slightly better eval 2023-08-19 16:25:45 -07:00
Ross Wightman b8011565bd Merge pull request #1894 from seefun/master
add two different EfficientViT models
2023-08-19 09:24:14 -07:00
Ross Wightman 7d7589e8da Fixing efficient_vit torchscript, fx, default_cfg issues 2023-08-18 23:23:11 -07:00
Ross Wightman 58ea1c02c4 Add fixed_input_size flag to msra efficient_vit 2023-08-18 16:48:17 -07:00
Ross Wightman c28324a150 Update efficient_vit (msra), hf hub weights 2023-08-18 16:45:37 -07:00
Ross Wightman e700a32626 Cleanup of efficient_vit (mit), tweak eps for better AMP behaviour, formatting/cleanup, weights on hf hub 2023-08-18 16:06:07 -07:00
方曦 00f670fa69 fix bug in ci for efficientvits 2023-08-17 14:40:17 +08:00
Chengpeng Chen e7f97cb5ce Fix typos RepGhost models 2023-08-16 14:27:45 +08:00
Chengpeng Chen d1d0193615 Add RepGhost models and weights 2023-08-16 11:54:53 +08:00
Minseo Kang 7938f28542 Fix typo in efficientformer_v2 2023-08-16 03:29:01 +09:00
yehuitang b407794e3a Add GhostNetV2 2023-08-13 18:20:27 +08:00
yehuitang fc865282e5 Add ghostnetv2.py 2023-08-13 18:16:26 +08:00
Ross Wightman da75cdd212 Merge pull request #1900 from huggingface/swin_maxvit_resize
Add support for resizing swin transformer, maxvit, coatnet at creation time
2023-08-11 15:05:28 -07:00
Ross Wightman 78a04a0e7d Merge pull request #1911 from dsuess/1910-fixes-batchnormact-fx
Register norm_act layers as leaf modules
2023-08-11 14:34:16 -07:00
Yonghye Kwon 2048f6f20f set self.num_features to neck_chans if neck_chans > 0 2023-08-11 13:45:06 +09:00
Ross Wightman 3a44e6c602 Fix #1912 CoaT model not loading w/ return_interm_layers 2023-08-10 11:15:58 -07:00
Daniel Suess 986de90360 Register norm_act layers as leaf modules 2023-08-10 15:37:26 +10:00
Ross Wightman c692715388 Some RepVit tweaks
* add head dropout to RepVit as all models have that arg
* default train to non-distilled head output via distilled_training flag (set_distilled_training) so fine-tune works by default w/o distillation script
* camel case naming tweaks to match other models
2023-08-09 12:41:12 -07:00
Ross Wightman c153cd4a3e Add more advanced interpolation method from BEiT and support non-square window & image size adaptation for
* beit/beit-v2
* maxxvit/coatnet
* swin transformer
And non-square windows for swin-v2
2023-08-08 16:41:16 -07:00
alec.tu bb2b6b5f09 fix num_classes not found 2023-08-07 15:16:03 +08:00
Ross Wightman 1dab536cb1 Fix torch.fx for swin padding change 2023-08-05 13:09:55 -07:00
Ross Wightman 7c0f492dbb Fix type annotation for torchscript 2023-08-04 23:03:52 -07:00
Ross Wightman 7790ea709b Add support for resizing swin transformer img_size and window_size on init and load from pretrained weights. Add support for non-square window_size to both swin v1/v2 2023-08-04 22:10:46 -07:00
Ross Wightman 81089b10a2 Remove unnecessary LongTensor in EfficientFormer. Possibly maybe fix #1878 2023-08-03 16:38:53 -07:00
Ross Wightman 4224529ebe Version 0.9.5 prep for release. README update 2023-08-03 15:16:46 -07:00
Ross Wightman d138a9bf88 Add gluon hrnet small weights, fix #1895 2023-08-03 12:15:04 -07:00
Ross Wightman 76d166981d Fix missing norm call in Mlp forward (not used by default, but can be enabled for normformer MLP scale). Fix #1851 fix #1852 2023-08-03 11:36:30 -07:00
Ross Wightman 8e4480e4b6 Patch and pos embed resample done in float32 always (cast to float and back). Fix #1811 2023-08-03 11:32:17 -07:00
Ross Wightman 150356c493 Fix unfortunate selecsls case bug caused by aggressive IDE rename 2023-08-03 10:37:06 -07:00
Ross Wightman 6e8c53d0d3 Comment out beit url, no longer valid as now require long query string, leave for reference, must use HF hub now. 2023-08-03 10:00:46 -07:00
方曦 a56e2bbf19 fix efficientvit_msra pretrained load 2023-08-03 18:44:38 +08:00
方曦 e94c60b546 efficientvit_msra refactor 2023-08-03 17:45:50 +08:00
方曦 047bab6ab2 efficientvit_mit stage refactor 2023-08-03 14:59:35 +08:00
方曦 e8fb866ccf fix efficientvit_msra pool 2023-08-02 14:40:01 +08:00
方曦 43443f64eb fix efficientvits 2023-08-02 14:12:37 +08:00
方曦 82d1e99e1a add efficientvit(msra) 2023-08-01 18:51:08 +08:00
方曦 b91a77fab7 add EfficientVit (MIT) 2023-08-01 12:42:21 +08:00
Sepehr Sameni 40a518c194 use float in resample_abs_pos_embed_nhwc
since F.interpolate doesn't always support BFloat16
2023-07-28 16:01:42 -07:00
Ross Wightman 8cb0ddac45 Update README, version 0.9.4dev0 2023-07-27 17:07:31 -07:00
Ross Wightman a9d0615f42 Fix ijepa vit issue with 448 model, minor formatting fixes 2023-07-26 20:46:27 -07:00
alec.tu 942726db31 import lion in __init__.py 2023-07-27 09:26:57 +08:00
Ross Wightman 5874d1bfc7 Merge pull request #1876 from jameslahm/main
Add RepViT models
2023-07-26 14:38:41 -07:00
Ross Wightman b10310cc27 Add proper pool size for new resnexts 2023-07-26 14:36:03 -07:00
Ross Wightman b71d60cdb7 Two small fixes, num_classes in base class, add model tag 2023-07-26 13:18:49 -07:00
Ross Wightman 3561f8e885 Add seresnextaa201d_32x8d 12k and 1k weights 2023-07-26 13:17:05 -07:00
jameslahm 3318e7614d Add RepViT models 2023-07-21 14:56:53 +08:00
Ruslan Baikulov 158bf129c4 Replace deprecated NumPy aliases of builtin types 2023-07-03 22:24:25 +03:00
Ross Wightman c241081251 Merge pull request #1850 from huggingface/effnet_improve_features_only
Support other features only modes for EfficientNet. Fix #1848 fix #1849
2023-06-23 22:56:08 -07:00
Ross Wightman 47517dbefd Clean more feature extract issues
* EfficientNet/MobileNetV3/HRNetFeatures cls and FX mode support negative index
* MobileNetV3 allows feature_cfg mode to bypass MobileNetV3Features
2023-06-14 14:46:22 -07:00
Ross Wightman a09c88ed0f Support other features only modes for EfficientNet 2023-06-14 12:57:39 -07:00
SeeFun c3f24a5ae5 add ViT weight from I-JEPA pretrain 2023-06-14 22:30:31 +08:00
Ross Wightman 2d597b126d Missed extra nadam algo step for capturable path 2023-06-13 20:51:31 -07:00
Ross Wightman 4790c0fa16 Missed nadamw.py 2023-06-13 20:45:58 -07:00
Ross Wightman dab0360e00 Add NadamW based on mlcommons algorithm, added multi-tensor step 2023-06-13 20:45:17 -07:00
Ross Wightman 700aebcdc4 Fix Pytorch 2.0 breakage for Lookahead optimizer adapter 2023-06-02 08:39:07 -07:00
Lengyue c308dbc6f2 update dinov2 layerscale init values 2023-05-24 12:20:17 -04:00
Ross Wightman 7cea88e2c4 Pop eps for lion optimizer 2023-05-21 15:20:03 -07:00
Ross Wightman e9373b1b92 Cleanup before samvit merge. Resize abs posembed on the fly, undo some line-wraps, remove redundant unbind, fix HF hub weight load 2023-05-18 16:43:48 -07:00
方曦 c1c6eeb909 fix loading pretrained weight for samvit 2023-05-18 08:49:29 +08:00
方曦 15de561f2c fix unit test for samvit 2023-05-17 12:51:12 +08:00
方曦 ea1f52df3e add ViT for Segment-Anything Model 2023-05-17 11:39:29 +08:00
Ross Wightman 960202cfcc Dev version 0.9.3 for main 2023-05-16 11:28:00 -07:00
Ross Wightman c5d3ee47f3 Add B/16 datacompxl CLIP weights 2023-05-16 11:27:20 -07:00
Ross Wightman 3d05c0e86f Version 0.9.2 2023-05-14 08:03:04 -07:00
Philip Keller fc77e9ecc5 Update hub.py
fixed import of _hub modules
2023-05-12 21:48:46 +02:00
Ross Wightman cc77096350 Version 0.9.1 2023-05-12 09:47:47 -07:00
Ross Wightman f744bda994 use torch.jit.Final instead of Final for beit, eva 2023-05-12 09:12:14 -07:00
Ross Wightman 2e99bcaedd Update README, prep for version 0.9.0 release 2023-05-11 15:22:50 -07:00
Ross Wightman 3eaf729f3f F.sdpa for visformer fails w/o contiguous on qkv, make experimental 2023-05-11 11:37:37 -07:00
Ross Wightman cf1884bfeb Add 21k maxvit tf weights 2023-05-10 18:23:32 -07:00
Ross Wightman 6c2edf4d74 Missed hub_id entries for byoanet models 2023-05-10 15:58:55 -07:00
Ross Wightman cf101b0097 Version 0.8.23dev0 and README update 2023-05-10 14:41:22 -07:00