Ross Wightman
9b9a356a04
Add forward_intermediates support for xcit, cait, and volo.
2024-04-29 16:30:45 -07:00
Ross Wightman
ef147fd2fb
Add forward_intermediates API to Hiera for features_only=True support
2024-04-21 11:30:41 -07:00
Ross Wightman
d88bed6535
Bit more Hiera fiddling
2024-04-21 09:36:57 -07:00
Ross Wightman
8a54d2a930
WIP Hiera implementation. Fix #2083. Trying to get image size adaptation to work.
2024-04-20 09:47:17 -07:00
Ross Wightman
d6b95520f1
Merge pull request #2136 from huggingface/vit_features_only
...
Exploring vit features_only via new forward_intermediates() API, inspired by #2131
2024-04-11 08:38:20 -07:00
Ross Wightman
4b2565e4cb
More forward_intermediates() / FeatureGetterNet work
...
* include relpos vit
* refactor reduction / size calcs so hybrid vits work and dynamic_img_size works
* fix -ve feature indices when pruning
* fix mvitv2 w/ class token
* refine naming
* add tests
2024-04-10 15:11:34 -07:00
Ross Wightman
ef9c6fb846
forward_head(): consistent pre_logits handling to reduce the likelihood of issues when people manually replace the .head module
2024-04-09 21:54:59 -07:00
Ross Wightman
679daef76a
More forward_intermediates() & features_only work
...
* forward_intermediates() added to beit, deit, eva, mvitv2, twins, vit, vit_sam
* add features_only to forward intermediates to allow just intermediate features
* fix #2060
* fix #1374
* fix #657
2024-04-09 21:29:16 -07:00
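A minimal sketch of how the forward_intermediates() API from the commit above might be exercised; the keyword names (indices, intermediates_only) and the features_only pass-through are assumptions inferred from these commit messages, not a documented signature.

```python
import torch
import timm

# Sketch only: kwarg names are assumptions based on the commit descriptions.
model = timm.create_model('vit_base_patch16_224', pretrained=False)
x = torch.randn(1, 3, 224, 224)

# Collect intermediate features from the last two blocks, skipping the final head path.
feats = model.forward_intermediates(x, indices=2, intermediates_only=True)
print([f.shape for f in feats])

# The same machinery presumably backs features_only=True for ViT-family models.
feat_model = timm.create_model('vit_base_patch16_224', pretrained=False, features_only=True)
print([f.shape for f in feat_model(x)])
```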
Ross Wightman
17b892f703
Fix #2139, disable strict weight loading when head changes from classification
2024-04-09 08:41:37 -07:00
Ross Wightman
5fdc0b4e93
Exploring vit features_only using get_intermediate_layers() as per #2131
2024-04-07 11:24:45 -07:00
Ross Wightman
34b41b143c
Fiddling with efficientnet x/h defs, is it worth adding & training any?
2024-03-22 17:55:02 -07:00
Ross Wightman
c559c3911f
Improve vit conversions. OpenAI convert pass through main convert for patch & pos resize. Fix #2120
2024-03-21 10:00:43 -07:00
Ross Wightman
256cf19148
Rename tinyclip models to fit existing 'clip' variants, use consistently mapped OpenCLIP compatible checkpoint on hf hub
2024-03-20 15:21:46 -07:00
Thien Tran
1a1d07d479
add other tinyclip
2024-03-19 07:27:09 +08:00
Thien Tran
dfffffac55
add tinyclip 8m
2024-03-19 07:02:17 +08:00
Ross Wightman
6ccb7d6a7c
Merge pull request #2111 from jamesljlster/enhance_vit_get_intermediate_layers
...
Vision Transformer (ViT) get_intermediate_layers: enhanced to support dynamic image size and to skip unused blocks, saving compute
2024-03-18 13:41:18 -07:00
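A rough illustration of the enhanced get_intermediate_layers() call from the PR above; the DINO-style signature (n, reshape, norm) and the non-square input are assumptions drawn from the PR description.

```python
import torch
import timm

# Sketch assuming a DINO-style get_intermediate_layers() signature.
model = timm.create_model('vit_small_patch16_224', pretrained=False, dynamic_img_size=True)

x = torch.randn(1, 3, 192, 256)  # non-default, non-square size per the dynamic image size support
outs = model.get_intermediate_layers(x, n=4, reshape=True, norm=True)
print([o.shape for o in outs])
```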
Cheng-Ling Lai
db06b56d34
Save computational cost in get_intermediate_layers() by skipping unused blocks
2024-03-17 21:34:06 +08:00
Cheng-Ling Lai
4731e4efc4
Modified ViT get_intermediate_layers() to support dynamic image size
2024-03-16 23:07:21 +08:00
SmilingWolf
59cb0be595
SwinV2: add configurable act_layer argument
...
Defaults to "gelu", but makes it possible to pass "gelu_tanh".
Makes it easier to port weights from JAX/Flax, where the tanh
approximation is the default.
2024-03-05 22:04:17 +01:00
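Presumably the new act_layer argument can be passed through create_model like other constructor kwargs; a small sketch (treat the model name and the 'gelu_tanh' string as illustrative):

```python
import timm

# Sketch: select the tanh-approximated GELU, matching the JAX/Flax default when porting weights.
model = timm.create_model('swinv2_tiny_window8_256', pretrained=False, act_layer='gelu_tanh')
```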
Ross Wightman
31e0dc0a5d
Tweak hgnet before merge
2024-02-12 15:00:32 -08:00
Ross Wightman
3e03491e49
Merge branch 'master' of https://github.com/seefun/pytorch-image-models into seefun-master
2024-02-12 14:59:54 -08:00
Ross Wightman
59239d9df5
Cleanup imports for vit relpos
2024-02-10 21:40:57 -08:00
Ross Wightman
ac1b08deb6
fix_init on vit & relpos vit
2024-02-10 20:15:37 -08:00
Ross Wightman
935950cc11
Fix F.sdpa attn drop prob
2024-02-10 20:14:47 -08:00
Ross Wightman
0737cf231d
Add Next-ViT
2024-02-10 17:05:16 -08:00
Ross Wightman
d6c2cc91af
Make NormMlpClassifier head reset args consistent with ClassifierHead
2024-02-10 16:25:33 -08:00
Ross Wightman
87fec3dc14
Update experimental vit model configs
2024-02-10 16:05:58 -08:00
Ross Wightman
7d3c2dc993
Add group_matcher for DaViT
2024-02-10 14:58:45 -08:00
Ross Wightman
88889de923
Fix meshgrid deprecation warnings and backward compat with explicit 'ndgrid' and 'meshgrid' fn w/o indexing arg
2024-01-27 13:48:33 -08:00
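A small sketch of the helper described in the commit above; the import location is an assumption (the commit only says explicit 'ndgrid' / 'meshgrid' functions were added).

```python
import torch
from timm.layers import ndgrid  # assumed export location, per the commit description

# ndgrid presumably wraps torch.meshgrid(..., indexing='ij') to silence the deprecation warning.
yy, xx = ndgrid(torch.arange(4), torch.arange(6))
print(yy.shape, xx.shape)  # expected: torch.Size([4, 6]) torch.Size([4, 6])
```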
Ross Wightman
d4386219c6
Improve type handling for arange & rel pos embeds, keep calculations in float32 until application (may change to apply in float32 in future). Prevent arange type hijacking by DeepSpeed Zero
2024-01-26 16:35:51 -08:00
Ross Wightman
3234daf783
Add missing deprecation mapping for a densenet and xcit model. Fix #2086. Tweak xcit pos embed use of arange for better low prec safety.
2024-01-24 22:04:04 -08:00
Li zhuoqun
53a4888328
Add droppath and type hint to Xception.
2024-01-19 11:15:47 -08:00
方曦
9dbea3bef6
fix cls head in hgnet
2023-12-27 21:26:26 +08:00
SeeFun
56ae8b906d
fix reset head in hgnet
2023-12-27 20:11:29 +08:00
SeeFun
6862c9850a
fix backward in hgnet
2023-12-27 16:49:37 +08:00
SeeFun
6cd28bc5c2
Merge branch 'huggingface:main' into master
2023-12-27 16:43:37 +08:00
Ross Wightman
f2fdd97e9f
Add parsable json results output for train.py, tweak --pretrained-path to force head adaptation
2023-12-22 11:18:25 -08:00
LR
e0079c92da
Update eva.py (#2058)
...
* Update eva.py
When the argument class_token=False, self.cls_token is None.
Prevents error from attempting trunc_normal_ on None:
AttributeError: 'NoneType' object has no attribute 'uniform_'
* Update eva.py
fix
2023-12-16 15:10:45 -08:00
Li zhuoqun
7da34a999a
add type annotations in the code of swin_transformer_v2
2023-12-15 09:31:25 -08:00
Fredo Guan
bbe798317f
Update EdgeNeXt to use ClassifierHead as per ConvNeXt ( #2051 )
...
* Update edgenext.py
2023-12-11 12:17:19 -08:00
Ross Wightman
60b170b200
Add --pretrained-path arg to train script to allow passing local checkpoint as pretrained. Add missing/unexpected keys log.
2023-12-11 12:10:29 -08:00
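The commit above adds a --pretrained-path flag to train.py for using a local checkpoint as the pretrained source. As a rough Python-side sketch of the same idea (the pretrained_cfg_overlay usage is an assumption about how a local file can stand in for pretrained weights, not a description of the train script internals; the path is hypothetical):

```python
import timm

# Sketch: treat a local checkpoint as the pretrained weight source; the head is
# adapted (and missing/unexpected keys logged) when num_classes differs.
model = timm.create_model(
    'resnet50', pretrained=True, num_classes=10,
    pretrained_cfg_overlay=dict(file='./local_checkpoint.pth'),  # hypothetical local path
)
```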
Fredo Guan
2597ce2860
Update davit.py
2023-12-11 11:13:04 -08:00
akiyuki ishikawa
2bd043ce5d
fix doc position
2023-12-05 12:00:51 -08:00
akiyuki ishikawa
4f2e1bf4cb
Add missing docs in SwinTransformerStage
2023-12-05 12:00:51 -08:00
Ross Wightman
cd8d9d9ff3
Add missing hf hub entries for mvitv2
2023-11-26 21:06:39 -08:00
Ross Wightman
b996c1a0f5
A few more missed hf hub entries
2023-11-23 21:48:14 -08:00
Ross Wightman
89ec91aece
Add missing hf_hub entry for mobilenetv3_rw
2023-11-23 12:44:59 -08:00
Dillon Laird
63ee54853c
fixed intermediate output indices
2023-11-22 16:32:41 -08:00
Ross Wightman
fa06f6c481
Merge branch 'seefun-efficientvit'
2023-11-21 14:06:27 -08:00
Ross Wightman
c6b0c98963
Upload weights to hub, tweak crop_pct, comment out SAM EfficientViTs for now (v2 weights coming)
2023-11-21 14:05:04 -08:00
Ross Wightman
ada145b016
Literal use w/ python < 3.8 requires typing_extensions, catch import error instead of checking sys ver
2023-11-21 09:48:03 -08:00
Ross Wightman
dfaab97d20
More consistency in model arg/kwarg merge handling
2023-11-21 09:48:03 -08:00
Ross Wightman
3775e4984f
Merge branch 'efficientvit' of github.com:seefun/pytorch-image-models into seefun-efficientvit
2023-11-20 16:21:38 -08:00
Ross Wightman
dfb8658100
Fix a few missed model deprecations and one missed pretrained cfg
2023-11-20 12:41:49 -08:00
Ross Wightman
a604011935
Add support for passing model args via hf hub config
2023-11-19 15:16:01 -08:00
方曦
c9d093a58e
update norm eps for efficientvit large
2023-11-18 17:46:47 +08:00
Laureηt
21647c0a0c
Add types to vision_transformers.py
2023-11-17 16:06:06 -08:00
方曦
87ba43a9bc
add efficientvit large series
2023-11-17 13:58:46 +08:00
Ross Wightman
7c685a4ef3
Fix openai quickgelu loading and add missing orig_in21k vit weights and remove zero'd classifier w/ matching hub update
2023-11-16 19:16:28 -08:00
LittleNyima
ef72c3cd47
Add warnings for duplicate registry names
2023-11-08 10:18:59 -08:00
Ross Wightman
d3e83a190f
Add in12k fine-tuned convnext_xxlarge
2023-11-03 14:35:01 -07:00
Ross Wightman
dcfdba1f5f
Make quickgelu models appear in listing
2023-11-03 11:01:41 -07:00
Ross Wightman
96bd162ddb
Add cc-by-nc-4.0 license for metaclip, make note in quickgelu model def about pretrained_cfg mapping
2023-11-03 11:01:41 -07:00
Ross Wightman
6894ec7edc
Forgot about datcomp b32 models
2023-11-03 11:01:41 -07:00
Ross Wightman
a2e4a4c148
Add quickgelu vit clip variants, simplify get_norm_layer and allow string args in vit norm/act. Add metaclip CLIP weights
2023-11-03 11:01:41 -07:00
Ross Wightman
c55bc41a42
DFN CLIP ViT support
2023-10-31 12:16:21 -07:00
a-r-r-o-w
d5f1525334
include suggestions from review
...
Co-Authored-By: Ross Wightman <rwightman@gmail.com>
2023-10-30 13:47:54 -07:00
a-r-r-o-w
5f14bdd564
include typing suggestions by @rwightman
2023-10-30 13:47:54 -07:00
a-r-r-o-w
05b0aaca51
improvement: add typehints and docs to timm/models/resnet.py
2023-10-30 13:47:54 -07:00
a-r-r-o-w
c2fe0a2268
improvement: add typehints and docs to timm/models/mobilenetv3.py
2023-10-30 13:47:54 -07:00
Laureηt
d023154bb5
Update swin_transformer.py
...
make `SwinTransformer`'s `patch_embed` customizable through the constructor
2023-10-30 13:47:14 -07:00
Ross Wightman
68a121402f
Added hub weights for dinov2 register models
2023-10-29 23:03:48 -07:00
Ross Wightman
3f02392488
Add DINOv2 models with register tokens. Convert pos embed to non-overlapping for consistency.
2023-10-29 23:03:48 -07:00
Patrick Labatut
97450d618a
Update DINOv2 license to Apache 2.0
2023-10-27 09:12:51 -07:00
mjamroz
7a6369156f
avoid getting undefined
2023-10-22 21:36:23 -07:00
pUmpKin-Co
8556462a18
fix doc typo in resnetv2
2023-10-20 11:56:50 -07:00
Ross Wightman
462fb3ec9f
Push new repvit weights to hub, tweak tag names
2023-10-20 11:49:29 -07:00
Ross Wightman
5309424d5e
Merge branch 'main' of https://github.com/jameslahm/pytorch-image-models into jameslahm-main
2023-10-20 11:08:12 -07:00
Ross Wightman
d3ebdcfd93
Disable strict load when siglip vit pooling removed
2023-10-19 12:03:40 -07:00
Ross Wightman
e728f3efdb
Cleanup ijepa models, they're just gap (global-avg-pool) models w/o heads. fc-norm conversion was wrong, gigantic should have been giant
2023-10-17 15:44:46 -07:00
Ross Wightman
49a459e8f1
Merge remote-tracking branch 'upstream/main' into vit_siglip_and_reg
2023-10-17 09:36:48 -07:00
Ross Wightman
59b622233b
Change ijepa names, add pretrain cfg for reg experiments
2023-10-17 07:16:17 -07:00
Ross Wightman
71365165a2
Add SigLIP weights
2023-10-16 23:26:08 -07:00
Ross Wightman
42daa3b497
Add full set of SigLIP models
2023-10-10 22:15:45 -07:00
方曦
4aa166de9c
Add hgnet ssld weights
2023-10-09 19:14:10 +08:00
方曦
159e91605c
Add PP-HGNet and PP-HGNetv2 models
2023-10-09 19:04:58 +08:00
Yassine
884ef88818
fix all SDPA dropouts
2023-10-05 08:58:41 -07:00
Yassine
b500cae4c5
fastvit: don't dropout in eval
2023-10-05 08:58:41 -07:00
Ross Wightman
b9dde58076
Fixup attention pooling in siglip vit support
2023-10-02 11:44:12 -07:00
jameslahm
f061b539d7
Update RepViT models
2023-10-01 14:00:53 +08:00
Ross Wightman
99cfd6702f
Use global pool arg to select attention pooling in head
2023-09-30 16:16:21 -07:00
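A sketch of selecting the attention-pooled head via the global pool argument mentioned above; the 'map' value is an assumption about the option name used for attention pooling.

```python
import timm

# Sketch: global_pool chooses the head pooling; 'map' is assumed to select attention pooling.
model = timm.create_model('vit_base_patch16_siglip_224', pretrained=False,
                          global_pool='map', num_classes=1000)
```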
Ross Wightman
82cc53237e
Working on support for siglip (w/ attn pool) vit backbone, and adding registers (reg tokens)
2023-09-30 16:03:01 -07:00
Ross Wightman
6bae514656
Add pretrained patch embed resizing to swin
2023-09-27 10:27:28 -07:00
Yassine
5c504b4ded
flip these two
2023-09-27 10:24:12 -07:00
Yassine
8ba2038e6b
fast_vit: propagate act_layer argument
2023-09-27 10:24:12 -07:00
Nguyen Nhat Hoang
95ba90157f
Update tiny_vit.py to fix bug
2023-09-23 10:05:52 -07:00
belfner
245ad4f41a
Added missing RuntimeError to builder functions of models that do not currently support feature extraction
2023-09-19 08:19:14 -07:00
Thorsten Hempel
d4c21b95f4
Update repghost.py
2023-09-15 11:41:56 -07:00
Thorsten Hempel
7eb7d13845
Fix in_features for linear layer in reset_classifier.
2023-09-13 09:29:38 -07:00
Ross Wightman
0d124ffd4f
Update README. Fine-grained layer-wise lr decay working for tiny_vit and both efficientvits. Minor fixes.
2023-09-01 15:05:29 -07:00
Ross Wightman
2f0fbb59b3
TinyViT weights on HF hub
2023-09-01 11:05:56 -07:00
Ross Wightman
507cb08acf
TinyVitBlock needs adding as leaf for FX now, tweak a few dim names
2023-09-01 11:05:56 -07:00
Ross Wightman
9caf32b93f
Move levit style pos bias resize with other rel pos bias utils
2023-09-01 11:05:56 -07:00
Ross Wightman
63417b438f
TinyViT adjustments
...
* keep most of net in BCHW layout, performance appears the same, static resolution attribs can be removed and features are easier to use
* add F.sdpa, decent gains in pt 2.1
* tweak crop pct based on eval
2023-09-01 11:05:56 -07:00
方曦
39aa44b192
Fixing tinyvit trace issue
2023-09-01 11:05:56 -07:00
方曦
aea3b9c854
Fixing tinyvit input_size issue
2023-09-01 11:05:56 -07:00
方曦
fabc4e5bcd
Fixing tinyvit torchscript issue
2023-09-01 11:05:56 -07:00
方曦
bae949f830
fix attention_bias_cache in tinyvit
2023-09-01 11:05:56 -07:00
方曦
170a5b6e27
add tinyvit
2023-09-01 11:05:56 -07:00
Ross Wightman
983310d6a2
Fix #1935 , torch.amp.autocast -> torch.autocast (namespace issue for 1.10 pt compat)
2023-08-30 15:03:28 -07:00
Ross Wightman
c8b2f28096
Fix a few typos, fix fastvit proj_drop, add code link
2023-08-28 21:26:29 -07:00
Ross Wightman
fc5d705b83
dynamic_size -> dynamic_img_size, add dynamic_img_pad for padding option
2023-08-27 15:58:35 -07:00
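A minimal sketch of the renamed flags from the commit above, assuming they pass through create_model as model kwargs:

```python
import torch
import timm

# Sketch: dynamic_img_size adapts the pos embed on the fly for new input sizes;
# dynamic_img_pad pads inputs that aren't an even multiple of the patch size.
model = timm.create_model(
    'vit_base_patch16_224', pretrained=False,
    dynamic_img_size=True, dynamic_img_pad=True,
)
out = model(torch.randn(1, 3, 300, 300))  # 300 isn't a multiple of 16, handled via padding
print(out.shape)
```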
Ross Wightman
1f4512fca3
Support dynamic_resize in eva.py models
2023-08-27 15:58:35 -07:00
Ross Wightman
ea3519a5f0
Fix dynamic_resize for deit models (distilled or no_embed_cls) and vit w/o class tokens
2023-08-27 15:58:35 -07:00
Ross Wightman
4d8ecde6cc
Fix torchscript for vit-hybrid dynamic_resize
2023-08-27 15:58:35 -07:00
Ross Wightman
fdd8c7c2da
Initial impl of dynamic resize for existing vit models (incl vit-resnet hybrids)
2023-08-27 15:58:35 -07:00
Ross Wightman
5d599a6a10
RepViT weights on HF hub
2023-08-25 10:39:02 -07:00
Ross Wightman
56c285445c
Wrong pool size for 384x384 inception_next_base
2023-08-24 18:31:44 -07:00
Ross Wightman
af9f56f3bf
inception_next dilation support, weights on hf hub, classifier reset / global pool / no head fixes
2023-08-24 18:31:44 -07:00
Ross Wightman
2d33b9df6c
Add features_only support to inception_next
2023-08-24 18:31:44 -07:00
Ross Wightman
3d8d7450ad
InceptionNeXt using timm builder, more cleanup
2023-08-24 18:31:44 -07:00
Ross Wightman
f4cf9775c3
Adding InceptionNeXt
2023-08-24 18:31:44 -07:00
Ross Wightman
d6c348765a
Fix first_conv for mobileone and fastvit
2023-08-23 22:50:37 -07:00
Ross Wightman
16334e4bec
Fix two fastvit issues
2023-08-23 22:50:37 -07:00
Ross Wightman
5242ba6edc
MobileOne and FastViT weights on HF hub, more code cleanup and tweaks, features_only working. Add reparam flag to validate and benchmark, support reparam of all models with fuse(), reparameterize() or switch_to_deploy() methods on modules
2023-08-23 22:50:37 -07:00
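A rough sketch of the reparameterization behaviour the commit above describes (walk the modules and call whichever fusing hook each one exposes); this is an illustrative helper written against that description, not the actual timm utility.

```python
import copy
import torch
import timm

def reparam_sketch(model: torch.nn.Module) -> torch.nn.Module:
    """Sketch: fuse training-time branches by calling fuse()/reparameterize()/switch_to_deploy()
    on any module that provides one, per the commit description."""
    model = copy.deepcopy(model)
    for module in list(model.modules()):
        for hook in ('fuse', 'reparameterize', 'switch_to_deploy'):
            if hasattr(module, hook):
                getattr(module, hook)()
                break
    return model

model = timm.create_model('mobileone_s1', pretrained=False).eval()
deploy_model = reparam_sketch(model)
```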
Ross Wightman
40dbaafef5
Stagify FastViT w/ downsample at top of stage
2023-08-23 22:50:37 -07:00
Ross Wightman
8470eb1cb5
More fastvit & mobileone updates, ready for weight upload
2023-08-23 22:50:37 -07:00
Ross Wightman
8474508d07
More work on FastViT, use own impl of MobileOne, validation working with remapped weight, more refactor TODO
2023-08-23 22:50:37 -07:00
Ross Wightman
c7a20cec13
Begin adding FastViT
2023-08-23 22:50:37 -07:00
Ross Wightman
7fd3674d0d
Add mobileone and update repvgg
2023-08-23 22:50:37 -07:00
Ross Wightman
3055411c1b
Fix samvit bug, add F.sdpa support and ROPE option (#1920)
...
* Fix a bug I introduced in samvit, add F.sdpa support and ROPE option to samvit, neck is LayerNorm if not used and standard classifier used
* Add attn dropout to F.sdpa
* Fix fx trace for sam vit
* Fixing torchscript issues in samvit
* Another torchscript fix
* samvit head fc name fix
2023-08-20 21:22:59 -07:00
Ross Wightman
300f54a96f
Another efficientvit (mit) tweak, fix torchscript/fx conflict with autocast disable
2023-08-20 15:07:25 -07:00
Ross Wightman
dc18cda2e7
efficientvit (mit) msa attention q/k/v ops need to be in float32 to train w/o NaN
2023-08-20 11:49:36 -07:00
Ross Wightman
be4e0d8f76
Update attrib comment to include v2
2023-08-19 23:39:09 -07:00
Ross Wightman
126a58e563
Combine ghostnetv2 with ghostnet, reduce redundancy, add weights to hf hub.
2023-08-19 23:33:43 -07:00
Ross Wightman
3f320a9e57
Merge branch 'Add-GhostNetV2' of github.com:yehuitang/pytorch-image-models into yehuitang-Add-GhostNetV2
2023-08-19 22:07:54 -07:00
Ross Wightman
7c2728c6fe
Merge pull request #1919 from ChengpengChen/main
...
Add RepGhost models and weights
2023-08-19 16:26:45 -07:00
Ross Wightman
69e0ca2e36
Weights on hf hub, bicubic yields slightly better eval
2023-08-19 16:25:45 -07:00
Ross Wightman
b8011565bd
Merge pull request #1894 from seefun/master
...
add two different EfficientViT models
2023-08-19 09:24:14 -07:00
Ross Wightman
7d7589e8da
Fixing efficient_vit torchscript, fx, default_cfg issues
2023-08-18 23:23:11 -07:00
Ross Wightman
58ea1c02c4
Add fixed_input_size flag to msra efficient_vit
2023-08-18 16:48:17 -07:00
Ross Wightman
c28324a150
Update efficient_vit (msra), hf hub weights
2023-08-18 16:45:37 -07:00
Ross Wightman
e700a32626
Cleanup of efficient_vit (mit), tweak eps for better AMP behaviour, formatting/cleanup, weights on hf hub
2023-08-18 16:06:07 -07:00
方曦
00f670fa69
fix bug in ci for efficientvits
2023-08-17 14:40:17 +08:00
Chengpeng Chen
e7f97cb5ce
Fix typos in RepGhost models
2023-08-16 14:27:45 +08:00
Chengpeng Chen
d1d0193615
Add RepGhost models and weights
2023-08-16 11:54:53 +08:00
Minseo Kang
7938f28542
Fix typo in efficientformer_v2
2023-08-16 03:29:01 +09:00
yehuitang
b407794e3a
Add GhostNetV2
2023-08-13 18:20:27 +08:00
yehuitang
fc865282e5
Add ghostnetv2.py
2023-08-13 18:16:26 +08:00
Ross Wightman
da75cdd212
Merge pull request #1900 from huggingface/swin_maxvit_resize
...
Add support for resizing swin transformer, maxvit, coatnet at creation time
2023-08-11 15:05:28 -07:00
Ross Wightman
78a04a0e7d
Merge pull request #1911 from dsuess/1910-fixes-batchnormact-fx
...
Register norm_act layers as leaf modules
2023-08-11 14:34:16 -07:00
Yonghye Kwon
2048f6f20f
set self.num_features to neck_chans if neck_chans > 0
2023-08-11 13:45:06 +09:00
Ross Wightman
3a44e6c602
Fix #1912 CoaT model not loading w/ return_interm_layers
2023-08-10 11:15:58 -07:00
Daniel Suess
986de90360
Register norm_act layers as leaf modules
2023-08-10 15:37:26 +10:00
Ross Wightman
c692715388
Some RepVit tweaks
...
* add head dropout to RepVit as all models have that arg
* default train to non-distilled head output via distilled_training flag (set_distilled_training) so fine-tune works by default w/o distillation script
* camel case naming tweaks to match other models
2023-08-09 12:41:12 -07:00
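A small sketch of the distilled_training flag mentioned above; the model name is illustrative and the exact behaviour of set_distilled_training() is an assumption based on the commit message.

```python
import timm

# Sketch: RepViT defaults to the non-distilled head output so plain fine-tuning works;
# set_distilled_training(True) presumably re-enables the distilled head output for
# training with a distillation script.
model = timm.create_model('repvit_m1', pretrained=False)
model.set_distilled_training(True)
```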
Ross Wightman
c153cd4a3e
Add more advanced interpolation method from BEiT and support non-square window & image size adaptation for
...
* beit/beit-v2
* maxxvit/coatnet
* swin transformer
And non-square windows for swin-v2
2023-08-08 16:41:16 -07:00
alec.tu
bb2b6b5f09
fix num_classes not found
2023-08-07 15:16:03 +08:00
Ross Wightman
1dab536cb1
Fix torch.fx for swin padding change
2023-08-05 13:09:55 -07:00
Ross Wightman
7c0f492dbb
Fix type annotation for torchscript
2023-08-04 23:03:52 -07:00
Ross Wightman
7790ea709b
Add support for resizing swin transformer img_size and window_size on init and load from pretrained weights. Add support for non-square window_size to both swin v1/v2
2023-08-04 22:10:46 -07:00
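A sketch of the creation-time resizing described above, assuming img_size and window_size pass through create_model to the swin constructor (values chosen so the window divides every stage resolution):

```python
import timm

# Sketch: build a swin model at a new resolution / window size; with pretrained=True the
# checkpoint would presumably be adapted on load per the commit, here we skip the download.
model = timm.create_model(
    'swin_base_patch4_window7_224', pretrained=False,
    img_size=288, window_size=9,
)
```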
Ross Wightman
81089b10a2
Remove unnecessary LongTensor in EfficientFormer. Possibly fixes #1878
2023-08-03 16:38:53 -07:00
Ross Wightman
d138a9bf88
Add gluon hrnet small weights, fix #1895
2023-08-03 12:15:04 -07:00
Ross Wightman
150356c493
Fix unfortunate selecsls case bug caused by aggressive IDE rename
2023-08-03 10:37:06 -07:00
Ross Wightman
6e8c53d0d3
Comment out beit url, no longer valid as it now requires a long query string; leave for reference, must use HF hub now.
2023-08-03 10:00:46 -07:00
方曦
a56e2bbf19
fix efficientvit_msra pretrained load
2023-08-03 18:44:38 +08:00
方曦
e94c60b546
efficientvit_msra refactor
2023-08-03 17:45:50 +08:00
方曦
047bab6ab2
efficientvit_mit stage refactor
2023-08-03 14:59:35 +08:00
方曦
e8fb866ccf
fix efficientvit_msra pool
2023-08-02 14:40:01 +08:00
方曦
43443f64eb
fix efficientvits
2023-08-02 14:12:37 +08:00
方曦
82d1e99e1a
add efficientvit(msra)
2023-08-01 18:51:08 +08:00
方曦
b91a77fab7
add EfficientVit (MIT)
2023-08-01 12:42:21 +08:00
Ross Wightman
a9d0615f42
Fix ijepa vit issue with 448 model, minor formatting fixes
2023-07-26 20:46:27 -07:00
Ross Wightman
5874d1bfc7
Merge pull request #1876 from jameslahm/main
...
Add RepViT models
2023-07-26 14:38:41 -07:00
Ross Wightman
b10310cc27
Add proper pool size for new resnexts
2023-07-26 14:36:03 -07:00
Ross Wightman
b71d60cdb7
Two small fixes, num_classes in base class, add model tag
2023-07-26 13:18:49 -07:00
Ross Wightman
3561f8e885
Add seresnextaa201d_32x8d 12k and 1k weights
2023-07-26 13:17:05 -07:00
jameslahm
3318e7614d
Add RepViT models
2023-07-21 14:56:53 +08:00
Ruslan Baikulov
158bf129c4
Replace deprecated NumPy aliases of builtin types
2023-07-03 22:24:25 +03:00
Ross Wightman
c241081251
Merge pull request #1850 from huggingface/effnet_improve_features_only
...
Support other features only modes for EfficientNet. Fix #1848, fix #1849
2023-06-23 22:56:08 -07:00
Ross Wightman
47517dbefd
Clean more feature extract issues
...
* EfficientNet/MobileNetV3/HRNetFeatures cls and FX mode support -ve index
* MobileNetV3 allows feature_cfg mode to bypass MobileNetV3Features
2023-06-14 14:46:22 -07:00
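A minimal sketch of the negative feature-index support mentioned in the bullets above, using the standard features_only wrapper; treat the specific out_indices values as illustrative.

```python
import timm

# Sketch: request the last two feature stages via negative indices, per the commit.
feat_model = timm.create_model('efficientnet_b0', pretrained=False,
                               features_only=True, out_indices=(-2, -1))
print(feat_model.feature_info.channels())
```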
Ross Wightman
a09c88ed0f
Support other features only modes for EfficientNet
2023-06-14 12:57:39 -07:00
SeeFun
c3f24a5ae5
Add ViT weights from I-JEPA pretrain
2023-06-14 22:30:31 +08:00
Lengyue
c308dbc6f2
update dinov2 layerscale init values
2023-05-24 12:20:17 -04:00
Ross Wightman
e9373b1b92
Cleanup before samvit merge. Resize abs posembed on the fly, undo some line-wraps, remove redundant unbind, fix HF hub weight load
2023-05-18 16:43:48 -07:00
方曦
c1c6eeb909
fix loading pretrained weight for samvit
2023-05-18 08:49:29 +08:00
方曦
15de561f2c
fix unit test for samvit
2023-05-17 12:51:12 +08:00
方曦
ea1f52df3e
add ViT for Segment-Anything Model
2023-05-17 11:39:29 +08:00
Ross Wightman
c5d3ee47f3
Add B/16 datacompxl CLIP weights
2023-05-16 11:27:20 -07:00
Philip Keller
fc77e9ecc5
Update hub.py
...
fixed import of _hub modules
2023-05-12 21:48:46 +02:00
Ross Wightman
f744bda994
use torch.jit.Final instead of Final for beit, eva
2023-05-12 09:12:14 -07:00
Ross Wightman
2e99bcaedd
Update README, prep for version 0.9.0 release
2023-05-11 15:22:50 -07:00
Ross Wightman
3eaf729f3f
F.sdpa for visformer fails w/o contiguous on qkv, make experimental
2023-05-11 11:37:37 -07:00
Ross Wightman
cf1884bfeb
Add 21k maxvit tf weights
2023-05-10 18:23:32 -07:00
Ross Wightman
6c2edf4d74
Missed hub_id entries for byoanet models
2023-05-10 15:58:55 -07:00
Ross Wightman
850ab4931f
Missed a few pretrained tags...
2023-05-10 12:16:30 -07:00
Ross Wightman
ff2464e2a0
Throw when pretrained weights not available and pretrained=True (principle of least surprise).
2023-05-10 10:44:34 -07:00
Ross Wightman
e0ec0f7252
Merge pull request #1643 from nateraw/docstrings-update
...
Update Docstring for create_model
2023-05-09 21:33:20 -07:00
Ross Wightman
627b6315ba
Add typing to dinov2 entrypt fns, use hf hub for mae & dinov2 weights
2023-05-09 20:42:11 -07:00
Ross Wightman
960a882510
Remove label offsets and remove old weight url for 1001 class (background + in1k) TF origin weights
2023-05-09 18:00:41 -07:00
Ross Wightman
a01d8f86f4
Tweak DinoV2 add, add MAE ViT weights, add initial intermediate layer getter experiment
2023-05-09 17:59:22 -07:00