Ross Wightman
beef7f0a22
Add ImageNet-12k intermediate fine-tunes of convnext base & large CLIP models, add first 1k fine-tune of xxlarge
2023-03-31 16:45:01 -07:00
Ross Wightman
9aa1133bd2
Fix #1750, uncomment weight that exists on HF hub, add FIXME to 3 others that are still on local storage
2023-03-31 14:49:30 -07:00
Ross Wightman
7326470514
Merge pull request #1746 from huggingface/eva02
...
Adding EVA02 weights and model defs
2023-03-31 12:17:00 -07:00
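The EVA02 defs above go through the standard factory; a minimal sketch, assuming the model/tag name below is one of the published variants (verify with timm.list_models('eva02*', pretrained=True)):

```python
import timm

# a minimal sketch; the exact eva02 model/tag name is an assumption,
# check timm.list_models('eva02*', pretrained=True) for what exists
model = timm.create_model('eva02_base_patch14_224.mim_in22k', pretrained=True)
model.eval()
```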
Ross Wightman
adeb9de7c6
Fix mismatch in eva pretrained_cfg vs model for one of the CLIP variants
2023-03-31 10:30:30 -07:00
Ross Wightman
0737bd3ec8
eva02 non-CLIP weights on HF hub, add initial eva02 clip model configs w/ postnorm variant & attn LN
2023-03-30 23:43:59 -07:00
Ross Wightman
ac67098147
Add final attr for fast_attn on beit / eva
2023-03-28 08:40:40 -07:00
Ross Wightman
1885bdc431
Merge pull request #1745 from huggingface/mw-mlp_mixer
...
MLP-Mixer multi-weight support, HF hub push
2023-03-28 07:55:17 -07:00
Ross Wightman
e9f427b953
Add hf hub entries for mlp_mixer
2023-03-27 22:50:43 -07:00
Ross Wightman
cff81deb78
multi-weight and hf hub for deit / deit3
2023-03-27 22:47:16 -07:00
Ross Wightman
3863d63516
Adding EVA02 weights and model defs, move beit based eva_giant to same eva.py file. Cleanup rotary pos, add lang oriented freq bands to be compat with eva design choice. Fix #1738
2023-03-27 17:16:07 -07:00
Ross Wightman
b12060996c
MLP-Mixer multi-weight support, hf hub push
2023-03-27 16:42:13 -07:00
Ross Wightman
d196fa536d
Fix last-minute torchscript regression in nfnet changes
2023-03-24 00:10:17 -07:00
Ross Wightman
33ada0cbca
Add group_matcher to focalnet for proper layer-wise LR decay
2023-03-23 23:21:49 -07:00
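group_matcher is what timm's optimizer factory consumes for layer-wise LR decay; a minimal sketch, assuming create_optimizer_v2's layer_decay argument routes through it:

```python
import timm
from timm.optim import create_optimizer_v2

# layer_decay uses the model's group_matcher() to scale LR per layer group;
# the 0.75 decay value is an illustrative choice, not from the commit
model = timm.create_model('focalnet_tiny_srf', pretrained=True)
optimizer = create_optimizer_v2(model, opt='adamw', lr=1e-3, layer_decay=0.75)
```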
Ross Wightman
b271dc0e16
NFNet multi-weight support + HF hub push
2023-03-23 23:20:38 -07:00
Ross Wightman
dbd33e4b62
Update crop settings for new rexnet weights
2023-03-22 15:39:49 -07:00
Ross Wightman
da6bdd4560
Update resnetv2.py for multi-weight and HF hub weights
2023-03-22 15:38:04 -07:00
Ross Wightman
b3e816d6d7
Improve filtering behaviour for tag + non-tagged model wildcard consistency.
2023-03-22 10:21:22 -07:00
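For reference, the tag/wildcard filtering in question is exercised via list_models; the pattern below is illustrative:

```python
import timm

# an untagged wildcard lists architecture names; with pretrained=True the
# result includes '.tag' weight variants, now filtered consistently
print(timm.list_models('convnext_base*'))
print(timm.list_models('convnext_base*', pretrained=True))
```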
Ross Wightman
7aba64ebdb
Update byobnet.py w/ models pushed to HF hub
2023-03-22 10:00:00 -07:00
Ross Wightman
e7ef8335bf
regnet.py multi-weight conversion, new ImageNet-12k pretrain/ft from timm for y_120 and y_160, also new tv v2, swag, & seer weights for push to HF hub.
2023-03-21 15:51:49 -07:00
Ross Wightman
c78319adce
Add ImageNet-12k ReXNet-R 200 & 300 weights, and push existing ReXNet models to HF hub. Dilation support added to rexnet
2023-03-20 13:48:17 -07:00
Ross Wightman
041de79f9e
Fix numel use in helpers for checkpoint remap
2023-03-20 09:36:48 -07:00
Ross Wightman
49b9c3be80
Include pretrained tag in deprecated mapping warning
2023-03-19 21:21:19 -07:00
Ross Wightman
572f05096a
Swin and FocalNet weights on HF hub. Add model deprecation functionality w/ some registry tweaks.
2023-03-18 14:55:09 -07:00
Ross Wightman
5aebad3fbc
return_map back to out_map for _feature helpers
2023-03-16 14:50:55 -07:00
Ross Wightman
acfd85ad68
All swin models support spatial output, add output_fmt to v1/v2 and use ClassifierHead.
...
* update ClassifierHead to allow different input format
* add output format support to patch embed
* fix some flatten issues for a few conv head models
* add Format enum and helpers for tensor format (layout) choices
2023-03-15 23:21:51 -07:00
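A minimal sketch of the new spatial output support via the usual features_only path (model name illustrative; map format depends on output_fmt and may be NHWC for swin):

```python
import timm
import torch

# a sketch of spatial feature extraction from swin after this change
model = timm.create_model('swin_tiny_patch4_window7_224', features_only=True)
feats = model(torch.randn(1, 3, 224, 224))
for f in feats:
    print(f.shape)  # one spatial feature map per stage
```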
Ross Wightman
c30a160d3e
Merge remote-tracking branch 'origin/main' into focalnet_and_swin_refactor
2023-03-15 15:58:39 -07:00
Ross Wightman
ad94d737b7
Add support to ConvNextBlock for downsample and channel expansion to improve stand alone use. Fix #1699
2023-03-13 14:06:24 -07:00
Piotr Sebastian Kluska
992bf7c3d4
chore: Modify the MobileVitV2Block to be CoreML exportable
...
Based on is_exportable(), set a variable controlling the behaviour of the block.
CoreMLTools supports im2col from version 6.2; unfortunately col2im
is still not supported.
Tested with exporting to ONNX, Torchscript, CoreML, and TVM.
2023-03-03 09:38:24 +01:00
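is_exportable() is driven by a global flag; a minimal export sketch, assuming set_exportable is exposed from timm.layers after the module re-org:

```python
import torch
import timm
from timm.layers import set_exportable  # assumed import location

# flip the global flag so blocks like MobileVitV2Block pick the
# export-friendly code path, then export (input shape illustrative)
set_exportable(True)
model = timm.create_model('mobilevitv2_050', pretrained=True).eval()
torch.onnx.export(model, torch.randn(1, 3, 256, 256), 'mobilevitv2_050.onnx')
```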
Ross Wightman
4b8cfa6c0a
Add convnext_xxlarge CLIP image tower weights, version 0.8.15dev0
2023-02-26 21:51:48 -08:00
Ross Wightman
1c13ef7b46
Add default norm_eps=1e-5 for convnext_xxlarge, improve kwarg merging for all convnext models
2023-02-26 12:11:49 -08:00
Benjamin Bossan
a5b01ec04e
Add type annotations to _registry.py
...
Description
Add type annotations to _registry.py so that they will pass mypy
--strict.
Comment
I was reading the code and felt that this module would be easier to
understand with type annotations. Therefore, I went ahead and added the
annotations.
The idea with this PR is to start small to see if we can align on _how_
to annotate types. I've seen people in the past disagree on how strictly
to annotate the code base, so before spending too much time on this, I
wanted to check if you agree, Ross.
Most of the added types should be straightforward. Some notes on the
non-trivial changes:
- I made no assumption about the fn passed to register_model, but maybe
the type could be stricter. Are all models nn.Modules?
- If I'm not mistaken, the type hint for get_arch_name was incorrect
- I had to add a # type: ignore to model.__all__ = ...
- I made some minor code changes to list_models to facilitate the
typing. I think the changes should not affect the logic of the function.
- I removed list from list(sorted(...)) because sorted always returns a list.
2023-02-22 09:19:30 -08:00
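On the register_model question above, a sketch of the stricter annotation under discussion (assuming all registered factories do return nn.Module):

```python
from typing import Callable

import torch.nn as nn

# hypothetical stricter typing for the decorator discussed above;
# the real implementation in _registry.py also does registry bookkeeping
ModelFactory = Callable[..., nn.Module]

def register_model(fn: ModelFactory) -> ModelFactory:
    return fn
```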
Ross Wightman
4d9c3ae2fb
Add laion2b 320x320 ConvNeXt-Large CLIP weights
2023-02-18 16:34:03 -08:00
Ross Wightman
d0b45c9b4d
Make safetensors import optional for now. Improve avg/clean checkpoints ext handling a bit (more consistent).
2023-02-18 16:06:42 -08:00
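A minimal sketch of the optional-import pattern this commit describes; the helper name is illustrative, not timm's internal API:

```python
import torch

# guard the import so everything still works without safetensors installed
try:
    import safetensors.torch
    _has_safetensors = True
except ImportError:
    _has_safetensors = False

def save_weights(state_dict, path, use_safetensors=False):
    # hypothetical helper: only require safetensors when actually requested
    if use_safetensors:
        assert _has_safetensors, 'safetensors is not installed'
        safetensors.torch.save_file(state_dict, path)
    else:
        torch.save(state_dict, path)
```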
Ross Wightman
947c1d757a
Merge branch 'main' into focalnet_and_swin_refactor
2023-02-17 16:28:52 -08:00
Ross Wightman
cf324ea38f
Fix grad checkpointing in focalnet
2023-02-17 16:26:26 -08:00
Ross Wightman
848d200767
Overhaul FocalNet implementation
2023-02-17 16:24:59 -08:00
Ross Wightman
7266c5c716
Merge branch 'main' into focalnet_and_swin_refactor
2023-02-17 09:20:14 -08:00
Ross Wightman
7d9e321b76
Improve tracing of window attn models with simpler reshape logic
2023-02-17 07:59:06 -08:00
Ross Wightman
2e38d53dca
Remove dead line
2023-02-16 16:57:42 -08:00
Ross Wightman
f77c04ff36
Torchscript fixes/hacks for rms_norm, refactor ParallelScalingBlock with manual combination of input projections, closer paper match
2023-02-16 16:57:42 -08:00
Ross Wightman
122621daef
Add Final annotation to fast_attn to avoid symbol lookup of new scaled_dot_product_attn fn on old PyTorch in jit
2023-02-16 16:57:42 -08:00
Ross Wightman
621e1b2182
Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet.
2023-02-16 16:57:42 -08:00
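The fused-attention testing above follows the usual dispatch pattern; a minimal sketch (not the exact vit/maxxvit code):

```python
import torch
import torch.nn.functional as F

# use the PyTorch 2.0 fused kernel when present, otherwise fall back to
# the explicit softmax(QK^T)V implementation
def attention(q, k, v, scale: float):
    if hasattr(F, 'scaled_dot_product_attention'):
        return F.scaled_dot_product_attention(q, k, v)
    attn = (q @ k.transpose(-2, -1)) * scale
    attn = attn.softmax(dim=-1)
    return attn @ v
```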
testbot
a09d403c24
changed warning to info
2023-02-16 16:20:31 -08:00
testbot
8470e29541
Add support to load safetensors weights
2023-02-16 16:20:31 -08:00
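Loading such weights directly uses the safetensors API; a minimal sketch (file path and model name illustrative):

```python
import timm
from safetensors.torch import load_file

# read a .safetensors checkpoint into a plain state dict, then load it
state_dict = load_file('model.safetensors')
model = timm.create_model('resnet50')
model.load_state_dict(state_dict)
```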
Ross Wightman
624266148d
Remove unused imports from _hub helpers
2023-02-09 17:47:26 -08:00
Ross Wightman
2cfff0581b
Add grad_checkpointing support to features_only, test in EfficientDet.
2023-02-09 17:45:40 -08:00
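A minimal usage sketch, assuming the features_only wrapper forwards set_grad_checkpointing to the wrapped model:

```python
import timm

# grad checkpointing now also applies on the feature-extraction path,
# trading recompute for activation memory (model name illustrative)
model = timm.create_model('efficientnet_b0', features_only=True, pretrained=True)
model.set_grad_checkpointing(True)
```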
Ross Wightman
9c14654a0d
Improve support for custom dataset label name/description through HF hub export, via pretrained_cfg
2023-02-08 08:29:20 -08:00
Ross Wightman
0d33127df2
Add 384x384 convnext_large_mlp laion2b fine-tune on in1k
2023-02-06 22:01:04 -08:00
Ross Wightman
7a0bd095cb
Update model prune loader to use pkgutil
2023-02-06 17:45:16 -08:00
Ross Wightman
13acac8c5e
Update head metadata for effformerv2
2023-02-04 23:11:51 -08:00
Ross Wightman
8682528096
Add first conv metadata for efficientformer_v2
2023-02-04 23:02:02 -08:00
Ross Wightman
72fba669a8
is_scripting() guard on checkpoint_seq
2023-02-04 14:21:49 -08:00
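The guard keeps torchscript from scripting into torch.utils.checkpoint; a minimal sketch of the pattern (checkpoint_seq import path assumed):

```python
import torch
from timm.models import checkpoint_seq  # assumed import path

def forward_blocks(blocks, x, grad_checkpointing: bool):
    # torchscript can't script through checkpointing, so guard the call
    if grad_checkpointing and not torch.jit.is_scripting():
        return checkpoint_seq(blocks, x)
    return blocks(x)
```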
Ross Wightman
95ec255f7f
Finish timm model api for efficientformer_v2, add grad checkpointing support to both efficientformers
2023-02-03 21:21:23 -08:00
Ross Wightman
9d03c6f526
Merge remote-tracking branch 'origin/main' into levit_efficientformer_redux
2023-02-03 14:47:01 -08:00
Ross Wightman
086bd55a94
Add EfficientFormer-V2, refactor EfficientFormer and Levit for more uniformity across the 3 related archs. Add features_out support to levit conv models and efficientformer_v2. All weights on hub.
2023-02-03 14:12:29 -08:00
Ross Wightman
2cb2699dc8
Apply fix from #1649 to main
2023-02-03 11:28:57 -08:00
Ross Wightman
b3042081b4
Add laion -> in1k fine-tuned base and large_mlp weights for convnext
2023-02-03 10:58:02 -08:00
Ross Wightman
316bdf8955
Add mlp head support for convnext_large, add laion2b CLIP weights, prep fine-tuned weight tags
2023-02-01 08:27:02 -08:00
Ross Wightman
6f28b562c6
Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments
2023-01-27 14:57:01 -08:00
Ross Wightman
9a53c3f727
Finalize DaViT, some formatting and modelling simplifications (separate PatchEmbed to Stem + Downsample), weights on HF hub.
2023-01-27 13:54:04 -08:00
Fredo Guan
fb717056da
Merge remote-tracking branch 'upstream/main'
2023-01-26 10:49:15 -08:00
nateraw
14b84e8895
📝 update docstrings
2023-01-26 00:49:44 -05:00
nateraw
f0dc8a8267
📝 update docstrings for create_model
2023-01-25 21:10:41 -05:00
Ross Wightman
64667bfa0e
Add 'gigantic' vit clip variant for feature extraction and future fine-tuning
2023-01-25 18:02:10 -08:00
Ross Wightman
36989cfae4
Factor out readme generation in hub helper, add more readme fields
2023-01-20 14:49:40 -08:00
Ross Wightman
32f252381d
Change order of checkpoint filtering fn application in builder, try dict, model variant first
2023-01-20 14:48:54 -08:00
Ross Wightman
bed350f5e5
Push all MaxxViT weights to HF hub, cleanup impl, add feature map extraction support and promote to 'std' architecture. Fix norm head for proper embedding / feat map output. Add new in12k + ft 1k weights.
2023-01-20 14:45:25 -08:00
Ross Wightman
ca38e1e73f
Update ClassifierHead module, add reset() method, update in_chs -> in_features for consistency
2023-01-20 14:44:05 -08:00
Ross Wightman
8ab573cd26
Add convnext_tiny and convnext_small 384x384 fine-tunes of in12k weights, fix pool size for laion CLIP convnext weights
2023-01-20 14:40:16 -08:00
Fredo Guan
81ca323751
Davit update formatting and fix grad checkpointing (#7)
...
Fixed head to gap->norm->fc as per convnext, along with option for norm->gap->fc.
Failed tests were due to clip convnext models; davit tests passed.
2023-01-15 14:34:56 -08:00
Ross Wightman
e9aac412de
Correct mean/std for CLIP convnexts
2023-01-14 22:53:56 -08:00
Ross Wightman
42bd8f7bcb
Add convnext_base CLIP image tower weights for fine-tuning / features
2023-01-14 21:16:29 -08:00
Ross Wightman
a2c14c2064
Add tiny/small in12k pretrained and fine-tuned ConvNeXt models
2023-01-11 14:50:39 -08:00
Ross Wightman
01fdf44438
Initial focalnet import, more refactoring needed for timm.
2023-01-09 16:18:19 -08:00
Ross Wightman
2e83bba142
Revert head norm changes to ConvNeXt as it broke some downstream use, alternate workaround for fcmae weights
2023-01-09 13:37:40 -08:00
Ross Wightman
1825b5e314
maxxvit type
2023-01-09 08:57:31 -08:00
Ross Wightman
5078b28f8a
More kwarg handling tweaks, maxvit_base_rw def added
2023-01-09 08:57:31 -08:00
Ross Wightman
c0d7388a1b
Improving kwarg merging in more models
2023-01-09 08:57:31 -08:00
Ross Wightman
60ebb6cefa
Re-order vit pretrained entries for more sensible default weights (no .tag specified)
2023-01-06 16:12:33 -08:00
Ross Wightman
e861b74cf8
Pass through --model-kwargs (and --opt-kwargs for train) from command line through to model __init__. Update some models to improve arg overlay. Cleanup along the way.
2023-01-06 16:12:33 -08:00
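CLI flags like --model-kwargs drop_path_rate=0.1 end up as extra kwargs to the model __init__; the in-code equivalent is simply (argument value illustrative):

```python
import timm

# extra kwargs to create_model overlay the model's __init__ args,
# mirroring what --model-kwargs does from the command line
model = timm.create_model('vit_base_patch16_224', drop_path_rate=0.1)
```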
Ross Wightman
add3fb864e
Working on improved model card template for push_to_hf_hub
2023-01-06 16:12:33 -08:00
Ross Wightman
6e5553da5f
Add ConvNeXt-V2 support (model additions and weights) (#1614)
...
* Add ConvNeXt-V2 support (model additions and weights)
* ConvNeXt-V2 weights on HF Hub, tweaking some tests
* Update README, fixing convnextv2 tests
2023-01-05 07:53:32 -08:00
Ross Wightman
6902c48a5f
Fix ResNet based models to work w/ norm layers w/o affine params. Reformat long arg lists into vertical form.
2022-12-29 16:32:26 -08:00
Ross Wightman
8ece53e194
Switch BEiT to HF hub weights
2022-12-22 21:43:04 -08:00
Ross Wightman
9a51e4ea2e
Add FlexiViT models and weights, refactoring, push more weights
...
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2022-12-22 17:23:09 -08:00
Fredo Guan
10b3f696b4
Davit std (#6)
...
Separate patch_embed module
2022-12-16 21:50:28 -08:00
Ross Wightman
656e1776de
Convert mobilenetv3 to multi-weight, tweak PretrainedCfg metadata
2022-12-16 09:29:13 -08:00
Ross Wightman
6a01101905
Update efficientnet.py and convnext.py to multi-weight, add ImageNet-12k pretrained EfficientNet-B5 and ConvNeXt-Nano.
2022-12-14 20:33:23 -08:00
Fredo Guan
84178fca60
Merge branch 'rwightman:main' into main
2022-12-12 23:13:58 -08:00
Fredo Guan
c43340ddd4
Davit std (#5)
...
* Update davit.py / Update test_models.py (long run of repeated incremental edits, squashed)
* starting point
* Davit revised (#4)
* clean up
2022-12-11 03:03:22 -08:00
Ross Wightman
d5e7d6b27e
Merge remote-tracking branch 'origin/main' into refactor-imports
2022-12-09 14:49:44 -08:00
Ross Wightman
cda39b35bd
Add a deprecation phase to module re-org
2022-12-09 14:39:45 -08:00
Fredo Guan
edea013dd1
Davit std (#3)
...
Davit with all features working
2022-12-09 02:53:21 -08:00
Ross Wightman
7c4ed4d5a4
Add EVA-large models
2022-12-08 16:21:30 -08:00
Fredo Guan
434a03937d
Merge branch 'rwightman:main' into main
2022-12-08 08:05:16 -08:00
Ross Wightman
98047ef5e3
Add EVA FT results, hopefully fix BEiT test failures
2022-12-07 08:54:06 -08:00
Ross Wightman
3cc4d7a894
Fix missing register for 224 eva model
2022-12-07 08:54:06 -08:00
Ross Wightman
eba07b0de7
Add eva models to beit.py
2022-12-07 08:54:06 -08:00
Fredo Guan
3bd96609c8
Davit (#1)
...
Implement the davit model from https://arxiv.org/abs/2204.03645 and https://github.com/dingmyu/davit
2022-12-06 17:19:25 -08:00
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2022-12-06 15:00:06 -08:00
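The import migration implied by the restructure, as a sketch (the old path kept working through the deprecation phase noted above):

```python
# before the re-org
from timm.models.layers import DropPath, trunc_normal_

# after the re-org (old path kept working via a deprecation shim)
from timm.layers import DropPath, trunc_normal_
```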
Ross Wightman
3785c234d7
Remove clip vit models that won't be fine-tuned and comment two that aren't uploaded yet
2022-12-05 10:21:34 -08:00
Ross Wightman
755570e2d6
Rename _pretrained.py -> pretrained.py, not feasible to change the other files to same scheme without breaking uses
2022-12-05 10:21:34 -08:00
Ross Wightman
72cfa57761
Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout.
2022-12-05 10:21:34 -08:00
Ross Wightman
4d5c395160
MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
...
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to new pretrained cfgs use
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2022-12-05 10:21:34 -08:00
Ross Wightman
9da7e3a799
Add crop_mode for pretrained config / image transforms. Add support for dynamo compilation to benchmark/train/validate
2022-12-05 10:21:34 -08:00
Ross Wightman
b2b6285af7
Add two more FT clip weights
2022-12-05 10:21:34 -08:00
Ross Wightman
5895056dc4
Add openai b32 ft
2022-12-05 10:21:34 -08:00
Ross Wightman
9dea5143d5
Adding more clip ft variants
2022-12-05 10:21:34 -08:00
Ross Wightman
444dcba4ad
CLIP B16 12k weights added
2022-12-05 10:21:34 -08:00
Ross Wightman
dff4717cbf
Add clip b16 384x384 finetunes
2022-12-05 10:21:34 -08:00
Ross Wightman
883fa2eeaa
Add fine-tuned B/16 224x224 in1k clip models
2022-12-05 10:21:34 -08:00
Ross Wightman
9a3d2ac2d5
Add latest CLIP ViT fine-tune pretrained configs / model entrypt updates
2022-12-05 10:21:34 -08:00
Ross Wightman
42bbbddee9
Add missing model config
2022-12-05 10:21:34 -08:00
Ross Wightman
def68befa7
Updating vit model defs for multi-weight support trial (vit first). Prepping for CLIP (laion2b and openai) fine-tuned weights.
2022-12-05 10:21:34 -08:00
Ross Wightman
0dadb4a6e9
Initial multi-weight support, handled so old pretrained config handling co-exists with new tags.
2022-12-05 10:21:34 -08:00
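A minimal sketch of how the two styles co-exist (the tag name is an example; verify with timm.list_pretrained()):

```python
import timm

# an untagged name keeps resolving to a default pretrained weight...
m1 = timm.create_model('vit_base_patch16_224', pretrained=True)

# ...while a '.tag' suffix selects a specific weight (tag illustrative)
m2 = timm.create_model('vit_base_patch16_224.augreg_in21k_ft_in1k', pretrained=True)
```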
Wauplin
9b114754db
refactor push_to_hub helper
2022-11-16 12:03:34 +01:00
Wauplin
ae0a0db7de
Create repo before cloning with Repository.clone_from
2022-11-15 15:17:20 +01:00
Ross Wightman
803254bb40
Fix spacing misalignment for fast norm path in LayerNorm modules
2022-10-24 21:43:49 -07:00
Ross Wightman
6635bc3f7d
Merge pull request #1479 from rwightman/script_cleanup
...
Train / val script enhancements, non-GPU (ie CPU) device support, HF datasets support, TFDS/WDS dataloading improvements
2022-10-15 09:29:39 -07:00
Ross Wightman
0e6023f032
Merge pull request #1381 from ChristophReich1996/master
...
Fix typo in PositionalEncodingFourier
2022-10-14 18:34:33 -07:00
Ross Wightman
66f4af7090
Merge remote-tracking branch 'origin/master' into script_cleanup
2022-10-14 15:54:00 -07:00
Ross Wightman
9914f744dc
Add more maxxvit weights including ConvNeXt conv block based experiments.
2022-10-10 21:49:18 -07:00
Mohamed Rashad
8fda68aff6
Fix repo id bug
...
This is to fix issue #1482
2022-10-05 16:26:06 +02:00
Ross Wightman
1199c5a1a4
clip_laion2b models need 1e-5 eps for LayerNorm
2022-09-25 10:36:54 -07:00
Ross Wightman
e858912e0c
Add brute-force checkpoint remapping option
2022-09-23 16:07:03 -07:00
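A rough sketch of what brute-force remapping means here (not timm's exact helper): match checkpoint tensors to model params positionally by element count, ignoring key names:

```python
def remap_state_dict(state_dict, model):
    # hypothetical sketch: pair tensors in order and reshape to the
    # model's shapes; only sane when the architectures match exactly
    out = {}
    for (_, va), (kb, vb) in zip(state_dict.items(), model.state_dict().items()):
        assert va.numel() == vb.numel(), 'tensor size mismatch, cannot remap'
        out[kb] = va.reshape(vb.shape)
    return out
```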
Ross Wightman
b293dfa595
Add CL SE module
2022-09-23 16:06:09 -07:00
Ross Wightman
a383ef99f5
Make huggingface_hub necessary if it's the only source for a pretrained weight
2022-09-23 13:54:21 -07:00
Ross Wightman
e069249a2d
Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7
2022-09-16 21:39:05 -07:00
Ross Wightman
9d65557be3
Fix errant import
2022-09-15 17:47:23 -07:00
Ross Wightman
9709dbaaa9
Adding support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP
2022-09-15 17:25:59 -07:00
Ross Wightman
a520da9b49
Update tresnet features_info for v2
2022-09-13 20:54:54 -07:00
Ross Wightman
c8ab747bf4
BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter
2022-09-13 17:56:49 -07:00
Ross Wightman
73049dc2aa
Fix typo in dla weight update
2022-09-13 17:52:45 -07:00
Ross Wightman
e11efa872d
Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights.
2022-09-13 16:35:26 -07:00
Ross Wightman
fa8c84eede
Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs
2022-09-07 12:37:37 -07:00
Ross Wightman
c1b3cea19d
Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320
2022-09-07 10:27:11 -07:00
Ross Wightman
914544fc81
Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2
2022-09-06 20:25:18 -07:00
Ross Wightman
dc90816f26
Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320
2022-09-06 16:14:41 -07:00
Ross Wightman
f489f02ad1
Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original.
2022-09-06 16:14:00 -07:00
Ross Wightman
7f1b223c02
Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default
2022-08-29 15:49:32 -07:00
Ross Wightman
e6a4361306
pretrained_cfg entry for mvitv2_small_cls
2022-08-28 15:27:01 -07:00
Ross Wightman
f66e5f0e35
Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443
2022-08-28 15:24:04 -07:00
Ross Wightman
f1d2160d85
Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks
2022-08-26 12:53:49 -07:00
Ross Wightman
eca6f0a25c
Fix syntax error (extra dataclass comma) in maxxvit.py
2022-08-26 11:29:09 -07:00
Ross Wightman
ff6a919cf5
Add --fast-norm arg to benchmark.py, train.py, validate.py
2022-08-25 17:20:46 -07:00
Ross Wightman
769ab4b98a
Clean up no_grad for trunc normal weight inits
2022-08-25 16:29:52 -07:00
Ross Wightman
48e1df8b37
Add norm/norm_act header comments
2022-08-25 16:29:34 -07:00
Ross Wightman
7c2660576d
Tweak init for convnext block using maxxvit/coatnet.
2022-08-25 15:30:59 -07:00
Ross Wightman
1d8d6f6072
Fix two default args in DenseNet blocks... fix #1427
2022-08-25 15:00:35 -07:00
Ross Wightman
527f9a4cb2
Updated to correct maxvit_nano weights...
2022-08-24 12:42:11 -07:00