Ross Wightman
7d9e321b76
Improve tracing of window attn models with simpler reshape logic
2023-02-17 07:59:06 -08:00
Ross Wightman
a3c6685e20
Delete requirements-modelindex.txt
2023-02-17 00:03:58 -08:00
Ross Wightman
022403ce0a
Update README
2023-02-16 17:20:27 -08:00
Ross Wightman
2e38d53dca
Remove dead line
2023-02-16 16:57:42 -08:00
Ross Wightman
f77c04ff36
Torchscript fixes/hacks for rms_norm, refactor ParallelScalingBlock with manual combination of input projections, closer paper match
2023-02-16 16:57:42 -08:00
Ross Wightman
122621daef
Add Final annotation to attn_fas to avoid symbol lookup of new scaled_dot_product_attn fn on old PyTorch in jit
2023-02-16 16:57:42 -08:00
Ross Wightman
621e1b2182
Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet.
2023-02-16 16:57:42 -08:00
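The two entries above (the `Final` annotation and the fused `F.scaled_dot_product_attention` path) combine into a pattern like the following minimal sketch. The class and attribute names here are illustrative, not the exact timm code; the idea is that marking the flag `Final` lets TorchScript treat the branch as a compile-time constant, so the `scaled_dot_product_attention` symbol is never looked up on older PyTorch builds.

```python
import torch
import torch.nn as nn
from torch.jit import Final


class Attention(nn.Module):
    # Final => TorchScript treats this as a constant and only compiles the
    # taken branch, avoiding symbol lookup of the fused attn fn on old torch.
    fused_attn: Final[bool]

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Fall back gracefully when running on pre-2.0 PyTorch.
        self.fused_attn = hasattr(nn.functional, 'scaled_dot_product_attention')

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4).unbind(0)  # each (B, H, N, hd)
        if self.fused_attn:
            x = nn.functional.scaled_dot_product_attention(q, k, v)
        else:
            attn = (q @ k.transpose(-2, -1)) * self.scale
            x = attn.softmax(dim=-1) @ v
        return self.proj(x.transpose(1, 2).reshape(B, N, C))
```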
Ross Wightman
a3d528524a
Version 0.8.12dev0
2023-02-16 16:27:29 -08:00
testbot
a09d403c24
changed warning to info
2023-02-16 16:20:31 -08:00
testbot
8470e29541
Add support to load safetensors weights
2023-02-16 16:20:31 -08:00
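A loader supporting safetensors weights typically dispatches on the file extension, as in this sketch (the function name is hypothetical, not the exact timm helper):

```python
import torch


def load_checkpoint(path: str, device: str = 'cpu'):
    # .safetensors files go through the safetensors library (safe format,
    # no pickle execution); anything else falls back to torch.load.
    if path.endswith('.safetensors'):
        from safetensors.torch import load_file  # optional dependency
        return load_file(path, device=device)
    return torch.load(path, map_location=device)
```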
Ross Wightman
f35d6ea57b
Add multi-tensor (foreach) version of Lion in style of upcoming PyTorch 2.0 optimizers
2023-02-16 15:48:00 -08:00
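The multi-tensor ("foreach") style applies each elementwise op across all parameter tensors in one horizontally-fused call instead of a Python loop. A rough sketch using torch's underscore-prefixed foreach primitives (private ops with no BC guarantee; the function name and defaults here are illustrative):

```python
import torch


def lion_foreach_step(params, grads, exp_avgs, lr=1e-4, beta1=0.9, beta2=0.99):
    # c = beta1 * m + (1 - beta1) * g, update = sign(c)
    updates = torch._foreach_mul(exp_avgs, beta1)
    torch._foreach_add_(updates, grads, alpha=1 - beta1)
    for u in updates:
        u.sign_()
    # p -= lr * update
    torch._foreach_add_(params, updates, alpha=-lr)
    # m = beta2 * m + (1 - beta2) * g
    torch._foreach_mul_(exp_avgs, beta2)
    torch._foreach_add_(exp_avgs, grads, alpha=1 - beta2)
```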
Ross Wightman
709d5e0d9d
Add Lion optimizer
2023-02-14 23:55:05 -08:00
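Lion's update rule is simple enough to sketch in a few lines. This is a minimal single-tensor sketch of the published rule (sign of an interpolated momentum, decoupled weight decay), not timm's optimizer class:

```python
import torch


def lion_update(param, grad, exp_avg, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    # update direction is the sign of beta1-interpolated momentum
    update = exp_avg.mul(beta1).add(grad, alpha=1 - beta1).sign_()
    # decoupled weight decay, then signed step
    param.mul_(1 - lr * wd).add_(update, alpha=-lr)
    # momentum tracks grads with the second (slower) beta
    exp_avg.mul_(beta2).add_(grad, alpha=1 - beta2)
```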
Ross Wightman
624266148d
Remove unused imports from _hub helpers
2023-02-09 17:47:26 -08:00
Ross Wightman
2cfff0581b
Add grad_checkpointing support to features_only, test in EfficientDet.
2023-02-09 17:45:40 -08:00
Ross Wightman
45af496197
Version 0.8.11dev0
2023-02-08 08:29:29 -08:00
Ross Wightman
9c14654a0d
Improve support for custom dataset label name/description through HF hub export, via pretrained_cfg
2023-02-08 08:29:20 -08:00
Ross Wightman
1e0b347227
Fix README
2023-02-06 23:46:26 -08:00
Ross Wightman
497be8343c
Update README and version
2023-02-06 23:43:14 -08:00
Ross Wightman
0d33127df2
Add 384x384 convnext_large_mlp laion2b fine-tune on in1k
2023-02-06 22:01:04 -08:00
Ross Wightman
88a5b8491d
Merge pull request #1662 from rwightman/dataset_info
...
ImageNet metadata (info) and labelling update
2023-02-06 20:23:06 -08:00
Ross Wightman
7a0bd095cb
Update model prune loader to use pkgutil
2023-02-06 17:45:16 -08:00
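The advantage of `pkgutil.get_data` over building paths from `__file__` is that it goes through the package loader, so bundled data files resolve even when the package is installed from a zip or wheel. A self-contained stdlib sketch (the package and file names are made up for the demo):

```python
import importlib
import json
import os
import pkgutil
import sys
import tempfile

# Build a throwaway package with a bundled data file to demo the pattern.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, 'demo_pkg')
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, '__init__.py'), 'w').close()
with open(os.path.join(pkg_dir, 'pruned.json'), 'w') as f:
    json.dump({'conv1': [0, 2, 5]}, f)

sys.path.insert(0, tmp)
importlib.invalidate_caches()

# pkgutil.get_data(package, resource) returns the raw bytes via the loader.
raw = pkgutil.get_data('demo_pkg', 'pruned.json')
spec = json.loads(raw.decode('utf-8'))
```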
Ross Wightman
0f2803de7a
Move ImageNet metadata (aka info) files to timm/data/_info. Add helper classes to make info available for labelling. Update inference.py for first use.
2023-02-06 17:45:03 -08:00
Ross Wightman
89b0452171
Add PyTorch 1.13 inference benchmark numbers
2023-02-06 09:09:04 -08:00
Taeksang Kim
7f29a46d44
Add gradient accumulation option to train.py
...
option: iters-to-accum (iterations to accumulate)
Gradient accumulation improves training throughput (samples/s).
It can reduce how often parameters are synchronized between nodes.
This option can be helpful when the network is a bottleneck.
Signed-off-by: Taeksang Kim <voidbag@puzzle-ai.com>
2023-02-06 09:24:48 +09:00
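The mechanics of gradient accumulation can be sketched as below: run backward on several micro-batches before each optimizer step, scaling each loss so the accumulated gradients average correctly. The `accum_steps` name and the loop structure are illustrative; the actual train.py wiring differs.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps = 4  # effective batch = accum_steps * micro-batch size
data = [(torch.randn(8, 4), torch.randn(8, 2)) for _ in range(8)]

opt.zero_grad()
for i, (x, y) in enumerate(data):
    loss = nn.functional.mse_loss(model(x), y)
    # scale so summed grads equal the mean over the effective batch
    (loss / accum_steps).backward()
    if (i + 1) % accum_steps == 0:
        opt.step()      # one update per accum_steps micro-batches
        opt.zero_grad()
```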
Ross Wightman
7a13be67a5
Update version.py
2023-02-05 10:06:15 -08:00
Ross Wightman
4b383e8ffe
Merge pull request #1655 from rwightman/levit_efficientformer_redux
...
Add EfficientFormer-V2, refactor EfficientFormer and Levit
2023-02-05 10:05:51 -08:00
Ross Wightman
13acac8c5e
Update head metadata for effformerv2
2023-02-04 23:11:51 -08:00
Ross Wightman
8682528096
Add first conv metadata for efficientformer_v2
2023-02-04 23:02:02 -08:00
Ross Wightman
72fba669a8
is_scripting() guard on checkpoint_seq
2023-02-04 14:21:49 -08:00
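The guard works because TorchScript resolves `torch.jit.is_scripting()` at compile time and only compiles the taken branch, so the unscriptable checkpoint call never reaches the compiler. A minimal sketch of the pattern (the module is illustrative, not timm's `checkpoint_seq`):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class Stage(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.blocks = nn.Sequential(nn.Linear(4, 4), nn.GELU(), nn.Linear(4, 4))
        self.grad_checkpointing = True

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # torch.utils.checkpoint is not scriptable; under scripting this
        # branch is dropped and the plain path is compiled instead.
        if self.grad_checkpointing and not torch.jit.is_scripting():
            return checkpoint(self.blocks, x, use_reentrant=False)
        return self.blocks(x)


scripted = torch.jit.script(Stage())  # compiles despite the checkpoint call
```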
Ross Wightman
95ec255f7f
Finish timm mode api for efficientformer_v2, add grad checkpointing support to both efficientformers
2023-02-03 21:21:23 -08:00
Ross Wightman
9d03c6f526
Merge remote-tracking branch 'origin/main' into levit_efficientformer_redux
2023-02-03 14:47:01 -08:00
Ross Wightman
086bd55a94
Add EfficientFormer-V2, refactor EfficientFormer and Levit for more uniformity across the 3 related archs. Add features_out support to levit conv models and efficientformer_v2. All weights on hub.
2023-02-03 14:12:29 -08:00
Ross Wightman
2cb2699dc8
Apply fix from #1649 to main
2023-02-03 11:28:57 -08:00
Ross Wightman
e0a5911072
Merge pull request #1645 from rwightman/norm_mlp_classifier
...
Extract NormMlpClassifierHead from maxxvit.py
2023-02-03 11:00:58 -08:00
Ross Wightman
b3042081b4
Add laion -> in1k fine-tuned base and large_mlp weights for convnext
2023-02-03 10:58:02 -08:00
Ross Wightman
316bdf8955
Add mlp head support for convnext_large, add laion2b CLIP weights, prep fine-tuned weight tags
2023-02-01 08:27:02 -08:00
Ross Wightman
6f28b562c6
Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments
2023-01-27 14:57:01 -08:00
Ross Wightman
29fda20e6d
Merge branch 'fffffgggg54-main'
2023-01-27 13:55:17 -08:00
Ross Wightman
9a53c3f727
Finalize DaViT, some formatting and modelling simplifications (separate PatchEmbed into Stem + Downsample), weights on HF hub.
2023-01-27 13:54:04 -08:00
Fredo Guan
fb717056da
Merge remote-tracking branch 'upstream/main'
2023-01-26 10:49:15 -08:00
nateraw
14b84e8895
📝 update docstrings
2023-01-26 00:49:44 -05:00
nateraw
f0dc8a8267
📝 update docstrings for create_model
2023-01-25 21:10:41 -05:00
Ross Wightman
2bbc26dd82
version 0.8.8dev0
2023-01-25 18:02:48 -08:00
Ross Wightman
64667bfa0e
Add 'gigantic' vit clip variant for feature extraction and future fine-tuning
2023-01-25 18:02:10 -08:00
Ross Wightman
3aa31f537d
Merge pull request #1641 from rwightman/maxxvit_hub
...
MaxxViT weights on hub, new 12k FT 1k weights, convnext 384x384 12k FT 1k, and more
2023-01-20 20:26:54 -08:00
Ross Wightman
9983ed7721
xlarge maxvit killing the tests
2023-01-20 16:16:20 -08:00
Ross Wightman
c2822568ec
Update version to 0.8.7dev0
2023-01-20 15:01:10 -08:00
Ross Wightman
0417a9dd81
Update README
2023-01-20 15:00:49 -08:00
Ross Wightman
36989cfae4
Factor out readme generation in hub helper, add more readme fields
2023-01-20 14:49:40 -08:00
Ross Wightman
32f252381d
Change order of checkpoint filtering fn application in builder, try dict, model variant first
2023-01-20 14:48:54 -08:00