Commit Graph

1853 Commits (bd5f9a341f605d51608db048106ce51b088a8c68)

Author SHA1 Message Date
Benjamin Bossan cf6f6adf6e Add pytest-cov, requirements-dev, pyproject.toml
When tests finish, a report should be printed that shows the code
coverage of timm. This should give us a better idea of where to focus
our test coverage work.

I have tested this locally (on a subset of tests) and it worked.

Since the number of test dependencies was getting quite high, I created
a requirements-dev.txt and moved them there. The GH action and
CONTRIBUTING.md are adjusted accordingly.

Furthermore, instead of extending the pytest invocation, I created a
pyproject.toml and added the coverage options there. For completeness, I
also added the black settings that come closest to the style of timm.
LMK if this is not desired.

For now, the coverage is only reported, not enforced; i.e. if a PR adds
uncovered lines, CI will still succeed. We could think about adding
codecov or something like that, but it can be annoying at times and the
service was flaky for me in the past.
2023-02-23 15:09:41 -08:00
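For reference, a hypothetical sketch of the kind of pyproject.toml settings described above; the exact values here are assumptions, not the merged file:

```toml
# Coverage reporting via pytest-cov; prints a per-file summary after the run.
[tool.pytest.ini_options]
addopts = "--cov=timm --cov-report=term-missing"

# Black settings approximating timm's existing style (assumed values).
[tool.black]
line-length = 120
skip-string-normalization = true  # timm mostly uses single quotes
```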
Benjamin Bossan a5b01ec04e Add type annotations to _registry.py
Description

Add type annotations to _registry.py so that it passes mypy --strict.

Comment

I was reading the code and felt that this module would be easier to
understand with type annotations. Therefore, I went ahead and added the
annotations.

The idea with this PR is to start small to see if we can align on _how_
to annotate types. I've seen people in the past disagree on how strictly
to annotate the code base, so before spending too much time on this, I
wanted to check if you agree, Ross.

Most of the added types should be straightforward. Some notes on the
non-trivial changes:

- I made no assumption about the fn passed to register_model, but maybe
  the type could be stricter. Are all models nn.Modules?
- If I'm not mistaken, the type hint for get_arch_name was incorrect
- I had to add a # type: ignore to model.__all__ = ...
- I made some minor code changes to list_models to facilitate the
  typing. I think the changes should not affect the logic of the function.
- I removed list from list(sorted(...)) because sorted always returns a
  list.
2023-02-22 09:19:30 -08:00
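One way the decorator typing could look, as a minimal sketch with assumed names rather than the actual _registry.py code; a bound TypeVar lets register_model return the function unchanged without erasing its signature:

```python
from typing import Any, Callable, Dict, TypeVar

# Hypothetical registry mapping, for illustration only.
_model_entrypoints: Dict[str, Callable[..., Any]] = {}

F = TypeVar('F', bound=Callable[..., Any])

def register_model(fn: F) -> F:
    # Record the entrypoint under its name and return it unchanged, so the
    # decorator preserves the wrapped function's type under mypy --strict.
    _model_entrypoints[fn.__name__] = fn
    return fn
```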
Benjamin Bossan c9406ce608
Some additions to the CONTRIBUTING guide (#1685)
* Some additions to the CONTRIBUTING guide

- how to run black if so desired
- install instructions for devs (following GH action)
- running tests
- minor fixups

If there is a guide on how best to add new models, it would be a good
idea to link it here, since I imagine this is what many contributors
need the most help with.

* [skip ci] empty commit to skip ci
2023-02-22 08:24:26 -08:00
Ross Wightman a32c4eff69
Create CONTRIBUTING.md 2023-02-20 17:13:10 -08:00
Ross Wightman a0772f03e0
Update README.md 2023-02-20 10:26:09 -08:00
Ross Wightman 47f1de9bec Version bump 2023-02-20 10:17:10 -08:00
Ross Wightman 11f7b589e5 Update setup.py for huggingface changes. 2023-02-20 10:17:10 -08:00
Ross Wightman 4d9c3ae2fb Add laion2b 320x320 ConvNeXt-Large CLIP weights 2023-02-18 16:34:03 -08:00
Ross Wightman d0b45c9b4d Make safetensors import optional for now. Improve ext handling in the avg/clean checkpoint scripts a bit (more consistent). 2023-02-18 16:06:42 -08:00
Ross Wightman 947c1d757a Merge branch 'main' into focalnet_and_swin_refactor 2023-02-17 16:28:52 -08:00
Ross Wightman cf324ea38f Fix grad checkpointing in focalnet 2023-02-17 16:26:26 -08:00
Ross Wightman 848d200767 Overhaul FocalNet implementation 2023-02-17 16:24:59 -08:00
Ross Wightman 7266c5c716 Merge branch 'main' into focalnet_and_swin_refactor 2023-02-17 09:20:14 -08:00
Ross Wightman 7d9e321b76 Improve tracing of window attn models with simpler reshape logic 2023-02-17 07:59:06 -08:00
Ross Wightman a3c6685e20
Delete requirements-modelindex.txt 2023-02-17 00:03:58 -08:00
Ross Wightman 022403ce0a Update README 2023-02-16 17:20:27 -08:00
Ross Wightman 2e38d53dca Remove dead line 2023-02-16 16:57:42 -08:00
Ross Wightman f77c04ff36 Torchscript fixes/hacks for rms_norm, refactor ParallelScalingBlock with manual combination of input projections, closer paper match 2023-02-16 16:57:42 -08:00
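For context on the layer being fixed above, a minimal RMSNorm sketch (the math only; the commit's changes concern TorchScript compatibility rather than the formula):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale by the reciprocal root-mean-square over the channel dim.
        v = x.float()
        v = v * torch.rsqrt(v.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return v.to(x.dtype) * self.weight
```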
Ross Wightman 122621daef Add Final annotation to attn_fas to avoid symbol lookup of new scaled_dot_product_attn fn on old PyTorch in jit 2023-02-16 16:57:42 -08:00
Ross Wightman 621e1b2182 Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet. 2023-02-16 16:57:42 -08:00
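The pattern behind the two entries above, as a hedged sketch (assumed module shape, not timm's exact code): a torch.jit.Final flag becomes a compile-time constant under TorchScript, so the F.scaled_dot_product_attention symbol is never looked up when scripting with the flag off on older PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.jit import Final

class Attention(nn.Module):
    fused_attn: Final[bool]  # constant when scripted; the dead branch is dropped

    def __init__(self, dim: int, num_heads: int = 8, fused_attn: bool = True):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.fused_attn = fused_attn  # True assumes PyTorch >= 2.0
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4).unbind(0)
        if self.fused_attn:
            # Fused kernel path, PyTorch 2.0+.
            x = F.scaled_dot_product_attention(q, k, v)
        else:
            attn = (q @ k.transpose(-2, -1)) * self.scale
            x = attn.softmax(dim=-1) @ v
        return self.proj(x.transpose(1, 2).reshape(B, N, C))
```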
Ross Wightman a3d528524a Version 0.8.12dev0 2023-02-16 16:27:29 -08:00
testbot a09d403c24 changed warning to info 2023-02-16 16:20:31 -08:00
testbot 8470e29541 Add support to load safetensors weights 2023-02-16 16:20:31 -08:00
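A minimal sketch of the optional-import pattern these safetensors entries describe (function name and details are assumptions):

```python
import torch

try:
    import safetensors.torch
    _has_safetensors = True
except ImportError:
    _has_safetensors = False  # safetensors stays an optional dependency

def load_state_dict(checkpoint_path: str, device: str = 'cpu'):
    # Route .safetensors files through safetensors, everything else to torch.
    if checkpoint_path.endswith('.safetensors'):
        if not _has_safetensors:
            raise RuntimeError('`pip install safetensors` to load .safetensors checkpoints.')
        return safetensors.torch.load_file(checkpoint_path, device=device)
    return torch.load(checkpoint_path, map_location=device)
```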
Ross Wightman f35d6ea57b Add multi-tensor (foreach) version of Lion in style of upcoming PyTorch 2.0 optimizers 2023-02-16 15:48:00 -08:00
Ross Wightman 709d5e0d9d Add Lion optimizer 2023-02-14 23:55:05 -08:00
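For reference, the Lion update rule from 'Symbolic Discovery of Optimization Algorithms', as a single-tensor sketch rather than timm's implementation; the multi-tensor entry above applies the same math with torch._foreach_* ops over whole parameter lists:

```python
import torch

@torch.no_grad()
def lion_step(param, grad, exp_avg, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    # Decoupled weight decay, as in AdamW.
    param.mul_(1 - lr * weight_decay)
    # Update direction is the sign of an interpolation of momentum and gradient.
    update = exp_avg.mul(beta1).add_(grad, alpha=1 - beta1).sign_()
    param.add_(update, alpha=-lr)
    # Momentum tracks the gradient with a second, slower coefficient.
    exp_avg.mul_(beta2).add_(grad, alpha=1 - beta2)
```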
Ross Wightman 624266148d Remove unused imports from _hub helpers 2023-02-09 17:47:26 -08:00
Ross Wightman 2cfff0581b Add grad_checkpointing support to features_only, test in EfficientDet. 2023-02-09 17:45:40 -08:00
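The underlying pattern, sketched with hypothetical names: checkpointed blocks recompute activations during backward to save memory, guarded by is_scripting() (see also the checkpoint_seq entry further down) since checkpointing is unsupported under TorchScript:

```python
import torch
from torch.utils.checkpoint import checkpoint

def forward_blocks(blocks, x, grad_checkpointing: bool = False):
    for blk in blocks:
        if grad_checkpointing and not torch.jit.is_scripting():
            # Recompute this block's activations in backward instead of storing them.
            x = checkpoint(blk, x)
        else:
            x = blk(x)
    return x
```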
Ross Wightman 45af496197 Version 0.8.11dev0 2023-02-08 08:29:29 -08:00
Ross Wightman 9c14654a0d Improve support for custom dataset label name/description through HF hub export, via pretrained_cfg 2023-02-08 08:29:20 -08:00
Ross Wightman 1e0b347227 Fix README 2023-02-06 23:46:26 -08:00
Ross Wightman 497be8343c Update README and version 2023-02-06 23:43:14 -08:00
Ross Wightman 0d33127df2 Add 384x384 convnext_large_mlp laion2b fine-tune on in1k 2023-02-06 22:01:04 -08:00
Ross Wightman 88a5b8491d
Merge pull request #1662 from rwightman/dataset_info
ImageNet metadata (info) and labelling update
2023-02-06 20:23:06 -08:00
Ross Wightman 7a0bd095cb Update model prune loader to use pkgutil 2023-02-06 17:45:16 -08:00
Ross Wightman 0f2803de7a Move ImageNet metadata (aka info) files to timm/data/_info. Add helper classes to make info available for labelling. Update inference.py for first use. 2023-02-06 17:45:03 -08:00
Ross Wightman 89b0452171 Add PyTorch 1.13 inference benchmark numbers 2023-02-06 09:09:04 -08:00
Taeksang Kim 7f29a46d44 Add gradient accumulation option to train.py
option: iters-to-accum (iterations to accumulate)

Gradient accumulation improves training throughput (samples/s). It
reduces how often gradients are synchronized between nodes, which can
be helpful when the network is the bottleneck.

Signed-off-by: Taeksang Kim <voidbag@puzzle-ai.com>
2023-02-06 09:24:48 +09:00
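A minimal sketch of the technique (not the train.py patch itself): scale each loss by 1/accum_steps and only step the optimizer every accum_steps iterations, emulating a batch accum_steps times larger. Under DDP, gradient all-reduce can also be skipped on non-stepping iterations (e.g. with model.no_sync()), which is where the reduced inter-node communication comes from.

```python
def train_one_epoch(model, loader, criterion, optimizer, accum_steps: int = 4):
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        # Scale so the accumulated gradient matches one large-batch step.
        loss = criterion(model(inputs), targets) / accum_steps
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```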
Ross Wightman 7a13be67a5
Update version.py 2023-02-05 10:06:15 -08:00
Ross Wightman 4b383e8ffe
Merge pull request #1655 from rwightman/levit_efficientformer_redux
Add EfficientFormer-V2, refactor EfficientFormer and Levit
2023-02-05 10:05:51 -08:00
Ross Wightman 13acac8c5e Update head metadata for effformerv2 2023-02-04 23:11:51 -08:00
Ross Wightman 8682528096 Add first conv metadata for efficientformer_v2 2023-02-04 23:02:02 -08:00
Ross Wightman 72fba669a8 is_scripting() guard on checkpoint_seq 2023-02-04 14:21:49 -08:00
Ross Wightman 95ec255f7f Finish timm mode api for efficientformer_v2, add grad checkpointing support to both efficientformers 2023-02-03 21:21:23 -08:00
Ross Wightman 9d03c6f526 Merge remote-tracking branch 'origin/main' into levit_efficientformer_redux 2023-02-03 14:47:01 -08:00
Ross Wightman 086bd55a94 Add EfficientFormer-V2, refactor EfficientFormer and Levit for more uniformity across the 3 related architectures. Add features_out support to levit conv models and efficientformer_v2. All weights on hub. 2023-02-03 14:12:29 -08:00
Ross Wightman 2cb2699dc8 Apply fix from #1649 to main 2023-02-03 11:28:57 -08:00
Ross Wightman e0a5911072
Merge pull request #1645 from rwightman/norm_mlp_classifier
Extract NormMlpClassifierHead from maxxvit.py
2023-02-03 11:00:58 -08:00
Ross Wightman b3042081b4 Add laion -> in1k fine-tuned base and large_mlp weights for convnext 2023-02-03 10:58:02 -08:00
Ross Wightman 316bdf8955 Add mlp head support for convnext_large, add laion2b CLIP weights, prep fine-tuned weight tags 2023-02-01 08:27:02 -08:00
Ross Wightman 6f28b562c6 Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments 2023-01-27 14:57:01 -08:00