2764 Commits

Author SHA1 Message Date
Ross Wightman
ea728f67fa Improve several typing issues for flex vit, can (almost) work with jit if we bash h,w key into an int or str 2025-04-14 11:01:56 -07:00
Ross Wightman
681be882e8 Fix arg merging of sknet, old seresnet. Fix #2470 2025-04-14 10:32:26 -07:00
Ross Wightman
97341fec51 A much faster resample_patch_embed, can be used at train/validation time 2025-04-10 15:58:24 -07:00
Ross Wightman
b4bb0f452a Exclude embeds module and mask attn functions from tracing 2025-04-09 15:34:15 -07:00
Ross Wightman
13e0f3a4a3 Add loss scale arg, initial distributed loss scale. Maybe fix FX for the model. 2025-04-08 20:47:57 -07:00
Ross Wightman
6675590264 Fix ParallelThingsBlock w/ attn_mask 2025-04-08 09:35:34 -07:00
Ross Wightman
9b23d6dea2 Exclude naflex models from jit tests 2025-04-08 07:59:19 -07:00
Ross Wightman
825edccf19 Type fixes, remove old comments 2025-04-07 21:35:03 -07:00
Ross Wightman
0893f5d296 Initial NaFlex ViT model and training support 2025-04-07 21:27:10 -07:00
Ross Wightman
e44f14d7d2 Update README v1.0.15 2025-02-22 21:04:13 -08:00
Ross Wightman
98e9651952 Update version.py (Version 1.0.15, prep for a release) 2025-02-22 10:50:21 -08:00
Ross Wightman
e76ea5474d Update README.md 2025-02-21 16:09:42 -08:00
Adam J. Stewart
92682d8d4d timm.models: explicitly export attributes 2025-02-21 14:19:39 -08:00
Ross Wightman
a667d3d8f0 siglip2 weights on hub, fix forward_intermediates when no prefix tokens (& return prefix selected) 2025-02-21 13:10:51 -08:00
Ross Wightman
f63a11cf81 Remove duplicate so400m/16 @ 256 model def 2025-02-21 13:10:51 -08:00
Ross Wightman
9758e0b8b0 Prep for siglip2 release 2025-02-21 13:10:51 -08:00
Adam J. Stewart
c68d724e9c adapt_input_conv: add type hints 2025-02-21 12:28:22 -08:00
Ross Wightman
105a667baa Dev version 1.0.15.dev0 2025-02-17 15:50:12 -08:00
Ross Wightman
7234f5c6c5 Add 448 so150m2 weight/model, add updated internvit 300m weight 2025-02-17 12:59:10 -08:00
Ross Wightman
9ce824c39a Add vit so150m2 weights 2025-02-14 15:55:51 -08:00
Ross Wightman
a49b020eff Merge branch 'ClashLuke-patch-1' 2025-01-31 12:53:29 -08:00
Ross Wightman
490d222dd8 Fix issue taking device from V before V exists 2025-01-31 12:52:47 -08:00
Ross Wightman
875c19d0c9 Merge branch 'patch-1' of github.com:ClashLuke/pytorch-image-models into ClashLuke-patch-1 2025-01-31 12:43:28 -08:00
Ross Wightman
8b3c07a841 Update README.md 2025-01-31 10:37:32 -08:00
Lucas Nestler
e025328f96 simplify RNG 2025-01-31 17:26:14 +01:00
Lucas Nestler
6367267298 unify RNG 2025-01-31 17:23:53 +01:00
Ross Wightman
872978ccfe Fix comment, add 'stochastic weight decay' idea because why not 2025-01-30 18:22:36 -08:00
Ross Wightman
510bbd5389 Change start/end args 2025-01-30 18:22:36 -08:00
Ross Wightman
31831f5948 Change flattening behaviour in Kron 2025-01-30 18:22:36 -08:00
Ross Wightman
cdbafd9057 Try to force numpy<2.0 for torch 1.13 tests, update newest tested torch to 2.5.1 2025-01-28 20:56:30 -08:00
Ross Wightman
b1752eefb5 Fix missing model key in bulk validate results on error 2025-01-28 13:20:40 -08:00
Ross Wightman
b3a83b81d6 Prep Kron for merge, add detail to attributions note, README. 2025-01-27 21:02:26 -08:00
Ross Wightman
67ef6f0a92 Move opt_einsum import back out of class __init__ 2025-01-27 21:02:26 -08:00
Ross Wightman
9ab5464e4d More additions to Kron 2025-01-27 21:02:26 -08:00
Ross Wightman
5f10450235 Some more kron work. Figured out why some tests fail, implemented a deterministic rng state load but too slow so skipping some tests for now. 2025-01-27 21:02:26 -08:00
Ross Wightman
cd21e80d03 Fiddling with Kron (PSGD) 2025-01-27 21:02:26 -08:00
Adam J. Stewart
d81da93c16 Use import alias 2025-01-22 10:27:17 -08:00
Adam J. Stewart
4de1abf837 timm: add __all__ to __init__ 2025-01-22 10:27:17 -08:00
Ryan
bda46f8e6f Add num_classes assertion after reset_classifier 2025-01-21 11:52:05 -08:00
Ryan
17eabaad17 Fix RDNet forward call 2025-01-21 11:52:05 -08:00
Ryan
80a4877376 Fix self.reset_classifier num_classes update 2025-01-21 11:52:05 -08:00
Collin McCarthy
84631cb5c6 Add missing training flag to convert_sync_batchnorm 2025-01-21 11:51:55 -08:00
Josua Rieder
cb4cea561a add arguments to the respective argument groups 2025-01-20 10:54:35 -08:00
Josua Rieder
634b68ae50 Fix metavar for --input-size 2025-01-20 10:53:46 -08:00
Ross Wightman
5d535d7a2d Version 1.0.14, update README & changelog v1.0.14 2025-01-19 13:53:09 -08:00
Ross Wightman
c6b74eb5bd Remove numpy ver constraint, not relevant for latest PyTorch version 2025-01-19 13:41:16 -08:00
Ross Wightman
aa333079da Tweak so150m2 def 2025-01-19 13:40:53 -08:00
Josua Rieder
8d81fdf3d9 Fix typos 2025-01-19 13:39:40 -08:00
Ross Wightman
3677f67902 Add the 256x256 in1k ft of the so150m, add an alternate so150m def 2025-01-18 15:51:57 -08:00
Ross Wightman
2a84d68d02 Add some so150m vit w/ sbb recipe weights, and a ese_vovnet57b model with RA4 recipe 2025-01-18 15:51:57 -08:00