1104 Commits

Author SHA1 Message Date
nateraw
30bafd7347 🔖 add dev suffix to version tag 2022-10-13 17:08:33 -04:00
Ross Wightman
f67a7ee8bd Set num_workers in Iterable WDS/TFDS datasets early so sample estimate is correct 2022-10-11 15:11:18 -07:00
Ross Wightman
cea8df3d0c Version 0.6.12 2022-10-10 21:49:52 -07:00
Ross Wightman
9914f744dc Add more maxxvit weights including ConvNeXt conv block based experiments. 2022-10-10 21:49:18 -07:00
Ross Wightman
b1b024dfed Scheduler update: add v2 factory method, support scheduling on updates instead of just epochs. Add LR to summary csv. Add lr_base scaling calculations to train script. Fix #1168 2022-10-07 10:43:04 -07:00
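Scheduling "on updates instead of just epochs" means the LR can be recomputed on every optimizer step rather than only at epoch boundaries. A minimal sketch of the difference using a plain cosine decay, plus the kind of lr_base batch scaling the commit mentions (the functions and the base batch of 256 are illustrative assumptions, not timm's actual implementation):

```python
import math

def cosine_lr(base_lr, step, total_steps, min_lr=0.0):
    # Cosine decay evaluated at an arbitrary progress point.
    t = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))

def scale_lr(lr_base, batch_size, base_batch=256):
    # Linear lr_base scaling rule; the base batch of 256 is an assumption.
    return lr_base * batch_size / base_batch

epochs, updates_per_epoch = 10, 100
# Per-epoch stepping: LR is constant across every update within an epoch.
lr_epoch_start = cosine_lr(1e-3, 0, epochs)
# Per-update stepping: LR moves a little on every single optimizer step.
lr_after_one_update = cosine_lr(1e-3, 1, epochs * updates_per_epoch)
```

Per-update stepping gives a smooth decay curve, which matters most for short schedules where per-epoch steps are visibly coarse.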
Ross Wightman
4f18d6dc5f Fix logs in WDS parser 2022-10-07 10:06:17 -07:00
Mohamed Rashad
8fda68aff6 Fix repo id bug, fixes issue #1482 2022-10-05 16:26:06 +02:00
Ross Wightman
b8c8550841 Data improvements. Improve train support for in_chans != 3. Add wds dataset support from bits_and_tpu branch w/ fixes and tweaks. TFDS tweaks. 2022-09-29 16:42:58 -07:00
Alex Fafard
7327792f39 update to support pickle based dictionaries 2022-09-27 11:13:48 -04:00
Ross Wightman
1199c5a1a4 clip_laion2b models need 1e-5 eps for LayerNorm 2022-09-25 10:36:54 -07:00
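The eps value matters because LayerNorm divides by sqrt(var + eps); weights trained with one eps can behave differently under another when activations have near-zero variance. A toy plain-Python illustration of why the constant is not interchangeable (not timm's code):

```python
def layer_norm(x, eps=1e-5):
    # Normalize a vector to zero mean / unit variance; eps guards the sqrt.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

# With near-constant activations the variance is ~0 and eps dominates the
# denominator, so a smaller eps produces much larger normalized outputs.
tiny = [1.0, 1.0 + 1e-7]
out_1e5 = layer_norm(tiny, eps=1e-5)
out_1e12 = layer_norm(tiny, eps=1e-12)
```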
Ross Wightman
87939e6fab Refactor device handling in scripts, distributed init to be less 'cuda' centric. More device args passed through where needed. 2022-09-23 16:08:59 -07:00
Ross Wightman
c88947ad3d Add initial Hugging Face Datasets parser impl. 2022-09-23 16:08:19 -07:00
Ross Wightman
e858912e0c Add brute-force checkpoint remapping option 2022-09-23 16:07:03 -07:00
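"Brute-force" remapping here plausibly means matching checkpoint tensors to model parameters by position rather than by name, useful when two implementations share structure but not naming. A hypothetical sketch of that idea (function name and details are illustrative, not timm's actual implementation):

```python
def remap_state_dict_bruteforce(src, dst_template):
    # Map checkpoint values onto the model's keys purely by iteration
    # order, verifying that shapes agree where shape info is available.
    src_items = list(src.items())
    dst_keys = list(dst_template)
    if len(src_items) != len(dst_keys):
        raise ValueError('parameter count mismatch')
    out = {}
    for (src_k, value), dst_k in zip(src_items, dst_keys):
        src_shape = getattr(value, 'shape', None)
        dst_shape = getattr(dst_template[dst_k], 'shape', None)
        if src_shape != dst_shape:
            raise ValueError(f'shape mismatch: {src_k} -> {dst_k}')
        out[dst_k] = value
    return out
```

This only works when both state dicts enumerate parameters in the same order, which is the implicit assumption behind any positional remap.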
Ross Wightman
b293dfa595 Add CL SE module 2022-09-23 16:06:09 -07:00
Ross Wightman
2a296412be Add Adan optimizer 2022-09-23 16:05:52 -07:00
Ross Wightman
5dc4343308 Version 0.6.11 2022-09-23 13:54:56 -07:00
Ross Wightman
a383ef99f5 Make huggingface_hub necessary if it's the only source for a pretrained weight 2022-09-23 13:54:21 -07:00
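Making a dependency required only when it is the sole source for a weight amounts to a guarded import check at load time. A hypothetical sketch of the pattern (function name and error message are illustrative, not timm's internals):

```python
import importlib.util

def require_dep(dep, model_name):
    # Fail with an actionable message only when the optional
    # dependency is actually needed and actually missing.
    if importlib.util.find_spec(dep) is None:
        raise RuntimeError(
            f"{model_name} pretrained weights require the '{dep}' package; "
            f"install it with `pip install {dep}`."
        )
```

The benefit of checking lazily is that users who never touch hub-hosted weights never need the package installed.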
Ross Wightman
33e30f8c8b Remove layer-decay print 2022-09-18 21:33:03 -07:00
Ross Wightman
e069249a2d Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 2022-09-16 21:39:05 -07:00
Ross Wightman
9d65557be3 Fix errant import 2022-09-15 17:47:23 -07:00
Ross Wightman
9709dbaaa9 Add support for fine-tuning CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP 2022-09-15 17:25:59 -07:00
Ross Wightman
a520da9b49 Update tresnet features_info for v2 2022-09-13 20:54:54 -07:00
Ross Wightman
c8ab747bf4 BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter 2022-09-13 17:56:49 -07:00
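The 'module' issue is the classic one where checkpoints saved from a `torch.nn.DataParallel`/DDP-wrapped model carry a `module.` prefix on every key. A checkpoint filter of the kind the commit describes strips it so the weights load into a bare model (a sketch; timm's actual filter may do more):

```python
def clean_state_dict(state_dict):
    # Strip the 'module.' prefix added by (Distributed)DataParallel
    # wrappers so keys match an unwrapped model's parameter names.
    prefix = 'module.'
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}
```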
Ross Wightman
73049dc2aa Fix typo in dla weight update 2022-09-13 17:52:45 -07:00
Ross Wightman
3599c7e6a4 version 0.6.10 2022-09-13 16:37:02 -07:00
Ross Wightman
e11efa872d Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights. 2022-09-13 16:35:26 -07:00
Ross Wightman
fa8c84eede Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs 2022-09-07 12:37:37 -07:00
Ross Wightman
c1b3cea19d Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 2022-09-07 10:27:11 -07:00
Ross Wightman
914544fc81 Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2 2022-09-06 20:25:18 -07:00
Ross Wightman
dc90816f26 Add maxvit_tiny_rw_224 weights 83.5 @ 224 and maxvit_rmlp_pico_rw_256 relpos weights, 80.5 @ 256, 81.3 @ 320 2022-09-06 16:14:41 -07:00
Ross Wightman
f489f02ad1 Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original. 2022-09-06 16:14:00 -07:00
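Ratio-based window sizing means the attention window is derived from the input resolution instead of being hard-coded, so changing img_size keeps the same window-to-image proportion. A sketch of the idea (the divisor of 32, which maps 224 to the common 7x7 window, is an assumption, not gcvit's exact scheme):

```python
def window_size_from_ratio(img_size, ratio=32):
    # Derive the window size from the input resolution via a fixed
    # divisor so alternate resolutions rescale windows automatically.
    h, w = (img_size, img_size) if isinstance(img_size, int) else img_size
    return max(h // ratio, 1), max(w // ratio, 1)
```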
Ross Wightman
7f1b223c02 Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default 2022-08-29 15:49:32 -07:00
Ross Wightman
e6a4361306 pretrained_cfg entry for mvitv2_small_cls 2022-08-28 15:27:01 -07:00
Ross Wightman
f66e5f0e35 Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443 2022-08-28 15:24:04 -07:00
Ross Wightman
f1d2160d85 Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks 2022-08-26 12:53:49 -07:00
Ross Wightman
eca6f0a25c Fix syntax error (extra dataclass comma) in maxxvit.py 2022-08-26 11:29:09 -07:00
Ross Wightman
ff6a919cf5 Add --fast-norm arg to benchmark.py, train.py, validate.py 2022-08-25 17:20:46 -07:00
Ross Wightman
769ab4b98a Clean up no_grad for trunc normal weight inits 2022-08-25 16:29:52 -07:00
Ross Wightman
48e1df8b37 Add norm/norm_act header comments 2022-08-25 16:29:34 -07:00
Ross Wightman
7c2660576d Tweak init for convnext block using maxxvit/coatnext. 2022-08-25 15:30:59 -07:00
Ross Wightman
1d8d6f6072 Fix two default args in DenseNet blocks... fix #1427 2022-08-25 15:00:35 -07:00
Ross Wightman
527f9a4cb2 Updated to correct maxvit_nano weights... 2022-08-24 12:42:11 -07:00
Ross Wightman
b2e8426fca Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'. 2022-08-24 11:01:20 -07:00
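'avg2' pooling here is 2x2 average pooling with kernel = stride = 2, i.e. each output value is the mean of a non-overlapping 2x2 input block. A minimal nested-list sketch of the operation (timm's version of course operates on tensors):

```python
def avg_pool2(x):
    # 2x2 average pool, stride 2: each output cell is the mean of a
    # non-overlapping 2x2 block of the input grid.
    return [[(x[i][j] + x[i][j + 1] + x[i + 1][j] + x[i + 1][j + 1]) / 4
             for j in range(0, len(x[0]) - 1, 2)]
            for i in range(0, len(x) - 1, 2)]
```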
Ross Wightman
837c68263b For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode 2022-08-23 15:17:12 -07:00
Ross Wightman
cac0a4570a More test fixes, pool size for 256x256 maxvit models 2022-08-23 13:38:26 -07:00
Ross Wightman
e939ed19b9 Rename internal creation fn for maxvit, has not been just coatnet for a while... 2022-08-22 17:44:51 -07:00
Ross Wightman
ffaf97f813 MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies.. 2022-08-22 17:42:10 -07:00
Ross Wightman
8c9696c9df More model and test fixes 2022-08-22 17:40:31 -07:00
Ross Wightman
ca52108c2b Fix some model support functions 2022-08-19 10:20:51 -07:00
Ross Wightman
f332fc2db7 Fix some test failures, torchscript issues 2022-08-18 16:19:46 -07:00