Wauplin | 9b114754db | refactor push_to_hub helper | 2022-11-16 12:03:34 +01:00
Wauplin | ae0a0db7de | Create repo before cloning with Repository.clone_from | 2022-11-15 15:17:20 +01:00
Ross Wightman | 803254bb40 | Fix spacing misalignment for fast norm path in LayerNorm modules | 2022-10-24 21:43:49 -07:00
Ross Wightman | 6635bc3f7d | Merge pull request #1479 from rwightman/script_cleanup: Train / val script enhancements, non-GPU (i.e. CPU) device support, HF datasets support, TFDS/WDS dataloading improvements | 2022-10-15 09:29:39 -07:00
Ross Wightman | 0e6023f032 | Merge pull request #1381 from ChristophReich1996/master: Fix typo in PositionalEncodingFourier | 2022-10-14 18:34:33 -07:00
Ross Wightman | 66f4af7090 | Merge remote-tracking branch 'origin/master' into script_cleanup | 2022-10-14 15:54:00 -07:00
Ross Wightman | 9914f744dc | Add more maxxvit weights, including ConvNeXt conv block based experiments. | 2022-10-10 21:49:18 -07:00
Mohamed Rashad | 8fda68aff6 | Fix repo id bug (fixes issue #1482) | 2022-10-05 16:26:06 +02:00
Ross Wightman | 1199c5a1a4 | clip_laion2b models need 1e-5 eps for LayerNorm | 2022-09-25 10:36:54 -07:00
Ross Wightman | e858912e0c | Add brute-force checkpoint remapping option | 2022-09-23 16:07:03 -07:00
Ross Wightman | b293dfa595 | Add CL SE module | 2022-09-23 16:06:09 -07:00
Ross Wightman | a383ef99f5 | Make huggingface_hub necessary if it's the only source for a pretrained weight | 2022-09-23 13:54:21 -07:00
Ross Wightman | e069249a2d | Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 | 2022-09-16 21:39:05 -07:00
Ross Wightman | 9d65557be3 | Fix errant import | 2022-09-15 17:47:23 -07:00
Ross Wightman | 9709dbaaa9 | Adding support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP | 2022-09-15 17:25:59 -07:00
Ross Wightman | a520da9b49 | Update tresnet features_info for v2 | 2022-09-13 20:54:54 -07:00
Ross Wightman | c8ab747bf4 | BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter | 2022-09-13 17:56:49 -07:00
Ross Wightman | 73049dc2aa | Fix type in dla weight update | 2022-09-13 17:52:45 -07:00
Ross Wightman | e11efa872d | Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights. | 2022-09-13 16:35:26 -07:00
Ross Wightman | fa8c84eede | Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs | 2022-09-07 12:37:37 -07:00
Ross Wightman | c1b3cea19d | Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 | 2022-09-07 10:27:11 -07:00
Ross Wightman | 914544fc81 | Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2 | 2022-09-06 20:25:18 -07:00
Ross Wightman | dc90816f26 | Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320 | 2022-09-06 16:14:41 -07:00
Ross Wightman | f489f02ad1 | Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original. | 2022-09-06 16:14:00 -07:00
Ross Wightman | 7f1b223c02 | Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default | 2022-08-29 15:49:32 -07:00
Ross Wightman | e6a4361306 | pretrained_cfg entry for mvitv2_small_cls | 2022-08-28 15:27:01 -07:00
Ross Wightman | f66e5f0e35 | Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443 | 2022-08-28 15:24:04 -07:00
Ross Wightman | f1d2160d85 | Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks | 2022-08-26 12:53:49 -07:00
Ross Wightman | eca6f0a25c | Fix syntax error (extra dataclass comma) in maxxvit.py | 2022-08-26 11:29:09 -07:00
Ross Wightman | ff6a919cf5 | Add --fast-norm arg to benchmark.py, train.py, validate.py | 2022-08-25 17:20:46 -07:00
Ross Wightman | 769ab4b98a | Clean up no_grad for trunc normal weight inits | 2022-08-25 16:29:52 -07:00
Ross Wightman | 48e1df8b37 | Add norm/norm_act header comments | 2022-08-25 16:29:34 -07:00
Ross Wightman | 7c2660576d | Tweak init for the convnext block used by maxxvit/coatnet. | 2022-08-25 15:30:59 -07:00
Ross Wightman | 1d8d6f6072 | Fix two default args in DenseNet blocks... fix #1427 | 2022-08-25 15:00:35 -07:00
Ross Wightman | 527f9a4cb2 | Updated to correct maxvit_nano weights... | 2022-08-24 12:42:11 -07:00
Ross Wightman | b2e8426fca | Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'. | 2022-08-24 11:01:20 -07:00
Ross Wightman | 837c68263b | For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode | 2022-08-23 15:17:12 -07:00
Ross Wightman | cac0a4570a | More test fixes, pool size for 256x256 maxvit models | 2022-08-23 13:38:26 -07:00
Ross Wightman | e939ed19b9 | Rename internal creation fn for maxvit; it has not been just coatnet for a while... | 2022-08-22 17:44:51 -07:00
Ross Wightman | ffaf97f813 | MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies... | 2022-08-22 17:42:10 -07:00
Ross Wightman | 8c9696c9df | More model and test fixes | 2022-08-22 17:40:31 -07:00
Ross Wightman | ca52108c2b | Fix some model support functions | 2022-08-19 10:20:51 -07:00
Ross Wightman | f332fc2db7 | Fix some test failures, torchscript issues | 2022-08-18 16:19:46 -07:00
Ross Wightman | 6e559e9b5f | Add MViT (Multi-Scale) V2 | 2022-08-17 15:12:31 -07:00
Ross Wightman | 43aa84e861 | Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit) | 2022-08-17 14:32:58 -07:00
Ross Wightman | c486aa71f8 | Add GCViT | 2022-08-17 14:29:18 -07:00
Ross Wightman | fba6ecd39b | Add EfficientFormer | 2022-08-17 14:08:53 -07:00
Ross Wightman | ff4a38e2c3 | Add PyramidVisionTransformerV2 | 2022-08-17 12:06:05 -07:00
Ross Wightman | 1d8ada359a | Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size | 2022-08-15 17:56:08 -07:00
Ross Wightman | 2544d3b80f | ConvNeXt pico, femto, and nano, plus pico and femto ols (overlapping stem) weights and model defs | 2022-08-05 17:05:50 -07:00