Commit Graph

1534 Commits (b8e3ffd498225d0372e9f0fe969d7810e41076e4)

Author SHA1 Message Date
nateraw b8e3ffd498 👷 revert ci workflow to main 2022-10-05 12:40:44 -04:00
nateraw d56f9f7a23 🐛 fix link 2022-10-05 12:39:34 -04:00
nateraw 6153816250 🚧 test doc-builder branch to fix build here 2022-10-05 12:27:17 -04:00
nateraw 2f1abc2cf9 👷 add build_documentation workflow 2022-10-04 16:38:26 -04:00
nateraw 3a429d04ee 🚑 supply --not_python_module for now 2022-10-03 17:07:24 -04:00
nateraw d4d915caf3 🚧 update path_to_docs in pr doc builder workflow 2022-10-03 17:03:11 -04:00
nateraw 96109a909b 🚧 update docs path 2022-10-03 13:35:44 -04:00
nateraw 6c3d02a7e5 🚧 update inputs to build_pr_documentation workflow 2022-10-03 13:25:21 -04:00
nateraw 25059001a9 🚧 add repo_owner 2022-10-03 13:12:09 -04:00
nateraw e4c99d2bd6 📝 add hfdocs documentation 2022-09-23 17:51:45 -07:00
Ross Wightman 5dc4343308 version 0.6.11 2022-09-23 13:54:56 -07:00
Ross Wightman a383ef99f5 Make huggingface_hub necessary if it's the only source for a pretrained weight 2022-09-23 13:54:21 -07:00
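Commit a383ef99f5 makes `huggingface_hub` a required dependency only when the HF Hub is the sole source for a pretrained weight. A minimal sketch of that gating logic, using hypothetical config keys (`url`, `hf_hub_id`) rather than timm's actual internals:

```python
def resolve_pretrained_source(cfg: dict) -> str:
    """Decide where pretrained weights come from (illustrative helper).

    A direct download URL needs no extra dependencies; if the HF Hub is
    the only source, the optional huggingface_hub import becomes mandatory.
    """
    if cfg.get("url"):
        return "url"  # direct download, no optional deps required
    if cfg.get("hf_hub_id"):
        try:
            import huggingface_hub  # noqa: F401  optional dependency
        except ImportError as e:
            raise RuntimeError(
                "huggingface_hub is required: it is the only source "
                "for this pretrained weight"
            ) from e
        return "hf_hub"
    raise ValueError("model has no pretrained weight source configured")
```

A direct `url` entry short-circuits the optional import, so users who never load Hub-only weights don't need the extra package installed.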
Ross Wightman d199f6651d Merge pull request #1467 from rwightman/clip_laion2b (Adding support for fine-tune CLIP LAION-2B image tower weights for B/32, L/14, H/14, and g/14.) 2022-09-23 13:18:16 -07:00
Ross Wightman 33e30f8c8b Remove layer-decay print 2022-09-18 21:33:03 -07:00
Ross Wightman e069249a2d Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 2022-09-16 21:39:05 -07:00
Ross Wightman 9d65557be3 Fix errant import 2022-09-15 17:47:23 -07:00
Ross Wightman 9709dbaaa9 Adding support for fine-tune CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP 2022-09-15 17:25:59 -07:00
Ross Wightman a520da9b49 Update tresnet features_info for v2 2022-09-13 20:54:54 -07:00
Ross Wightman c8ab747bf4 BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter 2022-09-13 17:56:49 -07:00
Ross Wightman 73049dc2aa Fix typo in dla weight update 2022-09-13 17:52:45 -07:00
Ross Wightman 3599c7e6a4 version 0.6.10 2022-09-13 16:37:02 -07:00
Ross Wightman e11efa872d Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights. 2022-09-13 16:35:26 -07:00
Ross Wightman fa8c84eede Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs 2022-09-07 12:37:37 -07:00
Ross Wightman de40f66536 Update README.md 2022-09-07 10:40:58 -07:00
Ross Wightman c1b3cea19d Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 2022-09-07 10:27:11 -07:00
Ross Wightman da6f8f5a40 Fix beitv2 tests 2022-09-07 08:09:47 -07:00
Ross Wightman 914544fc81 Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2 2022-09-06 20:25:18 -07:00
Ross Wightman dc90816f26 Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320 2022-09-06 16:14:41 -07:00
Ross Wightman f489f02ad1 Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original. 2022-09-06 16:14:00 -07:00
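Commit f489f02ad1 switches GCViT to a ratio-based window size so the window scales with input resolution instead of being fixed. The idea can be sketched with a hypothetical helper (not the actual gcvit code):

```python
def window_size_from_ratio(img_size: int, ratio: float = 1 / 32) -> int:
    """Derive an attention window size as a fixed fraction of the input
    resolution, so changing img_size rescales the window automatically
    instead of breaking at non-default resolutions."""
    return max(1, int(img_size * ratio))
```

With a ratio of 1/32, a 224px input yields a 7px window and a 256px input an 8px window, which is the resolution-following behavior the commit describes.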
Ross Wightman c45c6ee8e4 Update README.md 2022-08-29 15:52:24 -07:00
Ross Wightman 7f1b223c02 Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default 2022-08-29 15:49:32 -07:00
Ross Wightman e6a4361306 pretrained_cfg entry for mvitv2_small_cls 2022-08-28 15:27:01 -07:00
Ross Wightman f66e5f0e35 Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443 2022-08-28 15:24:04 -07:00
Ross Wightman b94b7cea65 Missed GCVit in README paper links 2022-08-28 15:23:07 -07:00
Ross Wightman f1d2160d85 Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks 2022-08-26 12:53:49 -07:00
Ross Wightman eca6f0a25c Fix syntax error (extra dataclass comma) in maxxvit.py 2022-08-26 11:29:09 -07:00
Ross Wightman 4f72bae43b Merge pull request #1415 from rwightman/more_vit (More ViT and ViT-CNN Hybrid architecture) 2022-08-26 10:00:42 -07:00
Ross Wightman ff6a919cf5 Add --fast-norm arg to benchmark.py, train.py, validate.py 2022-08-25 17:20:46 -07:00
Ross Wightman 769ab4b98a Clean up no_grad for trunc normal weight inits 2022-08-25 16:29:52 -07:00
Ross Wightman 48e1df8b37 Add norm/norm_act header comments 2022-08-25 16:29:34 -07:00
Ross Wightman 99ee61e245 Add T/G legend to README.md maxvit list 2022-08-25 15:58:57 -07:00
Ross Wightman a54008bd97 Update README.md for merge 2022-08-25 15:56:56 -07:00
Ross Wightman 7c2660576d Tweak init for convnext block using maxxvit/coatnext. 2022-08-25 15:30:59 -07:00
Ross Wightman 1d8d6f6072 Fix two default args in DenseNet blocks... fix #1427 2022-08-25 15:00:35 -07:00
Ross Wightman 527f9a4cb2 Updated to correct maxvit_nano weights... 2022-08-24 12:42:11 -07:00
Ross Wightman 2a5b5b2a7b Update feature_request.md 2022-08-24 12:24:32 -07:00
Ross Wightman e018253acc Update config.yml 2022-08-24 12:21:03 -07:00
Ross Wightman 995e2691d6 Update config.yml 2022-08-24 12:20:26 -07:00
Ross Wightman b2e8426fca Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'. 2022-08-24 11:01:20 -07:00
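The 'avg2' default in b2e8426fca refers to downsampling by average pooling with kernel = stride = 2. A dependency-free sketch of that operation (illustrative only, not the timm implementation):

```python
def avg_pool_2x2(x):
    """2x2 average pooling with stride 2 (an 'avg2'-style downsample).

    x: 2-D list of numbers with even height and width.
    Returns a grid half the size in each spatial dimension, where each
    output value is the mean of a non-overlapping 2x2 input patch.
    """
    h, w = len(x), len(x[0])
    return [
        [(x[i][j] + x[i][j + 1] + x[i + 1][j] + x[i + 1][j + 1]) / 4
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]
```

In the real models this pooling sits in the stage downsample path; the sketch only shows the arithmetic of the k=stride=2 reduction.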
Ross Wightman 837c68263b For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode 2022-08-23 15:17:12 -07:00