Commit Graph

464 Commits (cf0e371594294685e02df4d1d51c00e0c2c916e9)

Author SHA1 Message Date
talrid cf0e371594 84_0 2021-04-27 22:33:55 +03:00
talrid 0968bdeca3 vit, tresnet and mobilenetV3 ImageNet-21K-P weights 2021-04-27 20:54:06 +03:00
Norman Mu 79640fcc1f Enable uniform augmentation magnitude sampling and set AugMix default 2021-04-19 14:21:12 -07:00
Ross Wightman c1cf9712fc Add updated EfficientNet-V2S weights, 83.8 @ 384x384 test. Add PyTorch trained EfficientNet-B4 weights, 83.4 @ 384x384 test. Tweak non TF EfficientNet B1-B4 train/test res scaling. 2021-04-19 10:42:56 -07:00
Ross Wightman e8a64fb881 Test input size for efficientnet_v2s was wrong in last results run 2021-04-17 16:17:41 -07:00
Ross Wightman 2df77ee5cb Fix torchscript compat and features_only behaviour in GhostNet PR. A few minor formatting changes. Reuse existing layers. 2021-04-15 10:20:26 -07:00
Ross Wightman d793deb51a Merge branch 'master' of https://github.com/iamhankai/pytorch-image-models into iamhankai-master 2021-04-15 09:30:25 -07:00
Ross Wightman e685618f45 Merge pull request #550 from amaarora/wandb (Wandb Support) 2021-04-15 09:26:35 -07:00
Ross Wightman f606c45c38 Add Swin Transformer models from https://github.com/microsoft/Swin-Transformer 2021-04-13 12:17:21 -07:00
iamhankai de445e7827 Add GhostNet 2021-04-13 23:19:51 +08:00
Ross Wightman 5a196dddf6 Update README.md with latest, bump version to 0.4.8 2021-04-12 13:15:00 -07:00
Ross Wightman b3d7580df1 Update ByoaNet comments. Fix first stem feat chs for ByobNet. 2021-04-12 12:11:35 -07:00
Ross Wightman 16f7aa9f54 Add default_cfg options for min_input_size / fixed_input_size, queries in model registry, and use for testing self-attn models 2021-04-12 11:54:22 -07:00
Ross Wightman 4e4b863b15 Missed norm.py 2021-04-12 09:57:56 -07:00
Ross Wightman 7c97e66f7c Remove commented code, add more consistent seed fn 2021-04-12 09:51:36 -07:00
Ross Wightman 364dd6a58e Merge branch 'master' into byoanet-self_attn 2021-04-12 09:38:59 -07:00
Ross Wightman ce62f96d4d ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments 2021-04-12 09:38:02 -07:00
Ross Wightman cd3dc4979f Fix adabelief imports, remove prints, preserve memory format is the default arg for zeros_like 2021-04-12 08:25:31 -07:00
Ross Wightman 21812d33aa Add prelim efficientnet_v2s weights from 224x224 train, eval 83.3 @ 288. Add eca_nfnet_l1 weights, train at 256, eval 84 @ 320. 2021-04-11 23:26:13 -07:00
Aman Arora 5772c55c57 Make wandb optional 2021-04-10 01:34:20 -04:00
Aman Arora f54897cc0b make wandb not required but rather optional, as with huggingface_hub 2021-04-10 01:27:23 -04:00
Aman Arora 3f028ebc0f import wandb in summary.py 2021-04-08 03:48:51 -04:00
Aman Arora 624c9b6949 log to wandb only if using wandb 2021-04-08 03:40:22 -04:00
juntang addfc7c1ac adabelief 2021-04-04 23:48:15 -04:00
Ross Wightman fb896c0b26 Update some comments re preliminary EfficientNet-V2 assumptions 2021-04-03 12:00:25 -07:00
Ross Wightman 2b49ab7a36 Fix ResNetV2 pretrained classifier issue. Fixes #540 2021-04-03 11:18:12 -07:00
Ross Wightman de9dff933a EfficientNet-V2S preliminary model def (for experimentation) 2021-04-02 09:36:51 -07:00
Ross Wightman 37c71a5609 Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch. 2021-04-01 22:34:55 -07:00
Ross Wightman 2bb65bd875 Wrong default_cfg pool_size for L1 2021-04-01 20:00:41 -07:00
Ross Wightman bf2ca6bdf4 Merge jax and original weight init 2021-04-01 18:11:51 -07:00
Ross Wightman acbd698c83 Update README.md with updates. Small tweak to head_dist handling. 2021-04-01 17:49:05 -07:00
Ross Wightman 9071568f0e Add weights for SE NFNet-L0 model, rename nfnet_l0b -> nfnet_l0. 82.75 top-1 @ 288. Add nfnet_l1 model def for training. 2021-04-01 17:22:27 -07:00
Ross Wightman c468c47a9c Add regnety_160 weights from DeiT teacher model, update that and my regnety_032 weights to use higher test size. 2021-04-01 16:41:04 -07:00
Ross Wightman 288682796f Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7 2021-04-01 16:40:12 -07:00
Ross Wightman ea9c9550b2 Fully move ViT hybrids to their own file, including embedding module. Remove some extra DeiT models that were for benchmarking only. 2021-04-01 14:17:38 -07:00
Ross Wightman a5310a3451 Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update 2021-04-01 12:15:34 -07:00
Ross Wightman 7953e5d11a Fix pos_embed scaling for ViT and num_classes != 1000 for pretrained distilled deit and pit models. Fix #426 and fix #433 2021-03-31 23:11:28 -07:00
Ross Wightman a760a4c3f4 Some ViT cleanup, merge distilled model with main, fixup torchscript support for distilled models 2021-03-31 18:21:02 -07:00
Ross Wightman 0dfc5a66bb Add PiT model from https://github.com/naver-ai/pit 2021-03-31 18:20:14 -07:00
Ross Wightman 51febd869b Small tweak to tests for tnt model, reorder model imports. 2021-03-29 11:33:08 -07:00
Ross Wightman b27a4e0d88 Merge branch 'master' of https://github.com/contrastive/pytorch-image-models into contrastive-master 2021-03-29 10:37:05 -07:00
Aman Arora 6b18061773 Add GIST to docstring for quick access 2021-03-29 15:33:31 +11:00
contrastive de86314655 Update TNT 2021-03-29 08:23:34 +08:00
Aman Arora 92b1db9a79 update docstrings and add check on and 2021-03-29 10:04:51 +11:00
Aman Arora b85be24054 update to work with fnmatch 2021-03-29 09:36:31 +11:00
contrastive cfc15283a4 Update TNT url 2021-03-28 23:19:15 +08:00
contrastive 4a09bc851e Add TNT model 2021-03-28 19:53:42 +08:00
Aman Arora 20626e8387 Add to extract stats for SPP 2021-03-27 05:40:04 +11:00
Ross Wightman cf5fec5047 Cleanup experimental vit weight init a bit 2021-03-20 09:44:24 -07:00
Ross Wightman f42f1df26c Improve evenness of per-worker split for validation set with TFDS 2021-03-18 23:16:14 -07:00
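
The last commit above (f42f1df26c) concerns splitting a TFDS validation set more evenly across data-loading workers. As a generic illustration of what an even split looks like, here is a small arithmetic sketch; it is not the actual timm parser code, just the idea of spreading the remainder so worker shares differ by at most one sample:

```python
def even_split(num_samples, num_workers):
    # Give every worker the base share, then hand out the remainder
    # one sample at a time so shares differ by at most 1.
    base, rem = divmod(num_samples, num_workers)
    counts = [base + (1 if i < rem else 0) for i in range(num_workers)]
    starts = [sum(counts[:i]) for i in range(num_workers)]
    return list(zip(starts, counts))

print(even_split(50000, 8))  # [(0, 6250), (6250, 6250), ...]
```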
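Commit 16f7aa9f54 adds min_input_size / fixed_input_size hints to default_cfg so the test suite can pick a valid resolution for self-attention models. A rough sketch of how such hints could be consumed; the pick_test_input_size helper is hypothetical, and the assumption is that these keys are optional entries in the per-model default_cfg dict:

```python
import timm

def pick_test_input_size(model_name, preferred=(3, 224, 224)):
    # default_cfg is a plain dict attached to each timm model.
    cfg = timm.create_model(model_name, pretrained=False).default_cfg
    # 'fixed_input_size' / 'min_input_size' may or may not be present,
    # per commit 16f7aa9f54; treat them as optional hints.
    if cfg.get('fixed_input_size', False):
        # Model only works at its native resolution (e.g. fixed pos embeddings).
        return cfg['input_size']
    min_size = cfg.get('min_input_size')
    if min_size is not None:
        # Never go below the minimum resolution the architecture supports.
        return tuple(max(p, m) for p, m in zip(preferred, min_size))
    return preferred

print(pick_test_input_size('resnet50'))  # (3, 224, 224)
```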
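Several commits above register new architectures (Swin Transformer f606c45c38, GhostNet de445e7827, PiT 0dfc5a66bb, TNT 4a09bc851e). A quick usage sketch with the timm factory API; the exact registered model names are assumptions here, so they are looked up first via timm.list_models, which (per commit b85be24054) filters with fnmatch-style wildcards:

```python
import timm
import torch

# Wildcard queries go through fnmatch, e.g. everything starting with 'swin' or 'pit_'.
print(timm.list_models('swin*'))
print(timm.list_models('pit_*'))

# Model name below is an assumption; pick one from the list_models output.
model = timm.create_model('swin_base_patch4_window7_224', pretrained=False, num_classes=1000)
model.eval()

with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```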
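Commits 5772c55c57, f54897cc0b, and 624c9b6949 make Weights & Biases logging optional rather than a hard dependency. A minimal sketch of that optional-import pattern, assuming a hypothetical log_wandb argument; the guard and function names are illustrative, not the exact ones used in the training script:

```python
# Optional wandb dependency: training still runs without the package installed.
try:
    import wandb
    has_wandb = True
except ImportError:
    has_wandb = False

def maybe_init_wandb(args):
    # Only touch wandb when the user asked for it and the import succeeded.
    if getattr(args, 'log_wandb', False):
        if has_wandb:
            wandb.init(project='pytorch-image-models', config=vars(args))
        else:
            print("wandb requested but not installed; skipping (pip install wandb)")

def maybe_log_metrics(args, metrics, step):
    # Guarded logging call so metrics dicts are ignored when wandb is absent.
    if getattr(args, 'log_wandb', False) and has_wandb:
        wandb.log(metrics, step=step)
```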