459 Commits

Author SHA1 Message Date
Ross Wightman
c16e965037 Add some ViT comments and fix a few minor issues. 2021-01-24 23:18:35 -08:00
Ross Wightman
22748f1a2d Convert samples/targets in ParserImageInTar to numpy arrays, slightly less mem usage for massive datasets. Add a few more se/eca model defs to resnet.py 2021-01-22 16:54:33 -08:00
Ross Wightman
5d4c3d0af3 Add enhanced ParserImageInTar that can read images from tars within tars, folders with multiple tars, etc. Additional comment cleanup. 2021-01-22 10:52:04 -08:00
Ross Wightman
55f7dfa9ea Refactor vision_transformer entrypoint fns, add pos embedding resize support for fine-tuning, add some deit models for testing 2021-01-18 16:11:02 -08:00
Ross Wightman
d55bcc0fee Finish adding stochastic depth support to BiT ResNetV2 models 2021-01-16 16:32:03 -08:00
Ross Wightman
855d6cc217 More dataset work including factories and a tensorflow datasets (TFDS) wrapper
* Add parser/dataset factory methods for more flexible dataset & parser creation
* Add dataset parser that wraps TFDS image classification datasets
* Fix num_classes handling bug for 21k models
* Add initial deit models so they can be benchmarked in next csv results runs
2021-01-15 17:26:20 -08:00
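The commit above introduces the dataset/parser factory and the TFDS wrapper. A minimal usage sketch, assuming the `create_dataset` helper in `timm.data` and a `tfds/`-prefixed name for TFDS-backed datasets (exact names and arguments may differ at this commit):

```python
from timm.data import create_dataset

# Folder- or tar-backed dataset; an empty name lets the parser factory
# infer the on-disk format from the files under root.
train_ds = create_dataset('', root='/data/imagenet', split='train')

# TFDS-backed dataset; the 'tfds/' prefix routes to the TFDS parser wrapper
# (requires tensorflow-datasets to be installed).
tfds_ds = create_dataset('tfds/oxford_iiit_pet', root='/data/tfds', split='train')

img, target = train_ds[0]
```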
Ross Wightman
20516abc18 Fix some broken tests for ResNetV2 BiT models 2021-01-04 23:21:39 -08:00
Ross Wightman
59ec7e6a53 Merge branch 'master' into imagenet21k_datasets_more 2021-01-04 12:11:05 -08:00
Ross Wightman
e7a9ddf982 Merge pull request #334 from kecsap/links
Follow symbolic links during dataset scanning
2021-01-04 10:30:58 -08:00
Csaba Kertesz
7cae7e7035 Follow links during dataset scanning 2021-01-04 00:16:45 +02:00
Ross Wightman
c96e9f99a0 Update version to 0.3.3 2021-01-03 12:43:44 -08:00
Ross Wightman
4e2533db77 Add 320x320 model default cfgs for 101D and 152D ResNets. Add SEResNet-152D weights and 320x320 cfg. 2021-01-03 12:10:25 -08:00
Ross Wightman
0167f749d3 Remove some old __future__ imports 2021-01-03 11:24:16 -08:00
Ross Wightman
e35e9760a6 More work on dataset / parser split and imagenet21k (tar) support 2020-12-28 16:59:15 -08:00
Ross Wightman
ce69de70d3 Add 21k weight URLs to vision_transformer. Clean up feature_info for preact ResNetV2 (BiT) models 2020-12-28 16:59:15 -08:00
Ross Wightman
231d04e91a ResNetV2 pre-act and non-preact models, w/ BiT pretrained weights and support for the ViT R50 model. Tweaks for in21k num_classes passing. More to do... tests failing. 2020-12-28 16:59:15 -08:00
Ross Wightman
de6046e213 Initial commit for dataset / parser reorg to support additional datasets / types 2020-12-28 16:59:15 -08:00
Ross Wightman
392595c7eb Add pool_size to default cfgs for new models to prevent tests from failing. Add explicit 200D_320 model entrypoint for next benchmark run. 2020-12-18 21:28:47 -08:00
Ross Wightman
b1f1228a41 Add ResNet101D, 152D, and 200D weights, remove meh 66d model 2020-12-18 17:13:37 -08:00
Jasha
7c56c718f3 Configure create_optimizer with args.opt_args
Closes #301
2020-12-08 00:03:09 -06:00
Ross Wightman
9a25fdf3ad Merge pull request #297 from rwightman/ema_simplify
Simplified JIT compatible Ema module. Fixes for SiLU export and torchscript training w/ Linear layer.
2020-12-05 11:42:45 -08:00
Tymoteusz Wiśniewski
de15b43865 Fix a bug with accuracy retrieval from RealLabels 2020-12-04 16:12:50 +01:00
Ross Wightman
cd72e66eff Fix bug in last mod for features_only default_cfg 2020-12-03 12:33:01 -08:00
Ross Wightman
867a0e5a04 Add default_cfg back to models wrapped in feature extraction module as per discussion in #294. 2020-12-03 10:24:35 -08:00
Ross Wightman
4ca52d73d8 Add separate set and update methods to ModelEmaV2 2020-12-03 10:05:09 -08:00
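For reference on the two methods above: `update()` folds the live weights into the averaged copy using the configured decay, while `set()` copies them outright. A minimal sketch, assuming `ModelEmaV2` is importable from `timm.utils` and illustrative values throughout:

```python
import torch
import torch.nn as nn
from timm.utils import ModelEmaV2

model = nn.Linear(10, 2)
ema = ModelEmaV2(model, decay=0.9999)

# After each optimizer step: exponential moving average update of the EMA copy.
ema.update(model)

# Hard copy of the current weights (no decay), e.g. to re-sync after resuming.
ema.set(model)

# Evaluate with the averaged weights held in ema.module.
with torch.no_grad():
    out = ema.module(torch.randn(1, 10))
```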
Ross Wightman
2ed8f24715 A few more changes for 0.3.2 maint release. Linear layer change for mobilenetv3 and inception_v3, support no bias for linear wrapper. 2020-11-30 16:19:52 -08:00
Ross Wightman
6504a42832 Version 0.3.2 2020-11-30 13:39:08 -08:00
Ross Wightman
460eba7f24 Work around casting issue with combination of native torch AMP and torchscript for Linear layers 2020-11-30 13:30:51 -08:00
Ross Wightman
5f4b6076d8 Fix inplace arg compat for GELU and PReLU via activation factory 2020-11-30 13:27:40 -08:00
Ross Wightman
fd962c4b4a Native SiLU (Swish) op doesn't export to ONNX 2020-11-29 21:56:55 -08:00
Ross Wightman
27bbc70d71 Add back old ModelEma and rename new one to ModelEmaV2 to avoid compat breaks in dependent code. Shuffle train script, add a few comments, remove DataParallel support, support experimental torchscript training. 2020-11-29 16:22:19 -08:00
tigertang
43f2500c26 Add symbolic for SwishJitAutoFn to support ONNX 2020-11-18 14:36:12 +08:00
Ross Wightman
9214ca0716 Simplifying EMA... 2020-11-16 12:51:52 -08:00
Ross Wightman
53aeed3499 ver 0.3.1 2020-10-31 18:14:58 -07:00
Ross Wightman
30ab4a1494 Fix issue in optim factory with sgd / eps flag. Bump version to 0.3.1 2020-10-31 18:05:30 -07:00
Ross Wightman
741572dc9d Bump version to 0.3.0 for pending PyPi push 2020-10-29 17:31:39 -07:00
Ross Wightman
b401952caf Add new vision transformer large/base 224x224 weights ported from the official JAX repo 2020-10-29 17:31:01 -07:00
Ross Wightman
61200db0ab in_chans=1 working w/ pretrained weights for vision_transformer 2020-10-29 15:49:36 -07:00
Ross Wightman
e90edce438 Support native SiLU activation (aka Swish). An optimized version is available in PyTorch 1.7. 2020-10-29 15:45:17 -07:00
Ross Wightman
da6cd2cc1f Fix regression for pretrained classifier loading when using entrypoint functions directly 2020-10-29 15:43:39 -07:00
Ross Wightman
f591e90b0d Make sure num_features attr is present in vit models as with others 2020-10-29 15:33:47 -07:00
Ross Wightman
4a3df7842a Fix topn metric view regression on PyTorch 1.7 2020-10-29 14:04:15 -07:00
Ross Wightman
f944242cb0 Fix #262, num_classes arg mixup. Make vision_transformers a bit closer to other models wrt get/reset classifier/forward_features. Fix torchscript for ViT. 2020-10-29 13:58:28 -07:00
Ross Wightman
736f209e7d Update vision transformers to be compatible with official code. Port official ViT weights from the JAX impl. 2020-10-26 18:42:11 -07:00
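The ported ViT weights referenced above load through the usual model factory. A minimal sketch, assuming the `vit_base_patch16_224` model name as registered around this commit:

```python
import torch
import timm

# Create one of the ported Vision Transformer models with pretrained weights.
model = timm.create_model('vit_base_patch16_224', pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # shape: (1, 1000)
```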
Ross Wightman
477a78ed81 Fix optimizer factory regression for optimizers like sgd/momentum that don't have an eps arg 2020-10-22 15:59:47 -07:00
Ross Wightman
27a93e9de7 Improve test crop for ViT models. Small now 77.85, added base weights at 79.35 top-1. 2020-10-21 23:35:25 -07:00
Ross Wightman
d4db9e7977 Add small vision transformer weights. 77.42 top-1. 2020-10-21 12:14:12 -07:00
talrid
27fadaa922 asymmetric_loss 2020-10-16 17:12:28 +03:00
Ross Wightman
f31933cb37 Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. 2020-10-13 13:33:44 -07:00
Ross Wightman
a4d8fea61e Add model based wd skip support. Improve cross version compat of optimizer factory. Fix #247 2020-10-13 12:49:47 -07:00