Ross Wightman
780c0a96a4
Change args for RandomErasing so only one required for pixel/color mode
2019-05-18 12:29:30 -07:00
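The change above concerns how the erased region is filled. A minimal sketch of the underlying idea, with hypothetical helper names (not this repo's code): a single mode argument selects between constant fill, one random color per box, and independent per-pixel noise.

```python
import torch

def erase_block(img, top, left, h, w, mode="pixel"):
    """Fill one rectangle in a CHW float tensor; `mode` selects the fill (hypothetical helper)."""
    c = img.size(0)
    if mode == "pixel":       # independent noise per pixel and channel
        fill = torch.randn(c, h, w, device=img.device)
    elif mode == "color":     # one random value per channel for the whole box
        fill = torch.randn(c, 1, 1, device=img.device).expand(c, h, w)
    else:                     # constant (zero) fill
        fill = torch.zeros(c, h, w, device=img.device)
    img[:, top:top + h, left:left + w] = fill
    return img
```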
Ross Wightman
76539d905e
Some transform/data/loader refactoring, hopefully didn't break things
...
* factor out data related constants to own file
* move data related config helpers to own file
* add a variant of RandomResizedCrop that randomizes interpolation method
* remove old Numpy version of RandomErasing
* cleanup torch version of RandomErasing and use it in either GPU loader batch mode or single image cpu Transform
2019-05-16 22:52:17 -07:00
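A rough sketch of the randomized-interpolation crop mentioned in this commit, assuming a torchvision version that still accepts PIL interpolation constants; the class name and defaults here are illustrative, not the repo's.

```python
import random
from PIL import Image
from torchvision import transforms
import torchvision.transforms.functional as F

class RandomResizedCropRandInterp:
    """RandomResizedCrop-style transform that picks the resize interpolation per call (illustrative)."""
    def __init__(self, size, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3),
                 interpolations=(Image.BILINEAR, Image.BICUBIC)):
        self.size = size            # (h, w) output size
        self.scale = scale
        self.ratio = ratio
        self.interpolations = interpolations

    def __call__(self, img):
        i, j, h, w = transforms.RandomResizedCrop.get_params(img, self.scale, self.ratio)
        interp = random.choice(self.interpolations)
        return F.resized_crop(img, i, j, h, w, self.size, interp)

# usage: train_transform = RandomResizedCropRandInterp((224, 224))
```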
Ross Wightman
fee607edf6
Mixup implementation in progress
...
* initial impl w/ label smoothing converging, but needs more testing
2019-05-13 19:05:40 -07:00
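For context, the general mixup recipe (Zhang et al.) combined with label smoothing, shown as a generic sketch rather than this repo's exact in-progress code: draw lambda from a Beta distribution, mix each sample with its mirror in the batch, and mix the smoothed one-hot targets the same way.

```python
import numpy as np
import torch

def mixup_batch(x, target, num_classes, alpha=0.2, smoothing=0.1):
    """Return mixed inputs and soft targets for one batch (generic sketch)."""
    lam = float(np.random.beta(alpha, alpha))
    off = smoothing / num_classes
    on = 1.0 - smoothing + off
    y = torch.full((x.size(0), num_classes), off, device=x.device)
    y.scatter_(1, target.unsqueeze(1), on)        # label-smoothed one-hot targets
    x_mix = lam * x + (1.0 - lam) * x.flip(0)     # pair each sample with its mirror
    y_mix = lam * y + (1.0 - lam) * y.flip(0)
    return x_mix, y_mix

# loss = torch.sum(-y_mix * torch.log_softmax(model(x_mix), dim=-1), dim=-1).mean()
```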
Ross Wightman
8fbd62a169
Exclude batchnorm and bias params from weight_decay by default
2019-04-22 17:33:22 -07:00
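A common way to implement this exclusion, sketched with a hypothetical helper name: put biases and 1-d parameters (batchnorm scale/shift) in a parameter group with weight_decay=0 and everything else in the decayed group.

```python
import torch

def param_groups_no_decay(model, weight_decay=1e-4):
    """Split parameters so norm/bias tensors skip weight decay (illustrative helper)."""
    decay, no_decay = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        if p.ndim <= 1 or name.endswith(".bias"):   # BN weights/biases and all biases
            no_decay.append(p)
        else:
            decay.append(p)
    return [
        {"params": no_decay, "weight_decay": 0.0},
        {"params": decay, "weight_decay": weight_decay},
    ]

# optimizer = torch.optim.SGD(param_groups_no_decay(model), lr=0.1, momentum=0.9)
```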
Ross Wightman
bc264269c9
Morph mnasnet impl into a generic mobilenet that covers Mnasnet, MobileNetV1/V2, ChamNet, FBNet, and related
...
* add an alternate RMSprop opt that applies eps like TF
* add bn params for passing through alternates and changing defaults to TF style
2019-04-21 15:54:28 -07:00
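The eps discrepancy the alternate RMSprop addresses, reduced to its essence (this is the general TF-vs-PyTorch difference, not a quote of the repo's optimizer): TensorFlow adds eps inside the square root of the running squared-gradient average, PyTorch adds it after.

```python
import torch

def rmsprop_denominators(square_avg, eps=1e-10):
    """Show how eps enters the RMSprop update denominator in each convention."""
    pytorch_style = square_avg.sqrt() + eps    # torch.optim.RMSprop: eps added after sqrt
    tf_style = (square_avg + eps).sqrt()       # TensorFlow RMSprop: eps added inside sqrt
    return pytorch_style, tf_style
```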
Ross Wightman
e9c7961efc
Fix pooling in mnasnet, more sensible default for AMP opt level
2019-04-17 18:06:37 -07:00
Ross Wightman
0562b91c38
Add per model crop pct, interpolation defaults, tie it all together
...
* create one resolve fn to pull together model defaults + cmd line args
* update attribution comments in some models
* test and update train/validation/inference scripts
2019-04-12 22:55:24 -07:00
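A sketch of what such a resolve function might look like; the function name, keys, and precedence below are assumptions for illustration, not the repo's exact API: explicit command-line values win, then the model's own defaults, then a global fallback.

```python
def resolve_data_config(args, default_cfg, fallback):
    """Merge per-model defaults with command-line overrides (illustrative)."""
    cfg = {}
    for key in ("input_size", "interpolation", "mean", "std", "crop_pct"):
        if getattr(args, key, None) is not None:   # explicit CLI value wins
            cfg[key] = getattr(args, key)
        elif default_cfg.get(key) is not None:     # then the model's own default
            cfg[key] = default_cfg[key]
        else:                                      # then a global fallback
            cfg[key] = fallback[key]
    return cfg
```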
Ross Wightman
c328b155e9
Random erasing crash fix and args pass through
2019-04-11 22:06:43 -07:00
Ross Wightman
9c3859fb9c
Uniform pretrained model handling.
...
* All models have 'default_cfgs' dict
* load/resume/pretrained helpers factored out
* pretrained load operates on state_dict based on default_cfg
* test all models in validate
* schedule, optim factory factored out
* test time pool wrapper applied based on default_cfg
2019-04-11 21:32:16 -07:00
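As an illustration of the per-model 'default_cfgs' idea: each model name maps to a dict carrying the pretrained weight URL plus the preprocessing and head details it expects. Field names and values here are an assumption sketched for clarity, not the repo's exact schema.

```python
default_cfgs = {
    "resnet50": dict(
        url="https://example.com/resnet50.pth",  # placeholder URL, not a real checkpoint
        num_classes=1000,
        input_size=(3, 224, 224),
        crop_pct=0.875,
        interpolation="bilinear",
        mean=(0.485, 0.456, 0.406),
        std=(0.229, 0.224, 0.225),
        classifier="fc",
    ),
}
```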
Ross Wightman
f1cd1a5ce3
Cleanup CheckpointSaver, add support for increasing or decreasing metric, switch to prec1 metric in train loop
2019-04-07 10:22:55 -07:00
Ross Wightman
5180f94c7e
Distributed (multi-process) train, multi-gpu single process train, and NVIDIA AMP support
2019-04-05 10:53:04 -07:00
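The typical wiring for these features at the time, as a hedged sketch (NVIDIA apex AMP, since this predates torch.cuda.amp): initialize the process group, wrap the model and optimizer with amp.initialize, then with DistributedDataParallel.

```python
import torch
import torch.distributed as dist
from apex import amp   # NVIDIA apex; assumed available in this era

def setup_distributed_amp(model, optimizer, local_rank, opt_level="O1"):
    """Wrap model/optimizer for multi-process, mixed-precision training (sketch)."""
    dist.init_process_group(backend="nccl", init_method="env://")
    torch.cuda.set_device(local_rank)
    model = model.cuda()
    model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
    return model, optimizer

# the backward pass then becomes:
#   with amp.scale_loss(loss, optimizer) as scaled_loss:
#       scaled_loss.backward()
```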
Ross Wightman
45cde6f0c7
Improve creation of data pipeline with prefetch enabled vs disabled, fixup inception_res_v2 and dpn models
2019-03-11 22:17:42 -07:00
Ross Wightman
2295cf56c2
Add some Nvidia performance enhancements (prefetch loader, fast collate), and refactor some of the training and model factory/transforms
2019-03-10 14:23:16 -07:00
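Roughly what a fast collate does, sketched under my own assumptions rather than quoting the repo: keep images as uint8 when batching and defer float conversion and normalization to a prefetching wrapper on the GPU, which can overlap the host-to-device copy with compute on a side CUDA stream.

```python
import numpy as np
import torch

def fast_collate(batch):
    """Stack RGB PIL samples into a uint8 NCHW tensor; float conversion is deferred to the GPU."""
    targets = torch.tensor([b[1] for b in batch], dtype=torch.int64)
    w, h = batch[0][0].size                      # PIL size is (width, height)
    out = torch.zeros((len(batch), 3, h, w), dtype=torch.uint8)
    for i, (img, _) in enumerate(batch):
        arr = np.asarray(img, dtype=np.uint8)    # HWC uint8
        out[i] = torch.from_numpy(np.ascontiguousarray(arr.transpose(2, 0, 1)))
    return out, targets
```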
Ross Wightman
9d927a389a
Add adabound, random erasing
2019-03-01 22:03:42 -08:00
Ross Wightman
1577c52976
Resnext added, changes to bring it and seresnet in line with rest of models
2019-03-01 15:44:04 -08:00
Ross Wightman
31055466fc
Fixup validate/inference script args, fix senet init for better test accuracy
2019-02-22 14:07:50 -08:00
Ross Wightman
b1a5a71151
Update schedulers
2019-02-17 12:50:15 -08:00
Ross Wightman
b5255960d9
Tweaking tanh scheduler, senet weight init (for BN), transform defaults
2019-02-13 23:11:09 -08:00
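For reference, the hyperbolic-tangent decay family this kind of scheduler is based on; the bounds below are typical values from the HTD paper, not necessarily what this repo's tweak uses.

```python
import math

def tanh_decay_lr(base_lr, step, total_steps, lower=-6.0, upper=4.0):
    """HTD-style schedule: smoothly anneal lr from ~base_lr toward 0 (illustrative constants)."""
    progress = step / max(1, total_steps)
    return 0.5 * base_lr * (1.0 - math.tanh(lower + (upper - lower) * progress))
```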
Ross Wightman
a336e5bff3
Minor updates
2019-02-08 20:56:24 -08:00
Ross Wightman
cf0c280e1b
Cleanup transforms, add custom schedulers, tweak senet34 model
2019-02-06 20:19:11 -08:00
Ross Wightman
c57717d325
Fix tta train bug, improve logging
2019-02-02 10:17:04 -08:00
Ross Wightman
72b4d162a2
Increase training performance
2019-02-01 22:48:31 -08:00
Ross Wightman
5855b07ae0
Initial commit, putting some old pieces together
2019-02-01 22:07:34 -08:00