Commit Graph

55 Commits (2060e433c02d80c117635acc3267ab6cb3515824)

Author SHA1 Message Date
Ross Wightman 2060e433c0 Add native PyTorch weights for SE-MnasNet aka MnasNet-A1 2019-06-11 08:54:54 -07:00
Ross Wightman 7d17394bdc Add native PyTorch weights for EfficientNet-B0 w/ top-1 > 76.9
* also add pooling details to default cfg for efficientnets so test time pool wrapper works
2019-06-10 13:11:36 -07:00
Ross Wightman 6688d0669f PyTorch trained MobileNetV3 weights that match/best paper. Update fbnetc weights to improved copy. 2019-06-04 09:51:05 -07:00
Ross Wightman ff99625603 Missed a few models in the model __all__ list for senet 2019-05-31 10:51:18 -07:00
Ross Wightman 99122aac1c Replace ResNet-34 default weights with a great result from my experiments. 2019-05-31 10:42:46 -07:00
Ross Wightman 4bb5e9b224 Ported Tensorflow pretrained EfficientNet weights and some model cleanup
* B0-B3 weights ported from TF with close to paper accuracy
* Renamed gen_mobilenet to gen_efficientnet since scaling params go well beyond 'mobile' specific
* Add Tensorflow preprocessing option for images closer to the source repo
2019-05-30 17:55:35 -07:00
Ross Wightman 4efecfdc47 Add drop_connect impl to try during training, fix a few comments 2019-05-30 08:42:12 -07:00
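The drop_connect mentioned in this commit is the per-sample, stochastic-depth-style regularizer from the EfficientNet training recipe. A minimal sketch of such an implementation, assuming a standalone function; the signature and defaults here are assumptions, not the repo's actual code:

```python
import torch


def drop_connect(x, training: bool = False, drop_prob: float = 0.0):
    """Randomly zero whole samples of a residual branch and rescale the rest.

    Illustrative sketch only; argument names and defaults are assumptions.
    """
    if not training or drop_prob == 0.0:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample in the batch, broadcast over C/H/W.
    mask = torch.empty((x.shape[0], 1, 1, 1), dtype=x.dtype, device=x.device)
    mask.bernoulli_(keep_prob)
    # Scale at train time so the expected activation magnitude is unchanged.
    return x.div(keep_prob) * mask
```

In an inverted-residual block this would typically wrap the residual branch just before the skip connection is added, and be disabled at eval time.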
Ross Wightman 0fc4cca2ff Cleanup unused member with old name 2019-05-29 23:44:01 -07:00
Ross Wightman 7a6d61566e Add EfficientNet impl, change existing depth_multiplier -> channel_multiplier as definitions have been confused 2019-05-29 23:32:52 -07:00
Ross Wightman 6bff9c75dc Cleanup model_factory imports, consistent __all__ for models, fixed inception_v4 weight url 2019-05-28 21:41:10 -07:00
Ross Wightman e8cf619005
Update gluon_resnet.py
Update header comment
2019-05-24 00:01:37 -07:00
Ross Wightman 7419e9835f Add MxNet Gluon ResNet variants w/ converted pretrained weights. Very well trained set of models. 2019-05-23 23:52:23 -07:00
Ross Wightman 2da0b4dbc1 Add inception_v3 models via torchvision, 4 different pretrained weight choices 2019-05-23 11:07:06 -07:00
Ross Wightman 8ceceef889 Densenet should default to bicubic interpolation, update model links to 'cpu fix' from long ago 2019-05-20 11:47:30 -07:00
Ross Wightman a4516fe0fb Add pretrained weights for FBNet-C 2019-05-20 11:05:25 -07:00
Ross Wightman 76539d905e Some transform/data/loader refactoring, hopefully didn't break things
* factor out data related constants to own file
* move data related config helpers to own file
* add a variant of RandomResizeCrop that randomizes interpolation method
* remove old Numpy version of RandomErasing
* cleanup torch version of RandomErasing and use it in either GPU loader batch mode or single image cpu Transform
2019-05-16 22:52:17 -07:00
Ross Wightman db8ad25a23 MobileNetV3 appears correct based on paper update, cleaned up comments and compacted last block def 2019-05-15 08:53:27 -07:00
cclauss 51f85702a7
Identity is not the same thing as equality in Python
Identity is not the same thing as equality in Python.  In these instances, we want the latter.

Use ==/!= to compare str, bytes, and int literals.

$ __python__
```python
>>> proj = "pro"
>>> proj += 'j'
>>> proj
'proj'
>>> proj == 'proj'
True
>>> proj is 'proj'
False
>>> 0 == 0.0
True
>>> 0 is 0.0
False
```
[flake8](http://flake8.pycqa.org) testing of https://github.com/rwightman/pytorch-image-models on Python 3.7.1

$ __flake8 . --count --select=E9,F63,F72,F82 --show-source --statistics__
```
./data/loader.py:48:23: F823 local variable 'input' defined as a builtin referenced before assignment
                yield input, target
                      ^
./models/dpn.py:170:12: F632 use ==/!= to compare str, bytes, and int literals
        if block_type is 'proj':
           ^
./models/dpn.py:173:14: F632 use ==/!= to compare str, bytes, and int literals
        elif block_type is 'down':
             ^
./models/dpn.py:177:20: F632 use ==/!= to compare str, bytes, and int literals
            assert block_type is 'normal'
                   ^
3     F632 use ==/!= to compare str, bytes, and int literals
1     F823 local variable 'input' defined as a builtin referenced before assignment
4
```
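For reference, the F632 fix is mechanical: replace the identity checks with equality checks. A standalone sketch of the corrected comparisons (the surrounding dpn.py logic is omitted and the return values are placeholders):

```python
def check_block_type(block_type: str) -> str:
    # Use == (value equality) rather than `is` (object identity) on str literals.
    if block_type == 'proj':
        return 'projection shortcut'
    elif block_type == 'down':
        return 'downsampling shortcut'
    else:
        assert block_type == 'normal'
        return 'normal block'
```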
__E901,E999,F821,F822,F823__ are the "_showstopper_" [flake8](http://flake8.pycqa.org) issues that can halt the runtime with a SyntaxError, NameError, etc. These five are different from most other flake8 issues, which are merely "style violations" -- useful for readability but they do not affect runtime safety.
* F821: undefined name `name`
* F822: undefined name `name` in `__all__`
* F823: local variable name referenced before assignment
* E901: SyntaxError or IndentationError
* E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree
2019-05-14 17:19:57 +02:00
Ross Wightman c3fbdd4655 Fix efficient head for MobileNetV3 2019-05-11 11:23:11 -07:00
Ross Wightman 17da1adaca A few MobileNetV3 tweaks
* fix expansion ratio on early block
* change comment re stride mistake in paper
* fix rounding not being called properly for all multipliers != 1.0
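The rounding referred to in the last point is the usual divisible-channel rounding applied whenever a width multiplier scales channel counts. A hedged sketch of that conventional helper; the name and divisor default are assumptions:

```python
def round_channels(channels: float, divisor: int = 8, min_value: int = None):
    """Round a multiplier-scaled channel count to a multiple of `divisor`.

    Sketch of the conventional TF/MobileNet rule; exact defaults are assumptions.
    """
    if min_value is None:
        min_value = divisor
    rounded = max(min_value, int(channels + divisor / 2) // divisor * divisor)
    # Don't let rounding shrink the channel count by more than 10%.
    if rounded < 0.9 * channels:
        rounded += divisor
    return rounded


# e.g. 0.75 * 32 = 24.0 stays 24; 1.25 * 24 = 30.0 rounds up to 32
```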
2019-05-11 10:23:40 -07:00
Ross Wightman 6523e4abe4 More appropriate name for se channel 2019-05-10 23:32:18 -07:00
Ross Wightman db056d97e2 Add MobileNetV3 and associated changes hard-swish, hard-sigmoid, efficient head, etc 2019-05-10 23:28:13 -07:00
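Hard-swish and hard-sigmoid are the piecewise-linear activation approximations used by MobileNetV3. A minimal sketch; the function names are assumptions:

```python
import torch.nn.functional as F


def hard_sigmoid(x):
    # Piecewise-linear approximation of sigmoid: relu6(x + 3) / 6
    return F.relu6(x + 3.0) / 6.0


def hard_swish(x):
    # x * hard_sigmoid(x); cheaper than x * sigmoid(x) on mobile hardware
    return x * F.relu6(x + 3.0) / 6.0
```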
Ross Wightman 02abeb95bf Add the url for tflite mnasnet ported weights 2019-04-28 23:21:35 -07:00
Ross Wightman 4663fc2132 Add support for tflite mnasnet pretrained weights and include spnasnet pretrained weights of my own.
* tensorflow 'SAME' padding support added to GenMobileNet models for tflite pretrained weights
* folded batch norm support (made batch norm optional and enable conv bias) for tflite pretrained weights
* add url for spnasnet1_00 weights that I recently trained
* fix SE reduction size for semnasnet models
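The tensorflow 'SAME' padding mentioned in the first bullet boils down to computing (possibly asymmetric) padding from the input size so that the output size is ceil(input / stride). A hedged sketch of that calculation for one spatial dimension; the helper name is an assumption:

```python
import math


def same_pad(input_size: int, kernel_size: int, stride: int, dilation: int = 1):
    """Total padding needed so the output size is ceil(input_size / stride),
    matching TensorFlow 'SAME' behaviour. Sketch only; split the total as
    (pad // 2, pad - pad // 2) when applying it.
    """
    output_size = math.ceil(input_size / stride)
    effective_k = (kernel_size - 1) * dilation + 1
    return max((output_size - 1) * stride + effective_k - input_size, 0)


# e.g. a stride-2 3x3 conv on a 224-wide input needs pad=1, applied as (0, 1)
```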
2019-04-28 17:45:07 -07:00
Ross Wightman afb357ff68 Make genmobilenet weight init switchable, fix fan_out in google style linear init 2019-04-22 17:46:17 -07:00
Ross Wightman 34cd76899f Add Single-Path NAS pixel1 model 2019-04-22 12:43:45 -07:00
Ross Wightman 419555be62 Update a few GenMobileNet comments 2019-04-22 12:30:55 -07:00
Ross Wightman bc264269c9 Morph mnasnet impl into a generic mobilenet that covers Mnasnet, MobileNetV1/V2, ChamNet, FBNet, and related
* add an alternate RMSprop opt that applies eps like TF
* add bn params for passing through alternates and changing defaults to TF style
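The practical difference in a TF-style RMSprop is where eps enters the denominator: TensorFlow adds it inside the square root, stock PyTorch adds it outside. A minimal sketch of the two denominators; the variable names are mine:

```python
import torch

# square_avg is the running average of squared gradients, eps a small constant
square_avg = torch.tensor([1e-8, 1.0])
eps = 1e-3

denom_pytorch = square_avg.sqrt().add(eps)   # sqrt(v) + eps  (torch.optim.RMSprop)
denom_tf_style = square_avg.add(eps).sqrt()  # sqrt(v + eps)  (TF-style variant)
# For tiny v the two differ noticeably, which matters when porting TF training recipes.
```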
2019-04-21 15:54:28 -07:00
Ross Wightman e9c7961efc Fix pooling in mnasnet, more sensible default for AMP opt level 2019-04-17 18:06:37 -07:00
Ross Wightman 996c77aa94 Prep mnasnet for pretrained models, use the select global pool, fix some comment mistakes 2019-04-15 16:58:40 -07:00
Ross Wightman 6b4f9ba223 Add MNASNet A1, B1, and Small models as per the TF impl. Testing/training in progress... 2019-04-15 09:03:59 -07:00
Ross Wightman 9e296dbffb Add seresnet26_32x4d cfg and weights + interpolation str->PIL enum fn 2019-04-14 13:43:46 -07:00
Ross Wightman 79f615639e Add pretrained weights for seresnet18 2019-04-13 14:52:21 -07:00
Ross Wightman 8a33a6c90a Add checkpoint clean script, add link to pretrained resnext50 weights 2019-04-13 14:15:35 -07:00
Ross Wightman 6e9697eb9c Fix small bug in seresnet input size and eval transform handling of img size 2019-04-13 10:06:43 -07:00
Ross Wightman 0562b91c38 Add per model crop pct, interpolation defaults, tie it all together
* create one resolve fn to pull together model defaults + cmd line args
* update attribution comments in some models
* test and update train/validation/inference scripts
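A hedged sketch of what a resolve function like the one described above might look like: explicitly provided command-line values win, otherwise the model default is used. The function name and key list are assumptions:

```python
def resolve_data_config(default_cfg: dict, args: dict,
                        keys=('input_size', 'interpolation', 'mean', 'std', 'crop_pct')):
    """Merge per-model defaults with explicit command-line overrides.

    Illustrative only; the real resolver's name, keys, and precedence may differ.
    """
    cfg = {}
    for k in keys:
        # An explicitly provided arg (not None) beats the model default.
        if args.get(k) is not None:
            cfg[k] = args[k]
        elif k in default_cfg:
            cfg[k] = default_cfg[k]
    return cfg


# usage: resolve_data_config(model.default_cfg, vars(parsed_args))
```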
2019-04-12 22:55:24 -07:00
Ross Wightman 9c3859fb9c Uniform pretrained model handling.
* All models have 'default_cfgs' dict
* load/resume/pretrained helpers factored out
* pretrained load operates on state_dict based on default_cfg
* test all models in validate
* schedule, optim factory factored out
* test time pool wrapper applied based on default_cfg
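A hedged sketch of the kind of per-model default_cfg entry described above; the keys and values shown are illustrative assumptions, not copied from the repo:

```python
default_cfgs = {
    'resnet34': {
        'url': 'https://example.com/resnet34.pth',  # placeholder, not a real weight URL
        'num_classes': 1000,
        'input_size': (3, 224, 224),
        'pool_size': (7, 7),
        'crop_pct': 0.875,
        'interpolation': 'bilinear',
        'mean': (0.485, 0.456, 0.406),
        'std': (0.229, 0.224, 0.225),
        'classifier': 'fc',
    },
}
```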
2019-04-11 21:32:16 -07:00
Ross Wightman 63e677d03b Merge branch 'master' of github.com:rwightman/pytorch-models 2019-04-10 14:55:54 -07:00
Ross Wightman 0bc50e84f8 Lots of refactoring and cleanup.
* Move 'test time pool' to Module that can be used by any model, remove from DPN
* Remove ResNext model file and combine with ResNet
* Remove fbresnet200 as it was an old conversion and pretrained performance not worth param count
* Cleanup adaptive avgmax pooling and add back concat variant
* Factor out checkpoint load fn
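A hedged sketch of a test-time pooling module in the spirit of the first bullet: apply the linear classifier convolutionally over a larger feature map, then average the resulting logits. The class name and the forward_features/fc attributes are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F


class TestTimePool(nn.Module):
    """Apply a model's linear classifier per spatial location and average logits.

    Sketch only; assumes the wrapped model exposes `forward_features()` and a
    linear `fc` head with a bias.
    """
    def __init__(self, model):
        super().__init__()
        self.model = model
        fc = model.fc
        # Reuse the classifier weights as a 1x1 convolution.
        self.conv_fc = nn.Conv2d(fc.in_features, fc.out_features, kernel_size=1)
        self.conv_fc.weight.data.copy_(
            fc.weight.data.view(fc.out_features, fc.in_features, 1, 1))
        self.conv_fc.bias.data.copy_(fc.bias.data)

    def forward(self, x):
        x = self.model.forward_features(x)   # B x C x H x W feature map
        x = self.conv_fc(x)                   # per-location class logits
        return F.adaptive_avg_pool2d(x, 1).flatten(1)
```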
2019-04-10 14:53:34 -07:00
Ross Wightman c0e6e5f3db Add common model interface to pnasnet and xception, update factory 2019-04-06 13:59:15 -07:00
Ross Wightman 183d8e4aef Xception model working 2019-04-05 12:09:25 -07:00
Ross Wightman 58571e992e Change block avgpool in senets to mean for performance issues with NVIDIA and AMP especially 2019-04-05 10:53:13 -07:00
Ross Wightman 6f9a0c8ef2 Merge branch 'master' of github.com:rwightman/pytorch-models 2019-04-01 11:07:05 -07:00
Ross Wightman 5cb1a35c6b Fixup Resnext, remove alternate shortcut types 2019-04-01 11:03:37 -07:00
Ross Wightman d87824bd65 Merge branch 'master' of github.com:rwightman/pytorch-models 2019-03-17 09:57:36 -07:00
Ross Wightman 45cde6f0c7 Improve creation of data pipeline with prefetch enabled vs disabled, fixup inception_res_v2 and dpn models 2019-03-11 22:17:42 -07:00
Ross Wightman 321435e6b4 Update resnext init 2019-03-10 14:24:53 -07:00
Ross Wightman 2295cf56c2 Add some Nvidia performance enhancements (prefetch loader, fast collate), and refactor some of training and model fact/transforms 2019-03-10 14:23:16 -07:00
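A hedged sketch of the fast-collate idea referenced here: stack uint8 images in the collate step and leave float conversion and normalization to a GPU-side prefetching loader, so the work overlaps with compute. The function name and details are assumptions:

```python
import numpy as np
import torch


def fast_collate(batch):
    """Collate (image, label) pairs into a uint8 tensor batch.

    Sketch only: images stay uint8 here so a prefetching loader can do the
    float conversion and normalization later on the GPU.
    """
    targets = torch.tensor([b[1] for b in batch], dtype=torch.int64)
    first = np.asarray(batch[0][0])
    h, w = first.shape[0], first.shape[1]
    tensor = torch.zeros((len(batch), 3, h, w), dtype=torch.uint8)
    for i, (img, _) in enumerate(batch):
        arr = np.asarray(img, dtype=np.uint8)
        if arr.ndim == 2:  # grayscale -> 3 channels
            arr = np.stack([arr] * 3, axis=-1)
        # HWC -> CHW, copied into the preallocated batch tensor
        tensor[i].copy_(torch.from_numpy(arr.transpose(2, 0, 1)))
    return tensor, targets
```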
Ross Wightman 9d927a389a Add adabound, random erasing 2019-03-01 22:03:42 -08:00
Ross Wightman 1577c52976 Resnext added, changes to bring it and seresnet in line with rest of models 2019-03-01 15:44:04 -08:00