Commit Graph

1811 Commits (647ba98d23d11f0d73e6759db8bff020e611133b)
 

Author SHA1 Message Date
cclauss 51f85702a7
Identity is not the same thing as equality in Python
Identity is not the same thing as equality in Python.  In these instances, we want the latter.

Use ==/!= to compare str, bytes, and int literals.

$ __python__
```python
>>> proj = "pro"
>>> proj += 'j'
>>> proj
'proj'
>>> proj == 'proj'
True
>>> proj is 'proj'
False
>>> 0 == 0.0
True
>>> 0 is 0.0
False
```
[flake8](http://flake8.pycqa.org) testing of https://github.com/rwightman/pytorch-image-models on Python 3.7.1

$ __flake8 . --count --select=E9,F63,F72,F82 --show-source --statistics__
```
./data/loader.py:48:23: F823 local variable 'input' defined as a builtin referenced before assignment
                yield input, target
                      ^
./models/dpn.py:170:12: F632 use ==/!= to compare str, bytes, and int literals
        if block_type is 'proj':
           ^
./models/dpn.py:173:14: F632 use ==/!= to compare str, bytes, and int literals
        elif block_type is 'down':
             ^
./models/dpn.py:177:20: F632 use ==/!= to compare str, bytes, and int literals
            assert block_type is 'normal'
                   ^
3     F632 use ==/!= to compare str, bytes, and int literals
1     F823 local variable 'input' defined as a builtin referenced before assignment
4
```
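The F632 fixes amount to swapping `is` for `==` in the string comparisons flagged above. A minimal sketch of the corrected pattern (a simplified, hypothetical stand-in for the dpn.py logic, not the repo's actual code):

```python
def describe_block(block_type: str) -> str:
    # Compare string literals with ==/!=; `is` checks object identity and is
    # not guaranteed to be True even when the strings are equal.
    if block_type == 'proj':          # was: block_type is 'proj'
        return 'projection shortcut'
    elif block_type == 'down':        # was: block_type is 'down'
        return 'downsampling shortcut'
    else:
        assert block_type == 'normal'  # was: block_type is 'normal'
        return 'normal block'
```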
__E901,E999,F821,F822,F823__ are the "_showstopper_" [flake8](http://flake8.pycqa.org) issues that can halt the runtime with a SyntaxError, NameError, etc. These five are different from most other flake8 issues, which are merely "style violations" -- useful for readability, but they do not affect runtime safety.
* F821: undefined name `name`
* F822: undefined name `name` in `__all__`
* F823: local variable `name` referenced before assignment
* E901: SyntaxError or IndentationError
* E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree
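For illustration, tiny hypothetical snippets of the kind of code that trips the undefined-name and reference-before-assignment checks (not code from this repo):

```python
__all__ = ['missing_symbol']    # F822: 'missing_symbol' is never defined in this module


def f821_example():
    return undefined_name       # F821: the name is never defined -> NameError at runtime


def f823_example():
    print(input)                # F823: 'input' is a builtin, but it is assigned below,
    input = 'shadowed'          # so this reference hits an unbound local at runtime
    return input
```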
2019-05-14 17:19:57 +02:00
Ross Wightman fee607edf6 Mixup implementation in progress
* initial impl w/ label smoothing converging, but needs more testing
2019-05-13 19:05:40 -07:00
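For context, mixup blends pairs of samples and their targets with a Beta-distributed weight; a minimal sketch of the general technique in PyTorch, not the in-progress implementation referenced above:

```python
import numpy as np
import torch


def mixup_batch(inputs, targets, alpha=0.2):
    """Mix each sample with a randomly permuted partner; the loss is then
    computed against both target sets, weighted by lam and (1 - lam)."""
    lam = float(np.random.beta(alpha, alpha)) if alpha > 0 else 1.0
    perm = torch.randperm(inputs.size(0), device=inputs.device)
    mixed = lam * inputs + (1.0 - lam) * inputs[perm]
    return mixed, targets, targets[perm], lam
```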
Ross Wightman c3fbdd4655 Fix efficient head for MobileNetV3 2019-05-11 11:23:11 -07:00
Ross Wightman 17da1adaca A few MobileNetV3 tweaks
* fix expansion ratio on early block
* change comment re stride mistake in paper
* fix rounding not being called properly for all multipliers != 1.0
2019-05-11 10:23:40 -07:00
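The rounding mentioned in the last bullet refers to the usual MobileNet-style rule of snapping scaled channel counts to a multiple of a divisor; a sketch of that standard helper (assumed, the repo's exact function may differ), which must run for every width multiplier, not only 1.0:

```python
def round_channels(channels, multiplier=1.0, divisor=8, min_value=None):
    """Scale a channel count and round to the nearest multiple of `divisor`,
    never shrinking by more than 10% -- the common MobileNet-family rule."""
    channels *= multiplier
    min_value = min_value or divisor
    rounded = max(min_value, int(channels + divisor / 2) // divisor * divisor)
    if rounded < 0.9 * channels:    # avoid reducing channels by more than 10%
        rounded += divisor
    return rounded
```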
Ross Wightman 6523e4abe4 More appropriate name for se channel 2019-05-10 23:32:18 -07:00
Ross Wightman db056d97e2 Add MobileNetV3 and associated changes hard-swish, hard-sigmoid, efficient head, etc 2019-05-10 23:28:13 -07:00
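For reference, hard-sigmoid and hard-swish are cheap piecewise approximations used throughout MobileNetV3; a minimal functional sketch (the repo's own modules may differ in details such as in-place variants):

```python
import torch.nn.functional as F


def hard_sigmoid(x):
    # hard_sigmoid(x) = relu6(x + 3) / 6
    return F.relu6(x + 3.0) / 6.0


def hard_swish(x):
    # hard_swish(x) = x * hard_sigmoid(x)
    return x * hard_sigmoid(x)
```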
Ross Wightman 02abeb95bf Add the url for tflite mnasnet ported weights 2019-04-28 23:21:35 -07:00
Ross Wightman 0956cd4b66
Update README.md
Add notes for latest mobile model weights...
2019-04-28 18:06:20 -07:00
Ross Wightman 4663fc2132 Add support for tflite mnasnet pretrained weights and include spnasnet pretrained weights of my own.
* tensorflow 'SAME' padding support added to GenMobileNet models for tflite pretrained weights
* folded batch norm support (made batch norm optional and enable conv bias) for tflite pretrained weights
* add url for spnasnet1_00 weights that I recently trained
* fix SE reduction size for semnasnet models
2019-04-28 17:45:07 -07:00
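TensorFlow 'SAME' padding can be asymmetric, so it cannot always be expressed with PyTorch's fixed symmetric `padding=` argument; the usual workaround is to pad dynamically before the convolution. A sketch of that standard calculation, assuming NCHW inputs (not necessarily the repo's exact implementation):

```python
import math

import torch.nn.functional as F


def pad_same(x, kernel_size, stride, dilation=1):
    """Pad the last two dims of an NCHW tensor so a following conv matches
    TensorFlow 'SAME' behaviour; the extra pixel goes on the right/bottom."""
    ih, iw = x.shape[-2:]
    pad_h = max((math.ceil(ih / stride) - 1) * stride + (kernel_size - 1) * dilation + 1 - ih, 0)
    pad_w = max((math.ceil(iw / stride) - 1) * stride + (kernel_size - 1) * dilation + 1 - iw, 0)
    return F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])
```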
Ross Wightman afb357ff68 Make genmobilenet weight init switchable, fix fan_out in google style linear init 2019-04-22 17:46:17 -07:00
Ross Wightman 0a853990e7 Add distributed sampler that maintains order of original dataset (for validation) 2019-04-22 17:44:53 -07:00
Ross Wightman 8fbd62a169 Exclude batchnorm and bias params from weight_decay by default 2019-04-22 17:33:22 -07:00
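The common way to do this is to build two optimizer parameter groups, disabling weight decay for biases and 1-d (norm) parameters; a sketch of that pattern (helper name and grouping rule assumed, not the repo's exact code):

```python
def add_weight_decay(model, weight_decay=1e-4, skip_list=()):
    """Split parameters into a no-decay group (biases, norm params) and a decay group."""
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if len(param.shape) <= 1 or name.endswith('.bias') or name in skip_list:
            no_decay.append(param)      # batchnorm weights/biases, conv/linear biases
        else:
            decay.append(param)
    return [
        {'params': no_decay, 'weight_decay': 0.0},
        {'params': decay, 'weight_decay': weight_decay},
    ]
```

These groups are then passed to the optimizer constructor in place of `model.parameters()`.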
Ross Wightman 34cd76899f Add Single-Path NAS pixel1 model 2019-04-22 12:43:45 -07:00
Ross Wightman 419555be62 Update a few GenMobileNet comments 2019-04-22 12:30:55 -07:00
Ross Wightman 1cf3ea0467
Update README.md 2019-04-21 16:49:00 -07:00
Ross Wightman bc264269c9 Morph mnasnet impl into a generic mobilenet that covers Mnasnet, MobileNetV1/V2, ChamNet, FBNet, and related
* add an alternate RMSprop opt that applies eps like TF
* add bn params for passing through alternates and changing defaults to TF style
2019-04-21 15:54:28 -07:00
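The "applies eps like TF" note refers to a known difference between the two RMSprop formulations: TensorFlow adds epsilon inside the square root, PyTorch adds it outside, which changes behaviour when the second-moment estimate is very small. A simplified, momentum-free sketch of the update step, for illustration only:

```python
def rmsprop_step(param, grad, square_avg, lr=0.01, alpha=0.9, eps=1e-10, tf_style=True):
    """One simplified RMSprop step on torch tensors, showing where eps enters."""
    square_avg.mul_(alpha).addcmul_(grad, grad, value=1 - alpha)
    if tf_style:
        denom = (square_avg + eps).sqrt()   # TF-style: eps inside the sqrt
    else:
        denom = square_avg.sqrt() + eps     # PyTorch-style: eps outside the sqrt
    param.addcdiv_(grad, denom, value=-lr)
```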
Ross Wightman e9c7961efc Fix pooling in mnasnet, more sensible default for AMP opt level 2019-04-17 18:06:37 -07:00
Ross Wightman 996c77aa94 Prep mnasnet for pretrained models, use the selectable global pool, fix some comment mistakes 2019-04-15 16:58:40 -07:00
Ross Wightman 6b4f9ba223 Add MNASNet A1, B1, and Small models as per the TF impl. Testing/training in progress... 2019-04-15 09:03:59 -07:00
Ross Wightman c88e80081d Fix missing cfg key check 2019-04-15 09:03:59 -07:00
Ross Wightman 073d31a076
Update README.md 2019-04-14 15:19:58 -07:00
Ross Wightman 7ba78aaaeb
Update README.md 2019-04-14 15:14:37 -07:00
Ross Wightman e8e8bce335
Create README.md 2019-04-14 15:10:52 -07:00
Ross Wightman 9e296dbffb Add seresnet26_32x4d cfg and weights + interpolation str->PIL enum fn 2019-04-14 13:43:46 -07:00
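The str->PIL conversion mentioned here is typically a small lookup from interpolation names to Pillow resampling constants; a minimal sketch under that assumption:

```python
from PIL import Image

_PIL_INTERPOLATION = {
    'nearest': Image.NEAREST,
    'bilinear': Image.BILINEAR,
    'bicubic': Image.BICUBIC,
    'lanczos': Image.LANCZOS,
}


def str_to_pil_interp(mode: str):
    # fall back to bilinear for unknown strings
    return _PIL_INTERPOLATION.get(mode, Image.BILINEAR)
```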
Ross Wightman 71afec86d3 Loader tweaks 2019-04-13 14:52:38 -07:00
Ross Wightman 79f615639e Add pretrained weights for seresnet18 2019-04-13 14:52:21 -07:00
Ross Wightman 8a33a6c90a Add checkpoint clean script, add link to pretrained resnext50 weights 2019-04-13 14:15:35 -07:00
Ross Wightman 6e9697eb9c Fix small bug in seresnet input size and eval transform handling of img size 2019-04-13 10:06:43 -07:00
Ross Wightman db1fe34d0c Update a few comments, add some references 2019-04-12 23:16:49 -07:00
Ross Wightman 0562b91c38 Add per model crop pct, interpolation defaults, tie it all together
* create one resolve fn to pull together model defaults + cmd line args
* update attribution comments in some models
* test update train/validation/inference scripts
2019-04-12 22:55:24 -07:00
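The resolve function described in the first bullet presumably layers explicit command-line values over per-model defaults over global fallbacks; a hypothetical sketch of that precedence (function name and keys are illustrative, not the repo's API):

```python
_FALLBACKS = {'crop_pct': 0.875, 'interpolation': 'bilinear'}  # illustrative global defaults


def resolve_config(args, default_cfg):
    """Hypothetical: CLI args win, then the model's default_cfg, then fallbacks."""
    resolved = {}
    for key, fallback in _FALLBACKS.items():
        if getattr(args, key, None) is not None:   # explicit command-line value
            resolved[key] = getattr(args, key)
        elif key in default_cfg:                   # per-model default
            resolved[key] = default_cfg[key]
        else:                                      # global fallback
            resolved[key] = fallback
    return resolved
```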
Ross Wightman c328b155e9 Random erasing crash fix and args pass through 2019-04-11 22:06:43 -07:00
Ross Wightman 9c3859fb9c Uniform pretrained model handling.
* All models have 'default_cfgs' dict
* load/resume/pretrained helpers factored out
* pretrained load operates on state_dict based on default_cfg
* test all models in validate
* schedule, optim factory factored out
* test time pool wrapper applied based on default_cfg
2019-04-11 21:32:16 -07:00
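To illustrate the first bullet, a per-model `default_cfgs` entry would look roughly like this (field names and values are assumptions for illustration, including the placeholder URL):

```python
default_cfgs = {
    'seresnet18': {
        'url': 'https://example.com/seresnet18.pth',  # placeholder weight URL
        'num_classes': 1000,
        'input_size': (3, 224, 224),
        'crop_pct': 0.875,
        'interpolation': 'bilinear',
        'mean': (0.485, 0.456, 0.406),  # ImageNet normalization
        'std': (0.229, 0.224, 0.225),
    },
}
```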
Ross Wightman 63e677d03b Merge branch 'master' of github.com:rwightman/pytorch-models 2019-04-10 14:55:54 -07:00
Ross Wightman 0bc50e84f8 Lots of refactoring and cleanup.
* Move 'test time pool' to Module that can be used by any model, remove from DPN
* Remove ResNext model file and combine with ResNet
* Remove fbresnet200 as it was an old conversion and pretrained performance not worth param count
* Clean up adaptive avgmax pooling and add back concat variant
* Factor out checkpoint load fn
2019-04-10 14:53:34 -07:00
Ross Wightman f1cd1a5ce3 Cleanup CheckpointSaver, add support for increasing or decreasing metric, switch to prec1 metric in train loop 2019-04-07 10:22:55 -07:00
Ross Wightman c0e6e5f3db Add common model interface to pnasnet and xception, update factory 2019-04-06 13:59:15 -07:00
Ross Wightman f2029dfb65 Add smooth loss 2019-04-05 20:50:26 -07:00
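Label smoothing replaces the one-hot target with a mixture of the true class and a uniform distribution; a sketch of the standard formulation in PyTorch (the repo's smooth loss may differ in details):

```python
import torch.nn as nn
import torch.nn.functional as F


class LabelSmoothingCrossEntropy(nn.Module):
    """Cross entropy with uniform label smoothing (standard formulation)."""

    def __init__(self, smoothing=0.1):
        super().__init__()
        self.smoothing = smoothing

    def forward(self, logits, target):
        logprobs = F.log_softmax(logits, dim=-1)
        nll = -logprobs.gather(dim=-1, index=target.unsqueeze(1)).squeeze(1)
        smooth = -logprobs.mean(dim=-1)
        return ((1.0 - self.smoothing) * nll + self.smoothing * smooth).mean()
```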
Ross Wightman b0158a593e Fix distributed train script 2019-04-05 20:49:58 -07:00
Ross Wightman 183d8e4aef Xception model working 2019-04-05 12:09:25 -07:00
Ross Wightman 1e23727f2f Update inference script for new loader style 2019-04-05 11:58:16 -07:00
Ross Wightman 58571e992e Change block avgpool in senets to mean to avoid performance issues, especially with NVIDIA AMP 2019-04-05 10:53:13 -07:00
Ross Wightman 5180f94c7e Distributed (multi-process) train, multi-gpu single process train, and NVIDIA AMP support 2019-04-05 10:53:04 -07:00
Ross Wightman 6f9a0c8ef2 Merge branch 'master' of github.com:rwightman/pytorch-models 2019-04-01 11:07:05 -07:00
Ross Wightman 5cb1a35c6b Fixup Resnext, remove alternate shortcut types 2019-04-01 11:03:37 -07:00
Ross Wightman d87824bd65 Merge branch 'master' of github.com:rwightman/pytorch-models 2019-03-17 09:57:36 -07:00
Ross Wightman 45cde6f0c7 Improve creation of data pipeline with prefetch enabled vs disabled, fixup inception_res_v2 and dpn models 2019-03-11 22:17:42 -07:00
Ross Wightman 321435e6b4 Update resnext init 2019-03-10 14:24:53 -07:00
Ross Wightman 2295cf56c2 Add some Nvidia performance enhancements (prefetch loader, fast collate), and refactor some of the training and model factory/transforms 2019-03-10 14:23:16 -07:00
Ross Wightman 9d927a389a Add adabound, random erasing 2019-03-01 22:03:42 -08:00
Ross Wightman 1577c52976 Resnext added, changes to bring it and seresnet in line with rest of models 2019-03-01 15:44:04 -08:00