* Add `@threaded` decorator
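A minimal sketch of what such a decorator can look like, using only the standard `threading` module; the repo's exact implementation may differ:

```python
# Sketch of a @threaded decorator: run the wrapped function in a
# daemon thread and return the Thread handle so callers can join().
import threading
from functools import wraps


def threaded(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        thread = threading.Thread(target=func, args=args, kwargs=kwargs, daemon=True)
        thread.start()
        return thread
    return wrapper
```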
* Implement DDP `static_graph=True`
Experimental implementation of the new PyTorch 1.11.0 DDP feature.
* Add 1.11.0 check
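The feature and its version gate might be wired up roughly as below; `smart_ddp` and the use of `pkg_resources` for the check are assumptions, not necessarily the repo's code:

```python
# Sketch: pass static_graph=True to DDP only on PyTorch >= 1.11.0,
# where the keyword was introduced. LOCAL_RANK is assumed to be set
# by the launcher (e.g. torchrun).
import os

import torch
from pkg_resources import parse_version
from torch.nn.parallel import DistributedDataParallel as DDP

LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1))


def smart_ddp(model):
    if parse_version(torch.__version__) >= parse_version('1.11.0'):
        return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK, static_graph=True)
    return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)
```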
* PyTorch Hub `_verbose=False` fix2
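For context, the fix concerns the underscored keyword that YOLOv5's hubconf accepts to silence loading output (underscored so it does not clash with `torch.hub.load`'s own `verbose`); usage looks like:

```python
# Load a YOLOv5 model from PyTorch Hub with loading output silenced.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False)
```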
* Update downloads.py
* Update hubconf.py
* Support nomedia
* Support nomedia for validation
* Update train.py
* Revert no-plot evolve
Evolve plots do not contain any images.
* Revert plot_results
plot_results contains no media.
* Update wandb_utils.py
* sync-bn cleanup
* Cleanup
* Rename nomedia -> noplots
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
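The renamed `--noplots` flag can be sketched as a plain argparse switch that training code consults before producing any plots; the help string here is illustrative:

```python
# Sketch: a --noplots flag that disables saving plot/media files.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--noplots', action='store_true', help='save no plot files')
opt = parser.parse_args([])  # e.g. parse_args(['--noplots'])

plots = not opt.noplots  # guard plotting calls with `if plots: ...`
```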
* Add support for different normalization layers.
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
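One plausible shape for the normalization-layer support above is a small factory that maps a name to a layer; the `'BN'`/`'GN'`/`'IN'` names and the group count are assumptions for illustration:

```python
# Sketch: pick a normalization layer by name instead of hard-coding
# BatchNorm2d everywhere.
import torch.nn as nn


def get_norm(name, channels):
    if name == 'BN':
        return nn.BatchNorm2d(channels)
    if name == 'GN':
        return nn.GroupNorm(num_groups=32, num_channels=channels)  # 32 must divide channels
    if name == 'IN':
        return nn.InstanceNorm2d(channels)
    raise ValueError(f'unknown normalization layer: {name}')
```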
* Print dataset scan only `if RANK in (-1, 0)`
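The guard itself is the usual main-process check; a sketch, with the print standing in for the actual scan log:

```python
# Sketch: only the main process (RANK -1 for single-GPU/CPU runs,
# 0 for the primary DDP process) prints the dataset-scan summary.
import os

RANK = int(os.getenv('RANK', -1))

if RANK in (-1, 0):
    print('Scanning dataset...')  # placeholder for the real scan output
```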
* pre-commit: yapf
* Align isort
* fix
# Conflicts:
# utils/plots.py
* Update setup.cfg
* Update wandb_utils.py
* Update augmentations.py
* Update setup.cfg
* Update yolo.py
* Update val.py
* Simplify colorstr
* val run fix
* export.py last comma
* Update export.py
* Update hubconf.py
* PyTorch Hub tuple fix
* PyTorch Hub tuple fix2
* PyTorch Hub tuple fix3
* Update setup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
The new `--cache val` argument caches only the validation set into RAM. This should improve multi-GPU training speeds without consuming as much RAM as a full `--cache ram`.
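A sketch of how the value might be routed to the two dataloaders; `cache_modes` is a hypothetical helper, not the repo's code:

```python
def cache_modes(cache):
    """Map --cache ('ram', 'disk', 'val' or None) to per-split settings:
    with 'val', the train set is not cached and the val set goes to RAM."""
    train_cache = None if cache == 'val' else cache
    val_cache = 'ram' if cache == 'val' else cache
    return train_cache, val_cache


print(cache_modes('val'))  # -> (None, 'ram')
```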
* Update train.py
As seen in #6463, this modifies train during the evolve process to allow a custom save directory.
* fix val
* PEP8
whitespace around operator
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* New flag 'stop_training' in the utils.callbacks.Callbacks class to prematurely stop training from a callback handler
* Removed most of the new checks, leaving only the one after calling 'on_train_batch_end'
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
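A simplified sketch of the mechanism: the callback registry carries the flag, a handler flips it, and the train loop checks it right after running `on_train_batch_end`. Class internals here are illustrative, not the repo's exact implementation:

```python
class Callbacks:
    """Minimal callback registry with a stop_training flag."""

    def __init__(self):
        self.stop_training = False
        self._handlers = {'on_train_batch_end': []}

    def register_action(self, hook, callback):
        self._handlers[hook].append(callback)

    def run(self, hook, *args, **kwargs):
        for handler in self._handlers[hook]:
            handler(*args, **kwargs)


callbacks = Callbacks()
# A handler that requests early stopping:
callbacks.register_action('on_train_batch_end',
                          lambda: setattr(callbacks, 'stop_training', True))

for batch in range(100):  # stand-in for the training loop
    callbacks.run('on_train_batch_end')
    if callbacks.stop_training:  # the single remaining check
        break
```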
* Remove NCOLS from tqdm
* Update
* reformat
* Single-line argparse argument
* Update README.md
* Update README.md
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Logger consolidation
* Write date in checkpoint file
* isoformat
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
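The change boils down to stamping an ISO-8601 date into the checkpoint dict; a sketch with placeholder fields:

```python
# Sketch: record the save date in the checkpoint in ISO format.
from datetime import datetime

ckpt = {
    'epoch': 0,                          # placeholder field
    'date': datetime.now().isoformat(),  # e.g. '2022-02-15T12:34:56.789012'
}
# torch.save(ckpt, 'last.pt')
```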
* pre-commit: isort
* Update isort config
* Update name
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Use os.path.relpath instead of relative_to
* Remove os.path from val.py
* Remove os.path from train.py
* Update detect.py import to os
* Update export.py import to os
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
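The motivation for the swap: `Path.relative_to` raises when the target is not inside the base directory, while `os.path.relpath` falls back to `..` segments. A quick illustration:

```python
import os
from pathlib import Path

p, base = Path('/data/images/img.jpg'), Path('/data/labels')
print(os.path.relpath(p, base))  # '../images/img.jpg'
# p.relative_to(base) would raise ValueError: not in the subpath
```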
This PR adds a new training argument `--save-period` to save training checkpoints every `x` epochs. For example, to save a checkpoint every 50 epochs:
```
python train.py --save-period 50  # saves epoch50.pt, epoch100.pt, epoch150.pt, ...
```
These checkpoints are saved in addition to the existing last.pt and best.pt checkpoints and do not affect their behavior. The default value is -1, i.e. disabled.
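The epoch-loop check might look like this; `should_save` is a hypothetical helper:

```python
def should_save(epoch, save_period):
    """True when an epoch{n}.pt checkpoint should be written.
    With --save-period 50 this fires at epochs 50, 100, 150, ..."""
    return save_period > 0 and epoch > 0 and epoch % save_period == 0


print([e for e in range(200) if should_save(e, 50)])  # [50, 100, 150]
```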
* Validate best.pt on train end
* 0.7 iou for COCO only
* pass callbacks
* Activate model.float() if not half
* Print 'Validating best.pt...'
* Add newline
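Only the threshold choice is sketched here; the 0.65 non-COCO fallback is an assumption for illustration:

```python
# Sketch of the final-validation settings implied above: COCO gets the
# stricter 0.70 IoU threshold when best.pt is re-validated at train end.
def final_iou_thres(is_coco):
    return 0.70 if is_coco else 0.65


print(final_iou_thres(True), final_iou_thres(False))  # 0.7 0.65
```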
* Add cache-on-disk and cache-directory to cache images on disk
* Fix load_image with cache_on_disk
* Add no_cache flag for load_image
* Revert the parts ('logging' and a newline) that do not need to be modified
* Add the assertion for shapes of cached images
* Add a suffix string for cached images
* Fix boundary-error of letterbox for load_mosaic
* Add prefix as cache-key of cache-on-disk
* Update cache-function on disk
* Add psutil in requirements.txt
* Update train.py
* Cleanup1
* Cleanup2
* Skip existing npy
* Include re-space
* Export return character fix
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
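A sketch of the disk-cache idea: decoded images are saved as `.npy` next to the originals and reused on later epochs. Names and layout are illustrative, not the repo's exact code:

```python
from pathlib import Path

import cv2
import numpy as np


def load_image_cached(im_file):
    """Load an image, caching the decoded array on disk as .npy."""
    npy = Path(im_file).with_suffix('.npy')
    if npy.exists():  # skip existing npy (cache hit)
        return np.load(str(npy))
    im = cv2.imread(str(im_file))
    assert im is not None, f'Image not found: {im_file}'
    np.save(str(npy), im)
    return im
```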
* Add freeze as an argument
I train on different platforms, and sometimes I want to freeze some layers. I have to go into the code to change it and also keep track of how many layers I froze on each platform. Please add the number of layers to freeze as an argument in future versions. Thanks.
* Update train.py
* Update train.py
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
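The resulting argument freezes the first N model blocks by parameter-name prefix; a sketch on a stand-in model (the prefix scheme is an assumption):

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.Conv2d(16, 32, 3))  # stand-in model
freeze = ['0.']  # e.g. --freeze 1 -> name prefixes of the layers to freeze

for k, v in model.named_parameters():
    v.requires_grad = not any(k.startswith(s) for s in freeze)

print([k for k, v in model.named_parameters() if not v.requires_grad])
# ['0.weight', '0.bias']
```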