If the computed batch size is < 1 or > 1024, fall back to the default batch size of 16.
May partially address https://github.com/ultralytics/yolov5/issues/9156
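A minimal sketch of the fallback described above, assuming a solved batch size `b` and a default of 16 (the function and variable names are illustrative, not the actual AutoBatch code):

```python
def clamp_batch_size(b, default=16):
    # Fall back to the default when the solved batch size is outside the safe range
    if b < 1 or b > 1024:
        print(f'WARNING: batch size {b} outside safe range [1, 1024], using default {default}')
        b = default
    return b
```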
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Protect AutoBatch from negative batch sizes
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* cleanup
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* AutoBatch checks against failed solutions
@kalenmike this is a simple improvement to AutoBatch that verifies returned solutions have not already failed, i.e. it avoids returning batch size 8 when 8 has already produced a CUDA out-of-memory error.
This is a halfway fix until I can implement a 'final solution' that will actively verify the solved-for batch size rather than passively assume it works.
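A rough sketch of the guard under assumed names: `batch_sizes` are the profiled sizes and `results` holds `None` for any size that raised CUDA out of memory; the solved size is rejected if it sits at or above a known failure.

```python
def guard_failed_solution(b, batch_sizes, results):
    # Reject a solved batch size that is at or above a size that already failed
    if None in results:                        # at least one profiled size ran out of memory
        i = results.index(None)                # index of the first failed size
        if b >= batch_sizes[i]:                # solution lands in the failed region
            b = batch_sizes[max(i - 1, 0)]     # fall back to the last size that worked
    return b
```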
* Update autobatch.py
* Update autobatch.py
* Add PyTorch AMP check
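A hedged sketch of what such a check could look like, comparing FP32 and autocast outputs on the same image and assuming the model returns a single tensor (the helper name and tolerance are illustrative, not the library's actual implementation):

```python
import torch

def amp_allclose(model, im, atol=0.1):
    # Run the same input in FP32 and under AMP autocast; report AMP as safe
    # only if the two outputs agree within a loose half-precision tolerance.
    with torch.no_grad():
        a = model(im).float()                      # FP32 inference
        with torch.cuda.amp.autocast(True):
            b = model(im).float()                  # mixed-precision inference
    return torch.allclose(a, b, atol=atol)
```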
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Cleanup
* Cleanup
* Cleanup
* Robust for DDP
* Fixes
* Add amp enabled boolean to check_train_batch_size
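An illustrative sketch of threading an `amp` flag through `check_train_batch_size` so batch-size profiling runs under the same precision as training (signature and body are assumptions; see `utils/autobatch.py` for the real implementation, and `autobatch` here stands in for the solver it calls):

```python
import torch

def check_train_batch_size(model, imgsz=640, amp=True):
    # Profile memory with or without mixed precision to match the training setup
    with torch.cuda.amp.autocast(enabled=amp):
        return autobatch(model.train(), imgsz)  # assumed solver returning the optimal batch size
```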
* Simplify
* space to prefix
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Use `LOGGER` in AutoBatch and AutoAnchor
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update autoanchor.py
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>