* call model.eval() when opt.train is False
* single-line if statement
* cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
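A minimal sketch of the train/eval toggle above as a single-line if statement; the placeholder model and the opt_train flag stand in for the real opt namespace and YOLO model:

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())  # placeholder model
opt_train = False  # stands in for opt.train

# Single-line if: train mode only when requested, otherwise eval() so that
# BatchNorm/Dropout behave correctly during export.
model.train() if opt_train else model.eval()
```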
* Add output names & dynamic axes for onnx export
Add output_names and dynamic_axes for all outputs in torch.onnx.export. The first four model outputs are named output0, output1, output2 and output3.
* use first output only + cleanup
Co-authored-by: Samridha Shrestha <samridha.shrestha@g42.ai>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
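A hedged sketch of the export call these commits describe: output_names and dynamic_axes are passed to torch.onnx.export, with only the first output (output0) kept. The input name, axis labels, dummy model and shapes are illustrative assumptions, not the exact export.py code:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3).eval()   # placeholder for the YOLO model
im = torch.zeros(1, 3, 640, 640)     # dummy input

torch.onnx.export(
    model,
    im,
    "model.onnx",
    opset_version=12,
    input_names=["images"],
    output_names=["output0"],        # only the first output is exported
    dynamic_axes={
        "images": {0: "batch", 2: "height", 3: "width"},
        "output0": {0: "batch"},
    },
)
```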
* Allow users to skip exporting in formats that they don't care about
* Correct comments
* Update export.py
Renamed --skip-format to --exclude.
* Switched the flag from --exclude to --include (as instructed by @glenn-jocher)
* cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
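A sketch of how an opt-in --include flag can gate the individual exports; the format names and the export_* helpers are hypothetical placeholders for the real export functions in export.py:

```python
import argparse

def export_torchscript():  # placeholder export helpers
    print("exporting TorchScript...")

def export_onnx():
    print("exporting ONNX...")

def export_coreml():
    print("exporting CoreML...")

parser = argparse.ArgumentParser()
parser.add_argument("--include", nargs="+",
                    default=["torchscript", "onnx", "coreml"],
                    help="formats to include in the export")
opt = parser.parse_args()

include = [x.lower() for x in opt.include]
if "torchscript" in include:
    export_torchscript()
if "onnx" in include:
    export_onnx()
if "coreml" in include:
    export_coreml()
```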
Per https://pytorch.org/tutorials/recipes/script_optimized.html, this should improve performance of TorchScript models (and possibly CoreML models as well, since coremltools takes a TorchScript model as input, though this still requires testing).
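A sketch of the optimization the linked recipe describes: trace the model to TorchScript, then run optimize_for_mobile on it. Whether export.py applies it exactly this way (and the dummy model/input used here) is an assumption:

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

model = nn.Conv2d(3, 16, 3).eval()   # placeholder model
im = torch.zeros(1, 3, 640, 640)

ts = torch.jit.trace(model, im, strict=False)
ts = optimize_for_mobile(ts)         # fuses and folds ops for faster inference
ts.save("model.torchscript.pt")
```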
* ONNX Simplifier
Add ONNX Simplifier to the ONNX export pipeline in export.py. Auto-installs onnx-simplifier if onnx is installed but onnx-simplifier is not.
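A sketch of the simplification step and the auto-install check; the pip-based install check is an illustrative approach, while onnxsim.simplify returning a (model, check) pair is the library's public API:

```python
import importlib.util
import subprocess
import sys

# Install onnx-simplifier on the fly if it is not already available.
if importlib.util.find_spec("onnxsim") is None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "onnx-simplifier"])

import onnx
import onnxsim

model_onnx = onnx.load("model.onnx")
model_simplified, check = onnxsim.simplify(model_onnx)  # (simplified model, success flag)
assert check, "ONNX Simplifier check failed"
onnx.save(model_simplified, "model.onnx")
```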
* Update general.py
* Option to skip the last layer, and support for CUDA export
* added parameter device
* fix import
* cleanup 1
* cleanup 2
* opt-in grid
--grid exports with grid computation; the default export skips the grid (same as current behavior).
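A sketch of the opt-in behavior, assuming the detection head exposes an export attribute that, when True, skips grid construction (as YOLOv5's Detect module did at the time); the SimpleNamespace objects stand in for the loaded model:

```python
import argparse
import types

# Stand-ins for the loaded model and its Detect() head.
detect = types.SimpleNamespace(export=True)
model = types.SimpleNamespace(model=[detect])

parser = argparse.ArgumentParser()
parser.add_argument("--grid", action="store_true", help="export Detect() layer with grid computation")
opt = parser.parse_args()

# export=True makes the Detect() head skip the grid; --grid opts back in.
model.model[-1].export = not opt.grid
```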
* default --device cpu
GPU export causes ONNX and CoreML errors.
Co-authored-by: Jan Hajek <jan.hajek@gmail.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
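A sketch of the --device handling with the CPU default mentioned above; the flag name comes from the commit text, while the parsing itself is illustrative:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--device", default="cpu", help="cuda device, i.e. 0, or cpu")
opt = parser.parse_args()

# Default to CPU, since GPU export was causing ONNX and CoreML errors.
device = torch.device("cpu" if opt.device == "cpu" else f"cuda:{opt.device}")
# model.to(device); im.to(device)  # move model and dummy input before export
```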
* Add logging setup
* Fix fusing layers message
* Fix: logging does not have an end argument
* Add logging
* Change logging to use logger
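A sketch of the logging setup these commits describe: configure the root logger once, then log through a module-level logger instead of print(). The format string and function name are illustrative; the real helper in general.py may differ:

```python
import logging

def set_logging(verbose=True):
    # Configure the root logger once; modules then use logging.getLogger(__name__).
    logging.basicConfig(
        format="%(message)s",
        level=logging.INFO if verbose else logging.WARNING,
    )

set_logging()
logger = logging.getLogger(__name__)
logger.info("Fusing layers...")  # replaces print('Fusing layers... ', end='')
```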
* Update yolo.py
I tried this in a cloned branch, and everything seems to work fine
* Update yolo.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>