* Update greeting
* Cleanup README
* Created using Colaboratory
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Created using Colaboratory
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Update Requirements with PyTorch CUDA
Added `--extra-index-url https://download.pytorch.org/whl/cu116` to the requirements file to ease creating a venv with CUDA-enabled PyTorch; otherwise the CPU-only PyTorch build is installed and is unable to use local GPUs (see the example below).
Signed-off-by: David A. Macey <davidamacey@gmail.com>
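For context, a minimal requirements excerpt showing how such an extra index is typically used; only the `--extra-index-url` line comes from this change, the torch/torchvision pins are illustrative:

```
# requirements.txt (illustrative excerpt) -- the extra index below makes pip
# pull CUDA 11.6 builds of torch/torchvision instead of CPU-only wheels
--extra-index-url https://download.pytorch.org/whl/cu116
torch>=1.7.0        # illustrative pin
torchvision>=0.8.1  # illustrative pin
```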
* Update requirements.txt
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update requirements.txt
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Signed-off-by: David A. Macey <davidamacey@gmail.com>
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* update coco128-seg comments
* Enables detect.py to use Triton for inference
Triton Inference Server is open-source inference serving software that
streamlines AI inferencing: https://github.com/triton-inference-server/server
The user can now pass a "--triton-url" argument to detect.py to run inference
on a local or remote Triton server. For example, http://localhost:8000 uses
HTTP over port 8000 and grpc://localhost:8001 uses gRPC over port 8001.
Note that it is not necessary to specify a weights file to use Triton.
A Triton container can be created by first exporting the YOLOv5 model to a
Triton-supported runtime; ONNX, TorchScript and TensorRT are supported by
both Triton and the export.py script. The exported model can then be
containerized via the OctoML CLI.
See https://github.com/octoml/octo-cli#getting-started for a guide.
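As a rough illustration of what this enables (not the PR's own code), the official tritonclient package can confirm a local Triton server is live and list its models before pointing detect.py at it; the port matches the HTTP example above:

```python
# Illustrative only: verify a Triton server is reachable and its models are
# ready. Assumes `pip install tritonclient[http]` and a server on port 8000.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_live(), "Triton server is not live"
for model in client.get_model_repository_index():
    print(model["name"], "ready:", client.is_model_ready(model["name"]))
```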
* added triton client to requirements
* fixed support for TFSavedModels in Triton
* reverted change
* Test CoreML update
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update ci-testing.yml
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Use pathlib
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Refactor DetectMultiBackend to accept a Triton URL directly as --weights http://...
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
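A hedged sketch of what this refactor implies for callers: the Triton endpoint is passed where a weights path would normally go. The constructor arguments below are assumptions based on the DetectMultiBackend API of the time, not an exact reproduction:

```python
# Sketch only: load a Triton-backed model through DetectMultiBackend by
# passing the server URL in place of a .pt path (arguments are assumed).
import torch
from models.common import DetectMultiBackend

model = DetectMultiBackend(weights="http://localhost:8000", device=torch.device("cpu"))
im = torch.zeros(1, 3, 640, 640)  # dummy BCHW input at the default 640 size
pred = model(im)
```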
* Deploy category
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update detect.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update common.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update common.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update predict.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update predict.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update predict.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update triton.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update triton.py
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add printout and requirements check
* Cleanup
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* triton fixes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fixed triton model query over grpc
* Update check_requirements('tritonclient[all]')
* group imports
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix likely remote URL bug
* update comment
* Update is_url()
* Fix 2x download attempt on http://path/to/model.pt
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: glennjocher <glenn.jocher@ultralytics.com>
Co-authored-by: Gaz Iqbal <giqbal@octoml.ai>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
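For reference, a simplified sketch of an is_url()-style check of the kind these fixes concern, used to decide whether a --weights argument is a remote resource; the real helper in utils/downloads.py may differ in details:

```python
# Illustrative URL check in the spirit of is_url(): parse first, then
# optionally confirm the resource is reachable. Not the repo's exact code.
from urllib import parse, request

def is_url(url, check_online=True):
    try:
        result = parse.urlparse(str(url))
        if not all([result.scheme, result.netloc]):  # plain file paths fail here
            return False
        return request.urlopen(url, timeout=5).getcode() == 200 if check_online else True
    except Exception:
        return False
```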
* TensorFlow macOS AutoUpdate
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The low package version pin was causing conflicts with other dependencies; commenting it out causes no ill effects in CI, so this should be fine.
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* assert torch!=1.12.0 for DDP training
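A minimal sketch of the kind of guard this describes; the exact condition and message in the training code may differ:

```python
# Assumed form of the DDP guard: refuse to start distributed training on
# exactly torch 1.12.0, which reportedly has a DDP-related regression.
import torch

assert torch.__version__.split("+")[0] != "1.12.0", (
    "torch==1.12.0 is not supported for DDP training here; use an earlier or later release"
)
```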
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Add `psutil` and `ipython` to requirements.txt
Lightweight packages used by YOLOv5 for system utilization (psutil) and interactive notebooks (IPython)
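A tiny illustration of the kind of system-utilization readout psutil enables (not the repo's exact usage):

```python
# Illustrative only: report CPU and RAM utilization with psutil.
import psutil

ram = psutil.virtual_memory()
print(f"CPU {psutil.cpu_percent()}% | RAM {ram.used / 1e9:.1f}/{ram.total / 1e9:.1f} GB")
```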
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* sort alphabetically
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Requirements updated
1. requests added to requirements.txt. It might not be included in all Docker base images, so adding it to the requirements is safer.
2. Added a minimum version for pandas. It's good practice to set minimum versions for all dependencies.
* Sort alphabetically
* Update requirements.txt
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Add models/tf.py for TensorFlow and TFLite export
* Set auto=False for int8 calibration
* Update requirements.txt for TensorFlow and TFLite export
* Read anchors directly from PyTorch weights
* Add --tf-nms to append NMS in TensorFlow SavedModel and GraphDef export
* Remove check_anchor_order, check_file, set_logging from import
* Reformat code and optimize imports
* Autodownload model and check cfg
* update --source path, img-size to 320, single output
* Adjust representative_dataset
* Put representative dataset in tfl_int8 block
* detect.py TF inference
* weights to string
* weights to string
* cleanup tf.py
* Add --dynamic-batch-size
* Add xywh normalization to reduce calibration error
* Update requirements.txt
TensorFlow 2.3.1 -> 2.4.0 to avoid int8 quantization error
* Fix imports
Move C3 from models.experimental to models.common
* implement C3() and SiLU()
* Fix reshape dim to support dynamic batching
* Add epsilon argument in tf_BN, which is different between TF and PT
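The epsilon point matters because Keras BatchNormalization defaults to epsilon=1e-3 while torch.nn.BatchNorm2d defaults to eps=1e-5. A sketch of a tf_BN-style wrapper that copies eps from the PyTorch layer (names and details assumed):

```python
# Illustrative tf_BN: rebuild a Keras BatchNormalization from a trained
# torch.nn.BatchNorm2d, copying its weights and, crucially, its eps value.
from tensorflow import keras

class tf_BN(keras.layers.Layer):
    def __init__(self, w):  # w: a torch.nn.BatchNorm2d
        super().__init__()
        self.bn = keras.layers.BatchNormalization(
            beta_initializer=keras.initializers.Constant(w.bias.detach().numpy()),
            gamma_initializer=keras.initializers.Constant(w.weight.detach().numpy()),
            moving_mean_initializer=keras.initializers.Constant(w.running_mean.numpy()),
            moving_variance_initializer=keras.initializers.Constant(w.running_var.numpy()),
            epsilon=w.eps)  # copy the PyTorch eps instead of Keras' 1e-3 default

    def call(self, inputs):
        return self.bn(inputs)

# Usage sketch:
# import torch
# bn_tf = tf_BN(torch.nn.BatchNorm2d(64))
```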
* Set stride to None if not using PyTorch, and do not warmup without PyTorch
* Add list support in check_img_size()
* Add list input support in detect.py
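A hedged sketch of what list support means here: image sizes must be rounded up to multiples of the model stride whether given as a single int or an [h, w] pair (helper names are stand-ins for the repo's utilities):

```python
# Illustrative check_img_size with int and list inputs; stride s defaults to 32.
import math

def make_divisible(x, divisor):
    return math.ceil(x / divisor) * divisor

def check_img_size(imgsz, s=32):
    if isinstance(imgsz, int):
        return max(make_divisible(imgsz, s), s)
    return [max(make_divisible(x, s), s) for x in imgsz]

print(check_img_size(320), check_img_size([320, 240]))  # -> 320 [320, 256]
```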
* sys.path.append('./') to run from yolov5/
* Add int8 quantization support for TensorFlow 2.5
* Add get_coco128.sh
* Remove --no-tfl-detect in models/tf.py (Use tf-android-tfl-detect branch for EdgeTPU)
* Update requirements.txt
* Replace torch.load() with attempt_load()
* Update requirements.txt
* Add --tf-raw-resize to set half_pixel_centers=False
* Add --agnostic-nms for TF class-agnostic NMS
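Rough illustration of class-agnostic NMS in TF, as enabled by the flag above: all boxes compete in a single non_max_suppression pass regardless of class. The thresholds below are placeholders, not the exporter's defaults:

```python
# Illustrative class-agnostic NMS: one suppression pass over all classes.
import tensorflow as tf

def agnostic_nms(boxes, scores, max_det=300, iou_thres=0.45, conf_thres=0.25):
    # boxes: (N, 4) as [y1, x1, y2, x2]; scores: (N,) best per-box confidence
    keep = tf.image.non_max_suppression(
        boxes, scores, max_output_size=max_det,
        iou_threshold=iou_thres, score_threshold=conf_thres)
    return tf.gather(boxes, keep), tf.gather(scores, keep)
```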
* Cleanup after merge
* Cleanup2 after merge
* Cleanup3 after merge
* Add tf.py docstring with credit and usage
* pb saved_model and tflite use only one model in detect.py
* Add use cases in docstring of tf.py
* Remove redundant `stride` definition
* Remove keras direct import
* Fix `check_requirements(('tensorflow>=2.4.1',))`
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>