* Remove NCOLS from tqdm
* update
* reformat
* Single-line argparser argument
* Update README.md
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Add feature map to save npy files
Add feature map saving to .npy files; exports .npy files with 32 feature maps per layer.
* Update plots.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* prune unused imports
* handle exceptions | attempt CI
* update
* Pre-commit manual run
* yaml one-liner
* Update ci-testing.yml
* Comment W&B CI
Leave as example for future separate CI
* Update ci-testing.yml
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* AutoBatch, AutoAnchor `LOGGER`
* Update autoanchor.py
* Improve plots.py robustness
Addresses issues #5374, #5395, #5611
* Fix ResourceWarning on file allocation; use file.open() within a context manager
* rename fh to f
in keeping with the naming convention
Co-authored-by: Ayman Saleh <aymansaleh@Aymans-MacBook-Pro-2.local>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update __init__.py
* notebook_init
* Created using Colaboratory
* Logger consolidation
* Fix `save_one_box()`
Resolves https://github.com/ultralytics/yolov5/pull/5341#issuecomment-961774729
Tests:
```python
import glob
import re
from pathlib import Path
def increment_path(path, exist_ok=False, sep='', mkdir=False):
    # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
    path = Path(path)  # os-agnostic
    if path.exists() and not exist_ok:
        path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '')
        dirs = glob.glob(f"{path}{sep}*")  # similar paths
        matches = [re.search(rf"{path.stem}{sep}(\d+)", d) for d in dirs]
        i = [int(m.groups()[0]) for m in matches if m]  # indices
        n = max(i) + 1 if i else 2  # increment number
        path = Path(f"{path}{sep}{n}{suffix}")  # increment path
    if mkdir:
        path.mkdir(parents=True, exist_ok=True)  # make directory
    return path
print(increment_path('runs'))
print(increment_path('export.py'))
print(increment_path('abc.def.dir'))
print(increment_path('abc.def.file'))
```
* precommit: isort
* Update isort config
* Update name
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Meshgrid `indexing='ij'` for PyTorch 1.10
Will not merge currently as it breaks backwards compatibility.
* Add check_version hard argument
* Update comment
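A minimal sketch of the gated call, using a simple version parse in place of the repo's `check_version()` helper (`ny`, `nx` are illustrative):
```python
import torch

# PyTorch 1.10 added the `indexing` kwarg to torch.meshgrid; indexing='ij'
# matches the pre-1.10 default and silences the new UserWarning.
torch_110 = tuple(int(v) for v in torch.__version__.split("+")[0].split(".")[:2]) >= (1, 10)
ny, nx = 3, 4  # illustrative grid shape
y, x = torch.arange(ny), torch.arange(nx)
yv, xv = torch.meshgrid(y, x, indexing="ij") if torch_110 else torch.meshgrid(y, x)
```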
* take EXIF orientation tags into account when fixing corrupt images
* fit 120 char
* sort imports
* Update local exif_transpose comment
We have a local in-place version that is faster than the official one, as the image is not copied. AutoShape() uses this for Hub models, but here it is less important since the datasets.py usage is infrequent (in AutoShape() it is applied to every image).
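A minimal sketch of such an in-place version, assuming PIL; the orientation-to-transpose map mirrors `ImageOps.exif_transpose()` but skips the unconditional image copy:
```python
from PIL import Image

def exif_transpose(image):
    # Transpose only when the EXIF orientation tag requires it,
    # avoiding the copy that PIL's ImageOps.exif_transpose() always makes
    exif = image.getexif()
    orientation = exif.get(0x0112, 1)  # EXIF orientation tag, default 1 = normal
    if orientation > 1:
        method = {2: Image.FLIP_LEFT_RIGHT,
                  3: Image.ROTATE_180,
                  4: Image.FLIP_TOP_BOTTOM,
                  5: Image.TRANSPOSE,
                  6: Image.ROTATE_270,
                  7: Image.TRANSVERSE,
                  8: Image.ROTATE_90}.get(orientation)
        if method is not None:
            image = image.transpose(method)
            del exif[0x0112]  # clear the tag so viewers do not re-rotate
            image.info["exif"] = exif.tobytes()
    return image
```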
* Update datasets.py
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* take image files with uppercase extensions into account in autosplit
* case fix
* Refactor implementation
Removes the additional variable (capital variable names are reserved for global variables) and uses the same methodology as implemented earlier in datasets.py L409.
* Remove redundant rglob characters
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Autofix duplicate labels
This PR changes duplicate-label handling from reporting an error and ignoring the image-label pair to reporting a warning and autofixing the pair.
This should fix a common issue for users and allow everyone to get started and train a model faster and more easily than before.
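A minimal sketch of the autofix, assuming `lb` is the per-image (n, 5) label array of `[class, x, y, w, h]` rows used during caching:
```python
import numpy as np

lb = np.array([[0, 0.5, 0.5, 0.2, 0.2],
               [0, 0.5, 0.5, 0.2, 0.2],   # exact duplicate row
               [1, 0.3, 0.3, 0.1, 0.1]])
_, idx = np.unique(lb, axis=0, return_index=True)
if len(idx) < len(lb):  # duplicates present
    lb = lb[idx]  # autofix: keep one copy of each unique row
    print("WARNING: duplicate labels removed")  # warn instead of rejecting the pair
```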
* sign fix
* Cleanup
* Increment cache version
* all to any fix
* Add train class filter feature to datasets.py
Allows training on a subset of the total classes if an `include_class` list is defined in datasets.py L448:
```python
include_class = [] # filter labels to include only these classes (optional)
```
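A hedged sketch of how such a filter could be applied, assuming per-image label arrays with the class id in column 0 (segment handling omitted):
```python
import numpy as np

include_class = [0, 2]  # train only on classes 0 and 2
labels = [np.array([[0, 0.5, 0.5, 0.2, 0.2],
                    [1, 0.3, 0.3, 0.1, 0.1],
                    [2, 0.7, 0.7, 0.1, 0.1]])]  # one image's labels
if include_class:
    include = np.array(include_class).reshape(1, -1)
    for i, label in enumerate(labels):
        j = (label[:, 0:1] == include).any(1)  # rows whose class is included
        labels[i] = label[j]
```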
* segments fix
* added callbacks
* added back callback to main
* added save_dir to callback output
* merged in upstream
* removed ghost code
* fixed parsing error for google temp links
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
`check_file()` is now limited to searching opt-in directories: /data, /models, /utils. This prevents large non-project directories like /.git and /venv from being searched, which could slow `check_file()` down significantly.
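A minimal sketch of the restricted search (directory names from the description; `ROOT` and the assert are assumptions):
```python
import glob
from pathlib import Path

ROOT = Path(".")  # assumed project root

def check_file(file):
    # Search only opt-in project directories so /.git, /venv, etc. are skipped
    files = []
    for d in ("data", "models", "utils"):
        files.extend(glob.glob(str(ROOT / d / "**" / file), recursive=True))
    assert files, f"File not found: {file}"
    return files[0]
```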
PR should produce datasets sorted alphabetically by filename. Cache version incremented to 0.5.
Note: will force a one-time re-caching of existing datasets on first-use.
Thank you for sharing this nice open-source code 👍
I applied shuffling to the order of all 4 (or 9) images in mosaic augmentation, as sketched below.
Currently, the order of images in mosaic augmentation is not completely random:
the remaining images, aside from the first, are randomly arranged. Shuffling all of them increases the diversity of data composition.
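A minimal sketch of the change (variable names are illustrative stand-ins for the `load_mosaic()` internals):
```python
import random

dataset_indices = list(range(100))  # stand-in for self.indices
index = 7                           # the target image
indices = [index] + random.choices(dataset_indices, k=3)  # 3 additional mosaic images
random.shuffle(indices)  # shuffle all 4 so the target is not always placed top-left
```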
* add callbacks to train function in wandb sweep
Fix following https://github.com/ultralytics/yolov5/pull/4688, which modified the function signature of `train`
* Cleanup
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Fix `user_config_dir()` for GCP/AWS functions
Compatibility fix for GCP Functions and AWS Lambda for the user config directory in https://github.com/ultralytics/yolov5/pull/4628
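A minimal sketch of the fallback, assuming a simplified writeability check:
```python
import os
from pathlib import Path

def user_config_dir(dir="Ultralytics"):
    # GCP Functions and AWS Lambda mount a read-only home directory,
    # so fall back to the writeable /tmp when ~/.config cannot be used
    path = Path.home() / ".config" / dir
    if not os.access(str(path.parent), os.W_OK):  # simplified writeability check
        path = Path("/tmp") / dir
    path.mkdir(parents=True, exist_ok=True)
    return path
```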
* Windows skip check
The Auto-fix corrupt JPEGs PR introduced a bug whereby the f.seek() operation consumed all of the bytes in the image, leaving the PIL image with nothing to read upon the .save() operation.
The fix was to re-open the image using PIL before saving.
* Autofix corrupt JPEGs
This PR automatically re-saves corrupt JPEGs and trains with the resaved images. WARNING: this will overwrite the existing corrupt JPEGs in a dataset and replace them with correct JPEGs, though the filesize may increase and the image contents may not be exactly the same due to lossy JPEG compression schemes. Results may vary by JPEG decoder and hardware.
Current behavior is to exclude corrupt JPEGs from training with a warning to the user, but many users have been complaining about large parts of their dataset being excluded from training.
* Clarify re-save reason
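A hedged sketch of both fixes together (the end-of-image-marker check and the PIL re-open before saving; `im_file` is illustrative):
```python
from PIL import Image

im_file = "image.jpg"  # illustrative path
with open(im_file, "rb") as f:
    f.seek(-2, 2)  # jump to the last two bytes
    if f.read() != b"\xff\xd9":  # missing JPEG end-of-image marker => corrupt
        # re-open with PIL rather than reusing the consumed handle, then
        # re-save; lossy JPEG re-encoding may change bytes and filesize
        Image.open(im_file).save(im_file, "JPEG", subsampling=0, quality=100)
        print(f"WARNING: {im_file}: corrupt JPEG restored and saved")
```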
* Add models/tf.py for TensorFlow and TFLite export
* Set auto=False for int8 calibration
* Update requirements.txt for TensorFlow and TFLite export
* Read anchors directly from PyTorch weights
* Add --tf-nms to append NMS in TensorFlow SavedModel and GraphDef export
* Remove check_anchor_order, check_file, set_logging from import
* Reformat code and optimize imports
* Autodownload model and check cfg
* update --source path, img-size to 320, single output
* Adjust representative_dataset
* Put representative dataset in tfl_int8 block
* detect.py TF inference
* weights to string
* cleanup tf.py
* Add --dynamic-batch-size
* Add xywh normalization to reduce calibration error
* Update requirements.txt
TensorFlow 2.3.1 -> 2.4.0 to avoid int8 quantization error
* Fix imports
Move C3 from models.experimental to models.common
* implement C3() and SiLU()
* Fix reshape dim to support dynamic batching
* Add epsilon argument to tf_BN, since the default differs between TF and PT (see the sketch after this list)
* Set stride to None if not using PyTorch, and do not warmup without PyTorch
* Add list support in check_img_size()
* Add list input support in detect.py
* sys.path.append('./') to run from yolov5/
* Add int8 quantization support for TensorFlow 2.5
* Add get_coco128.sh
* Remove --no-tfl-detect in models/tf.py (Use tf-android-tfl-detect branch for EdgeTPU)
* Update requirements.txt
* Replace torch.load() with attempt_load()
* Update requirements.txt
* Add --tf-raw-resize to set half_pixel_centers=False
* Add --agnostic-nms for TF class-agnostic NMS
* Cleanup after merge
* Cleanup2 after merge
* Cleanup3 after merge
* Add tf.py docstring with credit and usage
* pb saved_model and tflite use only one model in detect.py
* Add use cases in docstring of tf.py
* Remove redundant `stride` definition
* Remove keras direct import
* Fix `check_requirements(('tensorflow>=2.4.1',))`
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
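For the tf_BN epsilon item above: a hedged sketch of the pass-through, since Keras `BatchNormalization` defaults to `epsilon=1e-3` while PyTorch `nn.BatchNorm2d` defaults to `eps=1e-5`; the wrapper internals here are an assumption:
```python
import tensorflow as tf
import torch

w = torch.nn.BatchNorm2d(64)  # source PyTorch layer
bn = tf.keras.layers.BatchNormalization(
    beta_initializer=tf.keras.initializers.Constant(w.bias.detach().numpy()),
    gamma_initializer=tf.keras.initializers.Constant(w.weight.detach().numpy()),
    epsilon=w.eps,  # forward PyTorch's 1e-5 instead of TF's 1e-3 default
)
```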
* Add cache-on-disk and cache-directory to cache images on disk (see the sketch after this list)
* Fix load_image with cache_on_disk
* Add no_cache flag for load_image
* Revert the parts ('logging' and a newline) that do not need to be modified
* Add the assertion for shapes of cached images
* Add a suffix string for cached images
* Fix boundary-error of letterbox for load_mosaic
* Add prefix as cache-key of cache-on-disk
* Update cache-function on disk
* Add psutil in requirements.txt
* Update train.py
* Cleanup1
* Cleanup2
* Skip existing npy
* Include re-space
* Export return character fix
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
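A minimal sketch of the cache-on-disk idea referenced above (names are illustrative, not the PR's exact implementation):
```python
from pathlib import Path

import cv2
import numpy as np

def load_image_cached(path):
    # Cache decoded pixels as .npy beside the source image: decode once,
    # then load the raw array directly on every subsequent epoch
    npy = Path(path).with_suffix(".npy")
    if npy.exists():
        return np.load(npy)  # fast path: skip JPEG decode
    im = cv2.imread(str(path))  # BGR
    assert im is not None, f"Image Not Found {path}"
    np.save(npy, im)  # one-time cache write
    return im
```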