Merge branch 'ultralytics:master' into master
commit cddd180cc1

Changed paths: .github/workflows, models, utils, flask_rest_api

@@ -142,7 +142,7 @@ jobs:
    if: always()
    steps:
      - name: Check for failure and notify
-       if: (needs.Benchmarks.result == 'failure' || needs.Tests.result == 'failure' || needs.Benchmarks.result == 'cancelled' || needs.Tests.result == 'cancelled') && github.repository == 'ultralytics/yolov5' && (github.event_name == 'schedule' || github.event_name == 'push')
+       if: (needs.Benchmarks.result == 'failure' || needs.Tests.result == 'failure' || needs.Benchmarks.result == 'cancelled' || needs.Tests.result == 'cancelled') && github.repository == 'ultralytics/yolov5' && (github.event_name == 'schedule' || github.event_name == 'push') && github.run_attempt == '1'
        uses: slackapi/slack-github-action@v2.0.0
        with:
          webhook-type: incoming-webhook

README.md (371 changes)

[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar)

<div>
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI Testing"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv5 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
<a href="https://discord.com/invite/ultralytics"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a> <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a> <a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>

</div>
<br>

Ultralytics YOLOv5 🚀 is a cutting-edge, state-of-the-art (SOTA) computer vision model developed by [Ultralytics](https://www.ultralytics.com/). Based on the [PyTorch](https://pytorch.org/) framework, YOLOv5 is renowned for its ease of use, speed, and accuracy. It incorporates insights and best practices from extensive research and development, making it a popular choice for a wide range of vision AI tasks, including [object detection](https://docs.ultralytics.com/tasks/detect/), [image segmentation](https://docs.ultralytics.com/tasks/segment/), and [image classification](https://docs.ultralytics.com/tasks/classify/).

We hope the resources here help you get the most out of YOLOv5. Please browse the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for detailed information, raise an issue on [GitHub](https://github.com/ultralytics/yolov5/issues/new/choose) for support, and join our [Discord community](https://discord.com/invite/ultralytics) for questions and discussions!

To request an Enterprise License, please complete the form at [Ultralytics Licensing](https://www.ultralytics.com/license).

<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="2%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="2%" alt="Ultralytics Discord"></a>
</div>

</div>
<br>

## 🚀 YOLO11: The Next Evolution

We are excited to announce the launch of **Ultralytics YOLO11** 🚀, the latest advancement in our state-of-the-art (SOTA) vision models! Available now at the [Ultralytics YOLO GitHub repository](https://github.com/ultralytics/ultralytics), YOLO11 builds on our legacy of speed, precision, and ease of use. Whether you're tackling [object detection](https://docs.ultralytics.com/tasks/detect/), [instance segmentation](https://docs.ultralytics.com/tasks/segment/), [pose estimation](https://docs.ultralytics.com/tasks/pose/), [image classification](https://docs.ultralytics.com/tasks/classify/), or [oriented object detection (OBB)](https://docs.ultralytics.com/tasks/obb/), YOLO11 delivers the performance and versatility needed to excel in diverse applications.

Get started today and unlock the full potential of YOLO11! Visit the [Ultralytics Docs](https://docs.ultralytics.com/) for comprehensive guides and resources:

[PyPI](https://badge.fury.io/py/ultralytics) [Downloads](https://www.pepy.tech/projects/ultralytics)

```bash
# Install the ultralytics package
pip install ultralytics
```
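
Once installed, YOLO11 can also be driven from Python in a few lines. The snippet below is a minimal sketch, assuming the `ultralytics` package's `YOLO` class and the `yolo11n.pt` pretrained-weights name; see the Ultralytics Docs for the authoritative API.

```python
from ultralytics import YOLO

# Load a pretrained YOLO11 nano model (weights download on first use)
model = YOLO("yolo11n.pt")

# Run inference on an example image; returns a list of Results objects
results = model("https://ultralytics.com/images/zidane.jpg")

# Display the annotated image for the first result
results[0].show()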

<div align="center">
<a href="https://www.ultralytics.com/yolo" target="_blank">
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="Ultralytics YOLO Performance Comparison"></a>
</div>

## 📚 Documentation

See the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for full documentation on training, testing, and deployment. See below for quickstart examples.

<details open>
<summary>Install</summary>

Clone the repository and install dependencies in a [**Python>=3.8.0**](https://www.python.org/) environment. Ensure you have [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/) installed.

```bash
# Clone the YOLOv5 repository
git clone https://github.com/ultralytics/yolov5

# Navigate to the cloned directory
cd yolov5

# Install required packages
pip install -r requirements.txt
```

</details>

<details open>
<summary>Inference with PyTorch Hub</summary>

Use YOLOv5 via [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/) for inference. [Models](https://github.com/ultralytics/yolov5/tree/master/models) are automatically downloaded from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).

```python
import torch

# Load a YOLOv5 model (options: yolov5n, yolov5s, yolov5m, yolov5l, yolov5x)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # Default: yolov5s

# Define the input image source (URL, local file, PIL image, OpenCV frame, numpy array, or list)
img = "https://ultralytics.com/images/zidane.jpg"  # Example image

# Perform inference (handles batching, resizing, normalization automatically)
results = model(img)

# Process the results (options: .print(), .show(), .save(), .crop(), .pandas())
results.print()  # Print results to console
results.show()  # Display results in a window
results.save()  # Save results to runs/detect/exp
```
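
Beyond printing or saving, the detections can be consumed programmatically. A short sketch using the `xyxy` tensors and `pandas()` helper exposed by the object YOLOv5 returns (the same accessors listed in the comment above):

```python
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")
results = model("https://ultralytics.com/images/zidane.jpg")

# Raw predictions: one (N, 6) tensor per image with columns
# [xmin, ymin, xmax, ymax, confidence, class]
print(results.xyxy[0])

# The same detections as a pandas DataFrame with readable class names
print(results.pandas().xyxy[0][["name", "confidence"]])
```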

</details>

<details>
<summary>Inference with detect.py</summary>

The `detect.py` script runs inference on various sources. It automatically downloads [models](https://github.com/ultralytics/yolov5/tree/master/models) from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saves the results to the `runs/detect` directory.

```bash
# Run inference using a webcam
python detect.py --weights yolov5s.pt --source 0

# Run inference on a local image file
python detect.py --weights yolov5s.pt --source img.jpg

# Run inference on a local video file
python detect.py --weights yolov5s.pt --source vid.mp4

# Run inference on a screen capture
python detect.py --weights yolov5s.pt --source screen

# Run inference on a directory of images
python detect.py --weights yolov5s.pt --source path/to/images/

# Run inference on a text file listing image paths
python detect.py --weights yolov5s.pt --source list.txt

# Run inference on a text file listing stream URLs
python detect.py --weights yolov5s.pt --source list.streams

# Run inference using a glob pattern for images
python detect.py --weights yolov5s.pt --source 'path/to/*.jpg'

# Run inference on a YouTube video URL
python detect.py --weights yolov5s.pt --source 'https://youtu.be/LNwODJXcvt4'

# Run inference on an RTSP, RTMP, or HTTP stream
python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4'
```
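
These source options can be combined with the script's other arguments. A brief illustration using flags defined in `detect.py`'s argument parser (run `python detect.py --help` for the full list):

```bash
# Run at 640 px input size, keep detections above 0.4 confidence,
# and save results as YOLO-format .txt label files
python detect.py --weights yolov5s.pt --source img.jpg --img 640 --conf-thres 0.4 --save-txt
```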

</details>

<details>
<summary>Training</summary>

The commands below demonstrate how to reproduce YOLOv5 [COCO dataset](https://docs.ultralytics.com/datasets/detect/coco/) results. Both [models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) are downloaded automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are approximately 1/2/4/6/8 days on a single [NVIDIA V100 GPU](https://www.nvidia.com/en-us/data-center/v100/). Using [Multi-GPU training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) can significantly reduce training time. Use the largest `--batch-size` your hardware allows, or use `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). The batch sizes shown below are for V100-16GB GPUs.

```bash
# Train YOLOv5n on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128

# Train YOLOv5s on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size 64

# Train YOLOv5m on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5m.yaml --batch-size 40

# Train YOLOv5l on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5l.yaml --batch-size 24

# Train YOLOv5x on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --batch-size 16
```
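
Training from scratch with `--weights ''` is what the figures below benchmark; for custom datasets it is usually faster to fine-tune from pretrained weights. A brief sketch, assuming the small `coco128.yaml` dataset config that ships in the repository's `data/` directory:

```bash
# Fine-tune YOLOv5s from pretrained weights on the 128-image COCO128 dataset
python train.py --data coco128.yaml --weights yolov5s.pt --img 640 --epochs 100

# Resume the most recent interrupted training run
python train.py --resume
```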

<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png" alt="YOLOv5 Training Results">

</details>

<details open>
<summary>Tutorials</summary>

- **[Train Custom Data](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/)** 🚀 **RECOMMENDED**: Learn how to train YOLOv5 on your own datasets.
- **[Tips for Best Training Results](https://docs.ultralytics.com/guides/model-training-tips/)** ☘️: Improve your model's performance with expert tips.
- **[Multi-GPU Training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/)**: Speed up training using multiple GPUs.
- **[PyTorch Hub Integration](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/)** 🌟 **NEW**: Easily load models using PyTorch Hub.
- **[Model Export (TFLite, ONNX, CoreML, TensorRT)](https://docs.ultralytics.com/yolov5/tutorials/model_export/)** 🚀: Convert your models to various deployment formats like [ONNX](https://onnx.ai/) or [TensorRT](https://developer.nvidia.com/tensorrt).
- **[NVIDIA Jetson Deployment](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/)** 🌟 **NEW**: Deploy YOLOv5 on [NVIDIA Jetson](https://developer.nvidia.com/embedded-computing) devices.
- **[Test-Time Augmentation (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)**: Enhance prediction accuracy with TTA.
- **[Model Ensembling](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling/)**: Combine multiple models for better performance.
- **[Model Pruning/Sparsity](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity/)**: Optimize models for size and speed.
- **[Hyperparameter Evolution](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution/)**: Automatically find the best training hyperparameters.
- **[Transfer Learning with Frozen Layers](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers/)**: Adapt pretrained models to new tasks efficiently using [transfer learning](https://www.ultralytics.com/glossary/transfer-learning).
- **[Architecture Summary](https://docs.ultralytics.com/yolov5/tutorials/architecture_description/)** 🌟 **NEW**: Understand the YOLOv5 model architecture.
- **[Ultralytics HUB Training](https://www.ultralytics.com/hub)** 🚀 **RECOMMENDED**: Train and deploy YOLO models using Ultralytics HUB.
- **[ClearML Logging](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration/)**: Integrate with [ClearML](https://clear.ml/) for experiment tracking.
- **[Neural Magic DeepSparse Integration](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization/)**: Accelerate inference with DeepSparse.
- **[Comet Logging](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration/)** 🌟 **NEW**: Log experiments using [Comet ML](https://www.comet.com/).

</details>

## 🧩 Integrations

Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with partners like [Weights & Biases](https://docs.ultralytics.com/integrations/weights-biases/), [Comet ML](https://docs.ultralytics.com/integrations/comet/), [Roboflow](https://docs.ultralytics.com/integrations/roboflow/), and [Intel OpenVINO](https://docs.ultralytics.com/integrations/openvino/), can optimize your AI workflow. Explore more at [Ultralytics Integrations](https://docs.ultralytics.com/integrations/).

<br>
<a href="https://docs.ultralytics.com/integrations/" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics active learning integrations">
</a>
<br>
<br>

<a href="https://www.ultralytics.com/hub">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-ultralytics-hub.png" width="10%" alt="Ultralytics HUB logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://docs.ultralytics.com/integrations/weights-biases/">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-wb.png" width="10%" alt="Weights & Biases logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://docs.ultralytics.com/integrations/comet/">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-comet.png" width="10%" alt="Comet ML logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://docs.ultralytics.com/integrations/neural-magic/">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="Neural Magic logo"></a>
</div>

| Ultralytics HUB 🌟 | Weights & Biases | Comet | Neural Magic |
| :---: | :---: | :---: | :---: |
| Streamline YOLO workflows: Label, train, and deploy effortlessly with [Ultralytics HUB](https://hub.ultralytics.com). Try now! | Track experiments, hyperparameters, and results with [Weights & Biases](https://docs.ultralytics.com/integrations/weights-biases/). | Free forever, [Comet ML](https://docs.ultralytics.com/integrations/comet/) lets you save YOLO models, resume training, and interactively visualize predictions. | Run YOLO inference up to 6x faster with [Neural Magic DeepSparse](https://docs.ultralytics.com/integrations/neural-magic/). |

## ⭐ Ultralytics HUB

Experience seamless AI development with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the ultimate platform for building, training, and deploying [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models. Visualize datasets, train [YOLOv5](https://docs.ultralytics.com/models/yolov5/) and [YOLOv8](https://docs.ultralytics.com/models/yolov8/) 🚀 models, and deploy them to real-world applications without writing any code. Transform images into actionable insights using our cutting-edge tools and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** today!

<a align="center" href="https://www.ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB Platform Screenshot"></a>

## 🤔 Why YOLOv5?

YOLOv5 is designed for simplicity and ease of use. We prioritize real-world performance and accessibility.

<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png" alt="YOLOv5 Performance Chart"></p>
<details>
<summary>YOLOv5-P5 640 Figure</summary>

<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png" alt="YOLOv5 P5 640 Performance Chart"></p>
</details>
<details>
<summary>Figure Notes</summary>

- **COCO AP val** denotes the [mean Average Precision (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map) at [Intersection over Union (IoU)](https://www.ultralytics.com/glossary/intersection-over-union-iou) thresholds from 0.5 to 0.95, measured on the 5,000-image [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/) across various inference sizes (256 to 1536 pixels).
- **GPU Speed** measures the average inference time per image on the [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/) using an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p3/) with a batch size of 32.
- **EfficientDet** data is sourced from the [google/automl repository](https://github.com/google/automl) at batch size 8.
- **Reproduce** these results using the command: `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`

</details>

### Pretrained Checkpoints

This table shows the performance metrics for various YOLOv5 models trained on the COCO dataset.

| Model | Size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | Speed<br><sup>CPU b1<br>(ms) | Speed<br><sup>V100 b1<br>(ms) | Speed<br><sup>V100 b32<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| | | | | | | | | |
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+ [[TTA]](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |

<details>
<summary>Table Notes</summary>

- All checkpoints were trained for 300 epochs using default settings. Nano (n) and Small (s) models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyperparameters, while Medium (m), Large (l), and Extra-Large (x) models use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values represent single-model, single-scale performance on the [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/).<br>Reproduce using: `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** metrics are averaged over COCO val images using an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p3/). Non-Maximum Suppression (NMS) time (~1 ms/image) is not included.<br>Reproduce using: `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** ([Test Time Augmentation](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)) includes reflection and scale augmentations for improved accuracy.<br>Reproduce using: `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`

</details>

## 🖼️ Segmentation

The YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) introduced [instance segmentation](https://docs.ultralytics.com/tasks/segment/) models that achieve state-of-the-art performance. These models are designed for easy training, validation, and deployment. For full details, see the [Release Notes](https://github.com/ultralytics/yolov5/releases/v7.0) and explore the [YOLOv5 Segmentation Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) for quickstart examples.

<details>
<summary>Segmentation Checkpoints</summary>

<div align="center">
<a align="center" href="https://www.ultralytics.com/yolo" target="_blank">
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png" alt="YOLOv5 Segmentation Performance Chart"></a>
</div>

YOLOv5 segmentation models were trained on the [COCO dataset](https://docs.ultralytics.com/datasets/segment/coco/) for 300 epochs at an image size of 640 pixels using A100 GPUs. Models were exported to [ONNX](https://onnx.ai/) FP32 for CPU speed tests and [TensorRT](https://developer.nvidia.com/tensorrt) FP16 for GPU speed tests. All speed tests were conducted on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for reproducibility.

| Model | Size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Train Time<br><sup>300 epochs<br>A100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TRT A100<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** |
| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 |
| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 |
| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 |

- All checkpoints were trained for 300 epochs using the SGD optimizer with `lr0=0.01` and `weight_decay=5e-5` at an image size of 640 pixels, using default settings.<br>Training runs are logged at [https://wandb.ai/glenn-jocher/YOLOv5_v70_official](https://wandb.ai/glenn-jocher/YOLOv5_v70_official).
- **Accuracy** values represent single-model, single-scale performance on the COCO dataset.<br>Reproduce using: `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
- **Speed** metrics are averaged over 100 inference images using a [Colab Pro A100 High-RAM instance](https://colab.research.google.com/signup). Values indicate inference speed only (NMS adds approximately 1ms per image).<br>Reproduce using: `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
- **Export** to ONNX (FP32) and TensorRT (FP16) was performed using `export.py`.<br>Reproduce using: `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`

</details>

### Train

YOLOv5 segmentation training supports automatic download of the [COCO128-seg dataset](https://docs.ultralytics.com/datasets/segment/coco8-seg/) via the `--data coco128-seg.yaml` argument. For the full [COCO-segments dataset](https://docs.ultralytics.com/datasets/segment/coco/), download it manually using `bash data/scripts/get_coco.sh --train --val --segments` and then train with `python train.py --data coco.yaml`.

```bash
# Train on a single GPU
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640

# Train using Multi-GPU Distributed Data Parallel (DDP)
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3
```

### Val

Validate the mask [mean Average Precision (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map) of YOLOv5s-seg on the COCO dataset:

```bash
# Download COCO validation segments split (780MB, 5000 images)
bash data/scripts/get_coco.sh --val --segments

# Validate the model
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640
```

### Predict

Use the pretrained YOLOv5m-seg.pt model to perform segmentation on `bus.jpg`:

```bash
# Run prediction
python segment/predict.py --weights yolov5m-seg.pt --source data/images/bus.jpg
```

```python
import torch

# Load model from PyTorch Hub (Note: Inference support might vary)
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5m-seg.pt")
```

|  |  |
| :---: | :---: |

### Export

Export the YOLOv5s-seg model to ONNX and TensorRT formats:

```bash
# Export model
python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0
```

</details>

## 🏷️ Classification

YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases/v6.2) introduced support for [image classification](https://docs.ultralytics.com/tasks/classify/) model training, validation, and deployment. Check the [Release Notes](https://github.com/ultralytics/yolov5/releases/v6.2) for details and the [YOLOv5 Classification Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) for quickstart guides.

<details>
<summary>Classification Checkpoints</summary>

<br>

YOLOv5-cls classification models were trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) for 90 epochs using a 4xA100 instance. [ResNet](https://arxiv.org/abs/1512.03385) and [EfficientNet](https://arxiv.org/abs/1905.11946) models were trained alongside under identical settings for comparison. Models were exported to [ONNX](https://onnx.ai/) FP32 (CPU speed tests) and [TensorRT](https://developer.nvidia.com/tensorrt) FP16 (GPU speed tests). All speed tests were run on Google [Colab Pro](https://colab.research.google.com/signup) for reproducibility.

| Model | Size<br><sup>(pixels) | Acc<br><sup>top1 | Acc<br><sup>top5 | Training<br><sup>90 epochs<br>4xA100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TensorRT V100<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@224 (B) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** |
| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 |

<details>
<summary>Table Notes (click to expand)</summary>

- All checkpoints were trained for 90 epochs using the SGD optimizer with `lr0=0.001` and `weight_decay=5e-5` at an image size of 224 pixels, using default settings.<br>Training runs are logged at [https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2](https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2).
- **Accuracy** values (top-1 and top-5) represent single-model, single-scale performance on the [ImageNet-1k dataset](https://docs.ultralytics.com/datasets/classify/imagenet/).<br>Reproduce using: `python classify/val.py --data ../datasets/imagenet --img 224`
- **Speed** metrics are averaged over 100 inference images using a Google [Colab Pro V100 High-RAM instance](https://colab.research.google.com/signup).<br>Reproduce using: `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
- **Export** to ONNX (FP32) and TensorRT (FP16) was performed using `export.py`.<br>Reproduce using: `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`

</details>
</details>

### Train

YOLOv5 classification training supports automatic download for datasets like [MNIST](https://docs.ultralytics.com/datasets/classify/mnist/), [Fashion-MNIST](https://docs.ultralytics.com/datasets/classify/fashion-mnist/), [CIFAR10](https://docs.ultralytics.com/datasets/classify/cifar10/), [CIFAR100](https://docs.ultralytics.com/datasets/classify/cifar100/), [Imagenette](https://docs.ultralytics.com/datasets/classify/imagenette/), [Imagewoof](https://docs.ultralytics.com/datasets/classify/imagewoof/), and [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) using the `--data` argument. For example, start training on MNIST with `--data mnist`.

```bash
# Train on a single GPU using CIFAR-100 dataset
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128

# Train using Multi-GPU DDP on ImageNet dataset
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
```
|
||||
|
||||
### Val
|
||||
|
||||
Validate YOLOv5m-cls accuracy on ImageNet-1k dataset:
|
||||
Validate the accuracy of the YOLOv5m-cls model on the ImageNet-1k validation dataset:
|
||||
|
||||
```bash
|
||||
bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
|
||||
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate
|
||||
# Download ImageNet validation split (6.3GB, 50,000 images)
|
||||
bash data/scripts/get_imagenet.sh --val
|
||||
|
||||
# Validate the model
|
||||
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224
|
||||
```
|
||||
|
||||
### Predict
|
||||
|
||||
Use pretrained YOLOv5s-cls.pt to predict bus.jpg:
|
||||
Use the pretrained YOLOv5s-cls.pt model to classify the image `bus.jpg`:
|
||||
|
||||
```bash
|
||||
# Run prediction
|
||||
python classify/predict.py --weights yolov5s-cls.pt --source data/images/bus.jpg
|
||||
```
|
||||
|
||||
```python
|
||||
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt") # load from PyTorch Hub
|
||||
import torch

# Load model from PyTorch Hub
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")
|
||||
```
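
The loaded classifier expects a preprocessed image tensor rather than a file path. The snippet below is a minimal sketch of one way to run it, assuming standard ImageNet-style preprocessing (224-pixel center crop, ImageNet mean/std) and a raw-logits output; it is not the repository's official inference pipeline.

```python
import torch
from PIL import Image
from torchvision import transforms

# Load the classification model from PyTorch Hub
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")
model.eval()

# ImageNet-style preprocessing (assumed; matches the 224-pixel training size above)
preprocess = transforms.Compose(
    [
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ]
)

im = preprocess(Image.open("data/images/bus.jpg")).unsqueeze(0)  # shape (1, 3, 224, 224)

with torch.no_grad():
    logits = model(im)  # raw class scores, e.g. shape (1, 1000) for ImageNet
top5 = logits.softmax(1).topk(5)
print(top5.indices.tolist(), top5.values.tolist())  # top-5 class indices and probabilities
```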
|
||||
|
||||
### Export
|
||||
|
||||
Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT:
|
||||
Export trained YOLOv5s-cls, ResNet50, and EfficientNet_b0 models to ONNX and TensorRT formats:
|
||||
|
||||
```bash
|
||||
# Export models
|
||||
python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
|
||||
```
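
To sanity-check an exported model, the sketch below loads the ONNX classifier with [ONNX Runtime](https://onnxruntime.ai/). It assumes the command above produced `yolov5s-cls.onnx` with a single `(1, 3, 224, 224)` float32 input and a single class-score output:

```python
import numpy as np
import onnxruntime as ort

# Open the exported classifier (file name assumed from the export command above)
session = ort.InferenceSession("yolov5s-cls.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

x = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input batch
(logits,) = session.run(None, {input_name: x})  # assumes a single output tensor
print(logits.shape)  # e.g. (1, 1000) for an ImageNet-trained model
```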
|
||||
|
||||
</details>
|
||||
|
||||
## <div align="center">Environments</div>
|
||||
## ☁️ Environments
|
||||
|
||||
Get started in seconds with our verified environments. Click each icon below for details.
|
||||
Get started quickly with our pre-configured environments. Click the icons below for setup details.
|
||||
|
||||
<div align="center">
|
||||
<a href="https://bit.ly/yolov5-paperspace-notebook">
|
||||
<a href="https://bit.ly/yolov5-paperspace-notebook" title="Run on Paperspace Gradient">
|
||||
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-gradient.png" width="10%" /></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
|
||||
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb">
|
||||
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb" title="Open in Google Colab">
|
||||
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-colab-small.png" width="10%" /></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
|
||||
<a href="https://www.kaggle.com/models/ultralytics/yolov5">
|
||||
<a href="https://www.kaggle.com/models/ultralytics/yolov5" title="Open in Kaggle">
|
||||
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-kaggle-small.png" width="10%" /></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
|
||||
<a href="https://hub.docker.com/r/ultralytics/yolov5">
|
||||
<a href="https://hub.docker.com/r/ultralytics/yolov5" title="Pull Docker Image">
|
||||
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-docker-small.png" width="10%" /></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
|
||||
<a href="https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/">
|
||||
<a href="https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/" title="AWS Quickstart Guide">
|
||||
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-aws-small.png" width="10%" /></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
|
||||
<a href="https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/">
|
||||
<a href="https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/" title="GCP Quickstart Guide">
|
||||
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-gcp-small.png" width="10%" /></a>
|
||||
</div>
|
||||
|
||||
## <div align="center">Contribute</div>
|
||||
## 🤝 Contribute
|
||||
|
||||
We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and fill out the [YOLOv5 Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors!
|
||||
We welcome your contributions! Making YOLOv5 accessible and effective is a community effort. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started. Share your feedback through the [YOLOv5 Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). Thank you to all our contributors for making YOLOv5 better!
|
||||
|
||||
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->
|
||||
[](https://github.com/ultralytics/yolov5/graphs/contributors)
|
||||
|
||||
<a href="https://github.com/ultralytics/yolov5/graphs/contributors">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" /></a>
|
||||
## 📜 License
|
||||
|
||||
## <div align="center">License</div>
|
||||
Ultralytics provides two licensing options to meet different needs:
|
||||
|
||||
Ultralytics offers two licensing options to accommodate diverse use cases:
|
||||
- **AGPL-3.0 License**: An [OSI-approved](https://opensource.org/license/agpl-v3) open-source license ideal for academic research, personal projects, and testing. It promotes open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) file for details.
|
||||
- **Enterprise License**: Tailored for commercial applications, this license allows seamless integration of Ultralytics software and AI models into commercial products and services, bypassing the open-source requirements of AGPL-3.0. For commercial use cases, please contact us via [Ultralytics Licensing](https://www.ultralytics.com/license).
|
||||
|
||||
- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license) open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) file for more details.
|
||||
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, reach out through [Ultralytics Licensing](https://www.ultralytics.com/license).
|
||||
## 📧 Contact
|
||||
|
||||
## <div align="center">Contact</div>
|
||||
|
||||
For YOLOv5 bug reports and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues), and join our [Discord](https://discord.com/invite/ultralytics) community for questions and discussions!
|
||||
For bug reports and feature requests related to YOLOv5, please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For general questions, discussions, and community support, join our [Discord server](https://discord.com/invite/ultralytics)!
|
||||
|
||||
<br>
|
||||
<div align="center">
|
||||
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
|
||||
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
|
||||
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
|
||||
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
|
||||
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
|
||||
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
|
||||
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
|
||||
</div>
|
||||
|
||||
[tta]: https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation
|
||||
|
|
443
README.zh-CN.md
|
@ -7,95 +7,103 @@
|
|||
[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar)
|
||||
|
||||
<div>
|
||||
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>
|
||||
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv5 Citation"></a>
|
||||
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
|
||||
<a href="https://discord.com/invite/ultralytics"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a> <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a> <a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>
|
||||
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI 测试"></a>
|
||||
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv5 引用"></a>
|
||||
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker 拉取次数"></a>
|
||||
<a href="https://discord.com/invite/ultralytics"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a> <a href="https://community.ultralytics.com/"><img alt="Ultralytics 论坛" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a> <a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>
|
||||
<br>
|
||||
<a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a>
|
||||
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
|
||||
<a href="https://www.kaggle.com/models/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
|
||||
<a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="在 Gradient 上运行"></a>
|
||||
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="在 Colab 中打开"></a>
|
||||
<a href="https://www.kaggle.com/models/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="在 Kaggle 中打开"></a>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
YOLOv5 🚀 是全球最受喜爱的视觉 AI,代表了 <a href="https://www.ultralytics.com/">Ultralytics</a> 在未来视觉 AI 方法上的开源研究成果,这些成果融合了经过数千小时研发的经验和最佳实践。
|
||||
Ultralytics YOLOv5 🚀 是由 [Ultralytics](https://www.ultralytics.com/) 开发的尖端、达到业界顶尖水平(SOTA)的计算机视觉模型。基于 [PyTorch](https://pytorch.org/) 框架,YOLOv5 以其易用性、速度和准确性而闻名。它融合了广泛研究和开发的见解与最佳实践,使其成为各种视觉 AI 任务的热门选择,包括[目标检测](https://docs.ultralytics.com/tasks/detect/)、[图像分割](https://docs.ultralytics.com/tasks/segment/)和[图像分类](https://docs.ultralytics.com/tasks/classify/)。
|
||||
|
||||
我们希望这里提供的资源能帮助您充分发挥 YOLOv5 的优势。请查阅 YOLOv5 的 <a href="https://docs.ultralytics.com/yolov5/">文档</a> 了解详情,如需支持请在 <a href="https://github.com/ultralytics/yolov5/issues/new/choose">GitHub</a> 上提交问题,或加入我们的 <a href="https://discord.com/invite/ultralytics">Discord</a> 社区进行提问和讨论!
|
||||
我们希望这里的资源能帮助您充分利用 YOLOv5。请浏览 [YOLOv5 文档](https://docs.ultralytics.com/yolov5/)获取详细信息,在 [GitHub](https://github.com/ultralytics/yolov5/issues/new/choose) 上提出 issue 以获得支持,并加入我们的 [Discord 社区](https://discord.com/invite/ultralytics)进行提问和讨论!
|
||||
|
||||
若需申请企业许可证,请在 [Ultralytics Licensing](https://www.ultralytics.com/license) 完成相关表单。
|
||||
如需申请企业许可证,请填写 [Ultralytics 授权许可](https://www.ultralytics.com/license) 表格。
|
||||
|
||||
<div align="center">
|
||||
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
||||
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="Ultralytics LinkedIn"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
||||
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="Ultralytics Twitter"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
||||
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="Ultralytics YouTube"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
||||
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="Ultralytics TikTok"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
||||
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="2%" alt="Ultralytics BiliBili"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
|
||||
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="2%" alt="Ultralytics Discord"></a>
|
||||
</div>
|
||||
|
||||
</div>
|
||||
<br>
|
||||
|
||||
## <div align="center">YOLO11 🚀 新品发布</div>
|
||||
## 🚀 YOLO11:下一代进化
|
||||
|
||||
我们很高兴地宣布推出 Ultralytics YOLO11 🚀——我们最先进(SOTA)视觉模型的最新成果!现已在 **[GitHub](https://github.com/ultralytics/ultralytics)** 上发布,YOLO11 延续了我们在速度、准确性和易用性方面的优秀传统。不论您是处理目标检测、图像分割还是图像分类,YOLO11 都能提供多样应用场景下卓越的性能和灵活性。
|
||||
我们激动地宣布推出 **Ultralytics YOLO11** 🚀,这是我们业界顶尖(SOTA)视觉模型的最新进展!YOLO11 现已在 [Ultralytics YOLO GitHub 仓库](https://github.com/ultralytics/ultralytics)发布,它继承了我们速度快、精度高和易于使用的传统。无论您是处理[目标检测](https://docs.ultralytics.com/tasks/detect/)、[实例分割](https://docs.ultralytics.com/tasks/segment/)、[姿态估计](https://docs.ultralytics.com/tasks/pose/)、[图像分类](https://docs.ultralytics.com/tasks/classify/)还是[旋转目标检测 (OBB)](https://docs.ultralytics.com/tasks/obb/),YOLO11 都能提供在多样化应用中脱颖而出所需的性能和多功能性。
|
||||
|
||||
今天就开始体验,释放 YOLO11 的全部潜力吧!请访问 [Ultralytics 文档](https://docs.ultralytics.com/) 获取全面的指南和资源:
|
||||
立即开始,释放 YOLO11 的全部潜力!访问 [Ultralytics 文档](https://docs.ultralytics.com/)获取全面的指南和资源:
|
||||
|
||||
[](https://badge.fury.io/py/ultralytics) [](https://www.pepy.tech/projects/ultralytics)
|
||||
|
||||
```bash
|
||||
# 安装 ultralytics 包
|
||||
pip install ultralytics
|
||||
```
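
安装完成后,即可在 Python 中直接调用 `ultralytics` 包。以下为一个最小示意(模型文件 `yolo11n.pt` 会在首次使用时自动下载;示例图片 URL 仅作演示):

```python
from ultralytics import YOLO

# 加载 YOLO11 预训练模型(首次运行时自动下载权重)
model = YOLO("yolo11n.pt")

# 对示例图片运行推理
results = model("https://ultralytics.com/images/bus.jpg")

# 显示第一张图片的预测结果
results[0].show()
```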
|
||||
|
||||
<div align="center">
|
||||
<a href="https://www.ultralytics.com/yolo" target="_blank">
|
||||
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png"></a>
|
||||
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="Ultralytics YOLO 性能比较"></a>
|
||||
</div>
|
||||
|
||||
## <div align="center">文档</div>
|
||||
## 📚 文档
|
||||
|
||||
请参阅 [YOLOv5 文档](https://docs.ultralytics.com/yolov5/) 获取关于训练、测试和部署的完整指南。下方提供了快速入门示例。
|
||||
请参阅 [YOLOv5 文档](https://docs.ultralytics.com/yolov5/),了解有关训练、测试和部署的完整文档。请参阅下方的快速入门示例。
|
||||
|
||||
<details open>
|
||||
<summary>安装</summary>
|
||||
|
||||
克隆仓库并在 [**Python>=3.8.0**](https://www.python.org/) 环境中安装 [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) 文件中的依赖,包括 [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/)。
|
||||
克隆仓库并在 [**Python>=3.8.0**](https://www.python.org/) 环境中安装依赖项。确保您已安装 [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/)。
|
||||
|
||||
```bash
|
||||
git clone https://github.com/ultralytics/yolov5 # 克隆仓库
|
||||
# 克隆 YOLOv5 仓库
|
||||
git clone https://github.com/ultralytics/yolov5
|
||||
|
||||
# 导航到克隆的目录
|
||||
cd yolov5
|
||||
pip install -r requirements.txt # 安装依赖
|
||||
|
||||
# 安装所需的包
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
<details open>
|
||||
<summary>推理</summary>
|
||||
<summary>使用 PyTorch Hub 进行推理</summary>
|
||||
|
||||
YOLOv5 [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/) 推理示例。[Models](https://github.com/ultralytics/yolov5/tree/master/models) 会自动从最新的 YOLOv5 [发行版](https://github.com/ultralytics/yolov5/releases) 下载。
|
||||
通过 [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/) 使用 YOLOv5 进行推理。[模型](https://github.com/ultralytics/yolov5/tree/master/models) 会自动从最新的 YOLOv5 [发布版本](https://github.com/ultralytics/yolov5/releases)下载。
|
||||
|
||||
```python
|
||||
import torch
|
||||
|
||||
# 加载 YOLOv5 模型(可选:yolov5n, yolov5s, yolov5m, yolov5l, yolov5x)
|
||||
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
|
||||
# 加载 YOLOv5 模型(选项:yolov5n, yolov5s, yolov5m, yolov5l, yolov5x)
|
||||
model = torch.hub.load("ultralytics/yolov5", "yolov5s") # 默认:yolov5s
|
||||
|
||||
# 输入源(URL、文件、PIL、OpenCV、numpy 数组或列表)
|
||||
img = "https://ultralytics.com/images/zidane.jpg"
|
||||
# 定义输入图像源(URL、本地文件、PIL 图像、OpenCV 帧、numpy 数组或列表)
|
||||
img = "https://ultralytics.com/images/zidane.jpg" # 示例图像
|
||||
|
||||
# 执行推理(自动处理批量、调整大小、归一化)
|
||||
# 执行推理(自动处理批处理、调整大小、归一化)
|
||||
results = model(img)
|
||||
|
||||
# 处理结果(可选:.print(), .show(), .save(), .crop(), .pandas())
|
||||
results.print()
|
||||
# 处理结果(选项:.print(), .show(), .save(), .crop(), .pandas())
|
||||
results.print() # 将结果打印到控制台
|
||||
results.show() # 在窗口中显示结果
|
||||
results.save() # 将结果保存到 runs/detect/exp
|
||||
```
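
如需在后续代码中进一步处理检测结果,也可以将其转换为 pandas DataFrame(以下示意沿用上例中的 `results`):

```python
# 以 pandas DataFrame 形式获取第一张图片的检测框
df = results.pandas().xyxy[0]  # 列:xmin, ymin, xmax, ymax, confidence, class, name
print(df.head())
```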
|
||||
|
||||
</details>
|
||||
|
@ -103,19 +111,38 @@ results.print()
|
|||
<details>
|
||||
<summary>使用 detect.py 进行推理</summary>
|
||||
|
||||
`detect.py` 在多种来源上运行推理,自动从最新的 YOLOv5 [发行版](https://github.com/ultralytics/yolov5/releases) 下载 [models](https://github.com/ultralytics/yolov5/tree/master/models),并将结果保存至 `runs/detect`。
|
||||
`detect.py` 脚本在各种来源上运行推理。它会自动从最新的 YOLOv5 [发布版本](https://github.com/ultralytics/yolov5/releases)下载[模型](https://github.com/ultralytics/yolov5/tree/master/models),并将结果保存到 `runs/detect` 目录。
|
||||
|
||||
```bash
|
||||
python detect.py --weights yolov5s.pt --source 0 # 摄像头
|
||||
python detect.py --weights yolov5s.pt --source img.jpg # 图片
|
||||
python detect.py --weights yolov5s.pt --source vid.mp4 # 视频
|
||||
python detect.py --weights yolov5s.pt --source screen # 截图
|
||||
python detect.py --weights yolov5s.pt --source path/ # 目录
|
||||
python detect.py --weights yolov5s.pt --source list.txt # 图片列表
|
||||
python detect.py --weights yolov5s.pt --source list.streams # 流列表
|
||||
python detect.py --weights yolov5s.pt --source 'path/*.jpg' # glob 通配符
|
||||
python detect.py --weights yolov5s.pt --source 'https://youtu.be/LNwODJXcvt4' # YouTube
|
||||
python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP 流
|
||||
# 使用网络摄像头运行推理
|
||||
python detect.py --weights yolov5s.pt --source 0
|
||||
|
||||
# 对本地图像文件运行推理
|
||||
python detect.py --weights yolov5s.pt --source img.jpg
|
||||
|
||||
# 对本地视频文件运行推理
|
||||
python detect.py --weights yolov5s.pt --source vid.mp4
|
||||
|
||||
# 对屏幕截图运行推理
|
||||
python detect.py --weights yolov5s.pt --source screen
|
||||
|
||||
# 对图像目录运行推理
|
||||
python detect.py --weights yolov5s.pt --source path/to/images/
|
||||
|
||||
# 对列出图像路径的文本文件运行推理
|
||||
python detect.py --weights yolov5s.pt --source list.txt
|
||||
|
||||
# 对列出流 URL 的文本文件运行推理
|
||||
python detect.py --weights yolov5s.pt --source list.streams
|
||||
|
||||
# 使用 glob 模式对图像运行推理
|
||||
python detect.py --weights yolov5s.pt --source 'path/to/*.jpg'
|
||||
|
||||
# 对 YouTube 视频 URL 运行推理
|
||||
python detect.py --weights yolov5s.pt --source 'https://youtu.be/LNwODJXcvt4'
|
||||
|
||||
# 对 RTSP、RTMP 或 HTTP 流运行推理
|
||||
python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4'
|
||||
```
|
||||
|
||||
</details>
|
||||
|
@ -123,49 +150,58 @@ python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4' #
|
|||
<details>
|
||||
<summary>训练</summary>
|
||||
|
||||
以下命令重现了 YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) 数据集的结果。[Models](https://github.com/ultralytics/yolov5/tree/master/models) 和 [datasets](https://github.com/ultralytics/yolov5/tree/master/data) 会自动从最新的 YOLOv5 [发行版](https://github.com/ultralytics/yolov5/releases) 下载。使用 V100 GPU 训练 YOLOv5n/s/m/l/x 的时间分别为 1/2/4/6/8 天([多 GPU](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) 可大幅加快训练速度)。建议使用最大的 `--batch-size`,或传入 `--batch-size -1` 以使用 YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092)。下面显示的批量大小基于 V100-16GB。
|
||||
以下命令演示了如何复现 YOLOv5 在 [COCO 数据集](https://docs.ultralytics.com/datasets/detect/coco/)上的结果。[模型](https://github.com/ultralytics/yolov5/tree/master/models)和[数据集](https://github.com/ultralytics/yolov5/tree/master/data)都会自动从最新的 YOLOv5 [发布版本](https://github.com/ultralytics/yolov5/releases)下载。YOLOv5n/s/m/l/x 的训练时间在单个 [NVIDIA V100 GPU](https://www.nvidia.com/en-us/data-center/v100/) 上大约需要 1/2/4/6/8 天。使用[多 GPU 训练](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/)可以显著减少训练时间。请使用硬件允许的最大 `--batch-size`,或使用 `--batch-size -1` 以启用 YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092)。下面显示的批处理大小适用于 V100-16GB GPU。
|
||||
|
||||
```bash
|
||||
# 在 COCO 上训练 YOLOv5n 300 个周期
|
||||
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
|
||||
|
||||
# 在 COCO 上训练 YOLOv5s 300 个周期
|
||||
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size 64
|
||||
|
||||
# 在 COCO 上训练 YOLOv5m 300 个周期
|
||||
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5m.yaml --batch-size 40
|
||||
|
||||
# 在 COCO 上训练 YOLOv5l 300 个周期
|
||||
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5l.yaml --batch-size 24
|
||||
|
||||
# 在 COCO 上训练 YOLOv5x 300 个周期
|
||||
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --batch-size 16
|
||||
```
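
上述命令也可以在 Python 中以编程方式启动。以下为一个最小示意(假设在 yolov5 仓库根目录下运行;`train.run()` 接受与命令行同名的关键字参数,传入 `batch_size=-1` 即可启用 AutoBatch):

```python
# yolov5 仓库根目录下的 train.py 提供了与命令行等价的 run() 接口
import train

# 等价于:python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size 64
train.run(data="coco.yaml", epochs=300, weights="", cfg="yolov5s.yaml", batch_size=64)
```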
|
||||
|
||||
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
|
||||
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png" alt="YOLOv5 训练结果">
|
||||
|
||||
</details>
|
||||
|
||||
<details open>
|
||||
<summary>教程</summary>
|
||||
|
||||
- [训练自定义数据](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/) 🚀 强烈推荐
|
||||
- [获得最佳训练效果的技巧](https://docs.ultralytics.com/guides/model-training-tips/) ☘️
|
||||
- [多 GPU 训练](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/)
|
||||
- [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/) 🌟 新版
|
||||
- [TFLite, ONNX, CoreML, TensorRT 导出](https://docs.ultralytics.com/yolov5/tutorials/model_export/) 🚀
|
||||
- [NVIDIA Jetson 平台部署](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/) 🌟 新版
|
||||
- [测试时数据增强 (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)
|
||||
- [模型集成](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling/)
|
||||
- [模型修剪/稀疏](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity/)
|
||||
- [超参数进化](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution/)
|
||||
- [冻结层进行迁移学习](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers/)
|
||||
- [架构概述](https://docs.ultralytics.com/yolov5/tutorials/architecture_description/) 🌟 新版
|
||||
- [Ultralytics HUB 进行训练和部署 YOLO](https://www.ultralytics.com/hub) 🚀 强烈推荐
|
||||
- [ClearML 日志记录](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration/)
|
||||
- [YOLOv5 与 Neural Magic 的 Deepsparse](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization/)
|
||||
- [Comet 日志记录](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration/) 🌟 新版
|
||||
- **[训练自定义数据](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/)** 🚀 **推荐**:学习如何在您自己的数据集上训练 YOLOv5。
|
||||
- **[获得最佳训练结果的技巧](https://docs.ultralytics.com/guides/model-training-tips/)** ☘️:利用专家技巧提升模型性能。
|
||||
- **[多 GPU 训练](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/)**:使用多个 GPU 加速训练。
|
||||
- **[PyTorch Hub 集成](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/)** 🌟 **新增**:使用 PyTorch Hub 轻松加载模型。
|
||||
- **[模型导出 (TFLite, ONNX, CoreML, TensorRT)](https://docs.ultralytics.com/yolov5/tutorials/model_export/)** 🚀:将您的模型转换为各种部署格式,如 [ONNX](https://onnx.ai/) 或 [TensorRT](https://developer.nvidia.com/tensorrt)。
|
||||
- **[NVIDIA Jetson 部署](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/)** 🌟 **新增**:在 [NVIDIA Jetson](https://developer.nvidia.com/embedded-computing) 设备上部署 YOLOv5。
|
||||
- **[测试时增强 (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)**:使用 TTA 提高预测准确性。
|
||||
- **[模型集成](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling/)**:组合多个模型以获得更好的性能。
|
||||
- **[模型剪枝/稀疏化](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity/)**:优化模型的大小和速度。
|
||||
- **[超参数进化](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution/)**:自动找到最佳训练超参数。
|
||||
- **[使用冻结层的迁移学习](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers/)**:使用[迁移学习](https://www.ultralytics.com/glossary/transfer-learning)高效地将预训练模型应用于新任务。
|
||||
- **[架构摘要](https://docs.ultralytics.com/yolov5/tutorials/architecture_description/)** 🌟 **新增**:了解 YOLOv5 模型架构。
|
||||
- **[Ultralytics HUB 训练](https://www.ultralytics.com/hub)** 🚀 **推荐**:使用 Ultralytics HUB 训练和部署 YOLO 模型。
|
||||
- **[ClearML 日志记录](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration/)**:与 [ClearML](https://clear.ml/) 集成以进行实验跟踪。
|
||||
- **[Neural Magic DeepSparse 集成](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization/)**:使用 DeepSparse 加速推理。
|
||||
- **[Comet 日志记录](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration/)** 🌟 **新增**:使用 [Comet ML](https://www.comet.com/) 记录实验。
|
||||
|
||||
</details>
|
||||
|
||||
## <div align="center">集成</div>
|
||||
## 🧩 集成
|
||||
|
||||
我们与领先 AI 平台的深度集成拓展了 Ultralytics 解决方案的功能,提升了数据集标注、训练、可视化和模型管理等任务的效率。了解 Ultralytics 如何与 [W&B](https://docs.wandb.ai/guides/integrations/ultralytics/)、[Comet](https://bit.ly/yolov8-readme-comet)、[Roboflow](https://roboflow.com/?ref=ultralytics) 以及 [OpenVINO](https://docs.ultralytics.com/integrations/openvino/) 合作,优化您的 AI 工作流程。
|
||||
我们与领先 AI 平台的关键集成扩展了 Ultralytics 产品的功能,增强了诸如数据集标注、训练、可视化和模型管理等任务。了解 Ultralytics 如何与 [Weights & Biases](https://docs.ultralytics.com/integrations/weights-biases/)、[Comet ML](https://docs.ultralytics.com/integrations/comet/)、[Roboflow](https://docs.ultralytics.com/integrations/roboflow/) 和 [Intel OpenVINO](https://docs.ultralytics.com/integrations/openvino/) 等合作伙伴协作,优化您的 AI 工作流程。在 [Ultralytics 集成](https://docs.ultralytics.com/integrations/) 探索更多信息。
|
||||
|
||||
<br>
|
||||
<a href="https://www.ultralytics.com/hub" target="_blank">
|
||||
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics 主动学习集成"></a>
|
||||
<a href="https://docs.ultralytics.com/integrations/" target="_blank">
|
||||
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics 主动学习集成">
|
||||
</a>
|
||||
<br>
|
||||
<br>
|
||||
|
||||
|
@ -173,298 +209,305 @@ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --
|
|||
<a href="https://www.ultralytics.com/hub">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-ultralytics-hub.png" width="10%" alt="Ultralytics HUB logo"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
|
||||
<a href="https://docs.wandb.ai/guides/integrations/ultralytics/">
|
||||
<a href="https://docs.ultralytics.com/integrations/weights-biases/">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-wb.png" width="10%" alt="Weights & Biases logo"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
|
||||
<a href="https://bit.ly/yolov8-readme-comet">
|
||||
<a href="https://docs.ultralytics.com/integrations/comet/">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-comet.png" width="10%" alt="Comet ML logo"></a>
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
|
||||
<a href="https://bit.ly/yolov5-neuralmagic">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="NeuralMagic logo"></a>
|
||||
<a href="https://docs.ultralytics.com/integrations/neural-magic/">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="Neural Magic logo"></a>
|
||||
</div>
|
||||
|
||||
| Ultralytics HUB 🚀 | W&B | Comet ⭐ 新版 | Neural Magic |
|
||||
| :----------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------: |
|
||||
| 简化 YOLO 工作流程:使用 [Ultralytics HUB](https://www.ultralytics.com/hub) 轻松标注、训练和部署。立即体验! | 使用 [Weights & Biases](https://docs.wandb.ai/guides/integrations/ultralytics/) 跟踪实验、超参数和结果。 | 永久免费,[Comet](https://bit.ly/yolov5-readme-comet) 允许您保存 YOLOv5 模型、恢复训练,并以交互方式可视化和调试预测。 | 使用 [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) 使 YOLO11 推理速度提升最高 6 倍。 |
|
||||
| Ultralytics HUB 🌟 | Weights & Biases | Comet | Neural Magic |
|
||||
| :------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------: |
|
||||
| 简化 YOLO 工作流程:使用 [Ultralytics HUB](https://hub.ultralytics.com) 轻松标注、训练和部署。立即试用! | 使用 [Weights & Biases](https://docs.ultralytics.com/integrations/weights-biases/) 跟踪实验、超参数和结果。 | 永久免费的 [Comet ML](https://docs.ultralytics.com/integrations/comet/) 让您保存 YOLO 模型、恢复训练并交互式地可视化预测。 | 使用 [Neural Magic DeepSparse](https://docs.ultralytics.com/integrations/neural-magic/) 将 YOLO 推理速度提高多达 6 倍。 |
|
||||
|
||||
## <div align="center">Ultralytics HUB</div>
|
||||
## ⭐ Ultralytics HUB
|
||||
|
||||
体验无缝 AI 体验,使用 [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐ ——这是一个无需编写代码即可进行数据可视化、YOLOv5 与 YOLOv8 🚀 模型训练和部署的一体化解决方案。借助我们前沿的平台和用户友好的 [Ultralytics App](https://www.ultralytics.com/app-install),您能轻松将图像转化为可操作的洞察并让您的 AI 创想成真。立即开始您的【免费】之旅吧!
|
||||
通过 [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐ 体验无缝的 AI 开发,这是构建、训练和部署[计算机视觉](https://www.ultralytics.com/glossary/computer-vision-cv)模型的终极平台。可视化数据集,训练 [YOLOv5](https://docs.ultralytics.com/models/yolov5/) 和 [YOLOv8](https://docs.ultralytics.com/models/yolov8/) 🚀 模型,并将它们部署到实际应用中,无需编写任何代码。使用我们尖端的工具和用户友好的 [Ultralytics App](https://www.ultralytics.com/app-install) 将图像转化为可操作的见解。今天就**免费**开始您的旅程吧!
|
||||
|
||||
<a align="center" href="https://www.ultralytics.com/hub" target="_blank">
|
||||
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
|
||||
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB 平台截图"></a>
|
||||
|
||||
## <div align="center">为何选择 YOLOv5</div>
|
||||
## 🤔 为何选择 YOLOv5?
|
||||
|
||||
YOLOv5 的设计初衷是让入门变得极为简单且易于学习。我们专注于真实世界的结果。
|
||||
YOLOv5 的设计旨在简单易用。我们优先考虑实际性能和可访问性。
|
||||
|
||||
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png"></p>
|
||||
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png" alt="YOLOv5 性能图表"></p>
|
||||
<details>
|
||||
<summary>YOLOv5-P5 640 图示</summary>
|
||||
<summary>YOLOv5-P5 640 图表</summary>
|
||||
|
||||
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png"></p>
|
||||
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png" alt="YOLOv5 P5 640 性能图表"></p>
|
||||
</details>
|
||||
<details>
|
||||
<summary>图示说明</summary>
|
||||
<summary>图表说明</summary>
|
||||
|
||||
- **COCO AP val** 表示在 5000 张 [COCO val2017](http://cocodataset.org) 图像数据集上、推理尺寸从 256 到 1536 不同情况下测量的 mAP@0.5:0.95 指标。
|
||||
- **GPU Speed** 测量在 [COCO val2017](http://cocodataset.org) 数据集上使用 [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) V100 实例,以批量大小 32 计算的平均每张图推理时间。
|
||||
- **EfficientDet** 数据来自 [google/automl](https://github.com/google/automl),批量大小为 8。
|
||||
- **重现** 命令:`python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
|
||||
- **COCO AP val** 表示在 [交并比 (IoU)](https://www.ultralytics.com/glossary/intersection-over-union-iou) 阈值从 0.5 到 0.95 范围内的[平均精度均值 (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map),在包含 5000 张图像的 [COCO val2017 数据集](https://docs.ultralytics.com/datasets/detect/coco/)上,使用各种推理尺寸(256 到 1536 像素)测量得出。
|
||||
- **GPU Speed** 使用批处理大小为 32 的 [AWS p3.2xlarge V100 实例](https://aws.amazon.com/ec2/instance-types/p3/),测量在 [COCO val2017 数据集](https://docs.ultralytics.com/datasets/detect/coco/)上每张图像的平均推理时间。
|
||||
- **EfficientDet** 数据来源于 [google/automl 仓库](https://github.com/google/automl),批处理大小为 8。
|
||||
- **复现**这些结果请使用命令:`python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
|
||||
|
||||
</details>
|
||||
|
||||
### 预训练检查点
|
||||
### 预训练权重
|
||||
|
||||
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | Speed<br><sup>CPU b1<br>(ms) | Speed<br><sup>V100 b1<br>(ms) | Speed<br><sup>V100 b32<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
|
||||
| ----------------------------------------------------------------------------------------------- | --------------------- | -------------------- | ----------------- | ---------------------------- | ----------------------------- | ------------------------------ | ------------------ | ---------------------- |
|
||||
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
|
||||
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
|
||||
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
|
||||
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
|
||||
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
|
||||
| | | | | | | | | |
|
||||
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
|
||||
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
|
||||
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
|
||||
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
|
||||
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+ [TTA] | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
|
||||
此表显示了在 COCO 数据集上训练的各种 YOLOv5 模型的性能指标。
|
||||
|
||||
| 模型 | 尺寸<br><sup>(像素) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | 速度<br><sup>CPU b1<br>(毫秒) | 速度<br><sup>V100 b1<br>(毫秒) | 速度<br><sup>V100 b32<br>(毫秒) | 参数<br><sup>(M) | FLOPs<br><sup>@640 (B) |
|
||||
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------- | -------------------- | ----------------- | ----------------------------- | ------------------------------ | ------------------------------- | ---------------- | ---------------------- |
|
||||
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
|
||||
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
|
||||
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
|
||||
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
|
||||
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
|
||||
| | | | | | | | | |
|
||||
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
|
||||
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
|
||||
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
|
||||
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
|
||||
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+ [[TTA]](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
|
||||
|
||||
<details>
|
||||
<summary>表格说明</summary>
|
||||
|
||||
- 所有检查点均使用默认设置训练 300 个周期。Nano 与 Small 模型使用 [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) 超参数,其它模型使用 [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml)。
|
||||
- **mAP<sup>val</sup>** 指标为在 [COCO val2017](http://cocodataset.org) 数据集上单模型单尺度计算的结果。<br>重现命令:`python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
|
||||
- **Speed** 为在 [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) 实例中基于 COCO val 图像平均推理时间(批量为 1)。不包含 NMS 时间(约 1 ms/图)。<br>重现命令:`python val.py --data coco.yaml --img 640 --task speed --batch 1`
|
||||
- **TTA** [测试时数据增强](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) 包括反射翻转和缩放增强。<br>重现命令:`python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
|
||||
- 所有预训练权重均使用默认设置训练了 300 个周期。Nano (n) 和 Small (s) 模型使用 [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) 超参数,而 Medium (m)、Large (l) 和 Extra-Large (x) 模型使用 [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml)。
|
||||
- **mAP<sup>val</sup>** 值表示在 [COCO val2017 数据集](https://docs.ultralytics.com/datasets/detect/coco/)上的单模型、单尺度性能。<br>复现请使用:`python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
|
||||
- **速度**指标是在 [AWS p3.2xlarge V100 实例](https://aws.amazon.com/ec2/instance-types/p3/)上对 COCO val 图像进行平均测量的。不包括非极大值抑制 (NMS) 时间(约 1 毫秒/图像)。<br>复现请使用:`python val.py --data coco.yaml --img 640 --task speed --batch 1`
|
||||
- **TTA** ([测试时增强](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)) 包括反射和尺度增强以提高准确性。<br>复现请使用:`python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
|
||||
|
||||
</details>
|
||||
|
||||
## <div align="center">分割</div>
|
||||
## 🖼️ 分割
|
||||
|
||||
我们全新推出的 YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) 实例分割模型是目前全球最快且最精准的,其性能超越了所有现有 [SOTA 基准](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco)。我们将其训练、验证和部署过程简化到了极致。详细信息请参阅我们的 [发行说明](https://github.com/ultralytics/yolov5/releases/v7.0) ,同时访问我们的 [YOLOv5 分割 Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) 获取快速入门教程。
|
||||
YOLOv5 [v7.0 版本](https://github.com/ultralytics/yolov5/releases/v7.0) 引入了[实例分割](https://docs.ultralytics.com/tasks/segment/)模型,达到了业界顶尖的性能。这些模型设计用于轻松训练、验证和部署。有关完整详细信息,请参阅[发布说明](https://github.com/ultralytics/yolov5/releases/v7.0),并探索 [YOLOv5 分割 Colab 笔记本](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb)以获取快速入门示例。
|
||||
|
||||
<details>
|
||||
<summary>分割检查点</summary>
|
||||
<summary>分割预训练权重</summary>
|
||||
|
||||
<div align="center">
|
||||
<a align="center" href="https://www.ultralytics.com/yolo" target="_blank">
|
||||
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png"></a>
|
||||
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png" alt="YOLOv5 分割性能图表"></a>
|
||||
</div>
|
||||
|
||||
我们在 A100 GPU 上以图像尺寸 640 对 COCO 数据集训练了 YOLOv5 分割模型共 300 个周期。我们将所有模型导出为 ONNX FP32 以进行 CPU 速度测试,及 TensorRT FP16 以进行 GPU 速度测试。所有速度测试均在 Google [Colab Pro](https://colab.research.google.com/signup) 笔记本上进行,以便于结果重现。
|
||||
YOLOv5 分割模型在 [COCO 数据集](https://docs.ultralytics.com/datasets/segment/coco/)上使用 A100 GPU 以 640 像素的图像大小训练了 300 个周期。模型导出为 [ONNX](https://onnx.ai/) FP32 用于 CPU 速度测试,导出为 [TensorRT](https://developer.nvidia.com/tensorrt) FP16 用于 GPU 速度测试。所有速度测试均在 Google [Colab Pro](https://colab.research.google.com/signup) 笔记本上进行,以确保可复现性。
|
||||
|
||||
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Train time<br><sup>300 epochs<br>A100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TRT A100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
|
||||
| ------------------------------------------------------------------------------------------ | --------------------- | -------------------- | --------------------- | --------------------------------------------- | ------------------------------ | ------------------------------ | ------------------ | ---------------------- |
|
||||
| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** |
|
||||
| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 |
|
||||
| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 |
|
||||
| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 |
|
||||
| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 |
|
||||
| 模型 | 尺寸<br><sup>(像素) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | 训练时间<br><sup>300 周期<br>A100 (小时) | 速度<br><sup>ONNX CPU<br>(毫秒) | 速度<br><sup>TRT A100<br>(毫秒) | 参数<br><sup>(M) | FLOPs<br><sup>@640 (B) |
|
||||
| ------------------------------------------------------------------------------------------ | ------------------- | -------------------- | --------------------- | ---------------------------------------- | ------------------------------- | ------------------------------- | ---------------- | ---------------------- |
|
||||
| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** |
|
||||
| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 |
|
||||
| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 |
|
||||
| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 |
|
||||
| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 |
|
||||
|
||||
- 所有检查点均使用 SGD 优化器(`lr0=0.01`,`weight_decay=5e-5`)以图像尺寸 640 训练 300 个周期,且均采用默认设置。<br>训练日志记录于 https://wandb.ai/glenn-jocher/YOLOv5_v70_official
|
||||
- **准确度** 为在 COCO 数据集上单模型单尺度的结果。<br>重现命令:`python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
|
||||
- **速度** 为在 Google [Colab Pro](https://colab.research.google.com/signup) A100 高内存实例上,针对 100 张推理图像计算的平均推理速度。数值仅表示推理速度(NMS 每图约增加 1ms)。<br>重现命令:`python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
|
||||
- **导出** 为 ONNX FP32 及 TensorRT FP16 均通过 `export.py` 执行。<br>重现命令:`python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`
|
||||
- 所有预训练权重均使用 SGD 优化器,`lr0=0.01` 和 `weight_decay=5e-5`,在 640 像素的图像大小下,使用默认设置训练了 300 个周期。<br>训练运行记录在 [https://wandb.ai/glenn-jocher/YOLOv5_v70_official](https://wandb.ai/glenn-jocher/YOLOv5_v70_official)。
|
||||
- **准确度**值表示在 COCO 数据集上的单模型、单尺度性能。<br>复现请使用:`python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
|
||||
- **速度**指标是在 [Colab Pro A100 High-RAM 实例](https://colab.research.google.com/signup)上对 100 张推理图像进行平均测量的。值仅表示推理速度(NMS 约增加 1 毫秒/图像)。<br>复现请使用:`python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
|
||||
- **导出**到 ONNX (FP32) 和 TensorRT (FP16) 是使用 `export.py` 完成的。<br>复现请使用:`python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>分割使用示例 <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/segment/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></summary>
|
||||
<summary>分割使用示例 <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/segment/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="在 Colab 中打开"></a></summary>
|
||||
|
||||
### 训练
|
||||
|
||||
YOLOv5 分割训练支持通过 `--data coco128-seg.yaml` 参数自动下载 COCO128-seg 分割数据集,也支持通过执行 `bash data/scripts/get_coco.sh --train --val --segments` 手动下载 COCO-segments 数据集,然后运行 `python train.py --data coco.yaml`。
|
||||
YOLOv5 分割训练支持通过 `--data coco128-seg.yaml` 参数自动下载 [COCO128-seg 数据集](https://docs.ultralytics.com/datasets/segment/coco8-seg/)。对于完整的 [COCO-segments 数据集](https://docs.ultralytics.com/datasets/segment/coco/),请使用 `bash data/scripts/get_coco.sh --train --val --segments` 手动下载,然后使用 `python train.py --data coco.yaml` 进行训练。
|
||||
|
||||
```bash
|
||||
# 单 GPU 训练
|
||||
# 在单个 GPU 上训练
|
||||
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640
|
||||
|
||||
# 多 GPU DDP 训练
|
||||
# 使用多 GPU 分布式数据并行 (DDP) 进行训练
|
||||
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3
|
||||
```
|
||||
|
||||
### 验证
|
||||
|
||||
在 COCO 数据集上验证 YOLOv5s-seg 的 mask mAP:
|
||||
在 COCO 数据集上验证 YOLOv5s-seg 的掩码[平均精度均值 (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map):
|
||||
|
||||
```bash
|
||||
bash data/scripts/get_coco.sh --val --segments # 下载 COCO val 分割集(780MB,5000 张图)
|
||||
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # 验证
|
||||
# 下载 COCO 验证分割集 (780MB, 5000 张图像)
|
||||
bash data/scripts/get_coco.sh --val --segments
|
||||
|
||||
# 验证模型
|
||||
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640
|
||||
```
|
||||
|
||||
### 预测
|
||||
|
||||
使用预训练的 YOLOv5m-seg.pt 对 bus.jpg 进行预测:
|
||||
使用预训练的 YOLOv5m-seg.pt 模型对 `bus.jpg` 执行分割:
|
||||
|
||||
```bash
|
||||
# 运行预测
|
||||
python segment/predict.py --weights yolov5m-seg.pt --source data/images/bus.jpg
|
||||
```
|
||||
|
||||
```python
|
||||
model = torch.hub.load(
|
||||
"ultralytics/yolov5", "custom", "yolov5m-seg.pt"
|
||||
) # 从 PyTorch Hub 加载(注意:目前推理尚未支持)
|
||||
import torch

# 从 PyTorch Hub 加载模型(注意:推理支持可能有所不同)
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5m-seg.pt")
|
||||
```
|
||||
|
||||
|  |  |
|
||||
| ---------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
|
||||
|  |  |
|
||||
| :-----------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------: |
|
||||
|
||||
### 导出
|
||||
|
||||
将 YOLOv5s-seg 模型导出为 ONNX 与 TensorRT 格式:
|
||||
将 YOLOv5s-seg 模型导出为 ONNX 和 TensorRT 格式:
|
||||
|
||||
```bash
|
||||
# 导出模型
|
||||
python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
## <div align="center">分类</div>
|
||||
## 🏷️ 分类
|
||||
|
||||
YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases) 新增了对分类模型的训练、验证和部署支持!详细信息请参阅我们的 [发行说明](https://github.com/ultralytics/yolov5/releases/v6.2) ,同时访问我们的 [YOLOv5 分类 Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) 获取快速入门教程。
|
||||
YOLOv5 [v6.2 版本](https://github.com/ultralytics/yolov5/releases/v6.2) 引入了对[图像分类](https://docs.ultralytics.com/tasks/classify/)模型训练、验证和部署的支持。请查看[发布说明](https://github.com/ultralytics/yolov5/releases/v6.2)了解详细信息,并参阅 [YOLOv5 分类 Colab 笔记本](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb)获取快速入门指南。
|
||||
|
||||
<details>
|
||||
<summary>分类检查点</summary>
|
||||
<summary>分类预训练权重</summary>
|
||||
|
||||
<br>
|
||||
|
||||
我们在 ImageNet 数据集上训练了 YOLOv5-cls 分类模型,共训练 90 个周期,使用 4 个 A100 实例进行训练。同时为了对比,我们还训练了 ResNet 和 EfficientNet 模型,均采用相同的默认训练设置。所有模型均导出为 ONNX FP32 以进行 CPU 速度测试,并导出为 TensorRT FP16 以进行 GPU 速度测试。所有速度测试均在 Google [Colab Pro](https://colab.research.google.com/signup) 上进行,以方便结果重现。
|
||||
YOLOv5-cls 分类模型在 [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) 上使用 4xA100 实例训练了 90 个周期。[ResNet](https://arxiv.org/abs/1512.03385) 和 [EfficientNet](https://arxiv.org/abs/1905.11946) 模型在相同设置下一起训练以进行比较。模型导出为 [ONNX](https://onnx.ai/) FP32(用于 CPU 速度测试)和 [TensorRT](https://developer.nvidia.com/tensorrt) FP16(用于 GPU 速度测试)。所有速度测试均在 Google [Colab Pro](https://colab.research.google.com/signup) 上运行,以确保可复现性。

| Model                                                                                                | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Training<br><sup>90 epochs<br>4xA100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TensorRT V100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@224 (B) |
| -------------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | -------------------------------------------- | ------------------------------ | ----------------------------------- | ------------------ | ---------------------- |
| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt)          | 224                   | 64.6             | 85.4             | 7:59                                         | **3.3**                        | **0.5**                             | **2.5**            | **0.5**                |
| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt)          | 224                   | 71.5             | 90.2             | 8:09                                         | 6.6                            | 0.6                                 | 5.4                | 1.4                    |
| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-cls.pt)          | 224                   | 75.9             | 92.9             | 10:06                                        | 15.5                           | 0.9                                 | 12.9               | 3.9                    |
| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-cls.pt)          | 224                   | 78.0             | 94.0             | 11:56                                        | 26.9                           | 1.4                                 | 26.5               | 8.5                    |
| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-cls.pt)          | 224                   | **79.0**         | **94.4**         | 15:04                                        | 54.3                           | 1.8                                 | 48.1               | 15.9                   |
|                                                                                                      |                       |                  |                  |                                              |                                |                                     |                    |                        |
| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet18.pt)                | 224                   | 70.3             | 89.5             | **6:47**                                     | 11.2                           | 0.5                                 | 11.7               | 3.7                    |
| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet34.pt)                | 224                   | 73.9             | 91.8             | 8:33                                         | 20.6                           | 0.9                                 | 21.8               | 7.4                    |
| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet50.pt)                | 224                   | 76.8             | 93.4             | 11:10                                        | 23.4                           | 1.0                                 | 25.6               | 8.5                    |
| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet101.pt)              | 224                   | 78.5             | 94.3             | 17:10                                        | 42.1                           | 1.9                                 | 44.5               | 15.9                   |
|                                                                                                      |                       |                  |                  |                                              |                                |                                     |                    |                        |
| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b0.pt)  | 224                   | 75.1             | 92.4             | 13:03                                        | 12.5                           | 1.3                                 | 5.3                | 1.0                    |
| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b1.pt)  | 224                   | 76.4             | 93.2             | 17:04                                        | 14.9                           | 1.6                                 | 7.8                | 1.5                    |
| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b2.pt)  | 224                   | 76.6             | 93.4             | 17:10                                        | 15.9                           | 1.6                                 | 9.1                | 1.7                    |
| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b3.pt)  | 224                   | 77.7             | 94.0             | 19:19                                        | 18.9                           | 1.9                                 | 12.2               | 2.4                    |

<details>
<summary>Table Notes (click to expand)</summary>

- All checkpoints were trained for 90 epochs with the SGD optimizer (`lr0=0.001`, `weight_decay=5e-5`) at an image size of 224 pixels, using default settings.<br>Training runs are logged at [https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2](https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2).
- **Accuracy** values (top-1 and top-5) are single-model, single-scale results on the [ImageNet-1k dataset](https://docs.ultralytics.com/datasets/classify/imagenet/).<br>Reproduce with: `python classify/val.py --data ../datasets/imagenet --img 224`
- **Speed** metrics are averaged over 100 inference images on a Google [Colab Pro V100 High-RAM instance](https://colab.research.google.com/signup).<br>Reproduce with: `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
- **Export** to ONNX (FP32) and TensorRT (FP16) is performed with `export.py`.<br>Reproduce with: `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`

</details>
</details>

<details>
<summary>Classification Usage Examples <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/classify/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a></summary>

### Train

YOLOv5 classification training supports automatic download of datasets such as [MNIST](https://docs.ultralytics.com/datasets/classify/mnist/), [Fashion-MNIST](https://docs.ultralytics.com/datasets/classify/fashion-mnist/), [CIFAR10](https://docs.ultralytics.com/datasets/classify/cifar10/), [CIFAR100](https://docs.ultralytics.com/datasets/classify/cifar100/), [Imagenette](https://docs.ultralytics.com/datasets/classify/imagenette/), [Imagewoof](https://docs.ultralytics.com/datasets/classify/imagewoof/), and [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) via the `--data` argument. For example, start training on MNIST with `--data mnist`.

```bash
# Train on a single GPU using the CIFAR-100 dataset
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128

# Train on the ImageNet dataset using multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
```

### Validate

Validate the accuracy of the YOLOv5m-cls model on the ImageNet-1k validation dataset:

```bash
# Download the ImageNet validation split (6.3GB, 50,000 images)
bash data/scripts/get_imagenet.sh --val

# Validate the model
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224
```

### Predict

Use the pretrained YOLOv5s-cls.pt model to classify the image `bus.jpg`:

```bash
# Run prediction
python classify/predict.py --weights yolov5s-cls.pt --source data/images/bus.jpg
```

```python
# Load model from PyTorch Hub
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")
```
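
The hub call above returns a raw classification network, so inputs must be preprocessed manually. A minimal inference sketch, assuming `torchvision` is installed and using standard ImageNet normalization at the model's 224-pixel training size (the image path is just an example):

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")
model.eval()

# Standard ImageNet-style preprocessing at the 224-pixel training resolution
preprocess = transforms.Compose(
    [
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ]
)
im = preprocess(Image.open("data/images/bus.jpg")).unsqueeze(0)  # 1x3x224x224 tensor

with torch.no_grad():
    probs = F.softmax(model(im), dim=1)  # class probabilities over ImageNet-1k
print(int(probs.argmax()))  # index of the top-1 predicted class
```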

### Export

Export trained YOLOv5s-cls, ResNet50, and EfficientNet_b0 models to ONNX and TensorRT formats:

```bash
# Export the models
python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
```

</details>

## ☁️ Environments

Get started quickly with our pre-configured environments. Click the icons below for setup details.

<div align="center">
<a href="https://bit.ly/yolov5-paperspace-notebook" title="Run on Paperspace Gradient">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-gradient.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb" title="Open in Google Colab">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-colab-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://www.kaggle.com/models/ultralytics/yolov5" title="Open in Kaggle">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-kaggle-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://hub.docker.com/r/ultralytics/yolov5" title="Pull Docker Image">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-docker-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/" title="AWS Quickstart Guide">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-aws-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/" title="GCP Quickstart Guide">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-gcp-small.png" width="10%" /></a>
</div>

## 🤝 Contribute

We welcome your contributions! Making YOLOv5 accessible and effective is a community effort. See our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and share your feedback through the [YOLOv5 Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). Thank you to everyone who contributes to making YOLOv5 better!

<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->

[![Ultralytics open-source contributors](https://raw.githubusercontent.com/ultralytics/assets/main/im/image-contributors.png)](https://github.com/ultralytics/yolov5/graphs/contributors)

## 📜 License

Ultralytics offers two licensing options to suit different needs:

- **AGPL-3.0 License**: An [OSI-approved](https://opensource.org/license/agpl-v3) open-source license, ideal for academic research, personal projects, and testing. It promotes open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) file for details.
- **Enterprise License**: Tailored for commercial applications, this license permits seamless integration of Ultralytics software and AI models into commercial products and services, bypassing the open-source requirements of AGPL-3.0. For commercial use cases, please contact us via [Ultralytics Licensing](https://www.ultralytics.com/license).

## 📧 Contact

For YOLOv5 bug reports and feature requests, please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For general questions, discussions, and community support, join our [Discord server](https://discord.com/invite/ultralytics)!

<br>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
</div>

[tta]: https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation

@@ -1,5 +1,6 @@
#!/bin/bash
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Download latest models from https://github.com/ultralytics/yolov5/releases
# Example usage: bash data/scripts/download_weights.sh
# parent

@@ -1,5 +1,6 @@
#!/bin/bash
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Download COCO 2017 dataset http://cocodataset.org
# Example usage: bash data/scripts/get_coco.sh
# parent

@@ -1,5 +1,6 @@
#!/bin/bash
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: bash data/scripts/get_coco128.sh
# parent

@@ -1,5 +1,6 @@
#!/bin/bash
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Download ILSVRC2012 ImageNet dataset https://image-net.org
# Example usage: bash data/scripts/get_imagenet.sh
# parent

@@ -510,7 +510,7 @@ def export_paddle(model, im, file, metadata, prefix=colorstr("PaddlePaddle:")):
        $ pip install "paddlepaddle>=3.0.0" x2paddle
        ```
    """
    check_requirements(("paddlepaddle>=3.0.0", "x2paddle"))
    import x2paddle
    from x2paddle.convert import pytorch2paddle

@@ -648,20 +648,32 @@ class DetectMultiBackend(nn.Module):
            stride, names = int(meta["stride"]), meta["names"]
        elif tfjs:  # TF.js
            raise NotImplementedError("ERROR: YOLOv5 TF.js inference is not supported")
        # PaddlePaddle
        elif paddle:
            LOGGER.info(f"Loading {w} for PaddlePaddle inference...")
            check_requirements("paddlepaddle-gpu" if cuda else "paddlepaddle>=3.0.0")
            import paddle.inference as pdi

            w = Path(w)
            if w.is_dir():
                model_file = next(w.rglob("*.json"), None)
                params_file = next(w.rglob("*.pdiparams"), None)
            elif w.suffix == ".pdiparams":
                model_file = w.with_name("model.json")
                params_file = w
            else:
                raise ValueError(f"Invalid model path {w}. Provide model directory or a .pdiparams file.")

            if not (model_file and params_file and model_file.is_file() and params_file.is_file()):
                raise FileNotFoundError(f"Model files not found in {w}. Both .json and .pdiparams files are required.")

            config = pdi.Config(str(model_file), str(params_file))
            if cuda:
                config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0)
            predictor = pdi.create_predictor(config)
            input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
            output_names = predictor.get_output_names()
        elif triton:  # NVIDIA Triton Inference Server
            LOGGER.info(f"Using {w} as Triton Inference Server...")
            check_requirements("tritonclient[all]")

@@ -1,3 +1,5 @@
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
# This script will run on every instance restart, not only on first start
# --- DO NOT COPY ABOVE COMMENTS WHEN PASTING INTO USERDATA ---

@@ -1,4 +1,6 @@
#!/bin/bash
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
# This script will run only once on first instance start (for a re-start script see mime.sh)
# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir

@@ -1,4 +1,5 @@
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Builds ultralytics/yolov5:latest image on DockerHub https://hub.docker.com/r/ultralytics/yolov5
# Image is CUDA-optimized for YOLOv5 single/multi-GPU training and inference

@@ -1,4 +1,5 @@
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Builds ultralytics/yolov5:latest-arm64 image on DockerHub https://hub.docker.com/r/ultralytics/yolov5
# Image is aarch64-compatible for Apple M1 and other ARM architectures i.e. Jetson Nano and Raspberry Pi

@@ -1,4 +1,5 @@
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Builds ultralytics/yolov5:latest-cpu image on DockerHub https://hub.docker.com/r/ultralytics/yolov5
# Image is CPU-optimized for ONNX, OpenVINO and PyTorch YOLOv5 deployments

@@ -1,30 +1,36 @@
<a href="https://www.ultralytics.com/"><img src="https://raw.githubusercontent.com/ultralytics/assets/main/logo/Ultralytics_Logotype_Original.svg" width="320" alt="Ultralytics logo"></a>

# Flask REST API for YOLOv5

[Representational State Transfer (REST)](https://en.wikipedia.org/wiki/Representational_state_transfer) [Application Programming Interfaces (APIs)](https://en.wikipedia.org/wiki/API) provide a standardized way to expose [Machine Learning (ML)](https://www.ultralytics.com/glossary/machine-learning-ml) models for use by other services or applications. This directory contains an example REST API built with the [Flask](https://flask.palletsprojects.com/en/stable/) web framework to serve the [Ultralytics YOLOv5s](https://docs.ultralytics.com/models/yolov5/) model, loaded directly from [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/). This setup allows you to easily integrate YOLOv5 [object detection](https://docs.ultralytics.com/tasks/detect/) capabilities into your web applications or microservices, aligning with common [model deployment options](https://docs.ultralytics.com/guides/model-deployment-options/).

## 💻 Requirements

The primary requirement is the [Flask](https://flask.palletsprojects.com/en/stable/) web framework. You can install it using pip:

```shell
pip install Flask
```

You will also need [PyTorch](https://pytorch.org/) (`torch`) installed; the YOLOv5 code and weights are fetched automatically when the script loads the model from PyTorch Hub. Ensure you have a functioning Python environment set up.
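
At startup, the server loads the model once via PyTorch Hub, along the lines of the sketch below (a simplification, not the exact `restapi.py` code):

```python
import torch

# Downloads the YOLOv5 repo and yolov5s weights on first use, then caches them
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
```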

## ▶️ Run the API

Once Flask is installed, you can start the API server using the following command:

```shell
python restapi.py --port 5000
```

The server will begin listening on the specified port (defaulting to 5000). You can then send inference requests to the API endpoint using tools like [curl](https://curl.se/) or any other HTTP client.

To test the API with a local image file (e.g., `zidane.jpg` located in the `yolov5/data/images` directory relative to the script):

```shell
curl -X POST -F image=@../data/images/zidane.jpg 'http://localhost:5000/v1/object-detection/yolov5s'
```

The API processes the submitted image using the YOLOv5s model and returns the detection results in [JSON](https://www.json.org/json-en.html) format. Each object within the JSON array represents a detected item, including its class ID, confidence score, normalized [bounding box](https://www.ultralytics.com/glossary/bounding-box) coordinates (`xcenter`, `ycenter`, `width`, `height`), and class name.

```json
[
@@ -67,4 +73,8 @@ The model inference results are returned as a JSON response:
]
```

An example Python script, `example_request.py`, is included to demonstrate how to perform inference using the popular [requests](https://requests.readthedocs.io/en/latest/) library. This script offers a straightforward method for interacting with the running API programmatically.
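
In essence, such a client does something like this (a condensed sketch of the bundled script; the image path is an example):

```python
import pprint

import requests

DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s"
IMAGE = "zidane.jpg"  # any local image file

# Send the image as multipart form data under the 'image' field
with open(IMAGE, "rb") as f:
    response = requests.post(DETECTION_URL, files={"image": f})

pprint.pprint(response.json())  # list of detections with normalized boxes
```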

## 🤝 Contribute

Contributions to enhance this Flask API example are highly encouraged! Whether you're interested in adding support for different YOLO models, improving error handling, or implementing new features, please feel free to fork the repository, apply your changes, and submit a pull request. For more comprehensive contribution guidelines, please refer to the main [Ultralytics YOLOv5 repository](https://github.com/ultralytics/yolov5) and the general [Ultralytics documentation](https://docs.ultralytics.com/).

@@ -1,222 +1,224 @@
<a href="https://www.ultralytics.com/"><img src="https://raw.githubusercontent.com/ultralytics/assets/main/logo/Ultralytics_Logotype_Original.svg" width="320" alt="Ultralytics logo"></a>

# ClearML Integration with Ultralytics YOLOv5

<img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_dark.png#gh-light-mode-only" alt="Clear|ML"><img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_light.png#gh-dark-mode-only" alt="Clear|ML">

## ℹ️ About ClearML

[ClearML](https://clear.ml/) is an [open-source](https://github.com/clearml/clearml) MLOps platform designed to streamline your machine learning workflow and save valuable time ⏱️. Integrating ClearML with [Ultralytics YOLOv5](https://docs.ultralytics.com/models/yolov5/) allows you to leverage a powerful suite of tools:

- **Experiment Management:** 🔨 Track every YOLOv5 [training run](https://docs.ultralytics.com/modes/train/), including parameters, metrics, and outputs. See the [Ultralytics ClearML integration guide](https://docs.ultralytics.com/integrations/clearml/) for more details.
- **Data Versioning:** 🔧 Version and easily access your custom training data using the integrated ClearML Data Versioning Tool, similar to concepts in [DVC integration](https://docs.ultralytics.com/integrations/dvc/).
- **Remote Execution:** 🔦 [Remotely train and monitor](https://docs.ultralytics.com/hub/cloud-training/) your YOLOv5 models using ClearML Agent.
- **Hyperparameter Optimization:** 🔬 Achieve optimal [Mean Average Precision (mAP)](https://docs.ultralytics.com/guides/yolo-performance-metrics/) using ClearML's [Hyperparameter Optimization](https://docs.ultralytics.com/guides/hyperparameter-tuning/) capabilities.
- **Model Deployment:** 🔭 Turn your trained YOLOv5 model into an API with just a few commands using ClearML Serving, complementing [Ultralytics deployment options](https://docs.ultralytics.com/guides/model-deployment-options/).

You can choose to use only the experiment manager or combine multiple tools into a comprehensive [MLOps](https://www.ultralytics.com/glossary/machine-learning-operations-mlops) pipeline.

![ClearML scalars dashboard](https://raw.githubusercontent.com/thepycoder/clearml_screenshots/main/experiment_manager_with_compare.gif)

## 🦾 Setting Things Up

ClearML requires communication with a server to track experiments and data. You have two main options:

1. **ClearML Hosted Service:** Sign up for a free account at [app.clear.ml](https://app.clear.ml/).
2. **Self-Hosted Server:** Set up your own ClearML server. Find instructions [here](https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server). The server is also open-source, ensuring data privacy.

Follow these steps to get started:

1. Install the `clearml` Python package:

   ```bash
   pip install clearml
   ```

   _Note: This package is included in the `requirements.txt` of YOLOv5._

2. Connect the ClearML SDK to your server. [Create credentials](https://app.clear.ml/settings/workspace-configuration) (Settings -> Workspace -> Create new credentials), then run the following command and follow the prompts:

   ```bash
   clearml-init
   ```

That's it! You're ready to integrate ClearML with your YOLOv5 projects 😎. For a general Ultralytics setup, see the [Quickstart Guide](https://docs.ultralytics.com/quickstart/).

## 🚀 Training YOLOv5 With ClearML

ClearML experiment tracking is automatically enabled when the `clearml` package is installed. Every YOLOv5 [training run](https://docs.ultralytics.com/modes/train/) will be captured and stored in the ClearML experiment manager.

To customize the project or task name in ClearML, use the `--project` and `--name` arguments when running `train.py`. By default, the project is `YOLOv5` and the task is `Training`. Note that ClearML uses `/` as a delimiter for subprojects.

**Example Training Command:**

```bash
# Train YOLOv5s on the COCO128 dataset for 3 epochs
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
```

**Example with Custom Project and Task Names:**

```bash
# Train with custom names
python train.py --project my_yolo_project --name experiment_001 --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
```

ClearML automatically captures comprehensive information about your training run:

- Source code and uncommitted changes
- Installed Python packages
- Hyperparameters and configuration settings
- Model checkpoints (use `--save-period n` to save every `n` epochs)
- Console output logs
- Performance metrics (mAP_0.5, mAP_0.5:0.95, [precision, recall](https://docs.ultralytics.com/guides/yolo-performance-metrics/), [losses](https://docs.ultralytics.com/reference/utils/loss/), [learning rates](https://www.ultralytics.com/glossary/learning-rate), etc.)
- System details (machine specs, runtime, creation date)
- Generated plots (e.g., label correlogram, [confusion matrix](https://www.ultralytics.com/glossary/confusion-matrix))
- Images with bounding boxes per epoch
- Mosaic augmentation previews per epoch
- Validation images per epoch

This wealth of information 🤯 can be visualized in the ClearML UI. You can customize table views, sort experiments by metrics like mAP, and directly compare multiple runs. This detailed tracking enables advanced features like hyperparameter optimization and remote execution.

## 🔗 Dataset Version Management

Versioning your [datasets](https://docs.ultralytics.com/datasets/) separately from code is crucial for reproducibility and collaboration. ClearML's Data Versioning Tool helps manage this process. YOLOv5 supports using ClearML dataset version IDs, automatically downloading the data if needed. The dataset ID used is saved as a task parameter, ensuring you always know which data version was used for each experiment.
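
Under the hood this relies on the ClearML SDK's dataset API; a minimal sketch of fetching a versioned dataset by ID (`DATASET_ID` is a placeholder, and the YOLOv5 integration performs this step for you when you pass `--data clearml://<id>`):

```python
from clearml import Dataset

dataset = Dataset.get(dataset_id="DATASET_ID")  # look up a specific dataset version
local_path = dataset.get_local_copy()  # download (and cache) the dataset files
print(local_path)
```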

![ClearML Dataset Interface](https://github.com/thepycoder/clearml_screenshots/raw/main/clearml_data.gif)

### Prepare Your Dataset

YOLOv5 uses [YAML](https://www.ultralytics.com/glossary/yaml) files to define dataset configurations. By default, datasets are expected in the `../datasets` directory relative to the repository root. For example, the [COCO128 dataset](https://docs.ultralytics.com/datasets/detect/coco128/) structure looks like this:

```
../
├── yolov5/ # Your YOLOv5 repository clone
└── datasets/
    └── coco128/
        ├── images/
        ├── labels/
        ├── LICENSE
        └── README.txt
```

Ensure your custom dataset follows a similar structure.

Next, ⚠️**copy the corresponding dataset `.yaml` file into the root of your dataset folder**⚠️. This file contains the essential information (`path`, `train`, `test`, `val`, `nc`, `names`) that ClearML needs to use the dataset.

```
../
└── datasets/
    └── coco128/
        ├── images/
        ├── labels/
        ├── coco128.yaml # <---- Place the YAML file here!
        ├── LICENSE
        └── README.txt
```

### Upload Your Dataset

Navigate to your dataset's root directory in the terminal and use the `clearml-data` CLI tool to upload it:

```bash
cd ../datasets/coco128
clearml-data sync --project YOLOv5_Datasets --name coco128 --folder .
```

Alternatively, you can use the following commands:

```bash
# Create a new dataset entry in ClearML
clearml-data create --project YOLOv5_Datasets --name coco128

# Add the dataset files (use '.' for the current directory)
clearml-data add --files .

# Finalize and upload the dataset version
clearml-data close
```

_Tip: Use `--parent <parent_dataset_id>` with `clearml-data create` to link versions and avoid re-uploading unchanged files._

### Run Training Using a ClearML Dataset

Once your dataset is versioned in ClearML, you can easily use it for training by providing the dataset ID via the `--data` argument with the `clearml://` prefix:

```bash
# Replace YOUR_DATASET_ID with the actual ID from ClearML
python train.py --img 640 --batch 16 --epochs 3 --data clearml://YOUR_DATASET_ID --weights yolov5s.pt --cache
```

## 👀 Hyperparameter Optimization

With experiments and data versioned, you can leverage ClearML for [Hyperparameter Optimization (HPO)](https://docs.ultralytics.com/guides/hyperparameter-tuning/). Since ClearML captures all necessary information (code, packages, environment), experiments are fully reproducible. ClearML's HPO tools clone an existing experiment, modify its hyperparameters, and automatically rerun it.

To run HPO locally, use the provided script `utils/loggers/clearml/hpo.py`. You'll need the ID of a previously run training task (the "template task") to clone. Update the script with this ID and run it:

```bash
# Install Optuna for advanced optimization strategies (optional)
# pip install optuna

# Run the HPO script
python utils/loggers/clearml/hpo.py
```

The script uses [Optuna](https://optuna.org/) by default if installed; otherwise, it falls back to `RandomSearch`. You can modify `task.execute_locally()` to `task.execute()` in the script to enqueue the HPO tasks for a remote ClearML agent.
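
For orientation, the core of such an HPO script looks roughly like this (a condensed sketch of the ClearML automation API; `TEMPLATE_TASK_ID`, the parameter range, and the queue name are placeholders you should adapt):

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna  # RandomSearch can be used if Optuna is absent

task = Task.init(
    project_name="Hyper-Parameter Optimization",
    task_name="YOLOv5 HPO",
    task_type=Task.TaskTypes.optimizer,
)

optimizer = HyperParameterOptimizer(
    base_task_id="TEMPLATE_TASK_ID",  # the training run to clone and mutate
    hyper_parameters=[UniformParameterRange("Hyperparameters/lr0", min_value=1e-5, max_value=1e-1)],
    objective_metric_title="metrics",
    objective_metric_series="mAP_0.5",
    objective_metric_sign="max",  # maximize mAP
    optimizer_class=OptimizerOptuna,
    execution_queue="my_queue",  # used when tasks are enqueued for remote agents
)

optimizer.start_locally()  # run the optimization loop on this machine
optimizer.stop()
```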

![HPO in the ClearML UI](https://github.com/thepycoder/clearml_screenshots/raw/main/hpo.png)

## 🤯 Remote Execution (Advanced)

ClearML Agent allows you to execute experiments on remote machines (e.g., powerful on-site servers, or cloud GPUs on [AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), or [Azure](https://azure.microsoft.com/)). The agent listens to task queues, reproduces the experiment environment, runs the task, and reports results back to the ClearML server.

Learn more about ClearML Agent:

- [YouTube Introduction](https://www.youtube.com/watch?v=MX3BrXnaULs)
- [Official Documentation](https://clear.ml/docs/latest/docs/clearml_agent)

Turn any machine into a ClearML agent by running:

```bash
# Replace QUEUES_TO_LISTEN_TO with the name(s) of your queue(s)
clearml-agent daemon --queue QUEUES_TO_LISTEN_TO [--docker]  # use --docker to run inside a Docker container
```

### Cloning, Editing, and Enqueuing Tasks

You can manage remote execution directly from the ClearML web UI:

1. **Clone:** Right-click an existing experiment to clone it.
2. **Edit:** Modify hyperparameters or other settings as needed in the cloned task.
3. **Enqueue:** Right-click the modified task and select "Enqueue" to assign it to a specific queue for an agent to pick up.

![Enqueue a task from the UI](https://github.com/thepycoder/clearml_screenshots/raw/main/enqueue.gif)

### Executing a Task Remotely via Code

Alternatively, you can modify your training script to automatically enqueue tasks for remote execution. Add `task.execute_remotely()` after the ClearML logger is initialized in `train.py`:

```python
# Inside train.py, after logger initialization...
if RANK in {-1, 0}:
    # Initialize loggers
    loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)

    # Check if the ClearML logger is active and enqueue the task
    if loggers.clearml:
        # Specify the queue name for the remote agent
        loggers.clearml.task.execute_remotely(queue_name="my_remote_queue")  # <------ ADD THIS LINE
        # data_dict may be populated by ClearML when using a ClearML dataset
        data_dict = loggers.clearml.data_dict
# ...
```

Running the script with this modification will package the code and its environment and send it to the specified queue, rather than executing locally.

### Autoscaling Workers

ClearML also provides Autoscalers that automatically manage cloud resources (AWS, GCP, Azure). They spin up new virtual machines and configure them as ClearML agents when tasks appear in a queue, then shut them down when the queue is empty, optimizing cost.

Watch the Autoscalers getting started video:

[![Watch the ClearML Autoscalers video](https://img.youtube.com/vi/j4XVMAaUt3E/0.jpg)](https://youtu.be/j4XVMAaUt3E)

## 🤝 Contributing

Contributions to enhance the ClearML integration are welcome! Please see the [Ultralytics Contributing Guide](https://docs.ultralytics.com/help/contributing/) for more information on how to get involved.

@@ -1,104 +1,114 @@
<a href="https://www.ultralytics.com/"><img src="https://raw.githubusercontent.com/ultralytics/assets/main/logo/Ultralytics_Logotype_Original.svg" width="320" alt="Ultralytics logo"></a>

<img src="https://cdn.comet.ml/img/notebook_logo.png">

# Using Ultralytics YOLOv5 with Comet

Welcome to the guide on integrating [Ultralytics YOLOv5](https://github.com/ultralytics/yolov5) with [Comet](https://www.comet.com/site/)! Comet provides powerful tools for experiment tracking, model management, and visualization, enhancing your [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) workflow. This document details how to leverage Comet to monitor training, log results, manage datasets, and optimize hyperparameters for your YOLOv5 models.

## 🧪 About Comet

[Comet](https://www.comet.com/site/) builds tools that help data scientists, engineers, and team leaders accelerate and optimize machine learning and [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models.

Track and visualize model metrics in real-time, save your [hyperparameters](https://docs.ultralytics.com/guides/hyperparameter-tuning/), datasets, and model checkpoints, and visualize your model predictions with Comet Custom Panels! Comet ensures you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes. Find more information in the [Comet Documentation](https://www.comet.com/docs/v2/).

## 🚀 Getting Started

Follow these steps to set up Comet for your YOLOv5 projects.

### Install Comet

Install the necessary [Python package](https://pypi.org/project/comet-ml/) using pip:

```shell
pip install comet_ml
```

### Configure Comet Credentials

You can configure Comet in two ways:

1. **Environment Variables:** Set your credentials directly in your environment.

   ```shell
   export COMET_API_KEY=<Your Comet API Key>
   export COMET_PROJECT_NAME=<Your Comet Project Name> # Defaults to 'yolov5' if not set
   ```

   Find your API key in your [Comet Account Settings](https://www.comet.com/).

2. **Configuration File:** Create a `.comet.config` file in your working directory with the following content:

   ```ini
   [comet]
   api_key=<Your Comet API Key>
   project_name=<Your Comet Project Name> # Defaults to 'yolov5' if not set
   ```
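
As an interactive alternative, the `comet_ml` SDK can prompt for and store the key itself; a small sketch (assumes `comet_ml` is installed and no key is configured yet):

```python
import comet_ml

comet_ml.init()  # prompts for the API key and saves it to ~/.comet.config
```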

### Run the Training Script

Execute the YOLOv5 [training script](https://docs.ultralytics.com/modes/train/). Comet will automatically start logging your run.

```shell
# Train YOLOv5s on COCO128 for 5 epochs
python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt
```
|
||||
|
||||
That's it! Comet will automatically log your hyperparameters, command line arguments, training and validation metrics. You can visualize and analyze your runs in the Comet UI
|
||||
That's it! Comet automatically logs hyperparameters, command-line arguments, and training/validation metrics. Visualize and analyze your runs in the Comet UI. For more details on training, see the [Ultralytics Training documentation](https://docs.ultralytics.com/modes/train/).
|
||||
|
||||
<img width="1920" alt="yolo-ui" src="https://user-images.githubusercontent.com/26833433/202851203-164e94e1-2238-46dd-91f8-de020e9d6b41.png">
|
||||
<img width="1920" alt="Comet UI showing YOLOv5 training metrics" src="https://user-images.githubusercontent.com/26833433/202851203-164e94e1-2238-46dd-91f8-de020e9d6b41.png">

## ✨ Try an Example!

Explore a completed YOLOv5 training run tracked with Comet:

- **[View Example Run on Comet](https://www.comet.com/examples/comet-example-yolov5/a0e29e0e9b984e4a822db2a62d0cb357?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step&utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github_readme)**

Run the example yourself using this [Google Colab](https://colab.research.google.com/) Notebook:

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-training/yolov5/notebooks/Comet_and_YOLOv5.ipynb)

## 📊 Automatic Logging

Comet automatically logs the following information by default:

### Metrics

- **Losses:** Box Loss, Object Loss, Classification Loss (Training and Validation).
- **Performance:** [mAP@0.5](https://www.ultralytics.com/glossary/mean-average-precision-map), mAP@0.5:0.95 (Validation). Learn more about these metrics in the [YOLO Performance Metrics guide](https://docs.ultralytics.com/guides/yolo-performance-metrics/).
- **[Precision](https://www.ultralytics.com/glossary/precision) and [Recall](https://www.ultralytics.com/glossary/recall):** Validation data metrics.

### Parameters

- **Model Hyperparameters:** Configuration used for the model.
- **Command Line Arguments:** All arguments passed via the [CLI](https://docs.ultralytics.com/usage/cli/).

### Visualizations

- **[Confusion Matrix](https://www.ultralytics.com/glossary/confusion-matrix):** Model predictions on validation data, useful for understanding classification performance ([Wikipedia definition](https://en.wikipedia.org/wiki/Confusion_matrix)).
- **Curves:** PR and F1 curves across all classes.
- **Label Correlogram:** Correlation visualization of class labels.

## ⚙️ Advanced Configuration

Customize Comet's logging behavior using command-line flags or environment variables.

```shell
# Environment variables for Comet configuration
export COMET_MODE=online # 'online' or 'offline'. Default: online
export COMET_MODEL_NAME=<your_model_name> # Name for the saved model. Default: yolov5
export COMET_LOG_CONFUSION_MATRIX=false # Disable confusion matrix logging. Default: true
export COMET_MAX_IMAGE_UPLOADS=<number> # Max prediction images to log. Default: 100
export COMET_LOG_PER_CLASS_METRICS=true # Log metrics per class. Default: false
export COMET_DEFAULT_CHECKPOINT_FILENAME=<checkpoint_file.pt> # Checkpoint for resuming. Default: 'last.pt'
export COMET_LOG_BATCH_LEVEL_METRICS=true # Log training metrics per batch. Default: false
export COMET_LOG_PREDICTIONS=true # Set to false to disable prediction logging. Default: true
```

Refer to the [Comet documentation](https://www.comet.com/docs/v2/) for more configuration options.
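
As a purely illustrative sketch (this particular combination of variables and the values shown are assumptions, not recommendations), several settings can be combined for a single run:

```shell
# Illustrative: run offline, cap prediction image uploads at 50, and log per-class metrics
env COMET_MODE=offline COMET_MAX_IMAGE_UPLOADS=50 COMET_LOG_PER_CLASS_METRICS=true \
    python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt
```

If you use `COMET_MODE=offline`, Comet writes the experiment to a local archive that can be sent to the server later with the `comet upload` CLI command; see the Comet documentation for details.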

### Logging Checkpoints with Comet

Model checkpoint logging to Comet is disabled by default. Enable it using the `--save-period` argument during training. This saves checkpoints to Comet at the specified epoch interval.

```shell
python train.py \
  --img 640 \
  --batch 16 \
  --epochs 5 \
  --data coco128.yaml \
  --weights yolov5s.pt \
  --save-period 1 # Save checkpoint every epoch
```

Checkpoints will appear in the "Assets & Artifacts" tab of your Comet experiment. Learn more about model management in the [Comet Model Registry documentation](https://www.comet.com/docs/v2/guides/model-registry/).

### Logging Model Predictions

By default, model predictions (images, ground truth labels, [bounding boxes](https://www.ultralytics.com/glossary/bounding-box)) for the validation set are logged. Control the logging frequency using the `--bbox_interval` argument, which specifies logging every Nth validation batch per epoch.

**Note:** The YOLOv5 validation dataloader defaults to a batch size of 32, so set `--bbox_interval` accordingly. For example, a 128-image validation set yields 4 batches per epoch at that batch size, so `--bbox_interval 2` logs predictions from 2 of them.

Visualize predictions using Comet's Object Detection Custom Panel. See an [example project using the Panel here](https://www.comet.com/examples/comet-example-yolov5?shareable=YcwMiJaZSXfcEXpGOHDD12vA1&utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github_readme).

```shell
python train.py \
  --img 640 \
  --batch 16 \
  --epochs 5 \
  --data coco128.yaml \
  --weights yolov5s.pt \
  --bbox_interval 2 # Log predictions every 2nd validation batch per epoch
```

#### Controlling the Number of Prediction Images

By default, a maximum of 100 validation images are logged. Adjust this limit using the `COMET_MAX_IMAGE_UPLOADS` environment variable.

```shell
env COMET_MAX_IMAGE_UPLOADS=200 python train.py \
  --img 640 \
  --batch 16 \
  --epochs 5 \
  --data coco128.yaml \
  --weights yolov5s.pt \
  --bbox_interval 1 # Log every batch
```

### Logging Class-Level Metrics

Enable logging of mAP, precision, recall, and F1-score for each class using the `COMET_LOG_PER_CLASS_METRICS` environment variable.

```shell
env COMET_LOG_PER_CLASS_METRICS=true python train.py \
  --img 640 \
  --batch 16 \
  --epochs 5 \
  --data coco128.yaml \
  --weights yolov5s.pt
```

## 💾 Dataset Management with Comet Artifacts

Use [Comet Artifacts](https://www.comet.com/docs/v2/guides/data-management/artifacts/) to version, store, and manage your datasets.

### Uploading a Dataset

Upload your dataset using the `--upload_dataset` flag. Ensure your dataset follows the structure described in the [Ultralytics Datasets documentation](https://docs.ultralytics.com/datasets/) and the dataset config [YAML](https://www.ultralytics.com/glossary/yaml) file matches the format of `coco128.yaml` (see the [COCO128 dataset docs](https://docs.ultralytics.com/datasets/detect/coco128/)).

```shell
python train.py \
  --img 640 \
  --batch 16 \
  --epochs 5 \
  --data coco128.yaml \
  --weights yolov5s.pt \
  --upload_dataset # Upload the dataset specified in coco128.yaml
```

View the uploaded dataset in the Artifacts tab of your Comet Workspace.

<img width="1073" alt="Comet Artifacts tab showing uploaded dataset" src="https://user-images.githubusercontent.com/7529846/186929193-162718bf-ec7b-4eb9-8c3b-86b3763ef8ea.png">

Preview data directly in the Comet UI.

<img width="1082" alt="Comet UI previewing dataset images" src="https://user-images.githubusercontent.com/7529846/186929215-432c36a9-c109-4eb0-944b-84c2786590d6.png">

Artifacts are versioned and support metadata. Comet automatically logs metadata from your dataset `yaml` file.

<img width="963" alt="Comet Artifact metadata view" src="https://user-images.githubusercontent.com/7529846/186929256-9d44d6eb-1a19-42de-889a-bcbca3018f2e.png">

### Using a Saved Artifact

To use a dataset stored in Comet Artifacts, update the `path` in your dataset `yaml` file to the Artifact resource URL:

```yaml
# contents of artifact.yaml
path: "comet://<workspace_name>/<artifact_name>:<artifact_version_or_alias>"
train: images/train # Adjust subdirectory if needed
val: images/val # Adjust subdirectory if needed

# Other dataset configurations...
```

Then, pass this configuration file to your training script:

```shell
python train.py \
  --img 640 \
  --batch 16 \
  --epochs 5 \
  --data artifact.yaml \
  --weights yolov5s.pt
```

Artifacts track data lineage, showing which experiments used specific dataset versions.

<img width="1391" alt="Comet Artifact lineage graph" src="https://user-images.githubusercontent.com/7529846/186929264-4c4014fa-fe51-4f3c-a5c5-f6d24649b1b4.png">

## 🔄 Resuming Training Runs

If a training run is interrupted (e.g., due to connection issues), you can resume it using the `--resume` flag with the Comet Run Path (`comet://<your_workspace>/<your_project>/<experiment_id>`).

This restores the model state, hyperparameters, arguments, and downloads any necessary Artifacts, continuing logging to the existing Comet Experiment. Learn more about [resuming runs in the Comet documentation](https://www.comet.com/docs/v2/guides/experiment-logging/resume-experiment/).

```shell
python train.py \
  --resume "comet://<your_workspace>/<your_project>/<experiment_id>"
```

## 🔍 Hyperparameter Optimization (HPO)

YOLOv5 integrates with the [Comet Optimizer](https://www.comet.com/docs/v2/guides/hyperparameter-optimization/) for easy hyperparameter sweeps and visualization. This helps you find the best set of parameters for your model, a process often referred to as [Hyperparameter Tuning](https://docs.ultralytics.com/guides/hyperparameter-tuning/).

### Configuring an Optimizer Sweep

Create a [JSON](https://www.ultralytics.com/glossary/json) configuration file defining the sweep parameters, search strategy, and objective metric. An example is provided at `utils/loggers/comet/optimizer_config.json`.
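
For orientation, the sketch below writes a minimal config via a shell heredoc. The field names follow the Comet Optimizer format, but the filename, the swept parameter, and all values here are illustrative assumptions; treat the bundled `optimizer_config.json` as the authoritative example:

```shell
# Hypothetical minimal sweep config (filename and values are illustrative)
cat > my_optimizer_config.json <<'EOF'
{
  "algorithm": "bayes",
  "spec": { "maxCombo": 10, "metric": "metrics/mAP_0.5", "objective": "maximize" },
  "parameters": { "lr0": { "type": "float", "min": 0.0001, "max": 0.1 } }
}
EOF
```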

Run the sweep using the `hpo.py` script:

```shell
python utils/loggers/comet/hpo.py \
  --comet_optimizer_config "utils/loggers/comet/optimizer_config.json"
```

The `hpo.py` script accepts the same arguments as `train.py`. Pass any additional fixed arguments after the script name, and they will be applied to every trial in the sweep:

```shell
python utils/loggers/comet/hpo.py \
  --comet_optimizer_config "utils/loggers/comet/optimizer_config.json" \
  --save-period 1 # Example fixed argument applied to every trial
```

### Running a Sweep in Parallel

Execute multiple sweep trials concurrently using the `comet optimizer` command:

```shell
comet optimizer -j <num_workers> utils/loggers/comet/hpo.py \
  utils/loggers/comet/optimizer_config.json
```

Replace `<num_workers>` with the desired number of parallel processes.

### Visualizing HPO Results

Comet offers various visualizations for analyzing sweep results, such as parallel coordinate plots and parameter importance plots. Explore a [project with a completed sweep here](https://www.comet.com/examples/comet-example-yolov5/view/PrlArHGuuhDTKC1UuBmTtOSXD/panels?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github_readme).

<img width="1626" alt="Comet HPO visualization" src="https://user-images.githubusercontent.com/7529846/186914869-7dc1de14-583f-4323-967b-c9a66a29e495.png">

## 🤝 Contributing

Contributions to enhance the YOLOv5-Comet integration are welcome! Please see the [Ultralytics Contributing Guide](https://docs.ultralytics.com/help/contributing/) for more information on how to get involved. Thank you for helping improve this integration!