Update YOLOv5 READMEs (#13554)

* Update YOLOv5 READMEs

* Update YOLOv5 READMEs

* Auto-format by https://ultralytics.com/actions

---------

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Glenn Jocher 2025-03-28 21:58:22 +01:00 committed by GitHub
parent e9ab205ef6
commit 1953d47255
5 changed files with 317 additions and 274 deletions

README.md

@@ -18,7 +18,7 @@
</div>
<br>
Ultralytics YOLOv5 🚀 is a cutting-edge, state-of-the-art (SOTA) computer vision model developed by [Ultralytics](https://www.ultralytics.com/). Based on the PyTorch framework, YOLOv5 is renowned for its ease of use, speed, and accuracy. It incorporates insights and best practices from extensive research and development, making it a popular choice for a wide range of vision AI tasks, including [object detection](https://docs.ultralytics.com/tasks/detect/), [image segmentation](https://docs.ultralytics.com/tasks/segment/), and [image classification](https://docs.ultralytics.com/tasks/classify/).
Ultralytics YOLOv5 🚀 is a cutting-edge, state-of-the-art (SOTA) computer vision model developed by [Ultralytics](https://www.ultralytics.com/). Based on the [PyTorch](https://pytorch.org/) framework, YOLOv5 is renowned for its ease of use, speed, and accuracy. It incorporates insights and best practices from extensive research and development, making it a popular choice for a wide range of vision AI tasks, including [object detection](https://docs.ultralytics.com/tasks/detect/), [image segmentation](https://docs.ultralytics.com/tasks/segment/), and [image classification](https://docs.ultralytics.com/tasks/classify/).
We hope the resources here help you get the most out of YOLOv5. Please browse the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for detailed information, raise an issue on [GitHub](https://github.com/ultralytics/yolov5/issues/new/choose) for support, and join our [Discord community](https://discord.com/invite/ultralytics) for questions and discussions!
@@ -26,17 +26,17 @@ To request an Enterprise License, please complete the form at [Ultralytics Licen
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="2%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="2%" alt="Ultralytics Discord"></a>
</div>
@@ -68,7 +68,7 @@ See the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for full documentati
<details open>
<summary>Install</summary>
Clone the repository and install dependencies from [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.8.0**](https://www.python.org/) environment. Ensure you have [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/) installed.
Clone the repository and install dependencies in a [**Python>=3.8.0**](https://www.python.org/) environment. Ensure you have [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/) installed.
```bash
# Clone the YOLOv5 repository
git clone https://github.com/ultralytics/yolov5

# Navigate to the cloned directory
cd yolov5

# Install required packages
pip install -r requirements.txt
```
@@ -150,7 +150,7 @@ python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4'
<details>
<summary>Training</summary>
The commands below demonstrate how to reproduce YOLOv5 [COCO dataset](https://docs.ultralytics.com/datasets/detect/coco/) results. Both [models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) are downloaded automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are approximately 1/2/4/6/8 days on a single V100 GPU. Using [Multi-GPU training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) can significantly reduce training time. Use the largest `--batch-size` your hardware allows, or use `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). The batch sizes shown below are for V100-16GB GPUs.
The commands below demonstrate how to reproduce YOLOv5 [COCO dataset](https://docs.ultralytics.com/datasets/detect/coco/) results. Both [models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) are downloaded automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are approximately 1/2/4/6/8 days on a single [NVIDIA V100 GPU](https://www.nvidia.com/en-us/data-center/v100/). Using [Multi-GPU training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) can significantly reduce training time. Use the largest `--batch-size` your hardware allows, or use `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). The batch sizes shown below are for V100-16GB GPUs.
```bash
# Train YOLOv5n on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
```
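The `--batch-size -1` AutoBatch option described above can be sketched the same way; a minimal variant, assuming the YOLOv5s config used elsewhere in these examples:

```bash
# Let AutoBatch estimate the largest batch size that fits in GPU memory (single-GPU training)
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size -1
```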
@@ -180,24 +180,24 @@ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --
- **[Tips for Best Training Results](https://docs.ultralytics.com/guides/model-training-tips/)** ☘️: Improve your model's performance with expert tips.
- **[Multi-GPU Training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/)**: Speed up training using multiple GPUs.
- **[PyTorch Hub Integration](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/)** 🌟 **NEW**: Easily load models using PyTorch Hub.
- **[Model Export (TFLite, ONNX, CoreML, TensorRT)](https://docs.ultralytics.com/yolov5/tutorials/model_export/)** 🚀: Convert your models to various deployment formats.
- **[NVIDIA Jetson Deployment](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/)** 🌟 **NEW**: Deploy YOLOv5 on NVIDIA Jetson devices.
- **[Model Export (TFLite, ONNX, CoreML, TensorRT)](https://docs.ultralytics.com/yolov5/tutorials/model_export/)** 🚀: Convert your models to various deployment formats like [ONNX](https://onnx.ai/) or [TensorRT](https://developer.nvidia.com/tensorrt).
- **[NVIDIA Jetson Deployment](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/)** 🌟 **NEW**: Deploy YOLOv5 on [NVIDIA Jetson](https://developer.nvidia.com/embedded-computing) devices.
- **[Test-Time Augmentation (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)**: Enhance prediction accuracy with TTA.
- **[Model Ensembling](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling/)**: Combine multiple models for better performance.
- **[Model Pruning/Sparsity](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity/)**: Optimize models for size and speed.
- **[Hyperparameter Evolution](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution/)**: Automatically find the best training hyperparameters.
- **[Transfer Learning with Frozen Layers](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers/)**: Adapt pretrained models to new tasks efficiently.
- **[Transfer Learning with Frozen Layers](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers/)**: Adapt pretrained models to new tasks efficiently using [transfer learning](https://www.ultralytics.com/glossary/transfer-learning).
- **[Architecture Summary](https://docs.ultralytics.com/yolov5/tutorials/architecture_description/)** 🌟 **NEW**: Understand the YOLOv5 model architecture.
- **[Ultralytics HUB Training](https://www.ultralytics.com/hub)** 🚀 **RECOMMENDED**: Train and deploy YOLO models using Ultralytics HUB.
- **[ClearML Logging](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration/)**: Integrate with ClearML for experiment tracking.
- **[ClearML Logging](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration/)**: Integrate with [ClearML](https://clear.ml/) for experiment tracking.
- **[Neural Magic DeepSparse Integration](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization/)**: Accelerate inference with DeepSparse.
- **[Comet Logging](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration/)** 🌟 **NEW**: Log experiments using Comet ML.
- **[Comet Logging](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration/)** 🌟 **NEW**: Log experiments using [Comet ML](https://www.comet.com/).
</details>
## 🛠️ Integrations
Explore Ultralytics' key integrations with leading AI platforms. These collaborations enhance capabilities for dataset labeling, training, visualization, and model management. Discover how Ultralytics works with [Weights & Biases (W&B)](https://docs.wandb.ai/guides/integrations/ultralytics/), [Comet ML](https://bit.ly/yolov5-readme-comet), [Roboflow](https://roboflow.com/?ref=ultralytics), and [Intel OpenVINO](https://docs.ultralytics.com/integrations/openvino/) to optimize your AI workflows.
Explore Ultralytics' key integrations with leading AI platforms. These collaborations enhance capabilities for [dataset labeling](https://www.ultralytics.com/glossary/data-labeling), training, visualization, and [model management](https://www.ultralytics.com/blog/streamline-custom-vision-ai-ops). Discover how Ultralytics works with [Weights & Biases (W&B)](https://docs.wandb.ai/guides/integrations/ultralytics/), [Comet ML](https://bit.ly/yolov5-readme-comet), [Roboflow](https://roboflow.com/?ref=ultralytics), and [Intel OpenVINO](https://docs.ultralytics.com/integrations/openvino/) to optimize your AI workflows.
<br>
<a href="https://www.ultralytics.com/hub" target="_blank">
@@ -225,7 +225,7 @@ Explore Ultralytics' key integrations with leading AI platforms. These collabora
## ⭐ Ultralytics HUB
Experience seamless AI development with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the ultimate platform for building, training, and deploying computer vision models. Visualize datasets, train YOLOv5 and YOLOv8 🚀 models, and deploy them to real-world applications without writing any code. Transform images into actionable insights using our cutting-edge tools and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** today!
Experience seamless AI development with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the ultimate platform for building, training, and deploying [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models. Visualize datasets, train [YOLOv5](https://docs.ultralytics.com/models/yolov5/) and [YOLOv8](https://docs.ultralytics.com/models/yolov8/) 🚀 models, and deploy them to real-world applications without writing any code. Transform images into actionable insights using our cutting-edge tools and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** today!
<a align="center" href="https://www.ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB Platform Screenshot"></a>
@@ -243,8 +243,8 @@ YOLOv5 is designed for simplicity and ease of use. We prioritize real-world perf
<details>
<summary>Figure Notes</summary>
- **COCO AP val** denotes the mean Average Precision (mAP) at IoU thresholds from 0.5 to 0.95, measured on the 5,000-image [COCO val2017 dataset](http://cocodataset.org) across various inference sizes (256 to 1536 pixels).
- **GPU Speed** measures the average inference time per image on the [COCO val2017 dataset](http://cocodataset.org) using an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p4/) with a batch size of 32.
- **COCO AP val** denotes the [mean Average Precision (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map) at [Intersection over Union (IoU)](https://www.ultralytics.com/glossary/intersection-over-union-iou) thresholds from 0.5 to 0.95, measured on the 5,000-image [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/) across various inference sizes (256 to 1536 pixels).
- **GPU Speed** measures the average inference time per image on the [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/) using an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p3/) with a batch size of 32.
- **EfficientDet** data is sourced from the [google/automl repository](https://github.com/google/automl) at batch size 8.
- **Reproduce** these results using the command: `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
@@ -254,33 +254,33 @@ YOLOv5 is designed for simplicity and ease of use. We prioritize real-world perf
This table shows the performance metrics for various YOLOv5 models trained on the COCO dataset.
| Model | Size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | Speed<br><sup>CPU b1<br>(ms) | Speed<br><sup>V100 b1<br>(ms) | Speed<br><sup>V100 b32<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ------------------------------------------------------------------------------------------------------ | --------------------- | -------------------- | ----------------- | ---------------------------- | ----------------------------- | ------------------------------ | ------------------ | ---------------------- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| | | | | | | | | |
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+ [[TTA]][tta] | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
| Model | Size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | Speed<br><sup>CPU b1<br>(ms) | Speed<br><sup>V100 b1<br>(ms) | Speed<br><sup>V100 b32<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------- | -------------------- | ----------------- | ---------------------------- | ----------------------------- | ------------------------------ | ------------------ | ---------------------- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| | | | | | | | | |
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+ [[TTA]](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
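Each checkpoint in the table can also be pulled by name through PyTorch Hub, mirroring the inference example earlier; a small sketch, assuming the Hub model name is the table entry without its `.pt` suffix:

```python
import torch

# Load the YOLOv5x6 checkpoint from the table above
model = torch.hub.load("ultralytics/yolov5", "yolov5x6")

# P6 models are trained at 1280 pixels, so pass a matching inference size
results = model("https://ultralytics.com/images/zidane.jpg", size=1280)
results.print()
```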
<details>
<summary>Table Notes</summary>
- All checkpoints were trained for 300 epochs using default settings. Nano (n) and Small (s) models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyperparameters, while Medium (m), Large (l), and Extra-Large (x) models use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values represent single-model, single-scale performance on the [COCO val2017 dataset](http://cocodataset.org).<br>Reproduce using: `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** metrics are averaged over COCO val images using an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p4/). Non-Maximum Suppression (NMS) time (~1 ms/image) is not included.<br>Reproduce using: `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **mAP<sup>val</sup>** values represent single-model, single-scale performance on the [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/).<br>Reproduce using: `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** metrics are averaged over COCO val images using an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p3/). Non-Maximum Suppression (NMS) time (~1 ms/image) is not included.<br>Reproduce using: `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** ([Test Time Augmentation](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)) includes reflection and scale augmentations for improved accuracy.<br>Reproduce using: `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## 🖼️ Segmentation
The YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) introduced instance segmentation models that achieve state-of-the-art performance. These models are designed for easy training, validation, and deployment. For full details, see the [Release Notes](https://github.com/ultralytics/yolov5/releases/v7.0) and explore the [YOLOv5 Segmentation Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) for quickstart examples.
The YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) introduced [instance segmentation](https://docs.ultralytics.com/tasks/segment/) models that achieve state-of-the-art performance. These models are designed for easy training, validation, and deployment. For full details, see the [Release Notes](https://github.com/ultralytics/yolov5/releases/v7.0) and explore the [YOLOv5 Segmentation Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) for quickstart examples.
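For a quick local smoke test, the repository's segmentation scripts mirror the detection workflow; a minimal sketch, assuming the `yolov5s-seg.pt` checkpoint and the bundled sample image:

```bash
# Run segmentation inference on an image; results are saved under runs/predict-seg
python segment/predict.py --weights yolov5s-seg.pt --source data/images/bus.jpg
```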
<details>
<summary>Segmentation Checkpoints</summary>
@@ -290,7 +290,7 @@ The YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) i
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png" alt="YOLOv5 Segmentation Performance Chart"></a>
</div>
YOLOv5 segmentation models were trained on the COCO dataset for 300 epochs at an image size of 640 pixels using A100 GPUs. Models were exported to ONNX FP32 for CPU speed tests and TensorRT FP16 for GPU speed tests. All speed tests were conducted on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for reproducibility.
YOLOv5 segmentation models were trained on the [COCO dataset](https://docs.ultralytics.com/datasets/segment/coco/) for 300 epochs at an image size of 640 pixels using A100 GPUs. Models were exported to [ONNX](https://onnx.ai/) FP32 for CPU speed tests and [TensorRT](https://developer.nvidia.com/tensorrt) FP16 for GPU speed tests. All speed tests were conducted on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for reproducibility.
| Model | Size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Train Time<br><sup>300 epochs<br>A100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TRT A100<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ------------------------------------------------------------------------------------------ | --------------------- | -------------------- | --------------------- | --------------------------------------------- | ------------------------------ | ------------------------------ | ------------------ | ---------------------- |
@@ -312,7 +312,7 @@ YOLOv5 segmentation models were trained on the COCO dataset for 300 epochs at an
### Train
YOLOv5 segmentation training supports automatic download of the COCO128-seg dataset via the `--data coco128-seg.yaml` argument. For the full COCO-segments dataset, download it manually using `bash data/scripts/get_coco.sh --train --val --segments` and then train with `python train.py --data coco.yaml`.
YOLOv5 segmentation training supports automatic download of the [COCO128-seg dataset](https://docs.ultralytics.com/datasets/segment/coco8-seg/) via the `--data coco128-seg.yaml` argument. For the full [COCO-segments dataset](https://docs.ultralytics.com/datasets/segment/coco/), download it manually using `bash data/scripts/get_coco.sh --train --val --segments` and then train with `python train.py --data coco.yaml`.
```bash
# Train on a single GPU
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640
```
@@ -324,7 +324,7 @@ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train
### Val
Validate the mask mean Average Precision (mAP) of YOLOv5s-seg on the COCO dataset:
Validate the mask [mean Average Precision (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map) of YOLOv5s-seg on the COCO dataset:
```bash
# Download COCO validation segments split (780MB, 5000 images)
bash data/scripts/get_coco.sh --val --segments

# Validate YOLOv5s-seg on COCO
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640
```
@@ -364,14 +364,14 @@ python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --devi
## 🏷️ Classification
YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases/v6.2) introduced support for image classification model training, validation, and deployment. Check the [Release Notes](https://github.com/ultralytics/yolov5/releases/v6.2) for details and the [YOLOv5 Classification Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) for quickstart guides.
YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases/v6.2) introduced support for [image classification](https://docs.ultralytics.com/tasks/classify/) model training, validation, and deployment. Check the [Release Notes](https://github.com/ultralytics/yolov5/releases/v6.2) for details and the [YOLOv5 Classification Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) for quickstart guides.
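Classification follows the same script layout as detection and segmentation; a minimal inference sketch, assuming the `yolov5s-cls.pt` checkpoint and the bundled sample image:

```bash
# Classify an image; results are saved under runs/predict-cls
python classify/predict.py --weights yolov5s-cls.pt --source data/images/bus.jpg
```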
<details>
<summary>Classification Checkpoints</summary>
<br>
YOLOv5-cls classification models were trained on ImageNet for 90 epochs using a 4xA100 instance. ResNet and EfficientNet models were trained alongside under identical settings for comparison. Models were exported to ONNX FP32 (CPU speed tests) and TensorRT FP16 (GPU speed tests). All speed tests were run on Google [Colab Pro](https://colab.research.google.com/signup) for reproducibility.
YOLOv5-cls classification models were trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) for 90 epochs using a 4xA100 instance. [ResNet](https://arxiv.org/abs/1512.03385) and [EfficientNet](https://arxiv.org/abs/1905.11946) models were trained alongside under identical settings for comparison. Models were exported to [ONNX](https://onnx.ai/) FP32 (CPU speed tests) and [TensorRT](https://developer.nvidia.com/tensorrt) FP16 (GPU speed tests). All speed tests were run on Google [Colab Pro](https://colab.research.google.com/signup) for reproducibility.
| Model | Size<br><sup>(pixels) | Acc<br><sup>top1 | Acc<br><sup>top5 | Training<br><sup>90 epochs<br>4xA100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TensorRT V100<br>(ms) | Params<br><sup>(M) | FLOPs<br><sup>@224 (B) |
| -------------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | -------------------------------------------- | ------------------------------ | ----------------------------------- | ------------------ | ---------------------- |
@@ -395,7 +395,7 @@ YOLOv5-cls classification models were trained on ImageNet for 90 epochs using a
<summary>Table Notes (click to expand)</summary>
- All checkpoints were trained for 90 epochs using the SGD optimizer with `lr0=0.001` and `weight_decay=5e-5` at an image size of 224 pixels, using default settings.<br>Training runs are logged at [https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2](https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2).
- **Accuracy** values (top-1 and top-5) represent single-model, single-scale performance on the [ImageNet-1k dataset](https://www.image-net.org/index.php).<br>Reproduce using: `python classify/val.py --data ../datasets/imagenet --img 224`
- **Accuracy** values (top-1 and top-5) represent single-model, single-scale performance on the [ImageNet-1k dataset](https://docs.ultralytics.com/datasets/classify/imagenet/).<br>Reproduce using: `python classify/val.py --data ../datasets/imagenet --img 224`
- **Speed** metrics are averaged over 100 inference images using a Google [Colab Pro V100 High-RAM instance](https://colab.research.google.com/signup).<br>Reproduce using: `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
- **Export** to ONNX (FP32) and TensorRT (FP16) was performed using `export.py`.<br>Reproduce using: `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`
@@ -407,7 +407,7 @@ YOLOv5-cls classification models were trained on ImageNet for 90 epochs using a
### Train
YOLOv5 classification training supports automatic download for datasets like MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof, and ImageNet using the `--data` argument. For example, start training on MNIST with `--data mnist`.
YOLOv5 classification training supports automatic download for datasets like [MNIST](https://docs.ultralytics.com/datasets/classify/mnist/), [Fashion-MNIST](https://docs.ultralytics.com/datasets/classify/fashion-mnist/), [CIFAR10](https://docs.ultralytics.com/datasets/classify/cifar10/), [CIFAR100](https://docs.ultralytics.com/datasets/classify/cifar100/), [Imagenette](https://docs.ultralytics.com/datasets/classify/imagenette/), [Imagewoof](https://docs.ultralytics.com/datasets/classify/imagewoof/), and [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) using the `--data` argument. For example, start training on MNIST with `--data mnist`.
```bash
# Train on a single GPU using the CIFAR-100 dataset
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128
```
@@ -497,17 +497,17 @@ For bug reports and feature requests related to YOLOv5, please visit [GitHub Iss
<br>
<div align="center">
<a href="https://github.com/ultralytics" title="GitHub"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<a href="https://www.linkedin.com/company/ultralytics/" title="LinkedIn"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<a href="https://twitter.com/ultralytics" title="Twitter"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<a href="https://youtube.com/ultralytics?sub_confirmation=1" title="YouTube"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<a href="https://www.tiktok.com/@ultralytics" title="TikTok"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<a href="https://ultralytics.com/bilibili" title="BiliBili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<a href="https://discord.com/invite/ultralytics" title="Discord"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
</div>

README.zh-CN.md

@@ -1,13 +1,13 @@
<div align="center">
<p>
<a href="https://www.ultralytics.com/blog/all-you-need-to-know-about-ultralytics-yolo11-and-its-applications" target="_blank">
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png" alt="Ultralytics YOLO 横幅"></a>
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png" alt="Ultralytics YOLO banner"></a>
</p>
[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar)
<div>
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI"></a>
<a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="YOLOv5 CI Testing"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv5 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
<a href="https://discord.com/invite/ultralytics"><img alt="Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a> <a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a> <a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>
@@ -18,84 +18,92 @@
</div>
<br>
YOLOv5 🚀 is the world's most loved vision AI, representing <a href="https://www.ultralytics.com/">Ultralytics</a> open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
Ultralytics YOLOv5 🚀 is a cutting-edge, state-of-the-art (SOTA) [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) model developed by [Ultralytics](https://www.ultralytics.com/). Based on the [PyTorch](https://pytorch.org/) framework, YOLOv5 is renowned for its ease of use, speed, and accuracy. It incorporates insights and best practices from extensive research and development, making it a popular choice for a wide range of vision AI tasks, including [object detection](https://docs.ultralytics.com/tasks/detect/), [image segmentation](https://docs.ultralytics.com/tasks/segment/), and [image classification](https://docs.ultralytics.com/tasks/classify/).
We hope the resources here help you get the most out of YOLOv5. Please consult the YOLOv5 <a href="https://docs.ultralytics.com/yolov5/">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/yolov5/issues/new/choose">GitHub</a> for support, and join our <a href="https://discord.com/invite/ultralytics">Discord</a> community for questions and discussions!
We hope the resources here help you get the most out of YOLOv5. Please browse the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for detailed information, raise an issue on [GitHub](https://github.com/ultralytics/yolov5/issues/new/choose) for support, and join our [Discord community](https://discord.com/invite/ultralytics) for questions and discussions!
To request an Enterprise License, please complete the form at [Ultralytics Licensing](https://www.ultralytics.com/license).
To request an Enterprise License, please fill out the form at [Ultralytics Licensing](https://www.ultralytics.com/license).
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="2%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="2%" alt="Ultralytics Discord"></a>
</div>
</div>
<br>
## <div align="center">YOLO11 🚀 新品发布</div>
## 🚀 YOLO11下一代进化
我们很高兴地宣布推出 Ultralytics YOLO11 🚀——我们最先进SOTA视觉模型的最新成果现已在 **[GitHub](https://github.com/ultralytics/ultralytics)** 上发布YOLO11 延续了我们在速度、准确性和易用性方面的优秀传统。不论您是处理目标检测、图像分割还是图像分类YOLO11 都能提供多样应用场景下卓越的性能和灵活性。
我们激动地宣布推出 **Ultralytics YOLO11** 🚀这是我们最先进SOTA视觉模型的最新进展YOLO11 现已在 [Ultralytics YOLO GitHub 仓库](https://github.com/ultralytics/ultralytics)发布,它继承了我们速度快、精度高和易于使用的传统。无论您是处理[目标检测](https://docs.ultralytics.com/tasks/detect/)、[实例分割](https://docs.ultralytics.com/tasks/segment/)、[姿态估计](https://docs.ultralytics.com/tasks/pose/)、[图像分类](https://docs.ultralytics.com/tasks/classify/)还是[定向目标检测OBB](https://docs.ultralytics.com/tasks/obb/)YOLO11 都能提供在各种应用中脱颖而出所需的性能和多功能性。
今天就开始体验,释放 YOLO11 的全部潜力吧!请访问 [Ultralytics 文档](https://docs.ultralytics.com/) 获取全面的指南和资源:
立即开始,释放 YOLO11 的全部潜力!访问 [Ultralytics 文档](https://docs.ultralytics.com/)获取全面的指南和资源:
[![PyPI version](https://badge.fury.io/py/ultralytics.svg)](https://badge.fury.io/py/ultralytics) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://www.pepy.tech/projects/ultralytics)
```bash
# Install the ultralytics package
pip install ultralytics
```
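Once installed, YOLO11 models are available through the `ultralytics` Python API; a minimal sketch, where the image path is a placeholder:

```python
from ultralytics import YOLO

# Download (on first use) and load the YOLO11 nano detection model
model = YOLO("yolo11n.pt")

# Run inference and inspect the detected bounding boxes
results = model("path/to/image.jpg")  # placeholder image path
for result in results:
    print(result.boxes)
```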
<div align="center">
<a href="https://www.ultralytics.com/yolo" target="_blank">
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png"></a>
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="Ultralytics YOLO Performance Comparison"></a>
</div>
## <div align="center">文档</div>
## 📚 文档
请参阅 [YOLOv5 文档](https://docs.ultralytics.com/yolov5/) 获取关于训练、测试和部署的完整指南。下方提供了快速入门示例。
请参阅 [YOLOv5 文档](https://docs.ultralytics.com/yolov5/),了解有关训练、测试和部署的完整文档。请参阅下文的快速入门示例。
<details open>
<summary>Install</summary>
Clone the repository and install the dependencies from [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.8.0**](https://www.python.org/) environment, including [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).
Clone the repository and install dependencies in a [**Python>=3.8.0**](https://www.python.org/) environment. Ensure you have [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/) installed.
```bash
git clone https://github.com/ultralytics/yolov5 # clone repository
# Clone the YOLOv5 repository
git clone https://github.com/ultralytics/yolov5
# Navigate to the cloned directory
cd yolov5
pip install -r requirements.txt # install dependencies
# Install required packages
pip install -r requirements.txt
```
</details>
<details open>
<summary>Inference</summary>
<summary>Inference with PyTorch Hub</summary>
YOLOv5 [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/) inference example. [Models](https://github.com/ultralytics/yolov5/tree/master/models) are downloaded automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).
Use YOLOv5 for inference via [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/). [Models](https://github.com/ultralytics/yolov5/tree/master/models) are downloaded automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).
```python
import torch
# Load a YOLOv5 model (yolov5n, yolov5s, yolov5m, yolov5l, yolov5x)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
# Load a YOLOv5 model (options: yolov5n, yolov5s, yolov5m, yolov5l, yolov5x)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # default: yolov5s
# Input source (URL, file, PIL, OpenCV, numpy array, or list)
img = "https://ultralytics.com/images/zidane.jpg"
# Define the input image source (URL, local file, PIL image, OpenCV, numpy array, or list)
img = "https://ultralytics.com/images/zidane.jpg"  # example image
# Run inference (batching, resizing, normalization handled automatically)
# Perform inference (handles batching, resizing, and normalization automatically)
results = model(img)
# Process results (optional: .print(), .show(), .save(), .crop(), .pandas())
results.print()
# Process the results (options: .print(), .show(), .save(), .crop(), .pandas())
results.print()  # print results to console
results.show()  # display results in a window
results.save()  # save results to runs/detect/exp
```
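Beyond `.print()`, `.show()`, and `.save()`, the results object can also be exported as a pandas DataFrame via the `.pandas()` option listed above, which is convenient for filtering detections; a short sketch continuing from the code above:

```python
# Convert detections to a DataFrame (columns: xmin, ymin, xmax, ymax, confidence, class, name)
df = results.pandas().xyxy[0]
print(df[df["confidence"] > 0.5])  # keep only confident detections
```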
</details>
@@ -103,19 +111,38 @@ results.print()
<details>
<summary>Inference with detect.py</summary>
`detect.py` runs inference on a variety of sources, automatically downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
The `detect.py` script runs inference on a variety of sources. It automatically downloads [models](https://github.com/ultralytics/yolov5/tree/master/models) from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saves the results to the `runs/detect` directory.
```bash
python detect.py --weights yolov5s.pt --source 0 # webcam
python detect.py --weights yolov5s.pt --source img.jpg # image
python detect.py --weights yolov5s.pt --source vid.mp4 # video
python detect.py --weights yolov5s.pt --source screen # screenshot
python detect.py --weights yolov5s.pt --source path/ # directory
python detect.py --weights yolov5s.pt --source list.txt # list of images
python detect.py --weights yolov5s.pt --source list.streams # list of streams
python detect.py --weights yolov5s.pt --source 'path/*.jpg' # glob
python detect.py --weights yolov5s.pt --source 'https://youtu.be/LNwODJXcvt4' # YouTube
python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
# Run inference with a webcam
python detect.py --weights yolov5s.pt --source 0
# Run inference on a local image file
python detect.py --weights yolov5s.pt --source img.jpg
# Run inference on a local video file
python detect.py --weights yolov5s.pt --source vid.mp4
# Run inference on a screen capture
python detect.py --weights yolov5s.pt --source screen
# Run inference on a directory of images
python detect.py --weights yolov5s.pt --source path/to/images/
# Run inference on a text file listing image paths
python detect.py --weights yolov5s.pt --source list.txt
# Run inference on a text file listing stream URLs
python detect.py --weights yolov5s.pt --source list.streams
# Run inference on images matching a glob pattern
python detect.py --weights yolov5s.pt --source 'path/to/*.jpg'
# Run inference on a YouTube video URL
python detect.py --weights yolov5s.pt --source 'https://youtu.be/LNwODJXcvt4'
# Run inference on an RTSP, RTMP, or HTTP stream
python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4'
```
</details>
@@ -123,49 +150,58 @@ python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4' #
<details>
<summary>Training</summary>
The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset results. [Models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) are downloaded automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are 1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) training can be significantly faster). Use the largest `--batch-size` possible, or pass `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown below are for V100-16GB.
The commands below demonstrate how to reproduce YOLOv5 results on the [COCO dataset](https://docs.ultralytics.com/datasets/detect/coco/). Both [models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) are downloaded automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are approximately 1/2/4/6/8 days on a single [NVIDIA V100 GPU](https://www.nvidia.com/en-us/data-center/v100/). Using [Multi-GPU training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) can significantly reduce training time. Use the largest `--batch-size` your hardware allows, or use `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). The batch sizes shown below are for V100-16GB GPUs.
```bash
# Train YOLOv5n on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
# Train YOLOv5s on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size 64
# Train YOLOv5m on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5m.yaml --batch-size 40
# Train YOLOv5l on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5l.yaml --batch-size 24
# Train YOLOv5x on COCO for 300 epochs
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --batch-size 16
```
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png" alt="YOLOv5 Training Results">
</details>
<details open>
<summary>Tutorials</summary>
- [Train Custom Data](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/) 🚀 RECOMMENDED
- [Tips for Best Training Results](https://docs.ultralytics.com/guides/model-training-tips/) ☘️
- [Multi-GPU Training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/)
- [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/) 🌟 NEW
- [TFLite, ONNX, CoreML, TensorRT Export](https://docs.ultralytics.com/yolov5/tutorials/model_export/) 🚀
- [NVIDIA Jetson Platform Deployment](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/) 🌟 NEW
- [Test-Time Augmentation (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)
- [Model Ensembling](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling/)
- [Model Pruning/Sparsity](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity/)
- [Hyperparameter Evolution](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution/)
- [Transfer Learning with Frozen Layers](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers/)
- [Architecture Summary](https://docs.ultralytics.com/yolov5/tutorials/architecture_description/) 🌟 NEW
- [Train and Deploy YOLO with Ultralytics HUB](https://www.ultralytics.com/hub) 🚀 RECOMMENDED
- [ClearML Logging](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration/)
- [YOLOv5 with Neural Magic's DeepSparse](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization/)
- [Comet Logging](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration/) 🌟 NEW
- **[Train Custom Data](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/)** 🚀 **RECOMMENDED**: Learn how to train YOLOv5 on your own datasets.
- **[Tips for Best Training Results](https://docs.ultralytics.com/guides/model-training-tips/)** ☘️: Improve your model's performance with expert tips.
- **[Multi-GPU Training](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/)**: Speed up training using multiple GPUs.
- **[PyTorch Hub Integration](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/)** 🌟 **NEW**: Easily load models using PyTorch Hub.
- **[Model Export (TFLite, ONNX, CoreML, TensorRT)](https://docs.ultralytics.com/yolov5/tutorials/model_export/)** 🚀: Convert your models to various deployment formats like [ONNX](https://onnx.ai/) or [TensorRT](https://developer.nvidia.com/tensorrt).
- **[NVIDIA Jetson Deployment](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/)** 🌟 **NEW**: Deploy YOLOv5 on [NVIDIA Jetson](https://developer.nvidia.com/embedded-computing) devices.
- **[Test-Time Augmentation (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)**: Enhance prediction accuracy with TTA.
- **[Model Ensembling](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling/)**: Combine multiple models for better performance.
- **[Model Pruning/Sparsity](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity/)**: Optimize models for size and speed.
- **[Hyperparameter Evolution](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution/)**: Automatically find the best training hyperparameters.
- **[Transfer Learning with Frozen Layers](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers/)**: Efficiently adapt pretrained models to new tasks using [transfer learning](https://www.ultralytics.com/glossary/transfer-learning).
- **[Architecture Summary](https://docs.ultralytics.com/yolov5/tutorials/architecture_description/)** 🌟 **NEW**: Understand the YOLOv5 model architecture.
- **[Ultralytics HUB Training](https://www.ultralytics.com/hub)** 🚀 **RECOMMENDED**: Train and deploy YOLO models using Ultralytics HUB.
- **[ClearML Logging](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration/)**: Integrate with [ClearML](https://clear.ml/) for experiment tracking.
- **[Neural Magic DeepSparse Integration](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization/)**: Accelerate inference with DeepSparse.
- **[Comet Logging](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration/)** 🌟 **NEW**: Log experiments using [Comet ML](https://www.comet.com/).
</details>
## <div align="center">集成</div>
## 🛠️ 集成
我们与领先 AI 平台的深度集成拓展了 Ultralytics 解决方案的功能,提升了数据集标注、训练、可视化和模型管理等任务的效率。了解 Ultralytics 如何与 [W&B](https://docs.wandb.ai/guides/integrations/ultralytics/)、[Comet](https://bit.ly/yolov8-readme-comet)、[Roboflow](https://roboflow.com/?ref=ultralytics) 以及 [OpenVINO](https://docs.ultralytics.com/integrations/openvino/) 合作,优化您的 AI 工作流程。
探索 Ultralytics 与领先 AI 平台的关键集成。这些合作增强了[数据集标注](https://www.ultralytics.com/glossary/data-labeling)、训练、可视化和[模型管理](https://www.ultralytics.com/blog/streamline-custom-vision-ai-ops)的能力。了解 Ultralytics 如何与 [Weights & Biases (W&B)](https://docs.wandb.ai/guides/integrations/ultralytics/)、[Comet ML](https://bit.ly/yolov5-readme-comet)、[Roboflow](https://roboflow.com/?ref=ultralytics) 和 [Intel OpenVINO](https://docs.ultralytics.com/integrations/openvino/) 合作以优化您的 AI 工作流程。
<br>
<a href="https://www.ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics 主动学习集成"></a>
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics Active Learning Integrations Banner"></a>
<br>
<br>
@@ -176,96 +212,98 @@ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --
<a href="https://docs.wandb.ai/guides/integrations/ultralytics/">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-wb.png" width="10%" alt="Weights & Biases logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://bit.ly/yolov8-readme-comet">
<a href="https://bit.ly/yolov5-readme-comet">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-comet.png" width="10%" alt="Comet ML logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="space">
<a href="https://bit.ly/yolov5-neuralmagic">
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="NeuralMagic logo"></a>
<img src="https://github.com/ultralytics/assets/raw/main/partners/logo-neuralmagic.png" width="10%" alt="Neural Magic logo"></a>
</div>
| Ultralytics HUB 🚀 | W&B | Comet ⭐ NEW | Neural Magic |
| :----------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------: |
| Streamline YOLO workflows: label, train, and deploy effortlessly with [Ultralytics HUB](https://www.ultralytics.com/hub). Try it now! | Track experiments, hyperparameters, and results with [Weights & Biases](https://docs.wandb.ai/guides/integrations/ultralytics/). | Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLOv5 models, resume training, and interactively visualize and debug predictions. | Run YOLO11 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic). |
| Ultralytics HUB 🚀 | W&B | Comet ⭐ NEW | Neural Magic |
| :----------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------: |
| Streamline YOLO workflows: label, train, and deploy effortlessly with [Ultralytics HUB](https://www.ultralytics.com/hub). Try now! | Seamlessly track experiments, hyperparameters, and results with [Weights & Biases](https://docs.wandb.ai/guides/integrations/ultralytics/). | Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLOv5 models, resume training, and interactively visualize and debug predictions. | Run YOLOv5 inference up to 6x faster on CPUs with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic). |
## <div align="center">Ultralytics HUB</div>
## ⭐ Ultralytics HUB
Experience seamless AI with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the all-in-one solution for data visualization and YOLOv5 and YOLOv8 🚀 model training and deployment, without any coding. Turn images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** now!
Experience seamless AI development with [Ultralytics HUB](https://www.ultralytics.com/hub) ⭐, the ultimate platform for building, training, and deploying [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models. Visualize datasets, train [YOLOv5](https://docs.ultralytics.com/models/yolov5/) and [YOLOv8](https://docs.ultralytics.com/models/yolov8/) 🚀 models, and deploy them to real-world applications without writing any code. Transform images into actionable insights using our cutting-edge tools and user-friendly [Ultralytics App](https://www.ultralytics.com/app-install). Start your journey for **Free** today!
<a align="center" href="https://www.ultralytics.com/hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png" alt="Ultralytics HUB Platform Screenshot"></a>
## <div align="center">为何选择 YOLOv5</div>
## 🤔 为什么选择 YOLOv5
YOLOv5 的设计初衷是让入门变得极为简单且易于学习。我们专注于真实世界的结果
YOLOv5 的设计注重简单性和易用性。我们优先考虑实际性能和可访问性
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png"></p>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png" alt="YOLOv5 Performance Chart"></p>
<details>
<summary>YOLOv5-P5 640 Figure</summary>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png"></p>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png" alt="YOLOv5 P5 640 Performance Chart"></p>
</details>
<details>
<summary>Figure Notes</summary>
- **COCO AP val** denotes the [mean Average Precision (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map) at [Intersection over Union (IoU)](https://www.ultralytics.com/glossary/intersection-over-union-iou) thresholds from 0.5 to 0.95, measured on the 5,000-image [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/) across various inference sizes (256 to 1536 pixels).
- **GPU Speed** measures the average inference time per image on the [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/) using an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p3/) at a batch size of 32.
- **EfficientDet** data comes from the [google/automl repository](https://github.com/google/automl) at a batch size of 8.
- **Reproduce** these results with: `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>
### Pretrained Checkpoints
This table shows the performance metrics for the various YOLOv5 models trained on the COCO dataset.

| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50 | Speed<br><sup>CPU b1<br>(ms) | Speed<br><sup>V100 b1<br>(ms) | Speed<br><sup>V100 b32<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| | | | | | | | | |
| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)<br>+ [TTA](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8<br>- |
<details>
<summary>Table Notes</summary>
- All checkpoints were trained for 300 epochs with default settings. Nano (n) and Small (s) models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyperparameters, while Medium (m), Large (l), and Extra-Large (x) models use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values denote single-model, single-scale performance on the [COCO val2017 dataset](https://docs.ultralytics.com/datasets/detect/coco/).<br>Reproduce with: `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** metrics are averaged over COCO val images on an [AWS p3.2xlarge V100 instance](https://aws.amazon.com/ec2/instance-types/p3/). Non-Maximum Suppression (NMS) time (~1 ms/img) is not included.<br>Reproduce with: `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** ([Test Time Augmentation](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/)) includes reflection and scale augmentations to improve accuracy.<br>Reproduce with: `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>
## <div align="center">分割</div>
## 🖼️ 分割
YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) introduced [instance segmentation](https://docs.ultralytics.com/tasks/segment/) models that achieve state-of-the-art performance. These models are designed for easy training, validation, and deployment. For full details, see the [release notes](https://github.com/ultralytics/yolov5/releases/v7.0) and explore the [YOLOv5 Segmentation Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) for quickstart examples.
<details>
<summary>Segmentation Checkpoints</summary>
<div align="center">
<a align="center" href="https://www.ultralytics.com/yolo" target="_blank">
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png"></a>
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png" alt="YOLOv5 Segmentation Performance Chart"></a>
</div>
YOLOv5 segmentation models were trained on the [COCO dataset](https://docs.ultralytics.com/datasets/segment/coco/) for 300 epochs at an image size of 640 pixels using A100 GPUs. Models were exported to [ONNX](https://onnx.ai/) FP32 for CPU speed tests and [TensorRT](https://developer.nvidia.com/tensorrt) FP16 for GPU speed tests. All speed tests were run on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for reproducibility.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Train time<br><sup>300 epochs<br>A100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TRT A100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@640 (B) |
| ------------------------------------------------------------------------------------------ | --------------------- | -------------------- | --------------------- | --------------------------------------------- | ------------------------------ | ------------------------------ | ------------------ | ---------------------- |
| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** |
| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 |
| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 |
| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 |
| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 |
- All checkpoints were trained for 300 epochs with the SGD optimizer (`lr0=0.01`, `weight_decay=5e-5`) at an image size of 640 pixels using default settings.<br>Training runs are logged at [https://wandb.ai/glenn-jocher/YOLOv5_v70_official](https://wandb.ai/glenn-jocher/YOLOv5_v70_official).
- **Accuracy** values denote single-model, single-scale performance on the COCO dataset.<br>Reproduce with: `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
- **Speed** metrics are averaged over 100 inference images on a [Colab Pro A100 High-RAM instance](https://colab.research.google.com/signup). Values indicate inference speed only (NMS adds approximately 1 ms per image).<br>Reproduce with: `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
- **Export** to ONNX (FP32) and TensorRT (FP16) was performed with `export.py`.<br>Reproduce with: `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`
</details>
@@ -274,88 +312,92 @@
### Train
YOLOv5 segmentation training supports automatic download of the [COCO128-seg dataset](https://docs.ultralytics.com/datasets/segment/coco8-seg/) via the `--data coco128-seg.yaml` argument. For the full [COCO-segments dataset](https://docs.ultralytics.com/datasets/segment/coco/), download it manually with `bash data/scripts/get_coco.sh --train --val --segments` and then train with `python train.py --data coco.yaml`.
```bash
# Train on a single GPU
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640
# Train with multi-GPU Distributed Data Parallel (DDP)
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3
```
### Validate
Validate the mask [mean Average Precision (mAP)](https://www.ultralytics.com/glossary/mean-average-precision-map) of YOLOv5s-seg on the COCO dataset:
```bash
# Download COCO val segments split (780 MB, 5000 images)
bash data/scripts/get_coco.sh --val --segments

# Validate the model
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640
```
### Predict
Use the pretrained YOLOv5m-seg.pt model to perform segmentation on `bus.jpg`:
```bash
# Run prediction
python segment/predict.py --weights yolov5m-seg.pt --source data/images/bus.jpg
```
```python
import torch

# Load the model from PyTorch Hub (note: inference support may vary)
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5m-seg.pt")
```
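If Hub inference does work in your environment, a minimal usage sketch might look like the following (illustrative only; the image path and the AutoShape-style call are assumptions, and support for `-seg` checkpoints varies by YOLOv5 version):

```python
import torch

# Load the segmentation checkpoint (as above)
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5m-seg.pt")

# Assumed AutoShape-style call; -seg support is not guaranteed on all versions
results = model("data/images/bus.jpg")
results.print()  # summarize predictions to stdout
results.save()  # write annotated images to runs/
```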
| ![Zidane segmentation example](https://user-images.githubusercontent.com/26833433/203113421-decef4c4-183d-4a0a-a6c2-6435b33bc5d3.jpg) | ![Bus segmentation example](https://user-images.githubusercontent.com/26833433/203113416-11fe0025-69f7-4874-a0a6-65d0bfe2999a.jpg) |
| :---: | :---: |
### Export
Export the YOLOv5s-seg model to ONNX and TensorRT formats:
```bash
# Export the model
python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0
```
</details>
## <div align="center">分类</div>
## 🏷️ 分类
YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases/v6.2) introduced support for [image classification](https://docs.ultralytics.com/tasks/classify/) model training, validation, and deployment. Check the [release notes](https://github.com/ultralytics/yolov5/releases/v6.2) for details and the [YOLOv5 Classification Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) for quickstart guides.
<details>
<summary>Classification Checkpoints</summary>
<br>
YOLOv5-cls classification models were trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) for 90 epochs using 4x A100 instances. [ResNet](https://arxiv.org/abs/1512.03385) and [EfficientNet](https://arxiv.org/abs/1905.11946) models were trained alongside under identical settings for comparison. Models were exported to [ONNX](https://onnx.ai/) FP32 (CPU speed tests) and [TensorRT](https://developer.nvidia.com/tensorrt) FP16 (GPU speed tests). All speed tests were run on Google [Colab Pro](https://colab.research.google.com/signup) for reproducibility.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Training<br><sup>90 epochs<br>4xA100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TensorRT V100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@224 (B) |
| -------------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | -------------------------------------------- | ------------------------------ | ----------------------------------- | ------------------ | ---------------------- |
| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** |
| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 |
| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 |
| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 |
| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 |
| | | | | | | | | |
| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 |
| [ResNet34](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 |
| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 |
| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 |
| | | | | | | | | |
| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 |
| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b1.pt) | 224 | 76.4 | 93.2 | 17:04 | 14.9 | 1.6 | 7.8 | 1.5 |
| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 |
| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 |
<details>
<summary>Table Notes (click to expand)</summary>
- All checkpoints were trained for 90 epochs with the SGD optimizer (`lr0=0.001`, `weight_decay=5e-5`) at an image size of 224 pixels using default settings.<br>Training runs are logged at [https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2](https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2).
- **Accuracy** values (top-1 and top-5) denote single-model, single-scale performance on the [ImageNet-1k dataset](https://docs.ultralytics.com/datasets/classify/imagenet/).<br>Reproduce with: `python classify/val.py --data ../datasets/imagenet --img 224`
- **Speed** metrics are averaged over 100 inference images on a Google [Colab Pro V100 High-RAM instance](https://colab.research.google.com/signup).<br>Reproduce with: `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
- **Export** to ONNX (FP32) and TensorRT (FP16) was performed with `export.py`.<br>Reproduce with: `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`
</details>
</details>
@@ -365,106 +407,107 @@
### Train
YOLOv5 classification training supports automatic download of datasets such as [MNIST](https://docs.ultralytics.com/datasets/classify/mnist/), [Fashion-MNIST](https://docs.ultralytics.com/datasets/classify/fashion-mnist/), [CIFAR10](https://docs.ultralytics.com/datasets/classify/cifar10/), [CIFAR100](https://docs.ultralytics.com/datasets/classify/cifar100/), [Imagenette](https://docs.ultralytics.com/datasets/classify/imagenette/), [Imagewoof](https://docs.ultralytics.com/datasets/classify/imagewoof/), and [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) via the `--data` argument. For example, start training on MNIST with `--data mnist`.
```bash
# Train on a single GPU using the CIFAR-100 dataset
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128
# Train on the ImageNet dataset using multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3
```
### Validate
Validate the accuracy of the YOLOv5m-cls model on the ImageNet-1k validation set:
```bash
# Download the ImageNet validation split (6.3 GB, 50,000 images)
bash data/scripts/get_imagenet.sh --val

# Validate the model
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224
```
### Predict
Use the pretrained YOLOv5s-cls.pt model to classify the image `bus.jpg`:
```bash
# Run prediction
python classify/predict.py --weights yolov5s-cls.pt --source data/images/bus.jpg
```
```python
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt") # 从 PyTorch Hub 加载
# 从 PyTorch Hub 加载模型
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")
```
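A rough inference sketch for the loaded classifier follows (assumptions: the checkpoint returns class logits and you handle preprocessing yourself; the repo's own `classify/predict.py` additionally applies ImageNet mean/std normalization, omitted here for brevity):

```python
import torch
from PIL import Image
from torchvision import transforms

# Load the classification checkpoint (as above)
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")
model.eval()

# Minimal manual preprocessing to a 224x224 tensor (normalization omitted)
preprocess = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
im = preprocess(Image.open("data/images/bus.jpg")).unsqueeze(0)

with torch.no_grad():
    probs = model(im).softmax(dim=1)  # convert logits to class probabilities
print(int(probs.argmax(dim=1)))  # predicted ImageNet class index
```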
### Export
一组训练好的 YOLOv5s-cls、ResNet 和 EfficientNet 模型导出为 ONNX 与 TensorRT 格式:
训练好的 YOLOv5s-cls、ResNet50 和 EfficientNet_b0 模型导出为 ONNX 和 TensorRT 格式:
```bash
# Export the models
python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
```
</details>
## <div align="center">环境</div>
## ☁️ 环境
Get started quickly with our pre-configured environments. Click the icons below for setup details.
<div align="center">
<a href="https://bit.ly/yolov5-paperspace-notebook">
<a href="https://bit.ly/yolov5-paperspace-notebook" title="在 Paperspace Gradient 上运行">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-gradient.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb">
<a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb" title="在 Google Colab 中打开">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-colab-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://www.kaggle.com/models/ultralytics/yolov5">
<a href="https://www.kaggle.com/models/ultralytics/yolov5" title="在 Kaggle 中打开">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-kaggle-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://hub.docker.com/r/ultralytics/yolov5">
<a href="https://hub.docker.com/r/ultralytics/yolov5" title="拉取 Docker 镜像">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-docker-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/">
<a href="https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/" title="AWS 快速入门指南">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-aws-small.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="5%" alt="" />
<a href="https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/">
<a href="https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial/" title="GCP 快速入门指南">
<img src="https://github.com/ultralytics/assets/releases/download/v0.0.0/logo-gcp-small.png" width="10%" /></a>
</div>
## <div align="center">贡献</div>
## 🤝 贡献
We welcome your contributions! Making YOLOv5 accessible and effective is a community effort. See our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started, and share your feedback through the [YOLOv5 Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). Thank you to all our contributors for making YOLOv5 better!
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->
[![Ultralytics open-source contributors](https://raw.githubusercontent.com/ultralytics/assets/main/im/image-contributors.png)](https://github.com/ultralytics/yolov5/graphs/contributors)
## 📜 License
Ultralytics offers two licensing options to accommodate different use cases:
- **AGPL-3.0 License**: An [OSI-approved](https://opensource.org/license/agpl-v3) open-source license, ideal for academic research, personal projects, and testing. It promotes open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) file for details.
- **Enterprise License**: Tailored for commercial applications, this license permits seamless integration of Ultralytics software and AI models into commercial products and services, bypassing the open-source requirements of AGPL-3.0. For commercial use cases, please contact us via [Ultralytics Licensing](https://www.ultralytics.com/license).
## 📧 Contact
For YOLOv5 bug reports and feature requests, please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For general questions, discussions, and community support, join our [Discord server](https://discord.com/invite/ultralytics)!
<br>
<div align="center">
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
</div>


@@ -2,35 +2,35 @@
# Flask REST API for YOLOv5
[Representational State Transfer (REST)](https://en.wikipedia.org/wiki/Representational_state_transfer) [Application Programming Interfaces (APIs)](https://en.wikipedia.org/wiki/API) provide a standardized way to expose [Machine Learning (ML)](https://www.ultralytics.com/glossary/machine-learning-ml) models for use by other services or applications. This directory contains an example REST API built with the [Flask](https://flask.palletsprojects.com/en/stable/) web framework to serve the [Ultralytics YOLOv5s](https://docs.ultralytics.com/models/yolov5/) model, loaded directly from [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/). This setup allows you to easily integrate YOLOv5 [object detection](https://docs.ultralytics.com/tasks/detect/) capabilities into your web applications or microservices, aligning with common [model deployment options](https://docs.ultralytics.com/guides/model-deployment-options/).
## 💻 Requirements
The primary requirement is the [Flask](https://flask.palletsprojects.com/en/stable/) web framework. You can install it using pip:
```shell
pip install Flask
```
You will also need `torch` and `yolov5`. These are implicitly handled by the script when it loads the model from PyTorch Hub. Ensure you have a functioning Python environment set up.
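For orientation, the sketch below shows roughly how such an endpoint can be wired up. It is a simplified stand-in for the bundled `restapi.py` (which remains the authoritative implementation), and it assumes the server returns normalized `xywhn` records to match the response format described below:

```python
import argparse
import io

import torch
from flask import Flask, request
from PIL import Image

app = Flask(__name__)
# Load YOLOv5s from PyTorch Hub (weights are downloaded on first run)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")


@app.route("/v1/object-detection/yolov5s", methods=["POST"])
def predict():
    # Decode the uploaded image and run inference at 640 pixels
    im = Image.open(io.BytesIO(request.files["image"].read()))
    results = model(im, size=640)
    # Normalized xcenter/ycenter/width/height records, as described below
    return results.pandas().xywhn[0].to_json(orient="records")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=5000)
    app.run(host="0.0.0.0", port=parser.parse_args().port)
```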
## ▶️ Run the API
Once Flask is installed, you can start the API server using the following command:
```shell
python restapi.py --port 5000
```
The server will begin listening on the specified port (defaulting to 5000). You can then send inference requests to the API endpoint using tools like [curl](https://curl.se/) or any other HTTP client.
To test the API with a local image file (e.g., `zidane.jpg` located in the `yolov5/data/images` directory relative to the script):
```shell
curl -X POST -F image=@../data/images/zidane.jpg 'http://localhost:5000/v1/object-detection/yolov5s'
```
The API processes the submitted image using the YOLOv5s model and returns the detection results in [JSON](https://www.json.org/json-en.html) format. Each object within the JSON array represents a detected item, including its class ID, confidence score, normalized [bounding box](https://www.ultralytics.com/glossary/bounding-box) coordinates (`xcenter`, `ycenter`, `width`, `height`), and class name.
```json
[
@@ -73,8 +73,8 @@
]
```
An example Python script, `example_request.py`, is included to demonstrate how to perform inference using the popular [requests](https://requests.readthedocs.io/en/latest/) library. This script offers a straightforward method for interacting with the running API programmatically.
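In essence, the script does something like the following minimal sketch (the URL and image path are illustrative and assume the server above is running locally):

```python
import requests

DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s"
IMAGE_PATH = "zidane.jpg"

# POST the image as multipart form data under the "image" field
with open(IMAGE_PATH, "rb") as f:
    response = requests.post(DETECTION_URL, files={"image": f})

response.raise_for_status()
print(response.json())  # list of detections with class, confidence, and box fields
```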
## 🤝 Contribute
Contributions to enhance this Flask API example are highly encouraged! Whether you're interested in adding support for different YOLO models, improving error handling, or implementing new features, please feel free to fork the repository, apply your changes, and submit a pull request. For more comprehensive contribution guidelines, please refer to the main [Ultralytics YOLOv5 repository](https://github.com/ultralytics/yolov5) and the general [Ultralytics documentation](https://docs.ultralytics.com/).


@@ -6,15 +6,15 @@
## About ClearML
[ClearML](https://clear.ml/) is an [open-source](https://github.com/clearml/clearml) MLOps platform designed to streamline your machine learning workflow and save valuable time ⏱️. Integrating ClearML with [Ultralytics YOLOv5](https://docs.ultralytics.com/models/yolov5/) allows you to leverage a powerful suite of tools:
- **Experiment Management:** 🔨 Track every YOLOv5 [training run](https://docs.ultralytics.com/modes/train/), including parameters, metrics, and outputs. See the [Ultralytics ClearML integration guide](https://docs.ultralytics.com/integrations/clearml/) for more details.
- **Data Versioning:** 🔧 Version and easily access your custom training data using the integrated ClearML Data Versioning Tool, similar to concepts in [DVC integration](https://docs.ultralytics.com/integrations/dvc/).
- **Remote Execution:** 🔦 [Remotely train and monitor](https://docs.ultralytics.com/hub/cloud-training/) your YOLOv5 models using ClearML Agent.
- **Hyperparameter Optimization:** 🔬 Achieve optimal [Mean Average Precision (mAP)](https://docs.ultralytics.com/guides/yolo-performance-metrics/) using ClearML's [Hyperparameter Optimization](https://docs.ultralytics.com/guides/hyperparameter-tuning/) capabilities.
- **Model Deployment:** 🔭 Turn your trained YOLOv5 model into an API with just a few commands using ClearML Serving, complementing [Ultralytics deployment options](https://docs.ultralytics.com/guides/model-deployment-options/).
You can choose to use only the experiment manager or combine multiple tools into a comprehensive [MLOps](https://www.ultralytics.com/glossary/machine-learning-operations-mlops) pipeline.
![ClearML scalars dashboard](https://raw.githubusercontent.com/thepycoder/clearml_screenshots/main/experiment_manager_with_compare.gif)
@@ -69,7 +69,7 @@
- Hyperparameters and configuration settings
- Model checkpoints (use `--save-period n` to save every `n` epochs)
- Console output logs
- Performance metrics (mAP_0.5, mAP_0.5:0.95, [precision, recall](https://docs.ultralytics.com/guides/yolo-performance-metrics/), [losses](https://docs.ultralytics.com/reference/utils/loss/), [learning rates](https://www.ultralytics.com/glossary/learning-rate), etc.)
- System details (machine specs, runtime, creation date)
- Generated plots (e.g., label correlogram, [confusion matrix](https://www.ultralytics.com/glossary/confusion-matrix))
- Images with bounding boxes per epoch


@@ -2,15 +2,15 @@
<img src="https://cdn.comet.ml/img/notebook_logo.png">
# Using Ultralytics YOLOv5 with Comet
Welcome to the guide on integrating [Ultralytics YOLOv5](https://github.com/ultralytics/yolov5) with [Comet](https://www.comet.com/site/)! Comet provides powerful tools for experiment tracking, model management, and visualization, enhancing your [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) workflow. This document details how to leverage Comet to monitor training, log results, manage datasets, and optimize hyperparameters for your YOLOv5 models.
## 🧪 About Comet
[Comet](https://www.comet.com/site/) builds tools that help data scientists, engineers, and team leaders accelerate and optimize machine learning and [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models.
Track and visualize model metrics in real-time, save your [hyperparameters](https://docs.ultralytics.com/guides/hyperparameter-tuning/), datasets, and model checkpoints, and visualize your model predictions with Comet Custom Panels! Comet ensures you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes. Find more information in the [Comet Documentation](https://www.comet.com/docs/v2/).
## 🚀 Getting Started
@@ -35,7 +35,7 @@
export COMET_PROJECT_NAME=<Your Comet Project Name> # Defaults to 'yolov5' if not set
```
Find your API key in your [Comet Account Settings](https://www.comet.com/).
2. **Configuration File:** Create a `.comet.config` file in your working directory with the following content:
```ini
@@ -46,14 +46,14 @@
### Run the Training Script
Execute the YOLOv5 [training script](https://docs.ultralytics.com/modes/train/). Comet will automatically start logging your run.
```shell
# Train YOLOv5s on COCO128 for 5 epochs
python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt
```
That's it! Comet automatically logs hyperparameters, command-line arguments, and training/validation metrics. Visualize and analyze your runs in the Comet UI. For more details on training, see the [Ultralytics Training documentation](https://docs.ultralytics.com/modes/train/).
<img width="1920" alt="Comet UI showing YOLOv5 training metrics" src="https://user-images.githubusercontent.com/26833433/202851203-164e94e1-2238-46dd-91f8-de020e9d6b41.png">
@@ -74,17 +74,17 @@
### Metrics
- **Losses:** Box Loss, Object Loss, Classification Loss (Training and Validation).
- **Performance:** [mAP@0.5](https://www.ultralytics.com/glossary/mean-average-precision-map), mAP@0.5:0.95 (Validation). Learn more about these metrics in the [YOLO Performance Metrics guide](https://docs.ultralytics.com/guides/yolo-performance-metrics/).
- **[Precision](https://www.ultralytics.com/glossary/precision) and [Recall](https://www.ultralytics.com/glossary/recall):** Validation data metrics.
### Parameters
- **Model Hyperparameters:** Configuration used for the model.
- **Command Line Arguments:** All arguments passed via the [CLI](https://docs.ultralytics.com/usage/cli/).
### Visualizations
- **[Confusion Matrix](https://www.ultralytics.com/glossary/confusion-matrix):** Model predictions on validation data, useful for understanding classification performance ([Wikipedia definition](https://en.wikipedia.org/wiki/Confusion_matrix)).
- **Curves:** PR and F1 curves across all classes.
- **Label Correlogram:** Correlation visualization of class labels.
@@ -104,7 +104,7 @@
export COMET_LOG_PREDICTIONS=true # Disable prediction logging if set to false. Default: true
```
Refer to the [Comet documentation](https://www.comet.com/docs/v2/) for more configuration options.
### Logging Checkpoints with Comet
@@ -120,15 +120,15 @@
--save-period 1 # Save checkpoint every epoch
```
Checkpoints will appear in the "Assets & Artifacts" tab of your Comet experiment. Learn more about model management in the [Comet Model Registry](https://www.comet.com/docs/v2/guides/model-registry/overview/?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github_readme).
Checkpoints will appear in the "Assets & Artifacts" tab of your Comet experiment. Learn more about model management in the [Comet Model Registry documentation](https://www.comet.com/docs/v2/guides/model-registry/).
### Logging Model Predictions
By default, model predictions (images, ground truth labels, [bounding boxes](https://www.ultralytics.com/glossary/bounding-box)) for the validation set are logged. Control the logging frequency using the `--bbox_interval` argument, which specifies logging every Nth batch per epoch.
**Note:** The YOLOv5 validation dataloader defaults to a batch size of 32. Adjust `--bbox_interval` accordingly.
Visualize predictions using Comet's Object Detection Custom Panel. See an [example project using the Panel here](https://www.comet.com/examples/comet-example-yolov5?shareable=YcwMiJaZSXfcEXpGOHDD12vA1&utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github_readme).
```shell
python train.py \
@@ -169,11 +169,11 @@
## 💾 Dataset Management with Comet Artifacts
Use [Comet Artifacts](https://www.comet.com/docs/v2/guides/data-management/artifacts/) to version, store, and manage your datasets.
### Uploading a Dataset
Upload your dataset using the `--upload_dataset` flag. Ensure your dataset follows the structure described in the [Ultralytics Datasets documentation](https://docs.ultralytics.com/datasets/) and the dataset config [YAML](https://www.ultralytics.com/glossary/yaml) file matches the format of `coco128.yaml` (see the [COCO128 dataset docs](https://docs.ultralytics.com/datasets/detect/coco128/)).
```shell
python train.py \
@@ -225,7 +225,7 @@
If a training run is interrupted (e.g., due to connection issues), you can resume it using the `--resume` flag with the Comet Run Path (`comet://<your_workspace>/<your_project>/<experiment_id>`).
This restores the model state, hyperparameters, arguments, and downloads necessary Artifacts, continuing logging to the existing Comet Experiment. Learn more about [resuming runs in the Comet documentation](https://www.comet.com/docs/v2/guides/experiment-logging/resume-experiment/).
```shell
python train.py \
@@ -234,11 +234,11 @@
## 🔍 Hyperparameter Optimization (HPO)
YOLOv5 integrates with the [Comet Optimizer](https://www.comet.com/docs/v2/guides/hyperparameter-optimization/) for easy hyperparameter sweeps and visualization. This helps in finding the best set of parameters for your model, a process often referred to as [Hyperparameter Tuning](https://docs.ultralytics.com/guides/hyperparameter-tuning/).
### Configuring an Optimizer Sweep
Create a [JSON](https://www.ultralytics.com/glossary/json) configuration file defining the sweep parameters, search strategy, and objective metric. An example is provided at `utils/loggers/comet/optimizer_config.json`.
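For illustration, a sketch of what such a config might contain, written out from Python. The exact keys and values below are assumptions based on the general Comet Optimizer config format; defer to the bundled `utils/loggers/comet/optimizer_config.json` for the authoritative example:

```python
import json

# Assumed Comet Optimizer config structure (algorithm, spec, parameters)
config = {
    "algorithm": "bayes",  # search strategy
    "spec": {"maxCombo": 10, "objective": "maximize", "metric": "metrics/mAP_0.5"},
    "parameters": {"lr0": {"type": "float", "min": 1e-5, "max": 1e-1}},
}

# Write the sweep definition to a JSON file for hpo.py / comet optimizer
with open("my_optimizer_config.json", "w") as f:
    json.dump(config, f, indent=2)
```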
Run the sweep using the `hpo.py` script:
@@ -262,7 +262,7 @@
```shell
comet optimizer -j <num_workers> utils/loggers/comet/hpo.py \
  utils/loggers/comet/optimizer_config.json
```
Replace `<num_workers>` with the desired number of parallel processes.