[DOC] Add YOLOv8 docs (#461)

* Add YOLOv8 docs

* update

* update

* fix some error

* update en

* fix comment

* update
Haian Huang(深度眸) 2023-01-17 15:12:02 +08:00 committed by GitHub
parent feef44da23
commit c32ca4549f
6 changed files with 491 additions and 0 deletions


@ -160,6 +160,7 @@ MMYOLO usage is almost identical to MMDetection; all tutorials are general-purpose, and you can also
- [Algorithm principles and implementation with YOLOv5](docs/zh_cn/algorithm_descriptions/yolov5_description.md)
- [Algorithm principles and implementation with YOLOv6](docs/zh_cn/algorithm_descriptions/yolov6_description.md)
- [Algorithm principles and implementation with RTMDet](docs/zh_cn/algorithm_descriptions/rtmdet_description.md)
- [Algorithm principles and implementation with YOLOv8](docs/zh_cn/algorithm_descriptions/yolov8_description.md)
- Algorithm deployment


@ -14,3 +14,5 @@ Algorithm principles and implementation
:maxdepth: 1
yolov5_description.md
yolov8_description.md
rtmdet_description.md


@ -0,0 +1,241 @@
# Algorithm principles and implementation with YOLOv8
## 0 Introduction
<div align=center >
<img alt="YOLOv8-P5_structure" src="https://user-images.githubusercontent.com/27466624/211974251-8de633c8-090c-47c9-ba52-4941dc9e3a48.jpg"/>
Figure 1: YOLOv8-P5
</div>
RangeKing@github provides the graph above. Thanks, RangeKing!
YOLOv8 is the next major update of YOLOv5, open sourced by Ultralytics on January 10, 2023. It currently supports image classification, object detection and instance segmentation tasks.
<div align=center >
<img alt="YOLOv8-logo" src="https://user-images.githubusercontent.com/17425982/212823787-44031e62-e374-4851-8267-4e56e299473a.png"/>
Figure 2: YOLOv8 logo
</div>
According to the official description, Ultralytics YOLOv8 is the latest version of the YOLO object detection and image segmentation model developed by Ultralytics. YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. These include a new backbone network, a new anchor-free detection head, and a new loss function. YOLOv8 is also highly efficient and can be run on a variety of hardware platforms, from CPUs to GPUs.
However, instead of naming the open source library YOLOv8, Ultralytics named it ultralytics, because Ultralytics positions the library as an algorithmic framework rather than a specific algorithm, with a major focus on extensibility. The library is expected to be usable not only for the YOLO model family, but also for non-YOLO models and various tasks such as classification, segmentation and pose estimation.
Overall, YOLOv8 is a powerful and flexible tool for object detection and image segmentation that offers the best of both worlds: **the SOTA technology and the ability to use and compare all previous YOLO versions.**
<div align=center >
<img alt="YOLOv8-table" src="https://user-images.githubusercontent.com/17425982/212007736-f592bc70-3959-4ff6-baf7-a93c7ad1d882.png"/>
Figure 3: YOLOv8 performance
</div>
YOLOv8 official open source address: [this](https://github.com/ultralytics/ultralytics)
MMYOLO open source address for YOLOv8: [this](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov8/)
The following table shows the official results of mAP, number of parameters and FLOPs tested on the COCO Val 2017 dataset. It is evident that YOLOv8 has significantly improved precision compared to YOLOv5. However, the number of parameters and FLOPs of the N/S/M models have significantly increased. Additionally, it can be observed that the inference speed of YOLOv8 is slower in comparison to most of the YOLOv5 models.
| **model** | **YOLOv5 mAP** | **params (M)** | **FLOPs@640 (B)** | **YOLOv8 mAP** | **params (M)** | **FLOPs@640 (B)** |
| --------- | -------------- | -------------- | ----------------- | -------------- | -------------- | ----------------- |
| n | 28.0 (300e) | 1.9 | 4.5 | 37.3 (500e) | 3.2 | 8.7 |
| s | 37.4 (300e) | 7.2 | 16.5 | 44.9 (500e) | 11.2 | 28.6 |
| m | 45.4 (300e) | 21.2 | 49.0 | 50.2 (500e) | 25.9 | 78.9 |
| l | 49.0 (300e) | 46.5 | 109.1 | 52.9 (500e) | 43.7 | 165.2 |
| x | 50.7 (300e) | 86.7 | 205.7 | 53.9 (500e) | 68.2 | 257.8 |
It is worth mentioning that the recent YOLO series have shown significant performance improvements on the COCO dataset. However, their generalizability on custom datasets has not been extensively tested, and this will therefore be a focus in the future development of MMYOLO.
Before reading this article, if you are not familiar with YOLOv5, YOLOv6 and RTMDet, you can read the detailed explanation of [YOLOv5 and its implementation](https://mmyolo.readthedocs.io/en/latest/algorithm_descriptions/yolov5_description.html).
## 1 YOLOv8 Overview
The core features and modifications of YOLOv8 can be summarized as follows:
1. **A new state-of-the-art (SOTA) model is proposed, featuring an object detection model for P5 640 and P6 1280 resolutions, as well as a YOLACT-based instance segmentation model. The model also includes different size options with N/S/M/L/X scales, similar to YOLOv5, to cater to various scenarios.**
2. **The backbone network and neck module are based on the YOLOv7 ELAN design concept, replacing YOLOv5's C3 module with the C2f module. However, the C2f module contains many operations such as Split and Concat that are not as deployment-friendly as before.**
3. **The Head module has been updated to the current mainstream decoupled structure, separating the classification and detection heads, and switching from Anchor-Based to Anchor-Free.**
4. **The loss calculation adopts the TaskAlignedAssigner in TOOD and introduces the Distribution Focal Loss to the regression loss.**
5. **In the data augmentation part, Mosaic is disabled during the last 10 training epochs, the same strategy used in YOLOX training.**
**As can be seen from the above summary, YOLOv8 mainly draws on the design of recently proposed algorithms such as YOLOX, YOLOv6, YOLOv7 and PPYOLOE.**
Next, we will introduce the various improvements in the YOLOv8 model in detail in five parts: model structure design, loss calculation, training strategy, model inference process and data augmentation.
## 2 Model structure design
Figure 1 shows the model structure diagram based on the official code of YOLOv8. **If you like this style of model structure diagram, welcome to check out the model structure diagrams in the algorithm READMEs of MMYOLO, which currently cover YOLOv5, YOLOv6, YOLOX, RTMDet and YOLOv8.**
Comparing the YOLOv5 and YOLOv8 yaml configuration files without considering the head module, you can see that the changes are minor.
<div align=center >
<img alt="yaml" src="https://user-images.githubusercontent.com/17425982/212008977-28c3fc7b-ee00-4d56-b912-d77ded585d78.png"/>
Figure 4: YOLOv5 and YOLOv8 YAML diff
</div>
The structure on the left is YOLOv5-s and on the right is YOLOv8-s. The specific changes in the backbone network and neck module are:
- The kernel of the first convolutional layer has been changed from 6x6 to 3x3
- All C3 modules are replaced by C2f, and the structure is as follows, with more skip connections and additional split operations.
<div align=center >
<img alt="module" src="https://user-images.githubusercontent.com/17425982/212009208-92f45c23-a024-49bb-a2ee-bb6f87adcc92.png"/>
Figure 5: YOLOv5 and YOLOv8 module diff
</div>
- 2 convolutional connection layers have been removed from the neck module
- The number of C2f blocks in the backbone stages has been changed from 3-6-9-3 to 3-6-6-3.
- **Looking at the N/S/M/L/X models, we can see that the N/S and L/X pairs only changed the scaling factors, but the channel numbers of the S/M/L backbone networks differ and do not follow the same scaling-factor principle. The main reason for this design is that the channel settings under one set of scaling factors are not optimal, and the YOLOv7 network design does not follow a single set of scaling factors for all models either.**
The most significant changes in the model lie in the head module. The head module has been changed from the original coupled structure to a decoupled one, and its style has been changed from **YOLOv5's Anchor-Based to Anchor-Free**. The structure is shown below.
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212009547-189e14aa-6f93-4af0-8446-adf604a46b95.png"/>
Figure 6: YOLOv8 Head
</div>
As demonstrated, the removal of the objectness branch and the retention of only the decoupled classification and regression branches stand as the major differences. Additionally, the regression branch now employs integral form representation as proposed in the Distribution Focal Loss.
## 3 Loss calculation
The loss calculation process consists of 2 parts: the sample assignment strategy and loss calculation.
The majority of contemporary detectors employ dynamic sample assignment strategies, such as YOLOX's simOTA, TOOD's TaskAlignedAssigner, and RTMDet's DynamicSoftLabelAssigner. Given the superiority of dynamic assignment strategies, the YOLOv8 algorithm directly incorporates the one employed in TOOD's TaskAlignedAssigner.
The matching strategy of TaskAlignedAssigner can be summarized as follows: positive samples are selected based on the weighted scores of classification and regression.
```{math}
t = s^{\alpha} \cdot u^{\beta}
```
`s` is the predicted score for the ground-truth category, and `u` is the IoU between the predicted bounding box and the ground-truth bounding box; their product measures the alignment of the two tasks.
1. For each ground truth, the task-aligned assigner calculates the `alignment metric` for each anchor by taking the weighted product of two values: the predicted classification score of the corresponding class, and the Intersection over Union (IoU) between the predicted bounding box and the Ground Truth bounding box.
2. For each Ground Truth, the top-k samples with the largest `alignment_metrics` values are directly selected as positive samples, as in the sketch below.
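To make the assignment concrete, here is a minimal, illustrative sketch of the top-k selection; the `task_aligned_assign` helper and its `alpha`/`beta` defaults are our assumptions following TOOD, not MMYOLO's actual implementation (the real assigner additionally restricts candidates to anchors whose centers fall inside the GT box and resolves anchors matched to multiple GTs):
```Python
import torch

def task_aligned_assign(cls_scores, ious, gt_labels, topk=10, alpha=1.0, beta=6.0):
    """Toy TaskAligned assignment for one image.

    cls_scores: (num_anchors, num_classes) predicted classification scores
    ious:       (num_gts, num_anchors) IoU between GT boxes and predicted boxes
    gt_labels:  (num_gts,) class index of each GT
    Returns a (num_gts, num_anchors) boolean mask of positive samples.
    """
    # s: per-anchor score of the class matching each GT
    s = cls_scores[:, gt_labels].T                 # (num_gts, num_anchors)
    # alignment metric t = s^alpha * u^beta
    metrics = s.pow(alpha) * ious.pow(beta)
    # directly pick the top-k anchors per GT as positives
    _, topk_idxs = metrics.topk(topk, dim=1)
    pos_mask = torch.zeros_like(metrics, dtype=torch.bool)
    pos_mask.scatter_(1, topk_idxs, True)
    return pos_mask

# e.g. 3 GTs, 100 anchors, 80 classes
mask = task_aligned_assign(torch.rand(100, 80), torch.rand(3, 100),
                           torch.tensor([0, 5, 7]))
```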
The loss calculation consists of 2 parts, classification and regression, without the objectness loss used in previous models.
- The classification branch still uses BCE Loss.
- The regression branch employs both Distribution Focal Loss and CIoU Loss.
The three losses are combined with specific weights.
## 4 Data augmentation
YOLOv8's data augmentation is similar to YOLOv5's, except that it stops the Mosaic augmentation in the final 10 epochs, as proposed in YOLOX. The data processing pipelines are illustrated in the diagram below.
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212815248-38384da9-b289-468e-8414-ab3c27ee2026.png"/>
Figure 7: pipeline
</div>
The intensity of data augmentation required varies across model scales, so the hyperparameters of the scaled models are adjusted accordingly. For larger models, techniques such as MixUp and CopyPaste are typically employed. The result of data augmentation can be seen in the example below:
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212815840-063524e1-d754-46b1-9efc-61d17c03fd0e.png"/>
Figure 8: results
</div>
The above visualization result can be obtained by running the [browse_dataset](https://github.com/open-mmlab/mmyolo/blob/dev/tools/analysis_tools/browse_dataset.py) script.
As the data augmentation process utilized in YOLOv8 is similar to YOLOv5, we will not delve into the specifics within this article. For a more in-depth understanding of each data transformation, we recommend reviewing the [YOLOv5 algorithm analysis document](https://mmyolo.readthedocs.io/en/latest/algorithm_descriptions/yolov5_description.html#id2) in MMYOLO.
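For reference, the Mosaic shutdown in the final epochs can be expressed with a pipeline-switch hook in an MMYOLO-style config. The sketch below assumes MMDetection's `PipelineSwitchHook` and a simplified, hypothetical Mosaic-free `train_pipeline_stage2`; consult the actual YOLOv8 config in MMYOLO for the exact fields:
```Python
max_epochs = 500
close_mosaic_epochs = 10

# Simplified second-stage pipeline without Mosaic/MixUp (illustrative only).
train_pipeline_stage2 = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='mmdet.Resize', scale=(640, 640), keep_ratio=True),
    dict(type='mmdet.PackDetInputs')
]

# Swap the training pipeline when the final stage begins.
custom_hooks = [
    dict(
        type='mmdet.PipelineSwitchHook',
        switch_epoch=max_epochs - close_mosaic_epochs,
        switch_pipeline=train_pipeline_stage2)
]
```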
## 5 Training strategy
The distinctions between the training strategies of YOLOv8 and YOLOv5 are minimal. The most notable variation is that the total number of training epochs for YOLOv8 has been raised from 300 to 500, significantly extending the duration of training. As an illustration, the training strategy for YOLOv8-S can be outlined as follows:
| config | YOLOv8-s P5 hyp |
| ---------------------- | ------------------------------ |
| optimizer | SGD |
| base learning rate | 0.01 |
| base weight decay | 0.0005 |
| optimizer momentum | 0.937 |
| batch size | 128 |
| learning rate schedule | linear |
| training epochs | **500** |
| warmup iterations | max(1000, 3 * iters_per_epoch) |
| input size | 640x640 |
| EMA decay | 0.9999 |
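To make the "linear" schedule row concrete, here is a sketch of the YOLOv5-style linear learning-rate factor that YOLOv8 follows as well; the final LR fraction `lrf = 0.01` is an assumption carried over from YOLOv5's hyperparameters:
```Python
def linear_lr_factor(epoch: int, max_epochs: int = 500, lrf: float = 0.01) -> float:
    """Multiplicative LR factor: decays linearly from 1.0 at epoch 0 towards lrf."""
    return (1 - epoch / max_epochs) * (1.0 - lrf) + lrf

base_lr = 0.01
for epoch in (0, 250, 499):
    # 0.01 at the start, ~0.005 mid-training, ~0.0001 near the end
    print(epoch, base_lr * linear_lr_factor(epoch))
```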
## 6 Inference process
The inference process of YOLOv8 is almost the same as YOLOv5's. The only difference is that the integral-form bbox from Distribution Focal Loss needs to be decoded into a regular 4-dimensional bbox; the subsequent calculation process is the same as in YOLOv5.
Taking COCO 80 class as an example, assuming that the input image size is 640x640, the inference process implemented in MMYOLO is shown as follows.
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212816206-33815716-3c12-49a2-9c37-0bd85f941bec.png"/>
Figure 9: results
</div>
The inference and post-processing process is:
**(1) Decoding bounding box**
The discrete distribution predicted for the distance between the center and each box boundary is integrated into its mathematical expectation, yielding the final distances, as sketched below.
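This expectation can be sketched as a softmax over the predicted distance distribution followed by a weighted sum; the sketch assumes `reg_max = 16` bins per side as in the YOLOv8 head, and the `dfl_decode` helper name is ours:
```Python
import torch
import torch.nn.functional as F

def dfl_decode(reg_logits: torch.Tensor, reg_max: int = 16) -> torch.Tensor:
    """reg_logits: (N, 4 * reg_max) raw head outputs, reg_max bins per side.
    Returns (N, 4) expected distances from each anchor point to the
    left/top/right/bottom boundaries (in feature-map units)."""
    n = reg_logits.shape[0]
    probs = F.softmax(reg_logits.view(n, 4, reg_max), dim=-1)  # one distribution per side
    bins = torch.arange(reg_max, dtype=probs.dtype)            # bin indices 0..reg_max-1
    return (probs * bins).sum(dim=-1)                          # expectation per side
```
The official implementation realizes the same expectation with a Softmax followed by a fixed 1x1 convolution whose weights are the bin indices.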
**(2) Dimensional transformation**
YOLOv8 outputs feature maps at three scales: `80x80`, `40x40` and `20x20`. The head module outputs 6 feature maps in total: one classification map and one regression map per scale.
The category prediction branches and bbox prediction branches of the 3 scales are concatenated and dimensionally transformed. For the convenience of subsequent processing, the original channel dimension is transposed to the end; the category prediction branch and bbox prediction branch then have shapes (b, 80x80+40x40+20x20, 80) = (b, 8400, 80) and (b, 8400, 4), respectively.
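The flatten-and-concat step can be sketched as follows, assuming batch size `b = 1`, 80 classes, and bbox predictions already decoded to 4 values per position:
```Python
import torch

b, num_classes = 1, 80
cls_maps = [torch.randn(b, num_classes, s, s) for s in (80, 40, 20)]
box_maps = [torch.randn(b, 4, s, s) for s in (80, 40, 20)]

# (b, C, H, W) -> (b, H*W, C) per level, then concatenate along the anchor axis
flat_cls = torch.cat([m.flatten(2).transpose(1, 2) for m in cls_maps], dim=1)
flat_box = torch.cat([m.flatten(2).transpose(1, 2) for m in box_maps], dim=1)
print(flat_cls.shape, flat_box.shape)  # (1, 8400, 80) and (1, 8400, 4)
```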
**(3) Scale restoration**
The classification prediction branch applies sigmoid, whereas the bbox prediction branch needs to be decoded to the xyxy format and converted to the original scale of the input image.
**(4) Thresholding**
Iterate through each image in the batch and apply `score_thr` thresholding. In this process, multi_label and nms_pre also need to be considered to ensure that the number of detected bboxes after filtering is no more than nms_pre.
**(5) Reduction to the original image scale and NMS**
Reusing the preprocessing parameters, the remaining bboxes are first rescaled to the original image scale, and then NMS is performed. The final number of bboxes cannot exceed `max_per_img`, as in the sketch below.
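Steps (4) and (5) can be sketched together for a single image as below; the threshold values are illustrative assumptions rather than MMYOLO's exact defaults, and only the single-label branch is shown for brevity:
```Python
import torch
from torchvision.ops import batched_nms

def filter_and_nms(boxes, scores, score_thr=0.25, nms_pre=30000,
                   iou_thr=0.7, max_per_img=300):
    """boxes: (N, 4) xyxy on the original image scale; scores: (N, num_classes)."""
    cls_scores, labels = scores.max(dim=1)      # best class per box (single-label case)
    keep = cls_scores > score_thr               # (4) score thresholding
    boxes, cls_scores, labels = boxes[keep], cls_scores[keep], labels[keep]
    if boxes.shape[0] > nms_pre:                # cap the boxes fed into NMS
        cls_scores, idxs = cls_scores.topk(nms_pre)
        boxes, labels = boxes[idxs], labels[idxs]
    keep = batched_nms(boxes, cls_scores, labels, iou_thr)  # (5) class-aware NMS
    keep = keep[:max_per_img]
    return boxes[keep], cls_scores[keep], labels[keep]
```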
Special note: **The Batch shape inference strategy, which is present in YOLOv5, is currently not activated in YOLOv8. A quick test in MMYOLO shows that activating the Batch shape strategy can increase AP by roughly 0.1 to 0.2.**
## 7 Feature map visualization
A comprehensive set of feature map visualization tools are provided in MMYOLO to help users visualize the feature maps.
Take the YOLOv8-s model as an example. The first step is to download the official weights, and then convert them to MMYOLO by using the [yolov8_to_mmyolo](https://github.com/open-mmlab/mmyolo/blob/dev/tools/model_converters/yolov8_to_mmyolo.py) script. Note that the script must be placed under the official repository in order to run correctly.
Assuming that you want to visualize the effects of the 3 feature maps output by the backbone and that the weights are named `mmyolov8s.pth`, run the following command:
```bash
cd mmyolo
python demo/featmap_vis_demo.py demo/demo.jpg configs/yolov8/yolov8_s_syncbn_fast_8xb16-500e_coco.py mmyolov8s.pth --channel-reduction squeeze_mean
```
In particular, to ensure that the feature map and image are shown aligned, the original `test_pipeline` configuration needs to be replaced with the following:
```Python
test_pipeline = [
    dict(
        type='LoadImageFromFile',
        file_client_args=_base_.file_client_args),
    dict(type='mmdet.Resize', scale=img_scale, keep_ratio=False),  # LetterResize is replaced with mmdet.Resize here
    dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor'))
]
```
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212816319-9ac19484-987a-40ac-a0fe-2c13a7048df7.png"/>
Figure 10: featmap
</div>
From the above figure, we can see that the different output feature maps are mainly responsible for predicting objects at different scales.
We can also visualize the 3 output feature maps of the neck layer.
```bash
cd mmyolo
python demo/featmap_vis_demo.py demo/demo.jpg configs/yolov8/yolov8_s_syncbn_fast_8xb16-500e_coco.py mmyolov8s.pth --channel-reduction squeeze_mean --target-layers neck
```
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212816458-a4e4600a-5f50-49c6-864b-0254a2720f3c.png"/>
Figure 11: featmap
</div>
From the above figure, we can see that the features at the objects are more focused.
## Summary
This article delves into the details of the YOLOv8 algorithm, offering a comprehensive examination of its overall design, model structure, loss function, data augmentation techniques, and inference process. Numerous diagrams are provided to aid comprehension.
In summary, YOLOv8 is a highly efficient algorithm that incorporates image classification, Anchor-Free object detection, and instance segmentation. Its detection component incorporates numerous state-of-the-art YOLO algorithms to achieve new levels of performance.
MMYOLO open source address for YOLOv8: [this](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov8/)
MMYOLO algorithm analysis tutorial: [yolov5_description](https://mmyolo.readthedocs.io/en/latest/algorithm_descriptions/yolov5_description.html)


@ -16,3 +16,4 @@
yolov5_description.md
yolov6_description.md
rtmdet_description.md
yolov8_description.md


@ -0,0 +1,244 @@
# Algorithm principles and implementation with YOLOv8
## 0 Introduction
<div align=center >
<img alt="YOLOv8-P5_structure" src="https://user-images.githubusercontent.com/27466624/211974251-8de633c8-090c-47c9-ba52-4941dc9e3a48.jpg"/>
Figure 1: YOLOv8-P5 model structure
</div>
The structure diagram above was drawn by RangeKing@github.
YOLOv8 is the next major update of YOLOv5, open sourced by Ultralytics on January 10, 2023. It currently supports image classification, object detection and instance segmentation tasks, and had already attracted wide attention from users before it was open sourced.
According to the official description, YOLOv8 is a SOTA model that builds on the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. Specific innovations include a new backbone network, a new Anchor-Free detection head and a new loss function, and it can run on a variety of hardware platforms from CPUs to GPUs.
However, Ultralytics did not name the open source library YOLOv8 but used the word ultralytics directly, because Ultralytics positions the library as an algorithmic framework rather than a specific algorithm, with extensibility as a key feature. The hope is that the library can be used not only for the YOLO model family, but also for non-YOLO models and various tasks such as classification, segmentation and pose estimation.
In short, the two main advantages of the ultralytics open source library are:
- **It integrates many current SOTA techniques in one place**
- **It will support other YOLO series as well as more algorithms beyond YOLO in the future**
<div align=center >
<img alt="YOLOv8-table" src="https://user-images.githubusercontent.com/17425982/212007736-f592bc70-3959-4ff6-baf7-a93c7ad1d882.png"/>
Figure 2: YOLOv8 performance curves
</div>
The following table shows the official results of mAP, number of parameters and FLOPs tested on the COCO Val 2017 dataset. It is clear that YOLOv8 improves accuracy considerably over YOLOv5, but the parameters and FLOPs of the N/S/M models increase noticeably; the figure above also shows that most YOLOv8 models infer more slowly than YOLOv5.
| **model** | **YOLOv5 mAP** | **params (M)** | **FLOPs@640 (B)** | **YOLOv8 mAP** | **params (M)** | **FLOPs@640 (B)** |
| --------- | -------------- | -------------- | ----------------- | -------------- | -------------- | ----------------- |
| n | 28.0 (300e) | 1.9 | 4.5 | 37.3 (500e) | 3.2 | 8.7 |
| s | 37.4 (300e) | 7.2 | 16.5 | 44.9 (500e) | 11.2 | 28.6 |
| m | 45.4 (300e) | 21.2 | 49.0 | 50.2 (500e) | 25.9 | 78.9 |
| l | 49.0 (300e) | 46.5 | 109.1 | 52.9 (500e) | 43.7 | 165.2 |
| x | 50.7 (300e) | 86.7 | 205.7 | 53.9 (500e) | 68.2 | 257.8 |
One more note: all the recent improved YOLO-series algorithms show clear performance gains on COCO, but their generalization on custom datasets has not been widely verified, and one still often hears that YOLOv5 generalizes particularly well. **Verifying the generalization of each YOLO series is a direction that MMYOLO pays special attention to and will focus on.**
Before reading this article, if you are not familiar with YOLOv5, YOLOv6 and RTMDet, you can read the following documents first:
1. [Algorithm principles and implementation with YOLOv5](https://mmyolo.readthedocs.io/zh_CN/latest/algorithm_descriptions/yolov5_description.html)
2. [Algorithm principles and implementation with YOLOv6](https://mmyolo.readthedocs.io/zh_CN/latest/algorithm_descriptions/yolov6_description.html)
3. [Algorithm principles and implementation with RTMDet](https://mmyolo.readthedocs.io/zh_CN/latest/algorithm_descriptions/rtmdet_description.html)
## 1 YOLOv8 overview
The core features and changes of YOLOv8 can be summarized as follows:
1. **A brand-new SOTA model, including object detection networks at P5 640 and P6 1280 resolutions and a YOLACT-based instance segmentation model. Like YOLOv5, models of N/S/M/L/X scales based on scaling factors are provided to meet the needs of different scenarios**
2. **The backbone and Neck probably draw on the YOLOv7 ELAN design idea: YOLOv5's C3 structure is replaced with the C2f structure, which has richer gradient flow, and the channel numbers are tuned for each model scale. This is careful hand-tuning of the model structure instead of blindly applying one set of parameters to all models, and it greatly improves performance. However, operations such as Split in the C2f module are not as friendly as before for deployment on specific hardware**
3. **Compared with YOLOv5, the Head changes a lot: it adopts the current mainstream decoupled head structure, separating the classification and detection heads, and it switches from Anchor-Based to Anchor-Free**
4. **For loss computation, the TaskAlignedAssigner positive-sample assignment strategy is adopted and Distribution Focal Loss is introduced**
5. **For training data augmentation, the operation from YOLOX of disabling Mosaic augmentation in the last 10 epochs is introduced, which effectively improves accuracy**
As can be seen from the above, YOLOv8 mainly draws on the designs of recently proposed algorithms such as YOLOX, YOLOv6, YOLOv7 and PPYOLOE. It contains few novelties of its own and leans toward engineering practice; what is mainly promoted is the ultralytics framework itself.
Below, the various improvements of YOLOv8 object detection are introduced in detail in five parts: model structure design, loss computation, training data augmentation, training strategy and model inference process. The instance segmentation part is not covered for now.
## 2 Model structure design
See Figure 1 for the complete model diagram.
Leaving the Head aside for the moment, a comparison of the YOLOv5 and YOLOv8 yaml configuration files shows that the changes are minor.
<div align=center >
<img alt="yaml" src="https://user-images.githubusercontent.com/17425982/212008977-28c3fc7b-ee00-4d56-b912-d77ded585d78.png"/>
Figure 3: YOLOv5 and YOLOv8 YAML comparison
</div>
The left side is YOLOv5-s and the right side is YOLOv8-s.
The specific changes in the backbone and Neck are:
- The kernel of the first convolutional layer has been changed from 6x6 to 3x3
- All C3 modules are replaced with C2f; the structure is shown below, with more skip connections and extra Split operations
<div align=center >
<img alt="module" src="https://user-images.githubusercontent.com/17425982/212009208-92f45c23-a024-49bb-a2ee-bb6f87adcc92.png"/>
Figure 4: YOLOv5 and YOLOv8 module comparison
</div>
- The 2 convolutional connection layers in the Neck module are removed
- The number of C2f blocks in the backbone has been changed from 3-6-9-3 to 3-6-6-3
- Looking at the N/S/M/L/X models, we can see that the N/S and L/X pairs only change the scaling factors, while the channel settings of the S/M/L backbones differ and do not follow one set of scaling factors. The reason for this design is probably that the channel settings under a single set of scaling factors are not optimal, and the YOLOv7 network design does not apply one set of scaling factors to all models either
The Head changes the most: it goes from the original coupled head to a decoupled one, and from YOLOv5's Anchor-Based design to Anchor-Free. Its structure is shown below:
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212009547-189e14aa-6f93-4af0-8446-adf604a46b95.png"/>
Figure 5: YOLOv8 Head structure
</div>
As can be seen, there is no longer an objectness branch, only decoupled classification and regression branches, and the regression branch uses the integral form representation proposed in Distribution Focal Loss.
## 3 Loss computation
The loss computation process consists of 2 parts: the positive/negative sample assignment strategy and the loss computation itself.
Most modern object detectors work on the sample assignment strategy; typical examples are YOLOX's simOTA, TOOD's TaskAlignedAssigner and RTMDet's DynamicSoftLabelAssigner. These assigners are mostly dynamic assignment strategies, while YOLOv5 still uses a static one. Given the superiority of dynamic strategies, YOLOv8 directly adopts TOOD's TaskAlignedAssigner.
The matching strategy of TaskAlignedAssigner can be summarized as: positive samples are selected according to a score that weights the classification and regression scores.
```{math}
t = s^{\alpha} \cdot u^{\beta}
```
`s` is the predicted score for the annotated category, and `u` is the IoU between the predicted box and the gt box; multiplying the two measures the degree of alignment.
1. For each GT, an alignment score `alignment_metrics`, which couples classification and regression, is computed for all predicted boxes by weighting the classification score of the GT's category with the IoU between the predicted box and the GT
2. For each GT, the top-k samples with the largest `alignment_metrics` are directly selected as positive
The loss computation includes 2 branches: **classification and regression, without the previous objectness branch**.
- The classification branch still uses BCE Loss
- The regression branch is tied to the integral form representation proposed in Distribution Focal Loss, so it uses Distribution Focal Loss together with CIoU Loss
The 3 losses are combined with certain weight ratios.
## 4 Training data augmentation
Data augmentation differs little from YOLOv5, except that the operation proposed in YOLOX of disabling Mosaic in the last 10 epochs is introduced. Assuming 500 training epochs, the schematic is as follows:
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212815248-38384da9-b289-468e-8414-ab3c27ee2026.png"/>
Figure 6: pipeline
</div>
Considering that different models need different data augmentation intensity, some hyperparameters are modified for models of different sizes; typically, larger models enable MixUp and CopyPaste. A typical result after data augmentation is shown below:
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212815840-063524e1-d754-46b1-9efc-61d17c03fd0e.png"/>
Figure 7: results
</div>
The above result can be obtained by running the [browse_dataset](https://github.com/open-mmlab/mmyolo/blob/dev/tools/analysis_tools/browse_dataset.py) script. Since every pipeline step is a fairly common operation, this article does not go into details; if you want to understand each step, you can read the [YOLOv5 algorithm analysis document](https://mmyolo.readthedocs.io/zh_CN/latest/algorithm_descriptions/yolov5_description.html#id2) in MMYOLO.
## 5 Training strategy
YOLOv8's training strategy is barely different from YOLOv5's; the biggest difference is that **the total number of training epochs is raised from 300 to 500**, which sharply increases training time. Taking YOLOv8-S as an example, its training strategy is summarized as follows:
| config | YOLOv8-s P5 hyp |
| ---------------------- | ------------------------------ |
| optimizer | SGD |
| base learning rate | 0.01 |
| base weight decay | 0.0005 |
| optimizer momentum | 0.937 |
| batch size | 128 |
| learning rate schedule | linear |
| training epochs | **500** |
| warmup iterations | max(1000, 3 * iters_per_epoch) |
| input size | 640x640 |
| EMA decay | 0.9999 |
## 6 Model inference process
YOLOv8's inference process is almost identical to YOLOv5's. The only difference is that the integral-form bbox from Distribution Focal Loss has to be decoded into a regular 4-dimensional bbox first; the subsequent computation is the same as in YOLOv5.
Taking COCO's 80 classes as an example and assuming an input image size of 640x640, the inference process implemented in MMYOLO is illustrated below:
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212816206-33815716-3c12-49a2-9c37-0bd85f941bec.png"/>
Figure 8: results
</div>
The inference and post-processing steps are:
**(1) Converting the integral-form bbox to the 4d bbox format**
The bbox branch output by the Head is transformed: a Softmax plus a Conv computation converts the integral form into the 4-dimensional bbox format, as sketched below.
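The Softmax plus Conv trick can be sketched as a fixed 1x1 convolution whose weights are the bin indices, so that applying it to the softmaxed distribution computes the expected distance for each side. This is a sketch assuming `reg_max = 16`, not the exact official code:
```Python
import torch
import torch.nn as nn
import torch.nn.functional as F

reg_max = 16
# Fixed 1x1 conv whose weights are the bin indices 0..reg_max-1.
proj = nn.Conv2d(reg_max, 1, kernel_size=1, bias=False)
proj.weight.data.copy_(torch.arange(reg_max, dtype=torch.float).view(1, reg_max, 1, 1))
proj.requires_grad_(False)

b, h, w = 1, 20, 20
reg_logits = torch.randn(b, 4 * reg_max, h, w)                   # raw bbox branch output
probs = F.softmax(reg_logits.view(b, 4, reg_max, h, w), dim=2)   # distribution per side
dists = proj(probs.view(b * 4, reg_max, h, w)).view(b, 4, h, w)  # expected distances
print(dists.shape)  # torch.Size([1, 4, 20, 20])
```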
**(2) Dimensional transformation**
YOLOv8 outputs feature maps at three scales: `80x80`, `40x40` and `20x20`. The Head outputs 6 feature maps in total for classification and regression.
The category prediction branches and bbox prediction branches of the 3 scales are concatenated and dimensionally transformed. For convenience of subsequent processing, the original channel dimension is transposed to the end; the shapes of the category and bbox prediction branches are (b, 80x80+40x40+20x20, 80) = (b, 8400, 80) and (b, 8400, 4), respectively.
**(3) Decoding and restoring to the original image scale**
The classification prediction branch goes through a Sigmoid computation, while the bbox prediction branch is decoded to the xyxy format on the original image scale.
**(4) Threshold filtering**
Iterate over each image in the batch and filter with `score_thr`. In this process, **multi_label and nms_pre also need to be considered, to ensure that the number of detection boxes after filtering is no more than nms_pre.**
**(5) Restoring to the original image scale and NMS**
Based on the preprocessing parameters, the remaining detection boxes are restored to the original image scale before network input, and then NMS is performed. The final number of output boxes cannot exceed **max_per_img.**
One special note: **the Batch shape inference strategy used in YOLOv5 is currently not enabled in YOLOv8 inference, and it is unclear whether it will be enabled later. A quick test in MMYOLO shows that enabling Batch shape raises AP by roughly 0.1 to 0.2.**
## 7 Feature map visualization
MMYOLO provides a complete set of feature map visualization tools to help users visualize the feature distributions. To stay aligned with the official performance, the official weights are used for visualization here.
Taking the YOLOv8-s model as an example, the first step is to download the official weights and convert them to MMYOLO with the [yolov8_to_mmyolo](https://github.com/open-mmlab/mmyolo/blob/dev/tools/model_converters/yolov8_to_mmyolo.py) script. Note that the script must be placed under the official repository to run correctly. Assume the resulting weight file is named mmyolov8s.pth.
Assuming you want to visualize the 3 feature maps output by the backbone, you only need to run:
```bash
cd mmyolo
python demo/featmap_vis_demo.py demo/demo.jpg configs/yolov8/yolov8_s_syncbn_fast_8xb16-500e_coco.py mmyolov8s.pth --channel-reduction squeeze_mean
```
Special care is needed: to ensure that the feature map and the image overlay display in alignment, the original `test_pipeline` must first be replaced with the following:
```Python
test_pipeline = [
    dict(
        type='LoadImageFromFile',
        file_client_args=_base_.file_client_args),
    dict(type='mmdet.Resize', scale=img_scale, keep_ratio=False),  # LetterResize is replaced with mmdet.Resize here
    dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor'))
]
```
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212816319-9ac19484-987a-40ac-a0fe-2c13a7048df7.png"/>
Figure 9: featmap
</div>
From the figure above, we can see that **different output feature map layers are mainly responsible for predicting objects of different scales**.
We can also visualize the 3 output feature maps of the Neck layer:
```bash
cd mmyolo
python demo/featmap_vis_demo.py demo/demo.jpg configs/yolov8/yolov8_s_syncbn_fast_8xb16-500e_coco.py mmyolov8s.pth --channel-reduction squeeze_mean --target-layers neck
```
<div align=center >
<img alt="head" src="https://user-images.githubusercontent.com/17425982/212816458-a4e4600a-5f50-49c6-864b-0254a2720f3c.png"/>
Figure 10: featmap
</div>
**From the figure above, we can see that the features at the objects are more focused.**
## Summary
This article analyzes and summarizes the latest YOLOv8 algorithm in detail, from the overall design to the model structure, loss computation, training data augmentation, training strategy and inference process, with plenty of diagrams provided to aid understanding.
In short, YOLOv8 is a highly efficient algorithm covering image classification, Anchor-Free object detection and instance segmentation. Its detection design draws on many excellent recent improved YOLO algorithms and achieves a new SOTA. Moreover, a brand-new framework has been released, although it is still at an early stage and needs continuous improvement.
MMYOLO open source address: https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov8/README.md
MMYOLO algorithm analysis tutorials: https://mmyolo.readthedocs.io/zh_CN/latest/algorithm_descriptions/index.html#id2


@ -8,6 +8,7 @@
- [Community collaboration, concise and easy to use: come unbox the new-generation YOLO series open source library](https://zhuanlan.zhihu.com/p/575615805)
- [A dedicated contribution from the MMYOLO community: a community developer's interpretation of RTMDet principles is here!](https://zhuanlan.zhihu.com/p/569777684)
- [An in-depth explanation of YOLOv8: understand it in one article and get started quickly](https://zhuanlan.zhihu.com/p/598566644)
- [Mastering MMYOLO basics, part 1: config files too complex? Inheritance hard to follow? A full walkthrough of configs is here](https://zhuanlan.zhihu.com/p/577715188)
- [Mastering MMYOLO tools, part 1: feature map visualization](https://zhuanlan.zhihu.com/p/578141381?)
- [Mastering MMYOLO in practice, part 2: the essential tricks for reading and debugging source code](https://zhuanlan.zhihu.com/p/580885852)
@ -62,6 +63,7 @@
## Articles
- [MMDetection 3.0: a new benchmark and frontier for object detection](https://zhuanlan.zhihu.com/p/575246786)
- [Object detection, instance segmentation and rotated boxes, all handled with ease! A detailed look at the high-performance detection algorithm RTMDet](https://zhuanlan.zhihu.com/p/598846422)
- [The full process of MMDetection supporting the Simple Copy-Paste data augmentation gem](https://zhuanlan.zhihu.com/p/559940982)
## Zhihu Q&A and resources