mirror of https://github.com/open-mmlab/mmyolo.git
Directory contents:

- `voc/`
- `README.md`
- `metafile.yml`
- `yolov5_l-p6-v62_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_l-v61_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_m-p6-v62_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_m-v61_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_n-p6-v62_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_s-p6-v62_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_s-v61_syncbn-detect_8xb16-300e_coco.py`
- `yolov5_s-v61_syncbn_8xb16-300e_coco.py`
- `yolov5_s-v61_syncbn_fast_1xb4-300e_balloon.py`
- `yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_x-p6-v62_syncbn_fast_8xb16-300e_coco.py`
- `yolov5_x-v61_syncbn_fast_8xb16-300e_coco.py`
# YOLOv5

## Abstract
YOLOv5 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
## Results and models

### COCO
| Backbone | Arch | Size | SyncBN | AMP | Mem (GB) | Box AP | Config | Download |
| :------: | :--: | :--: | :----: | :-: | :------: | :----: | :----: | :------: |
| YOLOv5-n | P5 | 640 | Yes | Yes | 1.5 | 28.0 | config | model \| log |
| YOLOv5-s | P5 | 640 | Yes | Yes | 2.7 | 37.7 | config | model \| log |
| YOLOv5-m | P5 | 640 | Yes | Yes | 5.0 | 45.3 | config | model \| log |
| YOLOv5-l | P5 | 640 | Yes | Yes | 8.1 | 48.8 | config | model \| log |
| YOLOv5-n | P6 | 1280 | Yes | Yes | 5.8 | 35.9 | config | model \| log |
| YOLOv5-s | P6 | 1280 | Yes | Yes | 10.5 | 44.4 | config | model \| log |
| YOLOv5-m | P6 | 1280 | Yes | Yes | 19.1 | 51.3 | config | model \| log |
| YOLOv5-l | P6 | 1280 | Yes | Yes | 30.5 | 53.7 | config | model \| log |
**Note**:

1. In the official YOLOv5 code, the `random_perspective` data augmentation used for COCO object detection training exploits mask annotation information, which leads to higher performance. Object detection should not use mask annotations, so MMYOLO uses only box annotation information; the mask annotations will instead be used in the instance segmentation task. See https://github.com/ultralytics/yolov5/issues/9917 for details.
2. `fast` means that `YOLOv5DetDataPreprocessor` and `yolov5_collate` are used for data preprocessing, which is faster for training but less flexible for multitask use. The fast-version configs are recommended if you only care about object detection.
3. `detect` means that the network input is fixed to `640x640` and the post-processing thresholds are modified.
4. `SyncBN` means SyncBN is used; `AMP` indicates training with mixed precision.
5. We use 8x A100 GPUs for training, with a single-GPU batch size of 16. This is different from the official code.
6. The performance is unstable and may fluctuate by about 0.4 mAP, and the best-performing weights in COCO training of YOLOv5 may not come from the last epoch.
7. `balloon` means that this is a demo configuration.
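To make the `fast` collate idea concrete: the real `yolov5_collate` lives in MMYOLO, but the core trick is simply to stack equally sized images into one batch array and concatenate all ground-truth boxes into a single flat array, prepending a batch-index column so each box can be traced back to its image. A minimal NumPy sketch (the function name and dict keys here are illustrative, not MMYOLO's actual API):

```python
import numpy as np

def yolov5_style_collate(batch):
    """Minimal sketch of a YOLOv5-style collate function.

    Each sample is a dict with an "img" array of shape (C, H, W) and a
    "bboxes" array of shape (N, 5). Images are stacked into (B, C, H, W);
    boxes are concatenated into (M, 6) with the batch index as column 0.
    """
    imgs = np.stack([sample["img"] for sample in batch])
    labels = []
    for i, sample in enumerate(batch):
        boxes = sample["bboxes"]
        # Prepend the image's batch index so boxes stay attributable
        # after concatenation across the whole batch.
        idx = np.full((boxes.shape[0], 1), i, dtype=boxes.dtype)
        labels.append(np.hstack([idx, boxes]))
    labels = np.concatenate(labels) if labels else np.zeros((0, 6))
    return imgs, labels
```

Avoiding per-image padding and ragged structures is what makes this style of collate fast: one dense image tensor and one dense label tensor per batch.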
### VOC
| Backbone | Size | Batch Size | AMP | Mem (GB) | Box AP (COCO metric) | Config | Download |
| :------: | :--: | :--------: | :-: | :------: | :------------------: | :----: | :------: |
| YOLOv5-n | 512 | 64 | Yes | 3.5 | 51.2 | config | model \| log |
| YOLOv5-s | 512 | 64 | Yes | 6.5 | 62.7 | config | model \| log |
| YOLOv5-m | 512 | 64 | Yes | 12.0 | 70.1 | config | model \| log |
| YOLOv5-l | 512 | 32 | Yes | 10.0 | 73.1 | config | model \| log |
**Note**:

1. Training on the VOC dataset requires a model pretrained on COCO.
2. The performance is unstable and may fluctuate by about 0.4 mAP.
3. Official YOLOv5 uses the COCO metric when training on the VOC dataset.
4. We converted the VOC test dataset to COCO format offline in order to reproduce the mAP results shown above. Using the COCO metric directly while training on VOC will be supported in a later version.
5. Hyperparameters are taken from https://wandb.ai/glenn-jocher/YOLOv5_VOC_official.
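The offline VOC-to-COCO conversion mentioned above boils down to a coordinate-format change: VOC XML stores each box as corner coordinates `(xmin, ymin, xmax, ymax)`, while COCO stores `[x, y, width, height]`. A minimal sketch of parsing one VOC annotation file into COCO-style box records (the helper names are hypothetical, and a full converter also needs image ids, category ids, and areas):

```python
import xml.etree.ElementTree as ET

def voc_box_to_coco(xmin, ymin, xmax, ymax):
    # VOC stores opposite corners; COCO stores top-left corner plus size.
    return [xmin, ymin, xmax - xmin, ymax - ymin]

def parse_voc_annotation(xml_text):
    """Parse one VOC XML annotation string into COCO-style box dicts."""
    root = ET.fromstring(xml_text)
    anns = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (
            float(bb.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax")
        )
        anns.append({
            "category": obj.findtext("name"),
            "bbox": voc_box_to_coco(xmin, ymin, xmax, ymax),
        })
    return anns
```

With every annotation converted this way and collected into a COCO-format JSON, the standard COCO mAP evaluator can be reused unchanged on VOC.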
## Citation
```latex
@software{glenn_jocher_2022_7002879,
  author    = {Glenn Jocher and
               Ayush Chaurasia and
               Alex Stoken and
               Jirka Borovec and
               NanoCode012 and
               Yonghye Kwon and
               TaoXie and
               Kalen Michael and
               Jiacong Fang and
               imyhxy and
               Lorna and
               Colin Wong and
               曾逸夫(Zeng Yifu) and
               Abhiram V and
               Diego Montes and
               Zhiqiang Wang and
               Cristi Fati and
               Jebastin Nadar and
               Laughing and
               UnglvKitDe and
               tkianai and
               yxNONG and
               Piotr Skalski and
               Adam Hogan and
               Max Strobel and
               Mrinal Jain and
               Lorenzo Mammana and
               xylieong},
  title     = {{ultralytics/yolov5: v6.2 - YOLOv5 Classification
               Models, Apple M1, Reproducibility, ClearML and
               Deci.ai integrations}},
  month     = aug,
  year      = 2022,
  publisher = {Zenodo},
  version   = {v6.2},
  doi       = {10.5281/zenodo.7002879},
  url       = {https://doi.org/10.5281/zenodo.7002879}
}
```