diff --git a/README.md b/README.md index 163985b5..e5e8af2c 100644 --- a/README.md +++ b/README.md @@ -71,14 +71,32 @@ And the figure of P6 model is in [model_design.md](docs/en/algorithm_description ## What's New -💎 **v0.2.0** was released on 1/12/2022: +### Highlight -1. Support [YOLOv7](https://github.com/open-mmlab/mmyolo/tree/dev/configs/yolov7) P5 and P6 model -2. Support [YOLOv6](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov6/README.md) ML model -3. Support [Grad-Based CAM and Grad-Free CAM](https://github.com/open-mmlab/mmyolo/blob/dev/demo/boxam_vis_demo.py) -4. Support [large image inference](https://github.com/open-mmlab/mmyolo/blob/dev/demo/large_image_demo.py) based on sahi -5. Add [easydeploy](https://github.com/open-mmlab/mmyolo/blob/dev/projects/easydeploy/README.md) project under the projects folder -6. Add [custom dataset guide](https://github.com/open-mmlab/mmyolo/blob/dev/docs/zh_cn/user_guides/custom_dataset.md) +We are excited to announce our latest work on real-time object recognition tasks, **RTMDet**, a family of fully convolutional single-stage detectors. RTMDet not only achieves the best parameter-accuracy trade-off on object detection from tiny to extra-large model sizes but also obtains new state-of-the-art performance on instance segmentation and rotated object detection tasks. Details can be found in the [technical report](https://arxiv.org/abs/2212.07784). Pre-trained models are [here](configs/rtmdet). + +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/real-time-instance-segmentation-on-mscoco)](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco?p=rtmdet-an-empirical-study-of-designing-real) +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-dota-1)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1?p=rtmdet-an-empirical-study-of-designing-real) +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-hrsc2016)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-hrsc2016?p=rtmdet-an-empirical-study-of-designing-real) + +| Task | Dataset | AP | FPS(TRT FP16 BS1 3090) | +| ------------------------ | ------- | ------------------------------------ | ---------------------- | +| Object Detection | COCO | 52.8 | 322 | +| Instance Segmentation | COCO | 44.6 | 188 | +| Rotated Object Detection | DOTA | 78.9(single-scale)/81.3(multi-scale) | 121 | + +
+ +
+ +MMYOLO currently implements only the object detection algorithm, but it offers a significant training speedup over the MMDetection version: the training speed is 2.6 times faster than the previous version. + +💎 **v0.3.0** was released on 8/1/2023: + +1. Implement a fast version of [RTMDet](https://github.com/open-mmlab/mmyolo/blob/dev/configs/rtmdet/README.md). RTMDet-s 8xA100 training takes only 14 hours. The training speed is 2.6 times faster than the previous version. +2. Support [PPYOLOE](https://github.com/open-mmlab/mmyolo/blob/dev/configs/ppyoloe/README.md) training +3. Support `iscrowd` attribute training in [YOLOv5](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov5/crowdhuman/yolov5_s-v61_8xb16-300e_ignore_crowdhuman.py) +4. Support [YOLOv5 assigner result visualization](https://github.com/open-mmlab/mmyolo/blob/dev/projects/assigner_visualization/README.md) For release history and update details, please refer to [changelog](https://mmyolo.readthedocs.io/en/latest/notes/changelog.html). @@ -92,7 +110,7 @@ conda activate open-mmlab pip install openmim mim install "mmengine>=0.3.1" mim install "mmcv>=2.0.0rc1,<2.1.0" -mim install "mmdet>=3.0.0rc3,<3.1.0" +mim install "mmdet>=3.0.0rc5,<3.1.0" git clone https://github.com/open-mmlab/mmyolo.git cd mmyolo # Install albumentations @@ -152,7 +170,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md). - [x] [RTMDet](configs/rtmdet) - [x] [YOLOv6](configs/yolov6) - [x] [YOLOv7](configs/yolov7) -- [ ] [PPYOLOE](configs/ppyoloe)(Inference only) +- [x] [PPYOLOE](configs/ppyoloe) @@ -183,6 +201,8 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
  • YOLOXCSPDarknet
  • EfficientRep
  • CSPNeXt
+  • YOLOv7Backbone
+  • PPYOLOECSPResNet
  • @@ -191,6 +211,8 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
  • YOLOv6RepPAFPN
  • YOLOXPAFPN
  • CSPNeXtPAFPN
+  • YOLOv7PAFPN
+  • PPYOLOECSPPAFPN
  • diff --git a/README_zh-CN.md b/README_zh-CN.md index 43040071..3c0fa5ad 100644 --- a/README_zh-CN.md +++ b/README_zh-CN.md @@ -71,25 +71,46 @@ P6 模型图详见 [model_design.md](docs/zh_CN/algorithm_descriptions/model_des ## 最新进展 -💎 **v0.2.0** 版本已经在 2022.12.1 发布: +### 亮点 -1. 支持 [YOLOv7](https://github.com/open-mmlab/mmyolo/tree/dev/configs/yolov7) P5 和 P6 模型 -2. 支持 [YOLOv6](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov6/README.md) 中的 ML 大模型 -3. 支持 [Grad-Based CAM 和 Grad-Free CAM](https://github.com/open-mmlab/mmyolo/blob/dev/demo/boxam_vis_demo.py) -4. 基于 sahi 支持 [大图推理](https://github.com/open-mmlab/mmyolo/blob/dev/demo/large_image_demo.py) -5. projects 文件夹下新增 [easydeploy](https://github.com/open-mmlab/mmyolo/blob/dev/projects/easydeploy/README.md) 项目 -6. 新增 [自定义数据集教程](https://github.com/open-mmlab/mmyolo/blob/dev/docs/zh_cn/user_guides/custom_dataset.md) +我们很高兴向大家介绍我们在实时目标识别任务方面的最新成果 RTMDet,包含了一系列的全卷积单阶段检测模型。 RTMDet 不仅在从 tiny 到 extra-large 尺寸的目标检测模型上实现了最佳的参数量和精度的平衡,而且在实时实例分割和旋转目标检测任务上取得了最先进的成果。 更多细节请参阅[技术报告](https://arxiv.org/abs/2212.07784)。 预训练模型可以在[这里](configs/rtmdet)找到。 + +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/real-time-instance-segmentation-on-mscoco)](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco?p=rtmdet-an-empirical-study-of-designing-real) +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-dota-1)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1?p=rtmdet-an-empirical-study-of-designing-real) +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-hrsc2016)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-hrsc2016?p=rtmdet-an-empirical-study-of-designing-real) + +| Task | Dataset | AP | FPS(TRT FP16 BS1 3090) | +| ------------------------ | ------- | ------------------------------------ | ---------------------- | +| Object Detection | COCO | 52.8 | 322 | +| Instance Segmentation | COCO | 44.6 | 188 | +| Rotated Object Detection | DOTA | 78.9(single-scale)/81.3(multi-scale) | 121 | + +
    + +
    + +MMYOLO 中目前仅仅实现了目标检测算法,但是相比 MMDeteciton 版本有显著训练加速,训练速度相比原先版本提升 2.6 倍。 + +💎 **v0.3.0** 版本已经在 2023.1.8 发布: + +1. 实现了 [RTMDet](https://github.com/open-mmlab/mmyolo/blob/dev/configs/rtmdet/README.md) 的快速版本。RTMDet-s 8xA100 训练只需要 14 个小时,训练速度相比原先版本提升 2.6 倍。 +2. 支持 [PPYOLOE](https://github.com/open-mmlab/mmyolo/blob/dev/configs/ppyoloe/README.md) 训练。 +3. 支持 [YOLOv5](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov5/crowdhuman/yolov5_s-v61_8xb16-300e_ignore_crowdhuman.py) 的 `iscrowd` 属性训练。 +4. 支持 [YOLOv5 正样本分配结果可视化](https://github.com/open-mmlab/mmyolo/blob/dev/projects/assigner_visualization/README.md) +5. 新增 [YOLOv6 原理和实现全解析文档](https://github.com/open-mmlab/mmyolo/blob/dev/docs/zh_cn/algorithm_descriptions/yolov6_description.md) 同时我们也推出了解读视频: -| | 内容 | 视频 | 课程中的代码 | -| :-: | :------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| 🌟 | 特征图可视化 | [![Link](https://i2.hdslb.com/bfs/archive/480a0eb41fce26e0acb65f82a74501418eee1032.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV188411s7o8) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV188411s7o8)](https://www.bilibili.com/video/BV188411s7o8) | [特征图可视化.ipynb](https://github.com/open-mmlab/OpenMMLabCourse/blob/main/codes/MMYOLO_tutorials/%5B%E5%B7%A5%E5%85%B7%E7%B1%BB%E7%AC%AC%E4%B8%80%E6%9C%9F%5D%E7%89%B9%E5%BE%81%E5%9B%BE%E5%8F%AF%E8%A7%86%E5%8C%96.ipynb) | -| 🌟 | 特征图可视化 Demo | [![Link](http://i0.hdslb.com/bfs/archive/081f300c84d6556f40d984cfbe801fc0644ff449.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1je4y1478R/) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1je4y1478R)](https://www.bilibili.com/video/BV1je4y1478R/) | | -| 🌟 | 配置全解读 | [![Link](http://i1.hdslb.com/bfs/archive/e06daf640ea39b3c0700bb4dc758f1a253f33e13.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1214y157ck) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1214y157ck)](https://www.bilibili.com/video/BV1214y157ck) | [配置全解读文档](https://zhuanlan.zhihu.com/p/577715188) | -| 🌟 | 源码阅读和调试「必备」技巧 | [![Link](https://i2.hdslb.com/bfs/archive/790d2422c879ff20488910da1c4422b667ea6af7.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1N14y1V7mB) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1N14y1V7mB)](https://www.bilibili.com/video/BV1N14y1V7mB) | [源码阅读和调试「必备」技巧文档](https://zhuanlan.zhihu.com/p/580885852) | -| 🌟 | 工程文件结构简析 | 
[![Link](http://i2.hdslb.com/bfs/archive/41030efb84d0cada06d5451c1e6e9bccc0cdb5a3.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1LP4y117jS)[![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1LP4y117jS)](https://www.bilibili.com/video/BV1LP4y117jS) | [工程文件结构简析文档](https://zhuanlan.zhihu.com/p/584807195) | -| 🌟 | 10分钟换遍主干网络 | [![Link](http://i0.hdslb.com/bfs/archive/c51f1aef7c605856777249a7b4478f44bd69f3bd.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1JG4y1d7GC) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1JG4y1d7GC)](https://www.bilibili.com/video/BV1JG4y1d7GC) | [10分钟换遍主干网络文档](https://zhuanlan.zhihu.com/p/585641598)
    [10分钟换遍主干网络.ipynb](https://github.com/open-mmlab/OpenMMLabCourse/blob/main/codes/MMYOLO_tutorials/[实用类第二期]10分钟换遍主干网络.ipynb) | +| | 内容 | 视频 | 课程中的代码 | +| :-: | :--------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | +| 🌟 | 特征图可视化 | [![Link](https://i2.hdslb.com/bfs/archive/480a0eb41fce26e0acb65f82a74501418eee1032.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV188411s7o8) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV188411s7o8)](https://www.bilibili.com/video/BV188411s7o8) | [特征图可视化.ipynb](https://github.com/open-mmlab/OpenMMLabCourse/blob/main/codes/MMYOLO_tutorials/%5B%E5%B7%A5%E5%85%B7%E7%B1%BB%E7%AC%AC%E4%B8%80%E6%9C%9F%5D%E7%89%B9%E5%BE%81%E5%9B%BE%E5%8F%AF%E8%A7%86%E5%8C%96.ipynb) | +| 🌟 | 特征图可视化 Demo | [![Link](http://i0.hdslb.com/bfs/archive/081f300c84d6556f40d984cfbe801fc0644ff449.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1je4y1478R/) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1je4y1478R)](https://www.bilibili.com/video/BV1je4y1478R/) | | +| 🌟 | 配置全解读 | [![Link](http://i1.hdslb.com/bfs/archive/e06daf640ea39b3c0700bb4dc758f1a253f33e13.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1214y157ck) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1214y157ck)](https://www.bilibili.com/video/BV1214y157ck) | [配置全解读文档](https://zhuanlan.zhihu.com/p/577715188) | +| 🌟 | 源码阅读和调试「必备」技巧 | [![Link](https://i2.hdslb.com/bfs/archive/790d2422c879ff20488910da1c4422b667ea6af7.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1N14y1V7mB) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1N14y1V7mB)](https://www.bilibili.com/video/BV1N14y1V7mB) | [源码阅读和调试「必备」技巧文档](https://zhuanlan.zhihu.com/p/580885852) | +| 🌟 | 工程文件结构简析 | [![Link](http://i2.hdslb.com/bfs/archive/41030efb84d0cada06d5451c1e6e9bccc0cdb5a3.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1LP4y117jS)[![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1LP4y117jS)](https://www.bilibili.com/video/BV1LP4y117jS) | [工程文件结构简析文档](https://zhuanlan.zhihu.com/p/584807195) | +| 🌟 | 10分钟换遍主干网络 | [![Link](http://i0.hdslb.com/bfs/archive/c51f1aef7c605856777249a7b4478f44bd69f3bd.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1JG4y1d7GC) 
[![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1JG4y1d7GC)](https://www.bilibili.com/video/BV1JG4y1d7GC) | [10分钟换遍主干网络文档](https://zhuanlan.zhihu.com/p/585641598)
    [10分钟换遍主干网络.ipynb](https://github.com/open-mmlab/OpenMMLabCourse/blob/main/codes/MMYOLO_tutorials/[实用类第二期]10分钟换遍主干网络.ipynb) | +| 🌟 | 基于 sahi 的大图推理 | [![Link](https://i0.hdslb.com/bfs/archive/62c41f508dbcf63a4c721738171612d2d7069ac2.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1EK411R7Ws/) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1EK411R7Ws)](https://www.bilibili.com/video/BV1EK411R7Ws/) | [10分钟轻松掌握大图推理.ipynb](https://github.com/open-mmlab/OpenMMLabCourse/blob/main/codes/MMYOLO_tutorials/[工具类第二期]10分钟轻松掌握大图推理.ipynb) | +| 🌟 | 自定义数据集从标注到部署保姆级教程 | [![Link](https://i2.hdslb.com/bfs/archive/13f566c89a18c9c881713b63ec14da952d4c0b14.jpg@112w_63h_1c.webp)](https://www.bilibili.com/video/BV1RG4y137i5) [![bilibili](https://img.shields.io/badge/dynamic/json?label=views&style=social&logo=bilibili&query=data.stat.view&url=https%3A%2F%2Fapi.bilibili.com%2Fx%2Fweb-interface%2Fview%3Fbvid%3DBV1RG4y137i5)](https://www.bilibili.com/video/BV1JG4y1d7GC) | [自定义数据集从标注到部署保姆级教程](https://github.com/open-mmlab/mmyolo/blob/dev/docs/zh_cn/user_guides/custom_dataset.md) | 发布历史和更新细节请参考 [更新日志](https://mmyolo.readthedocs.io/zh_CN/latest/notes/changelog.html) @@ -103,7 +124,7 @@ conda activate open-mmlab pip install openmim mim install "mmengine>=0.3.1" mim install "mmcv>=2.0.0rc1,<2.1.0" -mim install "mmdet>=3.0.0rc3,<3.1.0" +mim install "mmdet>=3.0.0rc5,<3.1.0" git clone https://github.com/open-mmlab/mmyolo.git cd mmyolo # Install albumentations @@ -149,6 +170,7 @@ MMYOLO 用法和 MMDetection 几乎一致,所有教程都是通用的,你也 - 进阶指南 + - [模块组合](docs/zh_cn/advanced_guides/module_combination.md) - [数据流](docs/zh_cn/advanced_guides/data_flow.md) - [How to](docs/zh_cn/advanced_guides/how_to.md) - [插件](docs/zh_cn/advanced_guides/plugins.md) @@ -167,7 +189,7 @@ MMYOLO 用法和 MMDetection 几乎一致,所有教程都是通用的,你也 - [x] [RTMDet](configs/rtmdet) - [x] [YOLOv6](configs/yolov6) - [x] [YOLOv7](configs/yolov7) -- [ ] [PPYOLOE](configs/ppyoloe)(仅推理) +- [x] [PPYOLOE](configs/ppyoloe) @@ -198,6 +220,8 @@ MMYOLO 用法和 MMDetection 几乎一致,所有教程都是通用的,你也
  • YOLOXCSPDarknet
  • EfficientRep
  • CSPNeXt
+  • YOLOv7Backbone
+  • PPYOLOECSPResNet
  • @@ -206,6 +230,8 @@ MMYOLO 用法和 MMDetection 几乎一致,所有教程都是通用的,你也
  • YOLOv6RepPAFPN
  • YOLOXPAFPN
  • CSPNeXtPAFPN
+  • YOLOv7PAFPN
+  • PPYOLOECSPPAFPN
  • diff --git a/docker/Dockerfile b/docker/Dockerfile index 191273c1..2bd00697 100644 --- a/docker/Dockerfile +++ b/docker/Dockerfile @@ -26,7 +26,7 @@ RUN apt-get update \ # Install MMEngine , MMCV and MMDet RUN pip install --no-cache-dir openmim && \ - mim install --no-cache-dir "mmengine>=0.3.1" "mmcv>=2.0.0rc1,<2.1.0" "mmdet>=3.0.0rc3,<3.1.0" + mim install --no-cache-dir "mmengine>=0.3.1" "mmcv>=2.0.0rc1,<2.1.0" "mmdet>=3.0.0rc5,<3.1.0" # Install MMYOLO RUN git clone https://github.com/open-mmlab/mmyolo.git /mmyolo && \ diff --git a/docker/Dockerfile_deployment b/docker/Dockerfile_deployment index 7326cafa..7f63c1cc 100644 --- a/docker/Dockerfile_deployment +++ b/docker/Dockerfile_deployment @@ -30,7 +30,7 @@ RUN wget -q https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRU # Install OPENMIM MMENGINE MMDET RUN pip install --no-cache-dir openmim \ - && mim install --no-cache-dir "mmengine>=0.3.1" "mmdet>=3.0.0rc3,<3.1.0" \ + && mim install --no-cache-dir "mmengine>=0.3.1" "mmdet>=3.0.0rc5,<3.1.0" \ && mim install --no-cache-dir opencv-python==4.5.5.64 opencv-python-headless==4.5.5.64 RUN git clone https://github.com/open-mmlab/mmcv.git -b 2.x mmcv \ diff --git a/docs/en/get_started.md b/docs/en/get_started.md index ab04f5ba..2848c7bb 100644 --- a/docs/en/get_started.md +++ b/docs/en/get_started.md @@ -6,7 +6,8 @@ Compatible MMEngine, MMCV and MMDetection versions are shown as below. Please in | MMYOLO version | MMDetection version | MMEngine version | MMCV version | | :------------: | :----------------------: | :----------------------: | :---------------------: | -| main | mmdet>=3.0.0rc3, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | +| main | mmdet>=3.0.0rc5, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | +| 0.3.0 | mmdet>=3.0.0rc5, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | | 0.2.0 | mmdet>=3.0.0rc3, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | | 0.1.3 | mmdet>=3.0.0rc3, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | | 0.1.2 | mmdet>=3.0.0rc2, \<3.1.0 | mmengine>=0.3.0, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | @@ -54,7 +55,7 @@ conda install pytorch torchvision cpuonly -c pytorch pip install -U openmim mim install "mmengine>=0.3.1" mim install "mmcv>=2.0.0rc1,<2.1.0" -mim install "mmdet>=3.0.0rc3,<3.1.0" +mim install "mmdet>=3.0.0rc5,<3.1.0" ``` **Note:** @@ -213,7 +214,7 @@ thus we only need to install MMEngine, MMCV, MMDetection, and MMYOLO with the fo !pip3 install openmim !mim install "mmengine==0.1.0" !mim install "mmcv>=2.0.0rc1,<2.1.0" -!mim install "mmdet>=3.0.0.rc1" +!mim install "mmdet>=3.0.0rc5,<3.1.0" ``` **Step 2.** Install MMYOLO from the source. diff --git a/docs/en/notes/changelog.md b/docs/en/notes/changelog.md index fc09f75d..f05b39b1 100644 --- a/docs/en/notes/changelog.md +++ b/docs/en/notes/changelog.md @@ -1,5 +1,67 @@ # Changelog +## v0.3.0 (8/1/2023) + +### Highlights + +1. Implement fast version of [RTMDet](https://github.com/open-mmlab/mmyolo/blob/dev/configs/rtmdet/README.md). RTMDet-s 8xA100 training takes only 14 hours. The training speed is 2.6 times faster than the previous version. +2. Support [PPYOLOE](https://github.com/open-mmlab/mmyolo/blob/dev/configs/ppyoloe/README.md) training +3. Support `iscrowd` attribute training in [YOLOv5](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov5/crowdhuman/yolov5_s-v61_8xb16-300e_ignore_crowdhuman.py) +4. 
Support [YOLOv5 assigner result visualization](https://github.com/open-mmlab/mmyolo/blob/dev/projects/assigner_visualization/README.md) + +### New Features + +01. Add `crowdhuman` dataset (#368) +02. Support TensorRT inference in EasyDeploy (#377) +03. Add `YOLOX` structure description (#402) +04. Add a video inference script (#392) +05. Support `YOLOv7` deployment in EasyDeploy (#427) +06. Support resuming training from a specific checkpoint via the CLI (#393) +07. Set `metainfo` fields to lower case (#362, #412) +08. Add module combination doc (#349, #352, #345) +09. Add docs about how to freeze the weights of the backbone or neck (#418) +10. Add a doc about training without pre-trained weights in `how_to.md` (#404) +11. Add docs about how to set the random seed (#386) +12. Translate the `rtmdet_description.md` document into English (#353) +13. Add the `yolov6_description.md` doc (#382, #372) + +### Bug Fixes + +01. Fix bugs in the output annotation file when `--class-id-txt` is set (#430) +02. Fix the batch inference bug in the `YOLOv5` head (#413) +03. Fix type hints in some heads (#415, #416, #443) +04. Fix the `torch.cat()` 'expected a non-empty list of Tensors' RuntimeError (#376) +05. Fix the device inconsistency error in `YOLOv7` training (#397) +06. Fix the `scale_factor` and `pad_param` values in `LetterResize` (#387) +07. Fix the docstring graph rendering error on readthedocs (#400) +08. Fix the AssertionError when `YOLOv6` switches from training to validation (#378) +09. Fix CI errors caused by `np.int` and the legacy builder.py (#389) +10. Fix MMDeploy rewriter (#366) +11. Fix MMYOLO unittest scope bug (#351) +12. Fix `pad_param` error (#354) +13. Fix the bug that the head runs inference twice (#342) +14. Fix custom dataset training (#428) + +### Improvements + +01. Update `useful_tools.md` (#384) +02. Update the English version of `custom_dataset.md` (#381) +03. Remove the context argument from the rewriter function (#395) +04. Deprecate the `np.bool` type alias (#396) +05. Add a new video link for the custom dataset (#365) +06. Export ONNX for the model only (#361) +07. Add MMYOLO regression test yml (#359) +08. Update video tutorials in `article.md` (#350) +09. Add deploy demo (#343) +10. Optimize the visualization results of large images in debug mode (#346) +11. Improve args for `browse_dataset` and support `RepeatDataset` (#340, #338) + +### Contributors + +A total of 28 developers contributed to this release. + +Thank @RangeKing, @PeterH0323, @Nioolek, @triple-Mu, @matrixgame2018, @xin-li-67, @tang576225574, @kitecats, @Seperendity, @diplomatist, @vaew, @wzr-skn, @VoyagerXvoyagerx, @MambaWong, @tianleiSHI, @caj-github, @zhubochao, @lvhan028, @dsghaonan, @lyviva, @yuewangg, @wang-tf, @satuoqaq, @grimoire, @RunningLeon, @hanrui1sensetime, @RangiLyu, @hhaAndroid + ## v0.2.0(1/12/2022) ### Highlights diff --git a/docs/en/user_guides/yolov5_tutorial.md b/docs/en/user_guides/yolov5_tutorial.md index c225757f..ff9d703a 100644 --- a/docs/en/user_guides/yolov5_tutorial.md +++ b/docs/en/user_guides/yolov5_tutorial.md @@ -12,7 +12,7 @@ conda install pytorch torchvision -c pytorch pip install -U openmim mim install "mmengine>=0.3.1" mim install "mmcv>=2.0.0rc1,<2.1.0" -mim install "mmdet>=3.0.0rc3,<3.1.0" +mim install "mmdet>=3.0.0rc5,<3.1.0" git clone https://github.com/open-mmlab/mmyolo.git cd mmyolo # Install albumentations diff --git a/docs/zh_cn/advanced_guides/index.rst b/docs/zh_cn/advanced_guides/index.rst index 81810a88..02b06e61 100644 --- a/docs/zh_cn/advanced_guides/index.rst +++ b/docs/zh_cn/advanced_guides/index.rst @@ -1,3 +1,11 @@ +模块组合 +************************ + +.. 
toctree:: + :maxdepth: 1 + + module_combination.md + 数据流 ************************ diff --git a/docs/zh_cn/article.md b/docs/zh_cn/article.md index 6c999e5c..706f11d0 100644 --- a/docs/zh_cn/article.md +++ b/docs/zh_cn/article.md @@ -7,18 +7,14 @@ ### 文章 - [社区协作,简洁易用,快来开箱新一代 YOLO 系列开源库](https://zhuanlan.zhihu.com/p/575615805) - - [MMYOLO 社区倾情贡献,RTMDet 原理社区开发者解读来啦!](https://zhuanlan.zhihu.com/p/569777684) - - [玩转 MMYOLO 基础类第一期: 配置文件太复杂?继承用法看不懂?配置全解读来了](https://zhuanlan.zhihu.com/p/577715188) - - [玩转 MMYOLO 工具类第一期: 特征图可视化](https://zhuanlan.zhihu.com/p/578141381?) - - [玩转 MMYOLO 实用类第二期:源码阅读和调试「必备」技巧文档](https://zhuanlan.zhihu.com/p/580885852) - - [玩转 MMYOLO 基础类第二期:工程文件结构简析](https://zhuanlan.zhihu.com/p/584807195) - - [玩转 MMYOLO 实用类第二期:10分钟换遍主干网络文档](https://zhuanlan.zhihu.com/p/585641598) +- [MMYOLO 自定义数据集从标注到部署保姆级教程](https://zhuanlan.zhihu.com/p/595497726) +- [满足一切需求的 MMYOLO 可视化:测试过程可视化](https://zhuanlan.zhihu.com/p/593179372) ### 视频 diff --git a/docs/zh_cn/get_started.md b/docs/zh_cn/get_started.md index c1371f37..32b158bd 100644 --- a/docs/zh_cn/get_started.md +++ b/docs/zh_cn/get_started.md @@ -6,7 +6,8 @@ | MMYOLO version | MMDetection version | MMEngine version | MMCV version | | :------------: | :----------------------: | :----------------------: | :---------------------: | -| main | mmdet>=3.0.0rc3, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | +| main | mmdet>=3.0.0rc5, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | +| 0.3.0 | mmdet>=3.0.0rc5, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | | 0.2.0 | mmdet>=3.0.0rc3, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | | 0.1.3 | mmdet>=3.0.0rc3, \<3.1.0 | mmengine>=0.3.1, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | | 0.1.2 | mmdet>=3.0.0rc2, \<3.1.0 | mmengine>=0.3.0, \<1.0.0 | mmcv>=2.0.0rc0, \<2.1.0 | @@ -54,7 +55,7 @@ conda install pytorch torchvision cpuonly -c pytorch pip install -U openmim mim install "mmengine>=0.3.1" mim install "mmcv>=2.0.0rc1,<2.1.0" -mim install "mmdet>=3.0.0rc3,<3.1.0" +mim install "mmdet>=3.0.0rc5,<3.1.0" ``` **注意:** @@ -214,7 +215,7 @@ pip install "mmcv>=2.0.0rc1" -f https://download.openmmlab.com/mmcv/dist/cu116/t !pip3 install openmim !mim install "mmengine==0.1.0" !mim install "mmcv>=2.0.0rc1,<2.1.0" -!mim install "mmdet>=3.0.0.rc1" +!mim install "mmdet>=3.0.0rc5,<3.1.0" ``` **步骤 2.** 使用源码安装 MMYOLO: diff --git a/docs/zh_cn/notes/changelog.md b/docs/zh_cn/notes/changelog.md index ac5df1dc..7f5a9b3d 100644 --- a/docs/zh_cn/notes/changelog.md +++ b/docs/zh_cn/notes/changelog.md @@ -1,5 +1,72 @@ # 更新日志 +## v0.3.0 (8/1/2023) + +### 亮点 + +1. 实现了 [RTMDet](https://github.com/open-mmlab/mmyolo/blob/dev/configs/rtmdet/README.md) 的快速版本。RTMDet-s 8xA100 训练只需要 14 个小时,训练速度相比原先版本提升 2.6 倍。 +2. 支持 [PPYOLOE](https://github.com/open-mmlab/mmyolo/blob/dev/configs/ppyoloe/README.md) 训练。 +3. 支持 [YOLOv5](https://github.com/open-mmlab/mmyolo/blob/dev/configs/yolov5/crowdhuman/yolov5_s-v61_8xb16-300e_ignore_crowdhuman.py) 的 `iscrowd` 属性训练。 +4. 支持 [YOLOv5 正样本分配结果可视化](https://github.com/open-mmlab/mmyolo/blob/dev/projects/assigner_visualization/README.md) +5. 新增 [YOLOv6 原理和实现全解析文档](https://github.com/open-mmlab/mmyolo/blob/dev/docs/zh_cn/algorithm_descriptions/yolov6_description.md) + +### 新特性 + +01. 新增 `crowdhuman` 数据集 (#368) +02. EasyDeploy 中支持 TensorRT 推理 (#377) +03. 新增 `YOLOX` 结构图描述 (#402) +04. 新增视频推理脚本 (#392) +05. EasyDeploy 中支持 `YOLOv7` 部署 (#427) +06. 支持从 CLI 中的特定检查点恢复训练 (#393) +07. 将元信息字段设置为小写(#362、#412) +08. 新增模块组合文档 (#349, #352, #345) +09. 
新增关于如何冻结 backbone 或 neck 权重的文档 (#418) +10. 在 `how_to.md` 中添加不使用预训练权重的文档 (#404) +11. 新增关于如何设置随机种子的文档 (#386) +12. 将 `rtmdet_description.md` 文档翻译成英文 (#353) + +### Bug 修复 + +01. 修复设置 `--class-id-txt` 时输出注释文件中的错误 (#430) +02. 修复 `YOLOv5` head 中的批量推理错误 (#413) +03. 修复某些 head 的类型提示(#415、#416、#443) +04. 修复 expected a non-empty list of Tensors 错误 (#376) +05. 修复 `YOLOv7` 训练中的设备不一致错误(#397) +06. 修复 `LetterResize` 中的 `scale_factor` 和 `pad_param` 值 (#387) +07. 修复 readthedocs 的 docstring 图形渲染错误 (#400) +08. 修复 `YOLOv6` 从训练到验证时的断言错误 (#378) +09. 修复 `np.int` 和旧版 builder.py 导致的 CI 错误 (#389) +10. 修复 MMDeploy 重写器 (#366) +11. 修复 MMYOLO 单元测试错误 (#351) +12. 修复 `pad_param` 错误 (#354) +13. 修复 head 推理两次的错误(#342) +14. 修复自定义数据集训练 (#428) + +### 完善 + +01. 更新 `useful_tools.md` (#384) +02. 更新英文版 `custom_dataset.md` (#381) +03. 重写函数删除上下文参数 (#395) +04. 弃用 `np.bool` 类型别名 (#396) +05. 为自定义数据集添加新的视频链接 (#365) +06. 仅为模型导出 onnx (#361) +07. 添加 MMYOLO 回归测试 yml (#359) +08. 更新 `article.md` 中的视频教程 (#350) +09. 添加部署 demo (#343) +10. 优化 debug 模式下大图的可视化效果(#346) +11. 改进 `browse_dataset` 的参数并支持 `RepeatDataset` (#340, #338) + +### 视频 + +1. 发布了 [基于 sahi 的大图推理](https://www.bilibili.com/video/BV1EK411R7Ws/) +2. 发布了 [自定义数据集从标注到部署保姆级教程](https://www.bilibili.com/video/BV1RG4y137i5) + +### 贡献者 + +总共 28 位开发者参与了本次版本 + +谢谢 @RangeKing, @PeterH0323, @Nioolek, @triple-Mu, @matrixgame2018, @xin-li-67, @tang576225574, @kitecats, @Seperendity, @diplomatist, @vaew, @wzr-skn, @VoyagerXvoyagerx, @MambaWong, @tianleiSHI, @caj-github, @zhubochao, @lvhan028, @dsghaonan, @lyviva, @yuewangg, @wang-tf, @satuoqaq, @grimoire, @RunningLeon, @hanrui1sensetime, @RangiLyu, @hhaAndroid + ## v0.2.0(1/12/2022) ### 亮点 diff --git a/docs/zh_cn/overview.md b/docs/zh_cn/overview.md index 1515b038..6856b132 100644 --- a/docs/zh_cn/overview.md +++ b/docs/zh_cn/overview.md @@ -51,8 +51,9 @@ MMYOLO 文件结构和 MMDetection 完全一致。为了能够充分复用 MMDet 5. 参考以下教程深入了解: - - [数据流](https://mmyolo.readthedocs.io/zh_CN/latest/advanced_guides/index.html#id1) + - [模块组合](https://mmyolo.readthedocs.io/zh_CN/latest/advanced_guides/index.html#id1) + - [数据流](https://mmyolo.readthedocs.io/zh_CN/latest/advanced_guides/index.html#id2) - [How to](https://mmyolo.readthedocs.io/zh_CN/latest/advanced_guides/index.html#how-to) - - [插件](https://mmyolo.readthedocs.io/zh_CN/latest/advanced_guides/index.html#id3) + - [插件](https://mmyolo.readthedocs.io/zh_CN/latest/advanced_guides/index.html#id4) 6. 
[解读文章和资源汇总](article.md) diff --git a/docs/zh_cn/user_guides/yolov5_tutorial.md b/docs/zh_cn/user_guides/yolov5_tutorial.md index 20049899..411c4cb4 100644 --- a/docs/zh_cn/user_guides/yolov5_tutorial.md +++ b/docs/zh_cn/user_guides/yolov5_tutorial.md @@ -12,7 +12,7 @@ conda install pytorch torchvision -c pytorch pip install -U openmim mim install "mmengine>=0.3.1" mim install "mmcv>=2.0.0rc1,<2.1.0" -mim install "mmdet>=3.0.0rc3,<3.1.0" +mim install "mmdet>=3.0.0rc5,<3.1.0" git clone https://github.com/open-mmlab/mmyolo.git cd mmyolo # Install albumentations diff --git a/mmyolo/__init__.py b/mmyolo/__init__.py index 67367292..757c4084 100644 --- a/mmyolo/__init__.py +++ b/mmyolo/__init__.py @@ -14,7 +14,7 @@ mmengine_minimum_version = '0.3.1' mmengine_maximum_version = '1.0.0' mmengine_version = digit_version(mmengine.__version__) -mmdet_minimum_version = '3.0.0rc3' +mmdet_minimum_version = '3.0.0rc5' mmdet_maximum_version = '3.1.0' mmdet_version = digit_version(mmdet.__version__) diff --git a/mmyolo/models/task_modules/assigners/batch_yolov7_assigner.py b/mmyolo/models/task_modules/assigners/batch_yolov7_assigner.py index 2bfb0562..6709968e 100644 --- a/mmyolo/models/task_modules/assigners/batch_yolov7_assigner.py +++ b/mmyolo/models/task_modules/assigners/batch_yolov7_assigner.py @@ -254,6 +254,9 @@ class BatchYOLOv7Assigner(nn.Module): _mlvl_decoderd_bboxes = torch.cat(_mlvl_decoderd_bboxes, dim=0) num_pred_positive = _mlvl_decoderd_bboxes.shape[0] + if num_pred_positive == 0: + continue + # scaled xywh batch_input_shape_wh = pred_results[0].new_tensor( batch_input_shape[::-1]).repeat((1, 2)) diff --git a/mmyolo/version.py b/mmyolo/version.py index f823adab..f3c663b4 100644 --- a/mmyolo/version.py +++ b/mmyolo/version.py @@ -1,6 +1,6 @@ # Copyright (c) OpenMMLab. All rights reserved. 
-__version__ = '0.2.0' +__version__ = '0.3.0' from typing import Tuple diff --git a/projects/example_project/README.md b/projects/example_project/README.md index 52ca748f..24c84d98 100644 --- a/projects/example_project/README.md +++ b/projects/example_project/README.md @@ -37,20 +37,51 @@ You should claim whether this is based on the pre-trained weights, which are con | Method | Backbone | Pretrained Model | Training set | Test set | #epoch | box AP | Download | | :---------------------------------------------------------------------------: | :-------------------: | :--------------: | :------------: | :----------: | :----: | :----: | :----------------------: | -| [YOLOv5 dummy](configs/yolov5_s_dummy-backbone_v61_syncbn_8xb16-300e_coco.py) | DummyYOLOv5CSPDarknet | - | COCO2017 Train | COCO2017 Val | 12 | 0.8853 | [model](<>) \| [log](<>) | +| [YOLOv5 dummy](configs/yolov5_s_dummy-backbone_v61_syncbn_8xb16-300e_coco.py) | DummyYOLOv5CSPDarknet | - | COCO2017 Train | COCO2017 Val | 300 | 37.7 | [model](<>) \| [log](<>) | ## Citation ```latex -@article{Ren_2017, - title={Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - publisher={Institute of Electrical and Electronics Engineers (IEEE)}, - author={Ren, Shaoqing and He, Kaiming and Girshick, Ross and Sun, Jian}, - year={2017}, - month={Jun}, +@software{glenn_jocher_2022_7002879, + author = {Glenn Jocher and + Ayush Chaurasia and + Alex Stoken and + Jirka Borovec and + NanoCode012 and + Yonghye Kwon and + TaoXie and + Kalen Michael and + Jiacong Fang and + imyhxy and + Lorna and + Colin Wong and + 曾逸夫(Zeng Yifu) and + Abhiram V and + Diego Montes and + Zhiqiang Wang and + Cristi Fati and + Jebastin Nadar and + Laughing and + UnglvKitDe and + tkianai and + yxNONG and + Piotr Skalski and + Adam Hogan and + Max Strobel and + Mrinal Jain and + Lorenzo Mammana and + xylieong}, + title = {{ultralytics/yolov5: v6.2 - YOLOv5 Classification + Models, Apple M1, Reproducibility, ClearML and + Deci.ai integrations}}, + month = aug, + year = 2022, + publisher = {Zenodo}, + version = {v6.2}, + doi = {10.5281/zenodo.7002879}, + url = {https://doi.org/10.5281/zenodo.7002879} } ``` diff --git a/projects/example_project/configs/yolov5_s_dummy-backbone_v61_syncbn_8xb16-300e_coco.py b/projects/example_project/configs/yolov5_s_dummy-backbone_v61_syncbn_8xb16-300e_coco.py index bdbdcffd..55b43bb3 100644 --- a/projects/example_project/configs/yolov5_s_dummy-backbone_v61_syncbn_8xb16-300e_coco.py +++ b/projects/example_project/configs/yolov5_s_dummy-backbone_v61_syncbn_8xb16-300e_coco.py @@ -1,4 +1,4 @@ -_base_ = ['../../../configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py'] +_base_ = '../../../configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py' custom_imports = dict(imports=['projects.example_project.dummy']) diff --git a/projects/misc/custom_dataset/yolov7_tiny_syncbn_fast_1xb32-100e_cat.py b/projects/misc/custom_dataset/yolov7_tiny_syncbn_fast_1xb32-100e_cat.py new file mode 100644 index 00000000..fff59cb3 --- /dev/null +++ b/projects/misc/custom_dataset/yolov7_tiny_syncbn_fast_1xb32-100e_cat.py @@ -0,0 +1,78 @@ +_base_ = '../yolov7/yolov7_tiny_syncbn_fast_8x16b-300e_coco.py' + +max_epochs = 100 +data_root = './data/cat/' + +work_dir = './work_dirs/yolov7_tiny_syncbn_fast_1xb32-100e_cat' + +load_from = 
'https://download.openmmlab.com/mmyolo/v0/yolov7/yolov7_tiny_syncbn_fast_8x16b-300e_coco/yolov7_tiny_syncbn_fast_8x16b-300e_coco_20221126_102719-0ee5bbdf.pth' # noqa + +train_batch_size_per_gpu = 32 +train_num_workers = 4 # train_num_workers = nGPU x 4 + +save_epoch_intervals = 2 + +# base_lr_default * (your_bs / default_bs) +base_lr = 0.01 / 4 + +anchors = [ + [(68, 69), (154, 91), (143, 162)], # P3/8 + [(242, 160), (189, 287), (391, 207)], # P4/16 + [(353, 337), (539, 341), (443, 432)] # P5/32 +] + +class_name = ('cat', ) +num_classes = len(class_name) +metainfo = dict(classes=class_name, palette=[(220, 20, 60)]) + +train_cfg = dict( + max_epochs=max_epochs, + val_begin=20, + val_interval=save_epoch_intervals, + dynamic_intervals=[(max_epochs - 10, 1)]) + +model = dict( + bbox_head=dict( + head_module=dict(num_classes=num_classes), + prior_generator=dict(base_sizes=anchors), + loss_cls=dict(loss_weight=0.5 * + (num_classes / 80 * 3 / _base_.num_det_layers)))) + +train_dataloader = dict( + batch_size=train_batch_size_per_gpu, + num_workers=train_num_workers, + dataset=dict( + _delete_=True, + type='RepeatDataset', + times=5, + dataset=dict( + type=_base_.dataset_type, + data_root=data_root, + metainfo=metainfo, + ann_file='annotations/trainval.json', + data_prefix=dict(img='images/'), + filter_cfg=dict(filter_empty_gt=False, min_size=32), + pipeline=_base_.train_pipeline))) + +val_dataloader = dict( + dataset=dict( + metainfo=metainfo, + data_root=data_root, + ann_file='annotations/trainval.json', + data_prefix=dict(img='images/'))) + +test_dataloader = val_dataloader + +val_evaluator = dict(ann_file=data_root + 'annotations/trainval.json') +test_evaluator = val_evaluator + +optim_wrapper = dict(optimizer=dict(lr=base_lr)) + +default_hooks = dict( + checkpoint=dict( + type='CheckpointHook', + interval=save_epoch_intervals, + max_keep_ckpts=2, + save_best='auto'), + param_scheduler=dict(max_epochs=max_epochs), + logger=dict(type='LoggerHook', interval=10)) diff --git a/setup.cfg b/setup.cfg index c62d88cc..d30673d0 100644 --- a/setup.cfg +++ b/setup.cfg @@ -18,4 +18,4 @@ SPLIT_BEFORE_EXPRESSION_AFTER_OPENING_PAREN = true [codespell] skip = *.ipynb quiet-level = 3 -ignore-words-list = patten,nd,ty,mot,hist,formating,winn,gool,datas,wan,confids,TOOD,tood,ba,warmup,elease +ignore-words-list = patten,nd,ty,mot,hist,formating,winn,gool,datas,wan,confids,tood,ba,warmup,elease,dota
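For context on the `mmdet>=3.0.0rc5,<3.1.0` floor that this patch bumps across the install docs, Dockerfiles, and `mmyolo/__init__.py`: MMYOLO enforces the supported range at import time by comparing `digit_version` tuples of the installed packages against the minimum/maximum constants. The snippet below is a simplified sketch of that check, not the verbatim `mmyolo/__init__.py` code, and its assertion message is illustrative.

```python
# Simplified sketch of the import-time version gate. The constant names mirror
# the values changed in mmyolo/__init__.py above; the message is illustrative.
import mmdet
from mmengine.utils import digit_version

mmdet_minimum_version = '3.0.0rc5'
mmdet_maximum_version = '3.1.0'
mmdet_version = digit_version(mmdet.__version__)

# Reject any installed mmdet outside the supported half-open range.
assert (digit_version(mmdet_minimum_version) <= mmdet_version
        < digit_version(mmdet_maximum_version)), (
    f'MMDetection {mmdet.__version__} is incompatible with this MMYOLO '
    f'release; please install mmdet>={mmdet_minimum_version},'
    f'<{mmdet_maximum_version}.')
```

In practice, users upgrading to v0.3.0 also need to re-run `mim install "mmdet>=3.0.0rc5,<3.1.0"`, matching the commands updated in the README, tutorials, and Dockerfiles above.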