Update GitHub banner redirect (#13542)

* Update README.md

Signed-off-by: Muhammad Rizwan Munawar <muhammadrizwanmunawar123@gmail.com>

* Auto-format by https://ultralytics.com/actions

* Update README.md

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Auto-format by https://ultralytics.com/actions

* Update README.md

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Update README.zh-CN.md

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Update README.md

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Auto-format by https://ultralytics.com/actions

* Update README.md

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Update README.md

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Update README.zh-CN.md

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>

---------

Signed-off-by: Muhammad Rizwan Munawar <muhammadrizwanmunawar123@gmail.com>
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Muhammad Rizwan Munawar 2025-03-23 15:35:00 +05:00 committed by GitHub
parent 8cc449636d
commit ba1a483345
4 changed files with 84 additions and 84 deletions

README.md

@@ -1,7 +1,7 @@
<div align="center">
<p>
- <a href="https://www.ultralytics.com/events/yolovision" target="_blank">
- <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png"></a>
+ <a href="https://www.ultralytics.com/blog/all-you-need-to-know-about-ultralytics-yolo11-and-its-applications" target="_blank">
+ <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png" alt="Ultralytics YOLO banner"></a>
</p>
</p>
[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar)
@@ -70,9 +70,9 @@ See the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for full documentati
Clone the repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.8.0**](https://www.python.org/) environment, including [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).
```bash
- git clone https://github.com/ultralytics/yolov5  # clone
+ git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
- pip install -r requirements.txt  # install
+ pip install -r requirements.txt # install
```
</details>
@@ -106,16 +106,16 @@ results.print() # or .show(), .save(), .crop(), .pandas(), etc.
`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
- python detect.py --weights yolov5s.pt --source 0                               # webcam
-                                                img.jpg                         # image
-                                                vid.mp4                         # video
-                                                screen                          # screenshot
-                                                path/                           # directory
-                                                list.txt                        # list of images
-                                                list.streams                    # list of streams
-                                                'path/*.jpg'                    # glob
-                                                'https://youtu.be/LNwODJXcvt4'  # YouTube
-                                                'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
+ python detect.py --weights yolov5s.pt --source 0 # webcam
+ python detect.py --weights yolov5s.pt --source img.jpg # image
+ python detect.py --weights yolov5s.pt --source vid.mp4 # video
+ python detect.py --weights yolov5s.pt --source screen # screenshot
+ python detect.py --weights yolov5s.pt --source path/ # directory
+ python detect.py --weights yolov5s.pt --source list.txt # list of images
+ python detect.py --weights yolov5s.pt --source list.streams # list of streams
+ python detect.py --weights yolov5s.pt --source 'path/*.jpg' # glob
+ python detect.py --weights yolov5s.pt --source 'https://youtu.be/LNwODJXcvt4' # YouTube
+ python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
</details>
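
For readers skimming the new one-command-per-source style above, here is an illustrative variation (not part of the diff; the run name `webcam_demo` is invented) combining a source with other standard `detect.py` flags:

```bash
# hedged example: webcam inference at 640 px with the default 0.25 confidence threshold made explicit
python detect.py --weights yolov5s.pt --source 0 --imgsz 640 --conf-thres 0.25 --name webcam_demo
```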
@@ -126,11 +126,11 @@ python detect.py --weights yolov5s.pt --source 0 #
The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) results. [Models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are 1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) times faster). Use the largest `--batch-size` possible, or pass `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB.
```bash
- python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
-                                                                  yolov5s                    64
-                                                                  yolov5m                    40
-                                                                  yolov5l                    24
-                                                                  yolov5x                    16
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size 64
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5m.yaml --batch-size 40
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5l.yaml --batch-size 24
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --batch-size 16
```
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
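
The `--batch-size -1` AutoBatch option mentioned in the prose above is easy to miss among the fixed sizes; a minimal sketch of it (single CUDA device assumed, other arguments as in the hunk):

```bash
# hedged sketch: let AutoBatch pick the largest batch size that fits in GPU memory
python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size -1
```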
@@ -289,8 +289,8 @@ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train
Validate YOLOv5s-seg mask mAP on the COCO dataset:
```bash
- bash data/scripts/get_coco.sh --val --segments  # download COCO val segments split (780MB, 5000 images)
- python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640  # validate
+ bash data/scripts/get_coco.sh --val --segments # download COCO val segments split (780MB, 5000 images)
+ python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # validate
```
### Predict
@@ -380,8 +380,8 @@ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/trai
Validate YOLOv5m-cls accuracy on the ImageNet-1k dataset:
```bash
- bash data/scripts/get_imagenet.sh --val  # download ImageNet val split (6.3G, 50000 images)
- python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224  # validate
+ bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
+ python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate
```
### Predict

README.zh-CN.md

@@ -1,7 +1,7 @@
<div align="center">
<p>
- <a href="https://www.ultralytics.com/events/yolovision" target="_blank">
- <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png"></a>
+ <a href="https://www.ultralytics.com/blog/all-you-need-to-know-about-ultralytics-yolo11-and-its-applications" target="_blank">
+ <img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png" alt="Ultralytics YOLO banner"></a>
</p>
</p>
[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar)
@@ -68,9 +68,9 @@ pip install ultralytics
Clone the repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.8.0**](https://www.python.org/) environment, including [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).
```bash
- git clone https://github.com/ultralytics/yolov5  # clone
+ git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
- pip install -r requirements.txt  # install
+ pip install -r requirements.txt # install
```
</details>
@@ -104,16 +104,16 @@ results.print() # or .show(), .save(), .crop(), .pandas(), etc.
`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
- python detect.py --weights yolov5s.pt --source 0                               # webcam
-                                                img.jpg                         # image
-                                                vid.mp4                         # video
-                                                screen                          # screenshot
-                                                path/                           # directory
-                                                list.txt                        # list of images
-                                                list.streams                    # list of streams
-                                                'path/*.jpg'                    # glob
-                                                'https://youtu.be/LNwODJXcvt4'  # YouTube
-                                                'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
+ python detect.py --weights yolov5s.pt --source 0 # webcam
+ python detect.py --weights yolov5s.pt --source img.jpg # image
+ python detect.py --weights yolov5s.pt --source vid.mp4 # video
+ python detect.py --weights yolov5s.pt --source screen # screenshot
+ python detect.py --weights yolov5s.pt --source path/ # directory
+ python detect.py --weights yolov5s.pt --source list.txt # list of images
+ python detect.py --weights yolov5s.pt --source list.streams # list of streams
+ python detect.py --weights yolov5s.pt --source 'path/*.jpg' # glob
+ python detect.py --weights yolov5s.pt --source 'https://youtu.be/LNwODJXcvt4' # YouTube
+ python detect.py --weights yolov5s.pt --source 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
</details>
@@ -125,11 +125,11 @@ python detect.py --weights yolov5s.pt --source 0 #
Models and datasets download automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are 1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training/) training is faster). Use the largest `--batch-size` possible, or pass `--batch-size -1` for YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown are for V100-16GB.
```bash
- python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
-                                                                  yolov5s                    64
-                                                                  yolov5m                    40
-                                                                  yolov5l                    24
-                                                                  yolov5x                    16
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5s.yaml --batch-size 64
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5m.yaml --batch-size 40
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5l.yaml --batch-size 24
+ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5x.yaml --batch-size 16
```
<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
@@ -290,8 +290,8 @@ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train
Validate YOLOv5s-seg mask mAP on the COCO dataset:
```bash
- bash data/scripts/get_coco.sh --val --segments  # download COCO val segments split (780MB, 5000 images)
- python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640  # validate
+ bash data/scripts/get_coco.sh --val --segments # download COCO val segments split (780MB, 5000 images)
+ python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # validate
```
### Predict
@@ -380,8 +380,8 @@ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/trai
Validate YOLOv5m-cls accuracy on the ImageNet-1k dataset:
```bash
- bash data/scripts/get_imagenet.sh --val  # download ImageNet val split (6.3G, 50000 images)
- python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224  # validate
+ bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
+ python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate
```
### Predict

utils/loggers/clearml/README.md

@@ -45,7 +45,7 @@ That's it! You're done 😎
To enable ClearML experiment tracking, simply install the ClearML pip package.
```bash
- pip install clearml>=1.2.0
+ pip install clearml
```
This will enable integration with the YOLOv5 training script. Every training run from now on will be captured and stored by the ClearML experiment manager.
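
If you do want to keep the version pin that the removed line carried, note that an unquoted `>=` is parsed as output redirection by POSIX shells; quoting the requirement avoids that (a generic pip/shell fact, not something this diff changes):

```bash
# quote the requirement so '>' is not treated as shell redirection
pip install "clearml>=1.2.0"
```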
@@ -144,7 +144,7 @@ clearml-data close
Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 🚀 models!
```bash
- python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
+ python train.py --img 640 --batch 16 --epochs 3 --data clearml://YOUR_DATASET_ID --weights yolov5s.pt --cache
```
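
For context, the dataset id consumed above comes from the `clearml-data` workflow the hunk header references (`clearml-data close`); a rough sketch, with the project name, dataset name, and local path all placeholder values:

```bash
# hedged sketch: create, populate, and finalize a ClearML dataset
clearml-data create --project "YOLOv5" --name "coco128"
clearml-data add --files datasets/coco128
clearml-data close # uploads the data and prints the id used as clearml://YOUR_DATASET_ID
```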
## 👀 Hyperparameter Optimization
@@ -177,7 +177,7 @@ In short: every experiment tracked by the experiment manager contains enough inf
You can turn any machine (a cloud VM, a local GPU machine, your own laptop ...) into a ClearML agent by simply running:
```bash
- clearml-agent daemon --queue <queues_to_listen_to> [--docker]
+ clearml-agent daemon --queue QUEUES_TO_LISTEN_TO [--docker]
```
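
To make the placeholder concrete, an agent serving a single queue named `gpu` (an invented name) inside Docker would be launched like this:

```bash
# illustrative: listen on one queue and execute queued tasks in Docker containers
clearml-agent daemon --queue gpu --docker
```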
### Cloning, Editing And Enqueuing

utils/loggers/comet/README.md

@@ -102,12 +102,12 @@ Logging Models to Comet is disabled by default. To enable it, pass the `save-per
```shell
python train.py \
- --img 640 \
- --batch 16 \
- --epochs 5 \
- --data coco128.yaml \
- --weights yolov5s.pt \
- --save-period 1
+   --img 640 \
+   --batch 16 \
+   --epochs 5 \
+   --data coco128.yaml \
+   --weights yolov5s.pt \
+   --save-period 1
```
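
These `train.py` runs assume the Comet SDK is already installed and credentials are exported; a minimal setup sketch (the key is a placeholder, and `COMET_PROJECT_NAME` is optional):

```bash
# install the Comet SDK and expose credentials before running train.py
pip install comet_ml
export COMET_API_KEY=YOUR_API_KEY # placeholder
export COMET_PROJECT_NAME=yolov5  # optional project name
```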
## Logging Model Predictions
@@ -122,12 +122,12 @@ Here is an [example project using the Panel](https://www.comet.com/examples/come
```shell
python train.py \
- --img 640 \
- --batch 16 \
- --epochs 5 \
- --data coco128.yaml \
- --weights yolov5s.pt \
- --bbox_interval 2
+   --img 640 \
+   --batch 16 \
+   --epochs 5 \
+   --data coco128.yaml \
+   --weights yolov5s.pt \
+   --bbox_interval 2
```
### Controlling the number of Prediction Images logged to Comet
@@ -136,12 +136,12 @@ When logging predictions from YOLOv5, Comet will log the images associated with
```shell
env COMET_MAX_IMAGE_UPLOADS=200 python train.py \
- --img 640 \
- --batch 16 \
- --epochs 5 \
- --data coco128.yaml \
- --weights yolov5s.pt \
- --bbox_interval 1
+   --img 640 \
+   --batch 16 \
+   --epochs 5 \
+   --data coco128.yaml \
+   --weights yolov5s.pt \
+   --bbox_interval 1
```
### Logging Class Level Metrics
@@ -150,11 +150,11 @@ Use the `COMET_LOG_PER_CLASS_METRICS` environment variable to log mAP, precision
```shell
env COMET_LOG_PER_CLASS_METRICS=true python train.py \
- --img 640 \
- --batch 16 \
- --epochs 5 \
- --data coco128.yaml \
- --weights yolov5s.pt
+   --img 640 \
+   --batch 16 \
+   --epochs 5 \
+   --data coco128.yaml \
+   --weights yolov5s.pt
```
## Uploading a Dataset to Comet Artifacts
@@ -165,12 +165,12 @@ The dataset needs to be organized in the way described in the [YOLOv5 documentation](http
```shell
python train.py \
- --img 640 \
- --batch 16 \
- --epochs 5 \
- --data coco128.yaml \
- --weights yolov5s.pt \
- --upload_dataset
+   --img 640 \
+   --batch 16 \
+   --epochs 5 \
+   --data coco128.yaml \
+   --weights yolov5s.pt \
+   --upload_dataset
```
You can find the uploaded dataset in the Artifacts tab in your Comet Workspace <img width="1073" alt="artifact-1" src="https://user-images.githubusercontent.com/7529846/186929193-162718bf-ec7b-4eb9-8c3b-86b3763ef8ea.png">
@@ -192,11 +192,11 @@ Then pass this file to your training script in the following way
```shell
python train.py \
- --img 640 \
- --batch 16 \
- --epochs 5 \
- --data artifact.yaml \
- --weights yolov5s.pt
+   --img 640 \
+   --batch 16 \
+   --epochs 5 \
+   --data artifact.yaml \
+   --weights yolov5s.pt
```
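
The `artifact.yaml` consumed above is the small data file the guide describes just before this hunk; a hedged sketch of writing one, with the workspace, artifact name, and version all placeholders (take the real values from your Artifacts tab):

```bash
# write a minimal artifact.yaml whose path points at a Comet Artifact
cat > artifact.yaml <<'EOF'
path: "comet://WORKSPACE/ARTIFACT_NAME:VERSION"
EOF
```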
Artifacts also allow you to track the lineage of data as it flows through your Experimentation workflow. Here you can see a graph that shows you all the experiments that have used your uploaded dataset. <img width="1391" alt="artifact-4" src="https://user-images.githubusercontent.com/7529846/186929264-4c4014fa-fe51-4f3c-a5c5-f6d24649b1b4.png">
@@ -211,7 +211,7 @@ This will restore the run to its state before the interruption, which includes r
```shell
python train.py \
- --resume "comet://<your run path>"
+   --resume "comet://<your run path>"
```
## Hyperparameter Search with the Comet Optimizer