Apply `ruff==0.9.0` formatting (#13486)

* Refactor code for speed and clarity

* Auto-format by https://ultralytics.com/actions

---------

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
pull/13488/head
Glenn Jocher 2025-01-10 01:42:04 +01:00 committed by GitHub
parent 6981c2742a
commit 86fd1ab270
20 changed files with 53 additions and 84 deletions
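The diff below is mechanical: it is what the pinned formatter produces. As a rough local reproduction (a sketch only; the exact invocation run by the Ultralytics Actions bot is not part of this commit):

```bash
pip install ruff==0.9.0  # pin the formatter so results match this commit
ruff format .            # rewrite files in place with the 0.9 stable style
ruff check . --fix       # optionally apply auto-fixable lint rules as well
```

Two 0.9 style changes account for most hunks: expressions inside f-strings are now formatted (quote and numeric-literal normalization inside `{...}`), and long `assert` statements parenthesize the message instead of splitting the condition. The walrus-operator and dict-union changes come from the separate "refactor for speed and clarity" pass, not from the formatter.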

View File

@@ -208,7 +208,7 @@ YOLOv5 has been designed to be super easy to get started and simple to learn. We
<summary>Figure Notes</summary>
- **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
- - **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
+ - **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) V100 instance at batch-size 32.
- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
@@ -235,7 +235,7 @@ YOLOv5 has been designed to be super easy to get started and simple to learn. We
- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- - **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
+ - **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** [Test Time Augmentation](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>

View File

@@ -207,7 +207,7 @@ YOLOv5 is super easy to get started with and simple to learn. We prioritize real-world
<summary>Figure Notes</summary>
- **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
- - **GPU inference speed** is the average inference time per image on the [COCO val2017](http://cocodataset.org) dataset, using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch size 32.
+ - **GPU inference speed** is the average inference time per image on the [COCO val2017](http://cocodataset.org) dataset, using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) V100 instance at batch size 32.
- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 32.
- **Reproduce** with `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
@@ -234,7 +234,7 @@ YOLOv5 is super easy to get started with and simple to learn. We prioritize real-world
- All models are trained for 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps; all other models use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce with `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- - **Inference speed** is averaged over COCO val images, tested on an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS time (roughly 1 ms/img) is not included.<br>Reproduce with `python val.py --data coco.yaml --img 640 --task speed --batch 1`
+ - **Inference speed** is averaged over COCO val images, tested on an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p4/) instance. NMS time (roughly 1 ms/img) is not included.<br>Reproduce with `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** [Test Time Augmentation](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation/) includes reflection and scale augmentations.<br>Reproduce with `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>

View File

@@ -192,7 +192,7 @@ def run(
vid_writer[i].write(im0)
# Print time (inference-only)
LOGGER.info(f"{s}{dt[1].dt * 1E3:.1f}ms")
LOGGER.info(f"{s}{dt[1].dt * 1e3:.1f}ms")
# Print results
t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image
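The only change in this hunk is `1E3` → `1e3`: ruff 0.9.0 begins formatting the code inside f-string interpolations, so numeric literals between `{...}` now get the same lowercase-exponent normalization as standalone literals. A minimal sketch with an illustrative timing value:

```python
dt = 0.0123  # inference time in seconds (illustrative value)

before = f"{dt * 1E3:.1f}ms"  # pre-0.9: interpolation contents were left as written
after = f"{dt * 1e3:.1f}ms"   # 0.9 stable style: exponent marker lowercased

assert before == after == "12.3ms"  # only the source text changes, never the value
```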

View File

@@ -201,10 +201,10 @@ def train(opt, device):
scaler = amp.GradScaler(enabled=cuda)
val = test_dir.stem # 'val' or 'test'
LOGGER.info(
- f'Image sizes {imgsz} train, {imgsz} test\n'
- f'Using {nw * WORLD_SIZE} dataloader workers\n'
+ f"Image sizes {imgsz} train, {imgsz} test\n"
+ f"Using {nw * WORLD_SIZE} dataloader workers\n"
f"Logging results to {colorstr('bold', save_dir)}\n"
- f'Starting {opt.model} training on {data} dataset with {nc} classes for {epochs} epochs...\n\n'
+ f"Starting {opt.model} training on {data} dataset with {nc} classes for {epochs} epochs...\n\n"
f"{'Epoch':>10}{'GPU_mem':>10}{'train_loss':>12}{f'{val}_loss':>12}{'top1_acc':>12}{'top5_acc':>12}"
)
for epoch in range(epochs): # loop over the dataset multiple times
@@ -290,13 +290,13 @@ def train(opt, device):
# Train complete
if RANK in {-1, 0} and final_epoch:
LOGGER.info(
- f'\nTraining complete ({(time.time() - t0) / 3600:.3f} hours)'
+ f"\nTraining complete ({(time.time() - t0) / 3600:.3f} hours)"
f"\nResults saved to {colorstr('bold', save_dir)}"
- f'\nPredict: python classify/predict.py --weights {best} --source im.jpg'
- f'\nValidate: python classify/val.py --weights {best} --data {data_dir}'
- f'\nExport: python export.py --weights {best} --include onnx'
+ f"\nPredict: python classify/predict.py --weights {best} --source im.jpg"
+ f"\nValidate: python classify/val.py --weights {best} --data {data_dir}"
+ f"\nExport: python export.py --weights {best} --include onnx"
f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{best}')"
- f'\nVisualize: https://netron.app\n'
+ f"\nVisualize: https://netron.app\n"
)
# Plot examples

View File

@@ -309,7 +309,7 @@ def run(
vid_writer[i].write(im0)
# Print time (inference-only)
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1e3:.1f}ms")
# Print results
t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image

View File

@@ -1471,12 +1471,12 @@ def run(
else ""
)
LOGGER.info(
- f'\nExport complete ({time.time() - t:.1f}s)'
+ f"\nExport complete ({time.time() - t:.1f}s)"
f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
f"\nDetect: python {dir / ('detect.py' if det else 'predict.py')} --weights {f[-1]} {h}"
f"\nValidate: python {dir / 'val.py'} --weights {f[-1]} {h}"
f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}') {s}"
- f'\nVisualize: https://netron.app'
+ f"\nVisualize: https://netron.app"
)
return f # return list of exported files/dirs

View File

@@ -245,7 +245,7 @@ def run(
vid_writer[i].write(im0)
# Print time (inference-only)
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1e3:.1f}ms")
# Print results
t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image

View File

@@ -325,10 +325,10 @@ def train(hyp, opt, device, callbacks):
compute_loss = ComputeLoss(model, overlap=overlap) # init loss class
# callbacks.run('on_train_start')
LOGGER.info(
- f'Image sizes {imgsz} train, {imgsz} val\n'
- f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
+ f"Image sizes {imgsz} train, {imgsz} val\n"
+ f"Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n"
f"Logging results to {colorstr('bold', save_dir)}\n"
- f'Starting training for {epochs} epochs...'
+ f"Starting training for {epochs} epochs..."
)
for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
# callbacks.run('on_train_epoch_start')
@@ -405,7 +405,7 @@ def train(hyp, opt, device, callbacks):
# Log
if RANK in {-1, 0}:
mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
mem = f"{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
mem = f"{torch.cuda.memory_reserved() / 1e9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
pbar.set_description(
("%11s" * 2 + "%11.4g" * 6)
% (f"{epoch}/{epochs - 1}", mem, *mloss, targets.shape[0], imgs.shape[-1])
@@ -740,9 +740,9 @@ def main(opt, callbacks=Callbacks()):
# Plot results
plot_evolve(evolve_csv)
LOGGER.info(
- f'Hyperparameter evolution finished {opt.evolve} generations\n'
+ f"Hyperparameter evolution finished {opt.evolve} generations\n"
f"Results saved to {colorstr('bold', save_dir)}\n"
- f'Usage example: $ python train.py --hyp {evolve_yaml}'
+ f"Usage example: $ python train.py --hyp {evolve_yaml}"
)

View File

@@ -357,10 +357,10 @@ def train(hyp, opt, device, callbacks):
compute_loss = ComputeLoss(model) # init loss class
callbacks.run("on_train_start")
LOGGER.info(
- f'Image sizes {imgsz} train, {imgsz} val\n'
- f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
+ f"Image sizes {imgsz} train, {imgsz} val\n"
+ f"Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n"
f"Logging results to {colorstr('bold', save_dir)}\n"
- f'Starting training for {epochs} epochs...'
+ f"Starting training for {epochs} epochs..."
)
for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
callbacks.run("on_train_epoch_start")
@@ -434,7 +434,7 @@ def train(hyp, opt, device, callbacks):
# Log
if RANK in {-1, 0}:
mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
mem = f"{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
mem = f"{torch.cuda.memory_reserved() / 1e9 if torch.cuda.is_available() else 0:.3g}G" # (GB)
pbar.set_description(
("%11s" * 2 + "%11.4g" * 5)
% (f"{epoch}/{epochs - 1}", mem, *mloss, targets.shape[0], imgs.shape[-1])
@@ -880,9 +880,9 @@ def main(opt, callbacks=Callbacks()):
# Plot results
plot_evolve(evolve_csv)
LOGGER.info(
- f'Hyperparameter evolution finished {opt.evolve} generations\n'
+ f"Hyperparameter evolution finished {opt.evolve} generations\n"
f"Results saved to {colorstr('bold', save_dir)}\n"
- f'Usage example: $ python train.py --hyp {evolve_yaml}'
+ f"Usage example: $ python train.py --hyp {evolve_yaml}"
)

View File

@@ -197,15 +197,7 @@ def random_perspective(
else: # affine
im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
# Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(im[:, :, ::-1]) # base
- # ax[1].imshow(im2[:, :, ::-1]) # warped
# Transform label coordinates
- n = len(targets)
- if n:
+ if n := len(targets):
use_segments = any(x.any() for x in segments) and len(segments) == n
new = np.zeros((n, 4))
if use_segments: # warp segments
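Beyond dropping the dead visualization comments, this hunk converts the assign-then-test idiom to an assignment expression, a pattern repeated in the dataloader and loss hunks below. A minimal sketch with a hypothetical label array:

```python
import numpy as np

targets = np.zeros((3, 5))  # hypothetical (cls, x, y, w, h) labels

# Before: bind first, then test truthiness
n = len(targets)
if n:
    print(f"transforming {n} labels")

# After: the walrus operator (Python >= 3.8) binds and tests in one expression,
# and `n` stays available inside the block
if n := len(targets):
    print(f"transforming {n} labels")
```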

View File

@@ -356,8 +356,7 @@ class LoadImages:
else:
self.cap = None
assert self.nf > 0, (
f"No images or videos found in {p}. "
f"Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}"
f"No images or videos found in {p}. Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}"
)
def __iter__(self):
@@ -713,8 +712,8 @@ class LoadImagesAndLabels(Dataset):
cache = mem_required * (1 + safety_margin) < mem.available # to cache or not to cache, that is the question
if not cache:
LOGGER.info(
- f'{prefix}{mem_required / gb:.1f}GB RAM required, '
- f'{mem.available / gb:.1f}/{mem.total / gb:.1f}GB available, '
+ f"{prefix}{mem_required / gb:.1f}GB RAM required, "
+ f"{mem.available / gb:.1f}/{mem.total / gb:.1f}GB available, "
f"{'caching images ✅' if cache else 'not caching images ⚠️'}"
)
return cache
@@ -774,8 +773,7 @@ class LoadImagesAndLabels(Dataset):
index = self.indices[index] # linear, shuffled, or image_weights
hyp = self.hyp
- mosaic = self.mosaic and random.random() < hyp["mosaic"]
- if mosaic:
+ if mosaic := self.mosaic and random.random() < hyp["mosaic"]:
# Load mosaic
img, labels = self.load_mosaic(index)
shapes = None
@@ -1162,8 +1160,7 @@ def verify_image_label(args):
segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in lb] # (cls, xy1...)
lb = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh)
lb = np.array(lb, dtype=np.float32)
- nl = len(lb)
- if nl:
+ if nl := len(lb):
assert lb.shape[1] == 5, f"labels require 5 columns, {lb.shape[1]} columns detected"
assert (lb >= 0).all(), f"negative label values {lb[lb < 0]}"
assert (lb[:, 1:] <= 1).all(), f"non-normalized or out of bounds coordinates {lb[:, 1:][lb[:, 1:] > 1]}"

View File

@@ -173,8 +173,7 @@ def user_config_dir(dir="Ultralytics", env_var="YOLOV5_CONFIG_DIR"):
"""Returns user configuration directory path, preferring environment variable `YOLOV5_CONFIG_DIR` if set, else OS-
specific.
"""
- env = os.getenv(env_var)
- if env:
+ if env := os.getenv(env_var):
path = Path(env) # use environment variable
else:
cfg = {"Windows": "AppData/Roaming", "Linux": ".config", "Darwin": "Library/Application Support"} # 3 OS dirs
@@ -496,9 +495,9 @@ def check_file(file, suffix=""):
assert Path(file).exists() and Path(file).stat().st_size > 0, f"File download failed: {url}" # check
return file
elif file.startswith("clearml://"): # ClearML Dataset ID
- assert (
- "clearml" in sys.modules
- ), "ClearML is not installed, so cannot use ClearML dataset. Try running 'pip install clearml'."
+ assert "clearml" in sys.modules, (
+ "ClearML is not installed, so cannot use ClearML dataset. Try running 'pip install clearml'."
+ )
return file
else: # search
files = []
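The reshaped assert here (and in `select_device` further down) illustrates the other headline 0.9 formatting change: when an `assert` no longer fits on one line, the message is parenthesized and the condition kept intact, instead of the condition being split. A generic sketch (placeholder condition and message, not from this repo):

```python
import sys

# Old style: the condition was wrapped to satisfy the line limit
assert (
    "sys" in sys.modules
), "expected the sys module to be loaded before reaching this point"

# New (ruff 0.9) style: condition on one line, message parenthesized
assert "sys" in sys.modules, (
    "expected the sys module to be loaded before reaching this point"
)
```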

View File

@@ -41,11 +41,9 @@ def construct_dataset(clearml_info_string):
with open(yaml_filenames[0]) as f:
dataset_definition = yaml.safe_load(f)
- assert set(
- dataset_definition.keys()
- ).issuperset(
- {"train", "test", "val", "nc", "names"}
- ), "The right keys were not found in the yaml file, make sure it at least has the following keys: ('train', 'test', 'val', 'nc', 'names')"
+ assert set(dataset_definition.keys()).issuperset({"train", "test", "val", "nc", "names"}), (
+ "The right keys were not found in the yaml file, make sure it at least has the following keys: ('train', 'test', 'val', 'nc', 'names')"
+ )
data_dict = {
"train": (

View File

@@ -88,14 +88,12 @@ class CometLogger:
self.upload_dataset = self.opt.upload_dataset or COMET_UPLOAD_DATASET
self.resume = self.opt.resume
# Default parameters to pass to Experiment objects
self.default_experiment_kwargs = {
"log_code": False,
"log_env_gpu": True,
"log_env_cpu": True,
"project_name": COMET_PROJECT_NAME,
- }
- self.default_experiment_kwargs.update(experiment_kwargs)
+ } | experiment_kwargs
self.experiment = self._get_experiment(self.comet_mode, run_id)
self.experiment.set_name(self.opt.name)
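The `} | experiment_kwargs` change replaces an in-place `dict.update` with the dict union operator, part of the "refactor for clarity" bullet rather than the formatter. A small sketch with made-up keys:

```python
# Hypothetical defaults and caller-supplied overrides
defaults = {"log_code": False, "log_env_gpu": True, "project_name": "yolov5"}
experiment_kwargs = {"log_code": True}

# Before: copy, then mutate
merged = dict(defaults)
merged.update(experiment_kwargs)

# After: the | operator (Python >= 3.9) builds the merged dict in one
# expression; keys from the right operand win on conflict
merged = defaults | experiment_kwargs
assert merged == {"log_code": True, "log_env_gpu": True, "project_name": "yolov5"}
```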

View File

@@ -18,7 +18,7 @@ if str(ROOT) not in sys.path:
RANK = int(os.getenv("RANK", -1))
DEPRECATION_WARNING = (
f"{colorstr('wandb')}: WARNING ⚠️ wandb is deprecated and will be removed in a future release. "
- f'See supported integrations at https://github.com/ultralytics/yolov5#integrations.'
+ f"See supported integrations at https://github.com/ultralytics/yolov5#integrations."
)
try:

View File

@@ -148,8 +148,7 @@ class ComputeLoss:
b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
tobj = torch.zeros(pi.shape[:4], dtype=pi.dtype, device=self.device) # target obj
- n = b.shape[0] # number of targets
- if n:
+ if n := b.shape[0]:
# pxy, pwh, _, pcls = pi[b, a, gj, gi].tensor_split((2, 4, 5), dim=1) # faster, requires torch 1.8.0
pxy, pwh, _, pcls = pi[b, a, gj, gi].split((2, 2, 1, self.nc), 1) # target-subset of predictions
@@ -175,10 +174,6 @@
t[range(n), tcls[i]] = self.cp
lcls += self.BCEcls(pcls, t) # BCE
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
obji = self.BCEobj(pi[..., 4], tobj)
lobj += obji * self.balance[i] # obj loss
if self.autobalance:

View File

@@ -69,16 +69,8 @@ def random_perspective(
else: # affine
im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
# Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(im[:, :, ::-1]) # base
- # ax[1].imshow(im2[:, :, ::-1]) # warped
# Transform label coordinates
- n = len(targets)
new_segments = []
- if n:
+ if n := len(targets):
new = np.zeros((n, 4))
segments = resample_segments(segments) # upsample
for i, segment in enumerate(segments):

View File

@@ -132,9 +132,7 @@ class LoadImagesAndLabelsAndMasks(LoadImagesAndLabels): # for training/testing
index = self.indices[index] # linear, shuffled, or image_weights
hyp = self.hyp
- mosaic = self.mosaic and random.random() < hyp["mosaic"]
- masks = []
- if mosaic:
+ if mosaic := self.mosaic and random.random() < hyp["mosaic"]:
# Load mosaic
img, labels, segments = self.load_mosaic(index)
shapes = None
@@ -180,6 +178,7 @@ class LoadImagesAndLabelsAndMasks(LoadImagesAndLabels): # for training/testing
)
nl = len(labels) # number of labels
+ masks = []
if nl:
labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0], clip=True, eps=1e-3)
if self.overlap:

View File

@@ -61,8 +61,7 @@ class ComputeLoss:
b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
tobj = torch.zeros(pi.shape[:4], dtype=pi.dtype, device=self.device) # target obj
- n = b.shape[0] # number of targets
- if n:
+ if n := b.shape[0]:
pxy, pwh, _, pcls, pmask = pi[b, a, gj, gi].split((2, 2, 1, self.nc, nm), 1) # subset of predictions
# Box regression

View File

@@ -121,9 +121,9 @@ def select_device(device="", batch_size=0, newline=True):
os.environ["CUDA_VISIBLE_DEVICES"] = "-1" # force torch.cuda.is_available() = False
elif device: # non-cpu device requested
os.environ["CUDA_VISIBLE_DEVICES"] = device # set environment variable - must be before assert is_available()
- assert torch.cuda.is_available() and torch.cuda.device_count() >= len(
- device.replace(",", "")
- ), f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)"
+ assert torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(",", "")), (
+ f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)"
+ )
if not cpu and not mps and torch.cuda.is_available(): # prefer GPU if available
devices = device.split(",") if device else "0" # range(torch.cuda.device_count()) # i.e. 0,1,6,7
@@ -380,7 +380,7 @@ def smart_optimizer(model, name="Adam", lr=0.001, momentum=0.9, decay=1e-5):
optimizer.add_param_group({"params": g[1], "weight_decay": 0.0}) # add g1 (BatchNorm2d weights)
LOGGER.info(
f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups "
- f'{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias'
+ f"{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias"
)
return optimizer