
ClearML Integration with Ultralytics YOLO


About ClearML

ClearML is an open-source MLOps platform designed to streamline your machine learning workflow and maximize productivity. Integrating ClearML with Ultralytics YOLO unlocks a robust suite of tools for experiment tracking, data management, and scalable deployment.

You can use ClearML's experiment manager alone or combine these features into a comprehensive MLOps pipeline.

ClearML scalars dashboard

🦾 Setting Up ClearML

ClearML requires a server to track experiments and data. You have two main options:

  1. ClearML Hosted Service: Sign up for a free account at app.clear.ml.
  2. Self-Hosted Server: Deploy your own ClearML server using the official setup guide. The server is open-source, ensuring data privacy and control.

To get started:

  1. Install the ClearML Python package:

    pip install clearml
    

    Note: The clearml package is included in the YOLO requirements.

  2. Connect the ClearML SDK to your server:
    Create credentials (Settings → Workspace → Create new credentials), then run:

    clearml-init
    

    Follow the prompts to complete setup; a quick connectivity check is sketched below.
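
Once clearml-init reports success, you can optionally confirm that the SDK reaches your server from Python. This is a minimal sketch, not part of the YOLO codebase; the project and task names are placeholders:

from clearml import Task

# Creates a throwaway task on the configured server; an error here usually means bad credentials or URLs
task = Task.init(project_name="Setup Checks", task_name="connectivity_test")
print(f"Connected. Task ID: {task.id}")
task.close()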

For a general Ultralytics setup, see the Quickstart Guide.

🚀 Training YOLO with ClearML

When the clearml package is installed, experiment tracking is automatically enabled for every YOLO training run. All experiment details are captured and stored in the ClearML experiment manager.

To customize your project or task name in ClearML, use the --project and --name arguments. By default, the project is YOLO and the task is Training. ClearML uses / as a delimiter for subprojects.

Example Training Command:

# Train YOLO on COCO128 dataset for 3 epochs
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache

Example with Custom Project and Task Names:

# Train with custom project and experiment names
python train.py --project my_yolo_project --name experiment_001 --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
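
The / rule comes from ClearML itself; the sketch below (names are placeholders) illustrates how a project name containing / creates a subproject when a task is created with the SDK:

from clearml import Task

# "my_yolo_project/ablations" becomes the subproject "ablations" nested under "my_yolo_project"
task = Task.init(project_name="my_yolo_project/ablations", task_name="experiment_001")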

ClearML automatically logs:

  • Source code and uncommitted changes
  • Installed Python packages
  • Hyperparameters and configuration settings
  • Model checkpoints (use --save-period n to save every n epochs)
  • Console output logs
  • Performance metrics (precision, recall, losses, learning rates, mAP@0.5, mAP@0.5:0.95)
  • System details (hardware specs, runtime, creation date)
  • Generated plots (label correlogram, confusion matrix)
  • Images with bounding boxes per epoch
  • Mosaic augmentation previews per epoch
  • Validation images per epoch

All this information can be visualized in the ClearML UI. You can customize table views, sort experiments by metrics, and compare multiple runs. This enables advanced features like hyperparameter optimization and remote execution.
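
The logged results can also be pulled back programmatically, which is handy for scripted comparisons outside the UI. A minimal sketch; the task ID is a placeholder you copy from the ClearML UI:

from clearml import Task

# Fetch a finished training run by its ID
task = Task.get_task(task_id="YOUR_TASK_ID")

# Nested dict of scalars: {title: {series: {"last": ..., "min": ..., "max": ...}}}
print(task.get_last_scalar_metrics())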

🔗 Dataset Version Management

Versioning your datasets independently from code is essential for reproducibility and collaboration. ClearML's Data Versioning Tool streamlines this process. YOLO supports ClearML dataset version IDs, automatically downloading data as needed. The dataset ID is saved as a task parameter, ensuring traceability for every experiment.
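
The same download can be reproduced manually with ClearML's Dataset API, for example to inspect a specific version locally. A minimal sketch; the dataset ID is a placeholder:

from clearml import Dataset

# Fetch a specific dataset version and download a cached local copy
dataset = Dataset.get(dataset_id="YOUR_DATASET_ID")
print(f"Dataset cached at: {dataset.get_local_copy()}")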

ClearML Dataset Interface

Prepare Your Dataset

YOLO uses YAML files to define dataset configurations. By default, datasets are expected in the ../datasets directory relative to the repository root. For example, the COCO128 dataset structure:

../
├── yolov5/          # Your YOLO repository clone
└── datasets/
    └── coco128/
        ├── images/
        ├── labels/
        ├── LICENSE
        └── README.txt

Ensure your custom dataset follows a similar structure.

⚠️ Next, copy the corresponding dataset .yaml file into the root of your dataset folder. This file contains essential information (path, train, test, val, nc, names) required by ClearML.

../
└── datasets/
    └── coco128/
        ├── images/
        ├── labels/
        ├── coco128.yaml  # <---- Place the YAML file here!
        ├── LICENSE
        └── README.txt
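
Before uploading, it can help to confirm the layout matches. The snippet below is a hypothetical helper (not part of YOLO or ClearML) that only checks the expected paths exist; adjust the root and YAML name to your dataset:

from pathlib import Path

# Hypothetical layout check for a COCO128-style dataset
root = Path("../datasets/coco128")
for item in (root / "images", root / "labels", root / "coco128.yaml"):
    print(f"{'ok' if item.exists() else 'MISSING':7s} {item}")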

Upload Your Dataset

Navigate to your dataset's root directory and use the clearml-data CLI tool:

cd ../datasets/coco128
clearml-data sync --project YOLO_Datasets --name coco128 --folder .

Alternatively, use the following commands:

# Create a new dataset entry in ClearML
clearml-data create --project YOLO_Datasets --name coco128

# Add the dataset files (use '.' for the current directory)
clearml-data add --files .

# Finalize and upload the dataset version
clearml-data close

Tip: Use --parent <parent_dataset_id> with clearml-data create to link versions and avoid re-uploading unchanged files.
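
If you prefer to stay in Python, the clearml SDK exposes the same workflow. A minimal sketch, assuming the dataset lives at ../datasets/coco128:

from clearml import Dataset

# Create a new dataset version (pass parent_datasets=[...] to build on an existing version)
dataset = Dataset.create(dataset_project="YOLO_Datasets", dataset_name="coco128")

# Stage the files, push them to storage, and lock the version
dataset.add_files(path="../datasets/coco128")
dataset.upload()
dataset.finalize()

print(f"Dataset ID: {dataset.id}")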

Run Training Using a ClearML Dataset

Once your dataset is versioned in ClearML, you can use it for training by providing the dataset ID via the --data argument with the clearml:// prefix:

# Replace YOUR_DATASET_ID with the actual ID from ClearML
python train.py --img 640 --batch 16 --epochs 3 --data clearml://YOUR_DATASET_ID --weights yolov5s.pt --cache
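
If you would rather not copy the ID from the web UI, it can be resolved with the SDK; when no ID is given, Dataset.get returns the latest version of the named dataset. The project and dataset names below are placeholders:

from clearml import Dataset

# Look up the latest version of a named dataset and print the ID to pass via --data
dataset = Dataset.get(dataset_project="YOLO_Datasets", dataset_name="coco128")
print(f"--data clearml://{dataset.id}")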

👀 Hyperparameter Optimization

With experiments and data versioned, you can leverage ClearML for hyperparameter optimization. ClearML captures all necessary information (code, packages, environment), making experiments fully reproducible. Its HPO tools clone an existing experiment, modify hyperparameters, and rerun it automatically.

To run HPO locally, use the provided script utils/loggers/clearml/hpo.py. You'll need the ID of a previously run training task (the "template task") to clone. Update the script with this ID and run:

# Install Optuna for advanced optimization strategies (optional)
# pip install optuna

# Run the HPO script
python utils/loggers/clearml/hpo.py

The script uses Optuna by default if installed, or falls back to RandomSearch. You can modify task.execute_locally() to task.execute() in the script to enqueue HPO tasks for a remote ClearML agent.
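
For orientation, the heart of such a script is ClearML's HyperParameterOptimizer. The sketch below is illustrative rather than a copy of hpo.py: the template task ID is a placeholder, and the hyperparameter names and objective metric must match what your template task actually reports:

from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

# The HPO controller itself is tracked as a ClearML task
Task.init(project_name="YOLO/HPO", task_name="hpo_controller", task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="TEMPLATE_TASK_ID",  # ID of a previously run training task to clone
    hyper_parameters=[
        UniformParameterRange("Hyperparameters/lr0", min_value=1e-5, max_value=1e-1),
        UniformParameterRange("Hyperparameters/momentum", min_value=0.6, max_value=0.98),
    ],
    objective_metric_title="metrics",   # assumed; check the template task's scalars
    objective_metric_series="mAP_0.5",  # assumed; check the template task's scalars
    objective_metric_sign="max",
    total_max_jobs=10,
)

optimizer.start_locally()  # run the search on this machine
optimizer.wait()
optimizer.stop()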

HPO in ClearML UI

🤯 Remote Execution (Advanced)

ClearML Agent enables you to execute experiments on remote machines, including on-premise servers or cloud GPUs such as AWS, Google Cloud, or Azure. The agent listens to task queues, reproduces the experiment environment, runs the task, and reports results back to the ClearML server.

Learn more about ClearML Agent in the ClearML documentation.

Turn any machine into a ClearML agent by running:

# Replace QUEUES_TO_LISTEN_TO with your queue name(s)
clearml-agent daemon --queue QUEUES_TO_LISTEN_TO [--docker] # Use --docker to run in a Docker container

Cloning, Editing, and Enqueuing Tasks

You can manage remote execution directly from the ClearML web UI:

  1. Clone: Right-click an existing experiment to clone it.
  2. Edit: Modify hyperparameters or other settings in the cloned task.
  3. Enqueue: Right-click the modified task and select "Enqueue" to assign it to a specific queue for an agent to pick up.

Enqueue a task from the ClearML UI
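
The same clone, edit, and enqueue flow can also be scripted with the SDK; a minimal sketch in which the task ID, parameter path, and queue name are placeholders:

from clearml import Task

# Clone an existing experiment
template = Task.get_task(task_id="TEMPLATE_TASK_ID")
cloned = Task.clone(source_task=template, name="experiment_001_clone")

# Edit a hyperparameter on the clone (the exact parameter path depends on how the task logged it)
cloned.set_parameter("Hyperparameters/lr0", 0.005)

# Enqueue it for an agent listening on this queue
Task.enqueue(cloned, queue_name="my_remote_queue")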

Executing a Task Remotely via Code

You can also modify your training script to automatically enqueue tasks for remote execution. Add task.execute_remotely() after the ClearML logger is initialized in train.py:

# Inside train.py, after logger initialization...
if RANK in {-1, 0}:
    # Initialize loggers
    loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)

    # Check if ClearML logger is active and enqueue the task
    if loggers.clearml:
        # Specify the queue name for the remote agent
        loggers.clearml.task.execute_remotely(queue_name="my_remote_queue")  # <------ ADD THIS LINE
        # data_dict might be populated by ClearML if using a ClearML dataset
        data_dict = loggers.clearml.data_dict

Running the script with this modification will package the code and its environment and send it to the specified queue, rather than executing locally.

Autoscaling Workers

ClearML provides Autoscalers that automatically manage cloud resources (AWS, GCP, Azure). They spin up new virtual machines as ClearML agents when tasks appear in a queue, and shut them down when the queue is empty, optimizing cost.

Watch the ClearML Autoscalers getting-started video for a walkthrough.

🤝 Contributing

Contributions to enhance the ClearML integration are welcome! Please see the Ultralytics Contributing Guide for details on how to get involved.


Ultralytics open-source contributors