[ClearML](https://clear.ml/) is an [open-source](https://github.com/clearml/clearml) MLOps platform designed to streamline your machine learning workflow and save valuable time ⏱️. Integrating ClearML with [Ultralytics YOLOv5](https://docs.ultralytics.com/models/yolov5/) allows you to leverage a powerful suite of tools:
- **Experiment Management:** 🔨 Track every YOLOv5 [training run](https://docs.ultralytics.com/modes/train/), including parameters, metrics, and outputs. See the [Ultralytics ClearML integration guide](https://docs.ultralytics.com/integrations/clearml/) for more details.
- **Data Versioning:** 🔧 Version and easily access your custom training data using the integrated ClearML Data Versioning Tool, similar to concepts in [DVC integration](https://docs.ultralytics.com/integrations/dvc/).
- **Remote Execution:** 🔦 [Remotely train and monitor](https://docs.ultralytics.com/hub/cloud-training/) your YOLOv5 models using ClearML Agent.
- **Hyperparameter Optimization:** 🔬 Achieve optimal [Mean Average Precision (mAP)](https://docs.ultralytics.com/guides/yolo-performance-metrics/) using ClearML's [Hyperparameter Optimization](https://docs.ultralytics.com/guides/hyperparameter-tuning/) capabilities.
- **Model Deployment:** 🔭 Turn your trained YOLOv5 model into an API with just a few commands using ClearML Serving, complementing [Ultralytics deployment options](https://docs.ultralytics.com/guides/model-deployment-options/).
You can choose to use only the experiment manager or combine multiple tools into a comprehensive [MLOps](https://www.ultralytics.com/glossary/machine-learning-operations-mlops) pipeline.
1. **Set up a ClearML server.** You have two options:
    - **ClearML Hosted Service:** Sign up for a free account at [app.clear.ml](https://app.clear.ml/).
    - **Self-Hosted Server:** Set up your own ClearML server. Find instructions [here](https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server). The server is also open-source, ensuring data privacy.
2. **Connect the ClearML SDK to your server.** [Create credentials](https://app.clear.ml/settings/workspace-configuration) (Settings -> Workspace -> Create new credentials), then run the command below and follow the prompts.
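A minimal sketch of the setup commands (`clearml-init` is the standard ClearML SDK configuration utility; the install step reflects the note below that tracking activates once the `clearml` package is present):

```bash
# Install the ClearML SDK
pip install clearml

# Configure the SDK; paste the credentials from the ClearML web UI when prompted
clearml-init
```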
That's it! You're ready to integrate ClearML with your YOLOv5 projects 😎. For a general Ultralytics setup, see the [Quickstart Guide](https://docs.ultralytics.com/quickstart/).
ClearML experiment tracking is automatically enabled when the `clearml` package is installed. Every YOLOv5 [training run](https://docs.ultralytics.com/modes/train/) will be captured and stored in the ClearML experiment manager.
To customize the project or task name in ClearML, use the `--project` and `--name` arguments when running `train.py`. By default, the project is `YOLOv5` and the task is `Training`. Note that ClearML uses `/` as a delimiter for subprojects.
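For example, using standard YOLOv5 training flags (the custom project and task names below are illustrative placeholders):

```bash
# Logged to the default ClearML project "YOLOv5" with task name "Training"
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache

# Logged to a custom project and task; use / in --project to create subprojects
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache --project my_project --name my_training
```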
This wealth of information 🤯 can be visualized in the ClearML UI. You can customize table views, sort experiments by metrics like mAP, and directly compare multiple runs. This detailed tracking enables advanced features like hyperparameter optimization and remote execution.
Versioning your [datasets](https://docs.ultralytics.com/datasets/) separately from code is crucial for reproducibility and collaboration. ClearML's Data Versioning Tool helps manage this process. YOLOv5 supports using ClearML dataset version IDs, automatically downloading the data if needed. The dataset ID used is saved as a task parameter, ensuring you always know which data version was used for each experiment.
YOLOv5 uses [YAML](https://www.ultralytics.com/glossary/yaml) files to define dataset configurations. By default, datasets are expected in the `../datasets` directory relative to the repository root. For example, the [COCO128 dataset](https://docs.ultralytics.com/datasets/detect/coco128/) structure looks like this:
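A typical layout, assuming the standard COCO128 download sits alongside the `yolov5` repository (file names may vary slightly between dataset versions):

```
..
|_ yolov5
|_ datasets
    |_ coco128
        |_ images
        |_ labels
        |_ LICENSE
        |_ README.txt
```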
Next, ⚠️**copy the corresponding dataset `.yaml` file into the root of your dataset folder**⚠️. This file contains essential information (`path`, `train`, `test`, `val`, `nc`, `names`) that ClearML needs.
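With the `.yaml` in place, the folder can be uploaded as a versioned ClearML dataset. A sketch using the `clearml-data` CLI (the project and dataset names are placeholders):

```bash
cd coco128  # the dataset root containing the copied coco128.yaml

# One-shot: create a new dataset version and upload the folder contents
clearml-data sync --project YOLOv5 --name coco128 --folder .

# Equivalent step-by-step alternative:
# clearml-data create --project YOLOv5 --name coco128
# clearml-data add --files .
# clearml-data close
```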
Once your dataset is versioned in ClearML, you can easily use it for training by providing the dataset ID via the `--data` argument with the `clearml://` prefix:
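For example (`YOUR_DATASET_ID` stands in for the ID shown on the dataset version in the ClearML UI):

```bash
python train.py --img 640 --batch 16 --epochs 3 --data clearml://YOUR_DATASET_ID --weights yolov5s.pt --cache
```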
With experiments and data versioned, you can leverage ClearML for [Hyperparameter Optimization (HPO)](https://docs.ultralytics.com/guides/hyperparameter-tuning/). Since ClearML captures all necessary information (code, packages, environment), experiments are fully reproducible. ClearML's HPO tools clone an existing experiment, modify its hyperparameters, and automatically rerun it.
To run HPO locally, use the provided script `utils/loggers/clearml/hpo.py`. You'll need the ID of a previously run training task (the "template task") to clone. Update the script with this ID and run it.
The script uses [Optuna](https://optuna.org/) by default if installed; otherwise, it falls back to `RandomSearch`. You can modify `task.execute_locally()` to `task.execute()` in the script to enqueue the HPO tasks for a remote ClearML agent.
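A sketch of running the optimizer locally, assuming the template task ID has already been set inside the script:

```bash
# Optional: install Optuna for smarter search than the RandomSearch fallback
pip install optuna

# Edit utils/loggers/clearml/hpo.py to set your template task ID, then run:
python utils/loggers/clearml/hpo.py
```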

ClearML Agent allows you to execute experiments on remote machines (e.g., powerful on-site servers, cloud GPUs like [AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), or [Azure](https://azure.microsoft.com/)). The agent listens to task queues, reproduces the experiment environment, runs the task, and reports results back to the ClearML server.
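On the remote machine, an agent is installed and attached to one or more queues; a minimal sketch (`my_queue` is a placeholder queue name):

```bash
pip install clearml-agent

# Listen for tasks on a queue; add --docker to run each task inside a Docker container
clearml-agent daemon --queue my_queue
```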
Alternatively, you can modify your training script to automatically enqueue tasks for remote execution. Add `task.execute_remotely()` after the ClearML logger is initialized in `train.py`:
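A sketch of the relevant section of `train.py` (the surrounding lines may differ between YOLOv5 versions, and `my_queue` is a placeholder):

```python
# Inside train.py, after the loggers (including ClearML) are initialized
if RANK in {-1, 0}:
    loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)  # existing logger setup
    if loggers.clearml:
        loggers.clearml.task.execute_remotely(queue_name="my_queue")  # <------ ADD THIS LINE
        data_dict = loggers.clearml.data_dict  # filled in by ClearML when using a ClearML dataset
```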
Running the script with this modification will package the code and its environment and send it to the specified queue, rather than executing locally.
### Autoscaling Workers
ClearML also provides Autoscalers that automatically manage cloud resources (AWS, GCP, Azure). They spin up new virtual machines and configure them as ClearML agents when tasks appear in a queue, then shut them down when the queue is empty, optimizing cost.
Contributions to enhance the ClearML integration are welcome! Please see the [Ultralytics Contributing Guide](https://docs.ultralytics.com/help/contributing/) for more information on how to get involved.