Copy wandb param dict before training to avoid overwrites (#7317)

* Copy wandb param dict before training to avoid overwrites.

Copy the hyperparameter dict retrieved from the wandb configuration before passing it to `train()`. Training overwrites parameters in the dictionary in place (e.g. scaling the obj/box/cls gains), so the values reported in wandb no longer match the input values. This mismatch makes it hard to reproduce a run and also throws off wandb's Bayesian sweep algorithm.
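
To illustrate the aliasing problem the copy avoids, here is a minimal sketch; `scale_hyp()` is a hypothetical stand-in for `train()`, not the actual YOLOv5 code:

```python
# Minimal sketch of the aliasing bug. scale_hyp() is a hypothetical
# stand-in for train(), which rescales obj/box/cls gains in place.

def scale_hyp(hyp):
    hyp["box"] *= 3.0  # mutates the caller's dict in place


wandb_hyp = {"box": 0.05, "cls": 0.5}

scale_hyp(wandb_hyp)         # passing the dict directly: the original is changed
print(wandb_hyp["box"])      # 0.15 -- no longer matches what wandb recorded

wandb_hyp = {"box": 0.05, "cls": 0.5}
scale_hyp(wandb_hyp.copy())  # passing a copy: the original stays intact
print(wandb_hyp["box"])      # 0.05 -- still matches the sweep agent's value
```

A shallow `.copy()` suffices here because the hyperparameter values are scalars; nested structures would need `copy.deepcopy()`.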

* Cleanup

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Nick Martin 2022-04-06 09:35:33 -07:00, committed by GitHub
parent 245d6459a9
commit a88a81469a


@@ -16,8 +16,8 @@ from utils.torch_utils import select_device
 def sweep():
     wandb.init()
-    # Get hyp dict from sweep agent
-    hyp_dict = vars(wandb.config).get("_items")
+    # Get hyp dict from sweep agent. Copy because train() modifies parameters which confused wandb.
+    hyp_dict = vars(wandb.config).get("_items").copy()
     # Workaround: get necessary opt args
     opt = parse_opt(known=True)