MMEngine supports training models on a CPU, a single GPU, multiple GPUs on a single machine, and multiple machines. When multiple GPUs are available, the following commands can be used to launch training on a single machine or across machines and shorten training time.
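For example, on a single machine with multiple GPUs, training can be launched with PyTorch's distributed launcher. This is a sketch: `train.py` and the GPU count are placeholders for your own entry point and hardware.

```shell
# Launch 8 processes on one machine, one per GPU; the --launcher pytorch flag
# tells the training script to read the distributed environment set up by torchrun.
torchrun --nproc_per_node=8 train.py --launcher pytorch
```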
When switching from single-GPU to multi-GPU training, no code changes are needed: [Runner](mmengine.runner.Runner.wrap_model) wraps the model with [MMDistributedDataParallel](mmengine.model.MMDistributedDataParallel) by default, enabling multi-GPU training.
If you want to pass more parameters to `MMDistributedDataParallel` or use your own `CustomDistributedDataParallel`, set `model_wrapper_cfg`.
### Pass More Parameters to MMDistributedDataParallel
For example, setting `find_unused_parameters` to `True`:
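A minimal sketch of such a config: the keys under `model_wrapper_cfg` are forwarded to the model wrapper, and `type` selects the wrapper class by name. The config is passed to `Runner` via its `cfg` argument.

```python
# Config fragment for Runner; model_wrapper_cfg controls how the model is
# wrapped for distributed training.
cfg = dict(
    model_wrapper_cfg=dict(
        type='MMDistributedDataParallel',
        # Needed when some parameters receive no gradient in a step,
        # e.g. branches that are skipped depending on the input.
        find_unused_parameters=True,
    )
)
```

With this config, the Runner constructs the DDP wrapper with `find_unused_parameters=True` instead of the default `False`, at the cost of some extra overhead per iteration.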