[Fix] Fix mixed precision training on Ascend NPU (#3215)

## Motivation

Address an issue where mixed precision training was not enabled on Ascend NPU
when optimizer_config was present in the config.

## Modification

Previously, the FP16 optimizer hook was only installed on Ascend NPU when the
configuration did not already define optimizer_config; any config that set
optimizer_config (for example, to enable gradient clipping) therefore fell back
to full precision training. This commit fixes that by merging
`type='Fp16OptimizerHook'` and `loss_scale='dynamic'` into the existing
optimizer_config instead of replacing it only when it is empty.
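
To make the change concrete, here is a minimal sketch of the old and new behavior. A plain dict stands in for `cfg.optimizer_config` (the real code operates on an mmcv Config inside `train_segmentor`), and the `grad_clip` values are illustrative:

```python
# Sketch of the old vs. new hook selection on NPU. A plain dict stands in
# for cfg.optimizer_config; the grad_clip values below are illustrative.

def old_behavior(optimizer_config):
    # Old logic: the FP16 hook was used only when optimizer_config was empty,
    # so any user-provided optimizer_config disabled mixed precision training.
    fp16_cfg = dict(type='Fp16OptimizerHook', loss_scale='dynamic')
    return fp16_cfg if not optimizer_config else optimizer_config


def new_behavior(optimizer_config):
    # New logic: keep whatever the user configured and merge the FP16 hook
    # settings into it.
    optimizer_config = optimizer_config or {}
    optimizer_config['type'] = 'Fp16OptimizerHook'
    optimizer_config['loss_scale'] = 'dynamic'
    return optimizer_config


user_cfg = dict(grad_clip=dict(max_norm=1, norm_type=2))
print(old_behavior(dict(user_cfg)))  # no Fp16OptimizerHook -> stays in FP32
print(new_behavior(dict(user_cfg)))  # grad_clip kept and Fp16OptimizerHook enabled
```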

## Use cases 

The fix has been validated with the
knet_s3_upernet_swin-l_8x2_640x640_adamw_80k_ade20k.py config.
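
That config defines its own optimizer_config, which is exactly the situation the old logic mishandled. It looks roughly like the following (the exact values in the file may differ):

```python
# Illustrative optimizer_config of the kind defined by the validated config;
# before this fix, its presence caused the FP16 hook to be skipped on NPU.
optimizer_config = dict(grad_clip=dict(max_norm=1, norm_type=2))
```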

## Checklist

1. Pre-commit or other linting tools are used to fix the potential lint
issues.
2. The modification is covered by complete unit tests. If not, please
add more unit tests to ensure the correctness.
3. If the modification has potential influence on downstream projects,
this PR should be tested with downstream projects, like MMDet or
MMDet3D.
4. The documentation has been modified accordingly, like docstring or
example tutorials.
Commit f67ef9c128 (parent 0beaf69047), authored by xuuyangg on 2023-07-22 14:07:31 +08:00, committed via GitHub.


```diff
@@ -137,9 +137,9 @@ def train_segmentor(model,
             meta=meta))
     if cfg.device == 'npu' and not is_npu_support_full_precision():
-        optimiter_config = dict(type='Fp16OptimizerHook', loss_scale='dynamic')
-        cfg.optimizer_config = optimiter_config if \
-            not cfg.optimizer_config else cfg.optimizer_config
+        cfg.optimizer_config = cfg.optimizer_config or {}
+        cfg.optimizer_config['type'] = 'Fp16OptimizerHook'
+        cfg.optimizer_config['loss_scale'] = 'dynamic'
     # register hooks
     runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config,
```
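
With this change, any keys already set in optimizer_config (such as gradient clipping) are preserved, and the Fp16OptimizerHook type and dynamic loss scale are merged in, rather than the FP16 hook being skipped whenever optimizer_config is non-empty.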