Mirror of https://github.com/open-mmlab/mmsegmentation.git
[Fix] Fix mixed precision training on Ascend NPU (#3215)
## Motivation

Address an issue where mixed precision training was not enabled on Ascend NPU when `optimizer_config` was already present in the config.

## Modification

Previously, if the configuration defined `optimizer_config`, the FP16 hook settings were skipped on Ascend NPU and mixed precision training stayed disabled. This commit fixes the issue by merging the FP16 settings (`type='Fp16OptimizerHook'`, `loss_scale='dynamic'`) into the existing `optimizer_config`, so mixed precision training is enabled in these circumstances as well.

## Use cases

Validated with the `knet_s3_upernet_swin-l_8x2_640x640_adamw_80k_ade20k.py` config.

## Checklist

1. Pre-commit or other linting tools are used to fix the potential lint issues.
2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMDet3D.
4. The documentation has been modified accordingly, like docstring or example tutorials.
This commit is contained in:
parent 0beaf69047
commit f67ef9c128
```diff
@@ -137,9 +137,9 @@ def train_segmentor(model,
             meta=meta))
 
     if cfg.device == 'npu' and not is_npu_support_full_precision():
-        optimiter_config = dict(type='Fp16OptimizerHook', loss_scale='dynamic')
-        cfg.optimizer_config = optimiter_config if \
-            not cfg.optimizer_config else cfg.optimizer_config
+        cfg.optimizer_config = cfg.optimizer_config or {}
+        cfg.optimizer_config['type'] = 'Fp16OptimizerHook'
+        cfg.optimizer_config['loss_scale'] = 'dynamic'
 
     # register hooks
     runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config,
```
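To illustrate the change above, here is a minimal, self-contained sketch of the old versus new handling (the `optimizer_config` value below is hypothetical; the real logic lives in `train_segmentor`). With the old expression, an existing `optimizer_config` (for example one that only sets gradient clipping) was returned unchanged, so the `Fp16OptimizerHook` was never configured; the new code merges the FP16 settings into whatever the user provided.

```python
# Minimal sketch of the old vs. new handling of cfg.optimizer_config on an
# NPU without full-precision support. The example optimizer_config below is
# hypothetical; mmsegmentation configs commonly set it for gradient clipping.
optimizer_config = dict(grad_clip=dict(max_norm=1, norm_type=2))

# Old logic: fall back to the FP16 hook only when optimizer_config is empty,
# so mixed precision was silently skipped whenever the user already set it.
fp16_config = dict(type='Fp16OptimizerHook', loss_scale='dynamic')
old_result = fp16_config if not optimizer_config else optimizer_config
print(old_result)
# {'grad_clip': {'max_norm': 1, 'norm_type': 2}}  -> no Fp16OptimizerHook

# New logic: keep the user's options and add the FP16 settings on top.
new_result = dict(optimizer_config or {})
new_result['type'] = 'Fp16OptimizerHook'
new_result['loss_scale'] = 'dynamic'
print(new_result)
# {'grad_clip': {'max_norm': 1, 'norm_type': 2},
#  'type': 'Fp16OptimizerHook', 'loss_scale': 'dynamic'}
```

The merged dict is then passed to `runner.register_training_hooks`, which (in MMCV 1.x) builds the corresponding optimizer hook from it, so user options such as gradient clipping and dynamic loss scaling are applied together.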