
# NPU (HUAWEI Ascend)

## Usage

### General Usage

Please refer to the MMCV building documentation to install MMCV and MMEngine on NPU devices.
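
After installation, you can quickly verify that the NPU backend is visible to both PyTorch and MMEngine. The snippet below is only a minimal sketch: it assumes the `torch_npu` adapter is installed, and attribute names may differ slightly between versions.

```python
# Minimal environment check for an Ascend NPU setup (assumes `torch_npu` is installed).
import torch
import torch_npu  # registers the `npu` device type with PyTorch

from mmengine.device import get_device

# After `torch_npu` is imported, `torch.npu` mirrors the familiar `torch.cuda` interface.
print('NPU available:', torch.npu.is_available())
print('NPU count:', torch.npu.device_count())

# MMEngine should report 'npu' here if the device was detected correctly.
print('MMEngine device:', get_device())
```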

Here we use 8 NPUs on a single machine to train the model with the following command:

```shell
bash ./tools/dist_train.sh configs/resnet/resnet50_8xb32_in1k.py 8
```

You can also train the model on a single NPU with the following command:

```shell
python ./tools/train.py configs/resnet/resnet50_8xb32_in1k.py
```
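
If you prefer to launch training from Python rather than the script above, the sketch below does the same single-device run through MMEngine's `Runner`. The device is detected automatically, so no NPU-specific code is needed; the `work_dir` path is only an illustrative choice.

```python
# A minimal sketch of launching the same single-device training run from Python.
# MMEngine selects the device (NPU here) automatically, so no extra flags are needed.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('configs/resnet/resnet50_8xb32_in1k.py')
cfg.work_dir = 'work_dirs/resnet50_8xb32_in1k'  # where checkpoints and logs are written

runner = Runner.from_cfg(cfg)
runner.train()
```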

## Models Results

| Model               | Top-1 (%) | Top-5 (%) | Config | Download |
| :------------------ | :-------- | :-------- | :----- | :------- |
| ResNet-50           | 76.40     | 93.21     | config | log      |
| ResNeXt-50-32x4d    | 77.48     | 93.75     | config | log      |
| HRNet-W18           | 77.06     | 93.57     | config | log      |
| ResNetV1D-152       | 79.41     | 94.48     | config | log      |
| SE-ResNet-50        | 77.65     | 93.74     | config | log      |
| ShuffleNetV2 1.0x   | 69.52     | 88.79     | config | log      |
| MobileNetV2         | 71.74     | 90.28     | config | log      |
| MobileNetV3-Small   | 67.09     | 87.17     | config | log      |
| \*CSPResNeXt50      | 77.25     | 93.46     | config | log      |
| \*EfficientNet-B4   | 75.73     | 92.91     | config | log      |
| \*\*DenseNet121     | 72.53     | 90.85     | config | log      |

**Notes:**

- If not otherwise marked, the results on the NPU are almost the same as the results on the GPU with FP32.
- (\*) The training results of these models are lower than those in the README of the corresponding model, mainly because the README results are evaluated directly on the released timm weights, whereas the results here are retrained from the config with mmcls. Training the same config on the GPU gives results consistent with the NPU.
- (\*\*) The accuracy of this model is slightly lower because its config is written for 4 cards while we ran it on 8 cards; users can adjust the hyperparameters to obtain the best accuracy (see the sketch after this list).
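
As an illustration of the last note, one common way to adapt a 4-card recipe when running on 8 cards is to scale the base learning rate with the total batch size. The config below is only a sketch under that linear-scaling assumption: the `_base_` path and the `0.2` value are placeholders, not verified settings, so start from the learning rate in the original config and tune from there.

```python
# my_densenet121_8card.py -- hypothetical config adapting a 4-card recipe to 8 cards.
# Assumes the linear scaling rule: twice as many cards means twice the total batch
# size, so the base learning rate is doubled. Both the base config path and the
# learning rate below are placeholders for illustration only.
_base_ = ['../densenet/densenet121_4xb256_in1k.py']

optim_wrapper = dict(optimizer=dict(lr=0.2))  # placeholder: 2x the original base lr

# Alternatively, keep the original total batch size by halving the per-card batch size:
# train_dataloader = dict(batch_size=128)
```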

All the above models are provided by the Huawei Ascend group.