From fc0b75f6702b33a52a416d167c2461c6e9ef0c22 Mon Sep 17 00:00:00 2001
From: littletomatodonkey <2120160898@bit.edu.cn>
Date: Thu, 6 May 2021 15:22:38 +0800
Subject: [PATCH] Update train_with_DALI_en.md

---
 docs/en/extension/train_with_DALI_en.md | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/docs/en/extension/train_with_DALI_en.md b/docs/en/extension/train_with_DALI_en.md
index cc3e28680..8318b93ce 100644
--- a/docs/en/extension/train_with_DALI_en.md
+++ b/docs/en/extension/train_with_DALI_en.md
@@ -49,8 +49,14 @@ python -m paddle.distributed.launch \
 
 ## Train with FP16
 
-On the basis of the above, using FP16 half-precision can further improve the training speed, just add fields in the start training command `AMP.use_pure_fp16=True`:
+On the basis of the above, using FP16 half-precision can further improve the training speed. You can refer to the following command.
 
 ```shell
-python tools/static/train.py -c configs/ResNet/ResNet50.yaml -o use_dali=True -o AMP.use_pure_fp16=True
+export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
+export FLAGS_fraction_of_gpu_memory_to_use=0.8
+
+python -m paddle.distributed.launch \
+    --gpus="0,1,2,3,4,5,6,7" \
+    tools/static/train.py \
+    -c configs/ResNet/ResNet50_fp16.yaml
 ```
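For reference, below is a minimal sketch of an invocation that keeps the original `configs/ResNet/ResNet50.yaml` and enables DALI and pure FP16 through the `-o` overrides used in the previous version of this page, combined with the distributed launch settings introduced by this patch. Whether `tools/static/train.py` still accepts these exact overrides alongside the new `ResNet50_fp16.yaml` workflow is an assumption here, not something confirmed by the patch.

```shell
# Sketch only: assumes tools/static/train.py still accepts the -o overrides
# (use_dali, AMP.use_pure_fp16) shown in the earlier version of this document.
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export FLAGS_fraction_of_gpu_memory_to_use=0.8

python -m paddle.distributed.launch \
    --gpus="0,1,2,3,4,5,6,7" \
    tools/static/train.py \
    -c configs/ResNet/ResNet50.yaml \
    -o use_dali=True \
    -o AMP.use_pure_fp16=True
```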