fix slim bugs

parent: a0ebe2e3cb
commit: 077111ef1c
@@ -66,7 +66,7 @@ cd PaddleClas
 Taking CPU as an example; if using GPU, change `cpu` to `gpu` in the command.
 
 ```bash
-python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml -o Global.device=cpu
+python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantization.yaml -o Global.device=cpu
 ```
 
 The `yaml` file is explained in the [reference doc](../../docs/zh_CN/tutorials/config_description.md). To preserve accuracy, the `pretrained model` is already used in the `yaml` file.
@@ -81,7 +81,7 @@ python3.7 -m paddle.distributed.launch \
     --gpus="0,1,2,3" \
     deploy/slim/slim.py \
     -m train \
-    -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml
+    -c ppcls/configs/slim/ResNet50_vd_quantization.yaml
 ```
 
 ##### 3.1.2 Offline quantization
@@ -131,7 +131,7 @@ python3.7 -m paddle.distributed.launch \
 python3.7 deploy/slim/slim.py \
     -m export \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
-    -o Global.save_inference_dir=./inference
+    -o Global.save_inference_dir=./inference
 ```
@@ -67,7 +67,7 @@ The training command is as follows:
 If using GPU, change the `cpu` to `gpu` in the following command.
 
 ```bash
-python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml -o Global.device=cpu
+python3.7 deploy/slim/slim.py -m train -c ppcls/configs/slim/ResNet50_vd_quantization.yaml -o Global.device=cpu
 ```
 
 The description of the `yaml` file can be found in this [doc](../../docs/en/tutorials/config_en.md). To get better accuracy, the `pretrained model` is used in the `yaml`.
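The `-o Global.device=cpu` part of the command above overrides a dotted key in the loaded config file. A minimal sketch of that kind of dotted-key override (the `apply_override` helper is hypothetical, for illustration only, not PaddleClas's actual implementation):

```python
# Sketch of a dotted-key config override in the spirit of the `-o` flag,
# e.g. `-o Global.device=cpu`. Hypothetical helper, not PaddleClas code.

def apply_override(config: dict, override: str) -> dict:
    """Apply one 'Dotted.Key=value' override to a nested config dict."""
    key, value = override.split("=", 1)
    *parents, leaf = key.split(".")
    node = config
    for part in parents:
        # descend, creating intermediate dicts as needed
        node = node.setdefault(part, {})
    node[leaf] = value
    return config

cfg = {"Global": {"device": "gpu"}}
apply_override(cfg, "Global.device=cpu")
print(cfg["Global"]["device"])  # cpu
```

The same mechanism covers the `-o Global.save_inference_dir=./inference` override used in the export commands below.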
@@ -82,7 +82,7 @@ python3.7 -m paddle.distributed.launch \
     --gpus="0,1,2,3" \
     deploy/slim/slim.py \
     -m train \
-    -c ppcls/configs/slim/ResNet50_vd_quantalization.yaml
+    -c ppcls/configs/slim/ResNet50_vd_quantization.yaml
 ```
 
 ##### 3.1.2 Offline quantization
@@ -132,7 +132,7 @@ After getting the compressed model, we can export it as an inference model for prediction deployment:
 python3.7 deploy/slim/slim.py \
     -m export \
     -c ppcls/configs/slim/ResNet50_vd_prune.yaml \
-    -o Global.save_inference_dir=./inference
+    -o Global.save_inference_dir=./inference
 ```
 
 ### 5. Deploy
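`slim.py` is driven by the `-m` mode flag; this commit exercises `-m train` and `-m export`. A toy sketch of that kind of mode dispatch over a config dict (`run_slim` and the returned strings are hypothetical, for illustration only, not the real `slim.py` logic):

```python
# Toy dispatch over the `-m` modes seen in this diff (train, export).
# Hypothetical helper; the real slim.py launches actual training/export.

def run_slim(mode: str, config: dict) -> str:
    """Dispatch a slim.py-style mode flag against a config dict."""
    global_cfg = config.get("Global", {})
    if mode == "train":
        return f"train on device={global_cfg.get('device', 'gpu')}"
    if mode == "export":
        return f"export to {global_cfg.get('save_inference_dir', './inference')}"
    raise ValueError(f"unsupported mode: {mode}")

print(run_slim("train", {"Global": {"device": "cpu"}}))
print(run_slim("export", {"Global": {"save_inference_dir": "./inference"}}))
```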
@@ -0,0 +1,5 @@
+PADDLE_TRAINER_ENDPOINTS:
+127.0.0.1:51466
+127.0.0.1:58283
+127.0.0.1:34005
+127.0.0.1:58331
@@ -42,7 +42,7 @@ Optimizer:
   momentum: 0.9
   lr:
     name: Cosine
-    learning_rate: 0.1
+    learning_rate: 0.01
   regularizer:
     name: 'L2'
     coeff: 0.00007
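The changed value feeds the `Cosine` schedule named just above it. Under the standard cosine-decay formula (an assumption about the exact PaddleClas implementation, which may additionally apply warmup), the learning rate now starts at the new base value 0.01 instead of 0.1 and decays toward 0:

```python
import math

def cosine_lr(base_lr: float, step: int, total_steps: int) -> float:
    """Plain cosine decay: base_lr at step 0, ~0 at the final step."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

print(cosine_lr(0.01, 0, 100))   # 0.01 at the start
print(cosine_lr(0.01, 50, 100))  # ~0.005 at mid-training
```

A lower base learning rate is a common choice when fine-tuning from a `pretrained model`, as the slim configs above do.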