mmclassification/demo/ipu_train_example.sh
Hu Di b4eefe4794
[Enhance] Support training on IPU and add fine-tuning configs of ViT. (#723)
* implement training and evaluation on IPU

* fp16 SOTA

* Tput reaches 5600

* 123

* add poptorch dataloader

* change ipu_replicas to ipu-replicas

* add noqa to config long line (website)

* remove ipu dataloader test code

* del one blank line in test_builder

* refine the dataloader initialization

* fix a typo

* refine args for dataloader

* remove an annotated line

* process one more conflict

* adjust code structure in mmcv.ipu

* adjust ipu code structure in mmcv

* IPUDataloader to IPUDataLoader

* align with mmcv

* adjust according to mmcv

* mmcv code structure fixed

Co-authored-by: hudi <dihu@graphcore.ai>
2022-04-29 22:22:19 +08:00


# Reproduces the SOTA top-1 accuracy of 81.2 for ViT fine-tuning at 224x224 input; reference:
# https://github.com/google-research/vision_transformer#available-vit-models
# cfg: vit-base-p16_ft-4xb544_in1k-224_ipu trains the model in fp16 precision.
# 8 epochs, batch size 2176, 16 IPUs, 4 replicas, model throughput ~5600 images/s, training time roughly 0.6 hours.
cfg_name=vit-base-p16_ft-4xb544_in1k-224_ipu
python3 tools/train.py configs/vision_transformer/${cfg_name}.py --ipu-replicas 4 --no-validate &&
python3 tools/test.py configs/vision_transformer/${cfg_name}.py work_dirs/${cfg_name}/latest.pth --metrics accuracy --device ipu
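The config name encodes the per-replica batch size: with `--ipu-replicas 4`, the `4xb544` in `vit-base-p16_ft-4xb544_in1k-224_ipu` implies 4 replicas x 544 samples each, which should match the total batch size of 2176 quoted above. A minimal sketch of that arithmetic (the variable names here are illustrative, not part of the repo):

```shell
# Sanity-check the effective global batch size implied by the config name.
replicas=4          # value passed to --ipu-replicas
per_replica_batch=544  # the "b544" part of the config name
echo "effective batch size: $((replicas * per_replica_batch))"
# prints: effective batch size: 2176
```

If you change `--ipu-replicas`, keep this product consistent with the batch size the config expects, or adjust the learning-rate schedule accordingly.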