diff --git a/README.md b/README.md
index d0972e1..120cb89 100644
--- a/README.md
+++ b/README.md
@@ -177,7 +177,7 @@ Usually, we set
 python -m torch.distributed.launch --nproc_per_node=8 tools/train_net.py --config-file configs/pretrain/mq-glip-t.yaml --use-tensorboard OUTPUT_DIR 'OUTPUT/MQ-GLIP-TINY/'
 ```
 To conduct pre-training, one should first extract vision queries before start training following the above [instruction](#vision-query-extraction).
-To pre-train on custom datasets, please specify ``DATASETS.TRAIN`` and ``VISION_SUPPORT.SUPPORT_BANK_PATH`` in the config file.
+To pre-train on custom datasets, please specify ``DATASETS.TRAIN`` and ``VISION_SUPPORT.SUPPORT_BANK_PATH`` in the config file. More details can be found in [CUSTOMIZED_PRETRAIN.md](CUSTOMIZED_PRETRAIN.md).
 
 ## Finetuning-free Evaluation
 **Take MQ-GLIP-T as an example.**
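
For reference, the two keys mentioned in the added line might be overridden roughly as follows in a custom pre-training config. This is a minimal sketch only: the dataset name and the support-bank path are placeholders, and the exact schema should follow the project's existing YAML configs (e.g. `configs/pretrain/mq-glip-t.yaml`).

```yaml
# Sketch of a custom pre-training override (placeholder values, not from the repo).
DATASETS:
  TRAIN: ("my_custom_dataset_train",)            # registered name(s) of the custom training set
VISION_SUPPORT:
  SUPPORT_BANK_PATH: "path/to/extracted_vision_queries.pth"  # vision queries extracted beforehand
```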