# Reproduce the REDS Dataset Results
## 1. Data Preparation
- Download the train set and place it in `./datasets/REDS/train`:
  - Google Drive (link and link) or the SNU CVLab server (link and link)
  - The layout should be `./datasets/REDS/train/train_blur_jpeg` and `./datasets/REDS/train/train_sharp`.
  - Run `python scripts/data_preparation/reds.py` to convert the data into lmdb format (a sanity check for the lmdb files is sketched after this list).
- Download the evaluation data (in lmdb format) and place it in `./datasets/REDS/val/`:
  - Google Drive or Baidu Netdisk
  - The layout should be `./datasets/REDS/val/blur_300.lmdb` and `./datasets/REDS/val/sharp_300.lmdb`.
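Before training, it is worth confirming that the converted lmdb files open cleanly and that the blur and sharp databases contain the same number of entries. The following is a minimal sketch, assuming the `lmdb` Python package is installed; it is not part of the repository.

```python
# Illustrative sanity check for the converted lmdb datasets (not part of
# the repo). Assumes the `lmdb` Python package and the paths listed above.
import lmdb

def count_entries(path):
    # Open read-only; lock=False avoids creating lock files on shared storage.
    env = lmdb.open(path, readonly=True, lock=False)
    with env.begin() as txn:
        n = txn.stat()["entries"]
    env.close()
    return n

for db in ("./datasets/REDS/val/blur_300.lmdb",
           "./datasets/REDS/val/sharp_300.lmdb"):
    print(db, count_entries(db), "entries")
```

The blur and sharp databases should report identical entry counts; a mismatch usually means an incomplete download or conversion.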
## 2. Training
- NAFNet-REDS-width64:

  ```bash
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/REDS/NAFNet-width64.yml --launcher pytorch
  ```

- Training uses 8 GPUs by default. Set `--nproc_per_node` to the number of available GPUs for distributed training and validation. The option file passed via `-opt` can be inspected beforehand, as sketched after this list.
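Training behavior (datasets, model width, default GPU count, etc.) is driven by the YAML option file. Below is a minimal sketch for peeking at it before launching, assuming PyYAML is installed; any key name beyond what the file actually contains (e.g. `num_gpu`) is an assumption about the BasicSR-style config, not a guarantee.

```python
# Illustrative sketch: inspect the training option file before launching.
# Assumes PyYAML is installed. `num_gpu` is an assumed BasicSR-style field.
import yaml

with open("options/train/REDS/NAFNet-width64.yml") as f:
    opt = yaml.safe_load(f)

print(sorted(opt.keys()))   # top-level sections of the config
print(opt.get("num_gpu"))   # assumed field for the default GPU count
```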
## 3. Evaluation
- Download the pretrained model and place it in `./experiments/pretrained_models/`:
  - NAFNet-REDS-width64: Google Drive or Baidu Netdisk
- Testing on the REDS dataset:
  - NAFNet-REDS-width64:

    ```bash
    python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/REDS/NAFNet-width64.yml --launcher pytorch
    ```

- Testing uses a single GPU by default. Set `--nproc_per_node` to the number of GPUs for distributed evaluation. A reference PSNR computation is sketched after this list.
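Deblurring results on REDS are conventionally reported as PSNR between restored and sharp frames. As a reference, here is a minimal sketch of that computation with NumPy and Pillow; the file names are hypothetical placeholders, and the official numbers come from the test script above.

```python
# Illustrative PSNR computation between a restored frame and its ground
# truth. File names are hypothetical; official metrics come from the
# test script above.
import numpy as np
from PIL import Image

def psnr(a, b, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE), in dB.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

restored = np.array(Image.open("restored_000.png"))  # hypothetical output
sharp = np.array(Image.open("sharp_000.png"))        # hypothetical ground truth
print(f"PSNR: {psnr(restored, sharp):.2f} dB")
```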