# Reproduce the GoPro dataset results
### 1. Data Preparation
##### Download the train set and place it in ```./datasets/GoPro/train```:

* Google Drive [link](https://drive.google.com/file/d/1zgALzrLCC_tcXKu_iHQTHukKUVT1aodI/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1fdsn-M5JhxCL7oThEgt1Sw) (extraction code: 9d26)
* the directory should contain ```./datasets/GoPro/train/input``` and ```./datasets/GoPro/train/target```
* run ```python scripts/data_preparation/gopro.py``` to crop the training image pairs into 512x512 patches and convert the data to LMDB format.
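The cropping step above can be sketched as follows. The actual stride, overlap, and LMDB writing live in ```scripts/data_preparation/gopro.py```; the non-overlapping layout below is only an illustrative assumption:

```python
# Illustrative sketch of splitting a frame into 512x512 training patches.
# The real scripts/data_preparation/gopro.py may overlap patches or pad
# borders; here we take non-overlapping patches and drop the remainder.

def patch_coords(height, width, patch=512):
    """Top-left (row, col) of every full patch that fits in the frame."""
    return [
        (top, left)
        for top in range(0, height - patch + 1, patch)
        for left in range(0, width - patch + 1, patch)
    ]

# GoPro frames are 720x1280, so only two full non-overlapping patches fit.
print(patch_coords(720, 1280))
```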

##### Download the evaluation data (in LMDB format) and place it in ```./datasets/GoPro/test/```:

* Google Drive [link](https://drive.google.com/file/d/1abXSfeRGrzj2mQ2n2vIBHtObU6vXvr7C/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1oZtEtYB7-2p3fCIspky_mw) (extraction code: rmv9)
* the directory should contain ```./datasets/GoPro/test/input.lmdb``` and ```./datasets/GoPro/test/target.lmdb```

### 2. Training
* NAFNet-GoPro-width32:

```
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/GoPro/NAFNet-width32.yml --launcher pytorch
```

* NAFNet-GoPro-width64:

```
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/GoPro/NAFNet-width64.yml --launcher pytorch
```

* Training uses 8 GPUs by default. Set ```--nproc_per_node``` to the number of available GPUs for distributed training and validation.

### 3. Evaluation
##### Download the pretrained models and place them in ```./experiments/pretrained_models/```:

* **NAFNet-GoPro-width32**: Google Drive [link](https://drive.google.com/file/d/1Fr2QadtDCEXg6iwWX8OzeZLbHOx2t5Bj/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1AbgG0yoROHmrRQN7dgzDvQ) (extraction code: so6v)
* **NAFNet-GoPro-width64**: Google Drive [link](https://drive.google.com/file/d/1S0PVRbyTakYY9a82kujgZLbMihfNBLfC/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1g-E1x6En-PbYXm94JfI1vg) (extraction code: wnwh)

##### Testing on the GoPro dataset

* NAFNet-GoPro-width32:

```
python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/GoPro/NAFNet-width32.yml --launcher pytorch
```

* NAFNet-GoPro-width64:

```
python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/GoPro/NAFNet-width64.yml --launcher pytorch
```

* Testing uses a single GPU by default. Set ```--nproc_per_node``` to the number of available GPUs for distributed evaluation.
# Reproduce the REDS dataset results

### 1. Data Preparation
##### Download the train set and place it in ```./datasets/REDS/train```:

* Google Drive ([link](https://drive.google.com/file/d/1VTXyhwrTgcaUWklG-6Dh4MyCmYvX39mW/view) and [link](https://drive.google.com/file/d/1YLksKtMhd2mWyVSkvhDaDLWSc1qYNCz-/view)) or SNU CVLab server ([link](http://data.cv.snu.ac.kr:8008/webdav/dataset/REDS/train_blur_jpeg.zip) and [link](http://data.cv.snu.ac.kr:8008/webdav/dataset/REDS/train_sharp.zip))
* the directory should contain ```./datasets/REDS/train/train_blur_jpeg``` and ```./datasets/REDS/train/train_sharp```
* run ```python scripts/data_preparation/reds.py``` to convert the data to LMDB format.
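For orientation, LMDB datasets produced by BasicSR-style preparation scripts typically pair the image database with a ```meta_info.txt``` listing each key, shape, and compression level. The key scheme below is an assumption for illustration only, not the exact output of ```scripts/data_preparation/reds.py```:

```python
# Hypothetical meta_info.txt line builder, following the common BasicSR
# convention "key.png (h,w,c) compress_level". Verify against the files
# the preparation script actually writes.

def meta_info_line(key, shape, compress_level=1):
    h, w, c = shape
    return f"{key}.png ({h},{w},{c}) {compress_level}"

print(meta_info_line("000_00000000", (720, 1280, 3)))
```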

##### Download the evaluation data (in LMDB format) and place it in ```./datasets/REDS/val/```:

* Google Drive [link](https://drive.google.com/file/d/1_WPxX6mDSzdyigvie_OlpI-Dknz7RHKh/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1yUGdGFHQGCB5LZKt9dVecw) (extraction code: ikki)
* the directory should contain ```./datasets/REDS/val/blur_300.lmdb``` and ```./datasets/REDS/val/sharp_300.lmdb```

### 2. Training
* NAFNet-REDS-width64:

```
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/REDS/NAFNet-width64.yml --launcher pytorch
```

* Training uses 8 GPUs by default. Set ```--nproc_per_node``` to the number of available GPUs for distributed training and validation.

### 3. Evaluation
##### Download the pretrained model and place it in ```./experiments/pretrained_models/```:

* **NAFNet-REDS-width64**: Google Drive [link](https://drive.google.com/file/d/14D4V4raNYIOhETfcuuLI3bGLB-OYIv6X/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1vg89ccbpIxg3mK9IONBfGg) (extraction code: 9fas)
##### Testing on the REDS dataset

* NAFNet-REDS-width64:

```
python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/REDS/NAFNet-width64.yml --launcher pytorch
```

* Testing uses a single GPU by default. Set ```--nproc_per_node``` to the number of available GPUs for distributed evaluation.
# Reproduce the SIDD dataset results
### 1. Data Preparation
##### Download the train set and place it in ```./datasets/SIDD/Data```:

* Google Drive [link](https://drive.google.com/file/d/1UHjWZzLPGweA9ZczmV8lFSRcIxqiOVJw/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1EnBVjrfFBiXIRPBgjFrifg) (extraction code: sl6h)
* run ```python scripts/data_preparation/sidd.py``` to crop the training image pairs into 512x512 patches and convert the data to LMDB format.

##### Download the evaluation data (in LMDB format) and place it in ```./datasets/SIDD/val/```:

* Google Drive [link](https://drive.google.com/file/d/1gZx_K2vmiHalRNOb1aj93KuUQ2guOlLp/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1I9N5fDa4SNP0nuHEy6k-rw) (extraction code: 59d7)
* the directory should contain ```./datasets/SIDD/val/input_crops.lmdb``` and ```./datasets/SIDD/val/gt_crops.lmdb```

### 2. Training
* NAFNet-SIDD-width32:

```
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/NAFNet-width32.yml --launcher pytorch
```

* NAFNet-SIDD-width64:

```
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/NAFNet-width64.yml --launcher pytorch
```

* Training uses 8 GPUs by default. Set ```--nproc_per_node``` to the number of available GPUs for distributed training and validation.
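Since the launch commands assume a fixed GPU count, the value for ```--nproc_per_node``` can be derived programmatically. This is an illustrative helper, not part of the repository, and it assumes visible GPUs are enumerated via the ```CUDA_VISIBLE_DEVICES``` environment variable:

```python
# Hypothetical helper: derive a value for --nproc_per_node from the
# CUDA_VISIBLE_DEVICES environment variable, falling back to 1 when it
# is unset or empty.
import os

def nproc_per_node(default=1):
    devices = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    n = len([d for d in devices.split(",") if d.strip()])
    return n if n > 0 else default

print(nproc_per_node())
```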

### 3. Evaluation
##### Download the pretrained models and place them in ```./experiments/pretrained_models/```:

* **NAFNet-SIDD-width32**: Google Drive [link](https://drive.google.com/file/d/1lsByk21Xw-6aW7epCwOQxvm6HYCQZPHZ/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/1Xses38SWl-7wuyuhaGNhaw) (extraction code: um97)
* **NAFNet-SIDD-width64**: Google Drive [link](https://drive.google.com/file/d/14Fht1QQJ2gMlk4N1ERCRuElg8JfjrWWR/view?usp=sharing) or Baidu Netdisk [link](https://pan.baidu.com/s/198kYyVSrY_xZF0jGv9U0sQ) (extraction code: dton)
##### Testing on the SIDD dataset

* NAFNet-SIDD-width32:

```
python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/NAFNet-width32.yml --launcher pytorch
```

* NAFNet-SIDD-width64:

```
python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/NAFNet-width64.yml --launcher pytorch
```

* Testing uses a single GPU by default. Set ```--nproc_per_node``` to the number of available GPUs for distributed evaluation.
```
python setup.py develop --no_cuda_ext
```

### Quick Start

* Image Denoise Colab Demo: [<a href="https://colab.research.google.com/drive/1dkO5AyktmBoWwxBwoKFUurIDn0m4qDXT?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>](https://colab.research.google.com/drive/1dkO5AyktmBoWwxBwoKFUurIDn0m4qDXT?usp=sharing)
* Image Deblur Colab Demo: [<a href="https://colab.research.google.com/drive/1yR2ClVuMefisH12d_srXMhHnHwwA1YmU?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>](https://colab.research.google.com/drive/1yR2ClVuMefisH12d_srXMhHnHwwA1YmU?usp=sharing)
### Results and Pre-trained Models

| name | Dataset | PSNR | SSIM | model (gdrive) | model (Baidu Netdisk) |
|:----|:----|:----|:----|:----|:----|
|NAFNet-GoPro-width32|GoPro|32.8705|0.9606|[link](https://drive.google.com/file/d/1Fr2QadtDCEXg6iwWX8OzeZLbHOx2t5Bj/view?usp=sharing)|[link](https://pan.baidu.com/s/1AbgG0yoROHmrRQN7dgzDvQ) (extraction code: so6v)|
|NAFNet-GoPro-width64|GoPro|33.7103|0.9668|[link](https://drive.google.com/file/d/1S0PVRbyTakYY9a82kujgZLbMihfNBLfC/view?usp=sharing)|[link](https://pan.baidu.com/s/1g-E1x6En-PbYXm94JfI1vg) (extraction code: wnwh)|
|NAFNet-SIDD-width32|SIDD|39.9672|0.9599|[link](https://drive.google.com/file/d/1lsByk21Xw-6aW7epCwOQxvm6HYCQZPHZ/view?usp=sharing)|[link](https://pan.baidu.com/s/1Xses38SWl-7wuyuhaGNhaw) (extraction code: um97)|
|NAFNet-SIDD-width64|SIDD|40.3045|0.9614|[link](https://drive.google.com/file/d/14Fht1QQJ2gMlk4N1ERCRuElg8JfjrWWR/view?usp=sharing)|[link](https://pan.baidu.com/s/198kYyVSrY_xZF0jGv9U0sQ) (extraction code: dton)|
|NAFNet-REDS-width64|REDS|29.0903|0.8671|[link](https://drive.google.com/file/d/14D4V4raNYIOhETfcuuLI3bGLB-OYIv6X/view?usp=sharing)|[link](https://pan.baidu.com/s/1vg89ccbpIxg3mK9IONBfGg) (extraction code: 9fas)|
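The PSNR figures above are in dB, computed between the restored output and the ground truth. As a reminder of what the metric measures, here is a generic sketch (not the repository's own ```calc_psnr``` implementation):

```python
import math

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images,
    given here as flat lists of pixel values in [0, max_val]."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr([50, 60, 70], [52, 58, 72]), 2))  # small error -> high PSNR
```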

### Image Restoration Tasks

---
| Task | Dataset | Instructions | Visualization Results |
| :----------------------------------- | :------ | :---------------------- | :----------------------------------------------------------- |
| Image Deblurring | GoPro | [link](./docs/GoPro.md) | [gdrive](https://drive.google.com/file/d/1S8u4TqQP6eHI81F9yoVR0be-DLh4cNgb/view?usp=sharing) \| [Baidu Netdisk](https://pan.baidu.com/s/1yNYQhznChafsbcfHO44aHQ) (extraction code: 96ii) |
| Image Denoising | SIDD | [link](./docs/SIDD.md) | [gdrive](https://drive.google.com/file/d/1rbBYD64bfvbHOrN3HByNg0vz6gHQq7Np/view?usp=sharing) \| [Baidu Netdisk](https://pan.baidu.com/s/1wIubY6SeXRfZHpp6bAojqQ) (extraction code: hu4t) |
| Image Deblurring with JPEG artifacts | REDS | [link](./docs/REDS.md) | [gdrive](https://drive.google.com/file/d/1FwHWYPXdPtUkPqckpz-WBitpVyPuXFRi/view?usp=sharing) \| [Baidu Netdisk](https://pan.baidu.com/s/17T30w5xAtBQQ2P3wawLiVA) (extraction code: put5) |
### Citations

If NAFNet helps your research or work, please consider citing NAFNet.