The official pytorch implementation of the paper **[NAFSSR: Stereo Image Super-Resolution Using NAFNet](https://arxiv.org/abs/2204.08714)**.
You can get more information about NAFSSR from the following links: [[video](https://drive.google.com/file/d/16w33zrb3UI0ZIhvvdTvGB2MP01j0zJve/view)]/[[slides](https://data.vision.ee.ethz.ch/cvl/ntire22/slides/Chu_NAFSSR_slides.pdf)]/[[poster](https://data.vision.ee.ethz.ch/cvl/ntire22/posters/Chu_NAFSSR_poster.pdf)].
>This paper proposes a simple baseline named NAFSSR for stereo image super-resolution. We use a stack of NAFNet's Block (NAFBlock) for intra-view feature extraction and combine it with Stereo Cross Attention Modules (SCAM) for cross-view feature interaction.
<img src="../figures/NAFSSR_arch.jpg">
>NAFSSR outperforms the state-of-the-art methods on the KITTI 2012, KITTI 2015, Middlebury, and Flickr1024 datasets. With NAFSSR, we won **1st place** in the [NTIRE 2022 Stereo Image Super-resolution Challenge](https://codalab.lisn.upsaclay.fr/competitions/1598).
<p align="center">
<img src="../figures/NAFSSR_params.jpg" width="70%">
</p>
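
To make the cross-view interaction concrete: SCAM can be understood as scaled dot-product attention computed along the width (epipolar) dimension between the left and right features produced by the NAFBlock stacks. The snippet below is a minimal, channel-last sketch of that idea for intuition only; the module name `StereoCrossAttention`, the projection layout, and the zero-initialised fusion scales are simplifications and not the repository's actual implementation (see the model code in this repo for the official SCAM).

```python
# Minimal sketch of cross-view attention along the width (epipolar) dimension.
# For intuition only -- not the official SCAM implementation.
import torch
import torch.nn as nn

class StereoCrossAttention(nn.Module):
    """Illustrative bidirectional cross-attention between left/right view features."""

    def __init__(self, channels):
        super().__init__()
        self.scale = channels ** -0.5
        self.norm_l = nn.LayerNorm(channels)
        self.norm_r = nn.LayerNorm(channels)
        self.proj_l = nn.Linear(channels, channels)   # queries from the left view
        self.proj_r = nn.Linear(channels, channels)   # keys from the right view
        # learnable residual scales, zero-initialised so interaction starts as identity
        self.beta = nn.Parameter(torch.zeros(channels))
        self.gamma = nn.Parameter(torch.zeros(channels))

    def forward(self, feat_l, feat_r):
        # feat_l, feat_r: (B, H, W, C) intra-view features (channel-last for simplicity;
        # values are used unprojected here, another simplification)
        q_l = self.proj_l(self.norm_l(feat_l))                          # (B, H, W, C)
        k_r = self.proj_r(self.norm_r(feat_r))                          # (B, H, W, C)
        # attention over the width dimension, where stereo correspondences lie
        attn = torch.matmul(q_l, k_r.transpose(-1, -2)) * self.scale    # (B, H, W, W)
        fused_l = torch.matmul(torch.softmax(attn, dim=-1), feat_r)                   # right -> left
        fused_r = torch.matmul(torch.softmax(attn.transpose(-1, -2), dim=-1), feat_l) # left -> right
        return feat_l + fused_l * self.beta, feat_r + fused_r * self.gamma
```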
# Reproduce the Stereo SR Results
## 1. Data Preparation
Following previous works, our models are trained with the Flickr1024 and Middlebury datasets, which are exactly the same as those used by <a href="https://github.com/YingqianWang/iPASSR">iPASSR</a>. Please visit their homepage and follow their instructions to download and prepare the datasets.
#### Download and prepare the training set and place it in ```./datasets/StereoSR```
#### Download and prepare the evaluation data and place it in ```./datasets/StereoSR/test```
The structure of the `datasets` directory should look like this:
```
datasets
├── StereoSR
│ ├── patches_x2
│ │ ├── 000001
│ │ ├── 000002
│ │ ├── ...
│ │ ├── 298142
│ │ └── 298143
│ ├── patches_x4
│ │ ├── 000001
│ │ ├── 000002
│ │ ├── ...
│ │ ├── 049019
│ │ └── 049020
│ ├── test
│ │ ├── Flickr1024
│ │ │ ├── hr
│ │ │ ├── lr_x2
│ │ │ └── lr_x4
│ │ ├── KITTI2012
│ │ │ ├── hr
│ │ │ ├── lr_x2
│ │ │ └── lr_x4
│ │ ├── KITTI2015
│ │ │ ├── hr
│ │ │ ├── lr_x2
│ │ │ └── lr_x4
│ │ └── Middlebury
│ │ ├── hr
│ │ ├── lr_x2
│ │ └── lr_x4
```
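
Before training or evaluation, it can help to verify that the data landed in the expected places. The script below is a small sanity check along those lines; it is not part of this repository, and the folder names simply mirror the tree above.

```python
# check_datasets.py -- quick sanity check of the expected ./datasets/StereoSR layout.
# Not part of this repository; adjust ROOT if your data lives elsewhere.
from pathlib import Path

ROOT = Path("./datasets/StereoSR")

def check(path: Path) -> None:
    status = "ok" if path.is_dir() else "MISSING"
    print(f"[{status}] {path}")

# training patches
for split in ["patches_x2", "patches_x4"]:
    check(ROOT / split)

# evaluation sets with HR images and x2/x4 LR inputs
for testset in ["Flickr1024", "KITTI2012", "KITTI2015", "Middlebury"]:
    for sub in ["hr", "lr_x2", "lr_x4"]:
        check(ROOT / "test" / testset / sub)
```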
## 2. Evaluation
#### Download the pretrained models and place them in ```./experiments/pretrained_models/```
| name | scale | #Params | PSNR | SSIM | pretrained models | configs |