This project explores deep learning models for restoring blurry low-resolution images. It compares a custom DnCNN-based super-resolution model with the NAFNet architecture (Nonlinear Activation Free Network). Both were implemented and evaluated on the GoPro and RealBlur-R datasets using Colab Pro (A100 GPU).
- DnCNN-SR: Residual CNN + PixelShuffle-based upsampling (see the sketch after this list)
- NAFNet: Nonlinear Activation Free Network restoration baseline (implemented, not used for demo)
- Metrics: PSNR, SSIM, LPIPS
- Losses: MSE, perceptual (VGG), LPIPS
- Training: AMP, early stopping, ReduceLROnPlateau
- Output visualization and metric summary
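Below is a minimal sketch of the DnCNN-SR design: a residual convolutional body followed by a PixelShuffle ×2 upsampler. The class name, layer counts, and hyperparameters are illustrative assumptions, not the exact code in dncnn_sr.ipynb.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DnCNNSR(nn.Module):
    """Illustrative DnCNN-style SR model: residual conv body + PixelShuffle 2x head."""
    def __init__(self, channels=3, features=64, num_layers=8, scale=2):
        super().__init__()
        body = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            body += [nn.Conv2d(features, features, 3, padding=1),
                     nn.BatchNorm2d(features),
                     nn.ReLU(inplace=True)]
        body.append(nn.Conv2d(features, features, 3, padding=1))
        self.body = nn.Sequential(*body)
        # Conv expands channels by scale^2, PixelShuffle rearranges them into spatial resolution
        self.upsample = nn.Sequential(
            nn.Conv2d(features, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.scale = scale

    def forward(self, x):
        out = self.upsample(self.body(x))
        # Global residual connection: add a bilinearly upsampled copy of the input
        skip = F.interpolate(x, scale_factor=self.scale, mode="bilinear", align_corners=False)
        return out + skip
```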
You can quickly run the pretrained DnCNN model: no training is needed, just load the weights and run it on your own images (see demo.ipynb).
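A minimal inference sketch, assuming the illustrative DnCNNSR class above and a downloaded weight file named dncnn_sr.pth (both names are placeholders; demo.ipynb is the actual quick start):

```python
import torch
from torchvision.io import read_image
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder names: use the model class defined in the repo and the
# weight file downloaded from the Google Drive link below.
model = DnCNNSR().to(device).eval()
model.load_state_dict(torch.load("dncnn_sr.pth", map_location=device))

# Read an image as a 1x3xHxW float tensor in [0, 1]
lr = read_image("my_blurry_image.png").float().div(255).unsqueeze(0).to(device)
with torch.no_grad():
    sr = model(lr).clamp(0, 1)
save_image(sr, "sr_output.png")
```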
| Model | PSNR (↑) | SSIM (↑) | LPIPS (↓) |
|---|---|---|---|
| DnCNN (demo) | 26.80 | 0.8020 | 0.2313 |
| NAFNet (implemented) | 26.73 | 0.8002 | 0.2377 |
| Joint model | 24.63 | 0.8670 | N/A |
DnCNN gave the strongest overall results, with the best PSNR and LPIPS scores. NAFNet was implemented successfully but was excluded from the final visualization due to training instability.
Left: LR input (padded, 360x640) | Center: 2x SR output (DnCNN, 720x1280) | Right: HR ground truth
PSNR: 26.80 | SSIM: 0.8020 | LPIPS: 0.2313
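These are standard metric implementations. A rough sketch of how such per-image numbers can be computed with the packages in requirements.txt (the notebooks contain the actual evaluation code):

```python
import lpips
import numpy as np
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # downloads AlexNet-based weights on first use

def evaluate_pair(sr: np.ndarray, hr: np.ndarray) -> dict:
    """sr, hr: HxWx3 float arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
    # channel_axis requires scikit-image >= 0.19 (older versions use multichannel=True)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1]
    to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1).unsqueeze(0).float() * 2 - 1
    with torch.no_grad():
        lpips_val = lpips_fn(to_tensor(sr), to_tensor(hr)).item()
    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lpips_val}
```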
We originally tried this cascade:
DnCNN → UNet → EDSR
While promising in theory, this chain:
- Suffered from compounding artifacts between stages
- Was harder to train to convergence
- Did not outperform DnCNN alone on PSNR/SSIM/LPIPS
📌 Conclusion: in our experiments, a well-designed single model with quality upsampling outperformed the deep cascade for image restoration.
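For reference, the cascade amounted to chaining independently defined restoration modules. Schematically (the Stage modules below are trivial stand-ins, not the archived implementations in old_joint_model_code/):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage(nn.Module):
    """Trivial stand-in for one restoration stage (DnCNN / UNet / EDSR)."""
    def __init__(self, scale=1):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3, padding=1)
        self.scale = scale

    def forward(self, x):
        if self.scale > 1:
            x = F.interpolate(x, scale_factor=self.scale, mode="bilinear", align_corners=False)
        return self.conv(x)

# DnCNN -> UNet -> EDSR, each stage feeding the next;
# artifacts introduced early are amplified by later stages.
cascade = nn.Sequential(Stage(), Stage(), Stage(scale=2))
out = cascade(torch.rand(1, 3, 64, 64))  # -> torch.Size([1, 3, 128, 128])
```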
```
image-restoration/
├── old_joint_model_code/   # Original full pipeline code archive
├── results/                # Output samples + originals + result visualizations + metrics_results
├── LICENSE
├── README.md               # You're reading it
├── demo.ipynb              # Run DnCNN on test images (quick start)
├── dncnn_sr.ipynb          # Full DnCNN model training + results
├── nafnet.ipynb            # Full NAFNet implementation + training (optional)
└── requirements.txt
```
You can download pretrained DnCNN weights here: Google Drive
```
pip install -r requirements.txt
```
Then launch `demo.ipynb` to run DnCNN on your own input images.
This repo includes a pretrained model and a demo notebook; you do not need to train anything to test the results.
- torch
- torchvision
- lpips
- tqdm
- matplotlib
- scikit-image
- opencv-python
Maintained by Hank Song. For questions, feel free to open an issue or reach out.