Deep Atrous Guided Filter
for Image Restoration in Under Display Cameras

Abstract & Method


Under Display Cameras (UDC) present a promising opportunity for phone manufacturers to achieve bezel-free displays by positioning the camera behind semi-transparent OLED screens. Unfortunately, such imaging systems suffer from severe image degradation due to light attenuation and diffraction effects.


Method overview.


In this work, we present Deep Atrous Guided Filter (DAGF), a two-stage, end-to-end approach for image restoration in UDC systems. A Low-Resolution Network (LRNet) first restores image quality at low resolution, and its output is then used by the Guided Filter Network as a filtering input to produce a high-resolution output. Beyond the initial downsampling, our low-resolution network uses multiple, parallel atrous convolutions to preserve spatial resolution and emulate multi-scale processing.
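To make the two-stage design concrete, here is a minimal PyTorch sketch of the pipeline. The module interfaces, the bilinear downsampling, and the scale factor are illustrative assumptions rather than our exact implementation; see the paper for details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DAGF(nn.Module):
    """Two-stage restoration: LRNet restores a downsampled copy of the
    input; a trainable guided filter lifts the result to full resolution."""

    def __init__(self, lr_net: nn.Module, guided_filter: nn.Module, scale: int = 4):
        super().__init__()
        self.lr_net = lr_net                # low-resolution restoration network
        self.guided_filter = guided_filter  # trainable guided filter network
        self.scale = scale                  # assumed downsampling factor

    def forward(self, x_hr: torch.Tensor) -> torch.Tensor:
        # Stage 1: downsample the megapixel input and restore it at low
        # resolution, where large receptive fields are cheap to obtain.
        x_lr = F.interpolate(x_hr, scale_factor=1.0 / self.scale,
                             mode="bilinear", align_corners=False)
        y_lr = self.lr_net(x_lr)
        # Stage 2: the guided filter consumes the low-res input/output pair
        # together with the original high-res image and produces the final
        # high-resolution restoration.
        return self.guided_filter(x_lr, y_lr, x_hr)
```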

Our approach's ability to directly train on megapixel images results in significant performance improvement. We additionally propose a simple simulation scheme to pre-train our model and boost performance. Our overall framework ranks 2nd and 5th in the RLQ-TOD'20 UDC Challenge for POLED and TOLED displays, respectively.

Key Contributions


  • Our approach, DAGF, uses a novel combination of atrous convolutions in conjunction with a trainable guided filter framework, and is capable of directly training on megapixel images (see the sketch after this list).
  • We show that directly training on megapixel inputs provides DAGF with superior context information, allowing us to significantly outperform existing methods. This is particularly evident on the severely degraded POLED measurements.
  • Availability of sufficient data is often a constraining factor when designing learning-based restoration methods in imaging systems. We propose a simple simulation scheme to pre-train our model and further boost performance.
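
The parallel atrous convolutions mentioned in the first contribution can be sketched as an ASPP-style block; the number of branches, the dilation rates, and the activation below are illustrative assumptions, not LRNet's exact configuration.

```python
import torch
import torch.nn as nn

class ParallelAtrousBlock(nn.Module):
    """Parallel 3x3 convolutions at several dilation rates, fused by a 1x1
    convolution. Setting padding equal to the dilation rate keeps the spatial
    size fixed, so the receptive field grows without any downsampling."""

    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(multi_scale)) + x  # residual connection

# Spatial resolution is preserved: a 64-channel feature map stays 256x256.
feats = torch.randn(1, 64, 256, 256)
assert ParallelAtrousBlock(64)(feats).shape == feats.shape
```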

Simulation Procedure

Some simulated outputs. Notice how closely the simulated measurements resemble real ones.


Data availability, even with clever schemes like monitor acquisition, can be a constraining factor when designing learning-based approaches in imaging pipelines. We therefore propose a simple simulation scheme to cheaply generate training data.

We train a shallow version of DAGF to transform clean DIV2K images into various display measurements (glass, POLED or TOLED). We use this simulated data to pre-train DAGF, which provides a performance boost of 0.3 to 0.5 dB in PSNR. See our paper for more details.
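A hedged sketch of how the simulated pairs could be generated is shown below; `simulator` stands for the shallow DAGF trained to map clean images to measurements, and `div2k_loader` for a loader over clean DIV2K images. Both names are placeholders introduced for illustration.

```python
import torch

@torch.no_grad()
def make_simulated_pairs(simulator: torch.nn.Module, div2k_loader, device="cpu"):
    """Yield (simulated measurement, clean image) pairs for pre-training."""
    simulator.eval().to(device)
    for clean in div2k_loader:
        clean = clean.to(device)
        # The simulator plays the role of the display: it degrades a clean
        # image into a glass/POLED/TOLED-like measurement.
        measurement = simulator(clean)
        yield measurement.cpu(), clean.cpu()
```

DAGF is then pre-trained on these synthetic pairs before being fine-tuned on real measurements.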

Results

Qualitative comparison on POLED measurements.


DAGF's ability to train directly on megapixel images, and hence aggregate contextual information over large receptive fields, leads to superior restoration.

We show significant improvement over existing state-of-the-art image restoration methods, which are designed for tasks such as deraining, dehazing and image transformation. Such methods lack sufficient input context for a challenging scenario such as UDC.

This is most evident on the POLED dataset, where line, colour and blur artefacts appear in the baselines' outputs.

Qualitative comparison on TOLED measurements.


The baselines perform better on the moderately degraded TOLED measurements, but DAGF still surpasses them both visually and quantitatively.

Talk at RLQ Workshop, ECCV 2020



To be presented at the RLQ-TOD Workshop, ECCV 2020.
 [Slides]

Citation


The open-access version will be added soon.

Acknowledgements

We are grateful to Prasan Shedligiri, Salman Siddique and Mohit Lamba from the Computational Imaging Lab at IIT Madras for providing valuable feedback. We would also like to thank Genesis Cloud for providing additional compute hours.

This website template was borrowed from Michaël Gharbi and Matthew Tancik.