Image reconstruction under visual disruption caused by rain

Tang, Lai Meng (2021) Image reconstruction under visual disruption caused by rain. PhD thesis, University of Glasgow.

Full text available as: PDF (2021TangLaiMengPhD.pdf, 9MB)

Abstract

This thesis contributes to single-image reconstruction under visual disruption caused by rain in the following areas:
1. Parameterization of a Convolutional Autoencoder (CAE) for small images [1]
2. Generation of a rain-free image using the Cycle-Consistent Generative Adversarial Network (CycleGAN) [2]
3. Rain removal across spatial frequencies using Multi-Scale CycleGANs (MS-CycleGANs)
4. Rain removal at spatial frequency sub-bands using Wavelet-CycleGANs (W-CycleGANs)

Image reconstruction, or restoration, refers to reproducing a clean, disruption-free image from an original image corrupted by noise or other unwanted disturbance. The goal is to remove the disruption while preserving the original detail of the scene. In recent years, deep learning techniques have been proposed for rain removal, built on the Convolutional Neural Network (CNN) [3] and, more recently, on the Generative Adversarial Network (GAN) [4]. The current state-of-the-art deep learning rain removal method, the Image De-raining Conditional Generative Adversarial Network (ID-CGAN) [5], has been shown to be unable to remove rain disruption completely or to preserve the original scene detail [2]. The focus of this research is to remove rain corruption from images without sacrificing the content of the scene, spanning the collection of real rain images through to the testing methodologies developed for our GAN networks. This research area has attracted much interest in the past decade because rain disruption affects many computer vision algorithms in outdoor vision systems, especially when only a single image is captured.

The first contribution of this thesis in the area of image reconstruction or restoration is the parameterization of a Convolutional Autoencoder (CAE). A framework is proposed for deriving an optimum set of CAE parameters for the reconstruction of small input images, based on the standard Modified National Institute of Standards and Technology (MNIST) and Street View House Numbers (SVHN) data sets, using the quantitative mean squared error (MSE) and qualitative 2D visualizations of the neurons' activation statistics and entropy at the hidden layers of the CAE. The results show that for small 32x32-pixel input images, 2560 neurons at the hidden (bottleneck) layer and 32 convolutional feature maps yield optimum reconstruction performance, i.e. good representations of the input image in the CAE's latent space [1].
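As a rough illustration, a CAE with the reported optimum parameters could be sketched in PyTorch as below. The 32 feature maps and the 2560-unit bottleneck for 32x32 inputs follow the abstract; the kernel sizes, strides, activations and the choice of PyTorch itself are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional autoencoder for 32x32 RGB inputs. The 32 feature
    maps and 2560-unit bottleneck follow the thesis; kernel sizes,
    strides and padding are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),                                # 32*8*8 = 2048
            nn.Linear(32 * 8 * 8, 2560),                 # bottleneck layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(2560, 32 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 32, 3, stride=2, padding=1, output_padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1),   # 16 -> 32
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CAE()
x = torch.rand(8, 3, 32, 32)                  # batch of 32x32 images in [0, 1]
loss = nn.functional.mse_loss(model(x), x)    # the MSE reconstruction objective
```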

The second contribution of this thesis is the generation of a rain-free image using the proposed CycleGAN [2]. Its network model was trained on the same set of 700 rain and rain-free image pairs used in the recent ID-CGAN work [5]. The ID-CGAN paper includes a thorough comparison with existing techniques such as sparse dictionary-based and convolutional coding-based methods, and its results on synthetic rain training images show that ID-CGAN outperforms all of them. Hence, our first proposed algorithm, the CycleGAN, is compared only to the ID-CGAN, using the same set of real rain images provided by the authors. The CycleGAN is a practical image style-transfer approach in the unpaired category: it can transform an image with rain into a rain-free image without training image pairs. This is important because natural, real rain images have no corresponding rain-free image pairs. For comparison purposes, a real rain image data set was created, and the physical properties and phenomena of real rain [6] were used to streamline our testing conditions into five broad types of real rain disruption. This testing methodology covers most of the outdoor rain distortion scenarios captured in the real rain image data set, so the ID-CGAN and CycleGAN networks can be compared using only real rain images. The comparison results on both real and synthetic rain show that the CycleGAN method outperforms the ID-CGAN, which represents the state of the art in rain removal [2]. The Natural Image Quality Evaluator (NIQE) [7] is also introduced as a quantitative measure for analyzing rain removal results, since it can predict the quality of an image without relying on any prior knowledge of the image's distortions. The results are presented in Chapter 6.
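For readers unfamiliar with how unpaired training works, the runnable sketch below shows the core CycleGAN objective: adversarial terms in each domain plus a cycle-consistency term, which is what allows rain-to-clean and clean-to-rain mappings to be learned without image pairs. The one-layer toy networks and the weighting factor are placeholders for illustration, not the networks or hyperparameters used in the thesis.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the CycleGAN generators and discriminators; the
# real networks are deep conv nets, these just make the loss runnable.
def toy_net(out_ch=3):
    return nn.Conv2d(3, out_ch, 3, padding=1)

G = toy_net()         # generator: rain -> rain-free
F = toy_net()         # generator: rain-free -> rain
D_clean = toy_net(1)  # patch discriminator on the rain-free domain
D_rain = toy_net(1)   # patch discriminator on the rain domain

adv = nn.MSELoss()    # least-squares adversarial loss
cyc = nn.L1Loss()     # cycle-consistency loss

def generator_loss(rain, clean, lam=10.0):
    fake_clean, fake_rain = G(rain), F(clean)
    # Adversarial terms: try to fool both discriminators ("real" = 1).
    d_fc, d_fr = D_clean(fake_clean), D_rain(fake_rain)
    loss_adv = adv(d_fc, torch.ones_like(d_fc)) + adv(d_fr, torch.ones_like(d_fr))
    # Cycle consistency: rain -> clean -> rain and clean -> rain -> clean
    # should both return to the starting image, replacing paired supervision.
    loss_cyc = cyc(F(fake_clean), rain) + cyc(G(fake_rain), clean)
    return loss_adv + lam * loss_cyc

rain = torch.rand(1, 3, 64, 64)    # unpaired samples from each domain
clean = torch.rand(1, 3, 64, 64)
print(generator_loss(rain, clean))
```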

Subsequently, building on the CycleGAN technique, the third contribution of the thesis is a multi-scale representation of the CycleGAN, called the MS-CycleGANs technique. It addresses the gaps left by rain removal with the CycleGAN: as highlighted in the rain removal paper using CycleGAN [2], the CycleGAN results could be further improved, since its reconstructed output still could not remove the rain components in the low-frequency band while preserving as much of the original scene detail as possible. The MS-CycleGANs were therefore introduced as a better algorithm than the CycleGAN, training multiple CycleGANs to remove rain components at different spatial frequency bands. The implementation of the MS-CycleGANs is discussed after the CycleGAN, and its rain removal results are compared against it. The results show that the MS-CycleGANs framework can learn the characteristics of the rain and rain-free domains at different spatial frequency scales, which is essential for removing the individual frequency components of rain while preserving the scene details.
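The abstract does not specify how the spatial frequency bands are obtained; one common choice is a Laplacian-pyramid-style split, sketched below under that assumption. Each band would then be de-rained by its own CycleGAN before the bands are recombined; the pooling and upsampling choices here are illustrative, not the thesis's implementation.

```python
import torch
import torch.nn.functional as nnf

def split_bands(img, levels=3):
    """Laplacian-pyramid-style split into spatial frequency bands:
    one high-frequency detail band per level plus a low-pass residual.
    An illustrative assumption, not necessarily the MS-CycleGANs split."""
    bands, current = [], img
    for _ in range(levels - 1):
        low = nnf.avg_pool2d(current, 2)                    # coarse approximation
        up = nnf.interpolate(low, scale_factor=2,
                             mode='bilinear', align_corners=False)
        bands.append(current - up)                          # high-frequency detail
        current = low
    bands.append(current)                                   # low-pass residual
    return bands

def merge_bands(bands):
    """Invert split_bands by upsampling and summing the bands."""
    out = bands[-1]
    for band in reversed(bands[:-1]):
        out = nnf.interpolate(out, scale_factor=2,
                              mode='bilinear', align_corners=False) + band
    return out

x = torch.rand(1, 3, 64, 64)
bands = split_bands(x)           # each band would feed its own CycleGAN
assert torch.allclose(merge_bands(bands), x, atol=1e-5)  # lossless recombination
```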

In the final contribution towards image reconstruction for the removal of visual disruptions caused by rain across spatial frequency sub-bands, the W-CycleGANs are proposed and implemented to exploit properties of the wavelet transform, such as orthogonality and signal localization, to improve on the CycleGAN results. For a fair comparison with the CycleGAN, both proposed multi-scale representations of the CycleGAN, namely the MS-CycleGANs and the W-CycleGANs, were trained and tested on the same set of rain images used in the ID-CGAN work [5]. A qualitative visual comparison of rain-removed images, especially in enlarged rain-removed regions, is performed for the ID-CGAN, CycleGAN, MS-CycleGANs and W-CycleGANs, and the comparison results demonstrate the superiority of both the MS-CycleGANs and the W-CycleGANs in removing rain distortions.
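A minimal sketch of the wavelet sub-band idea, using the PyWavelets library and a Haar wavelet (an illustrative choice; the thesis's wavelet and implementation may differ): a single-level 2D discrete wavelet transform yields one low-pass and three detail sub-bands, each of which would be de-rained by its own CycleGAN before the inverse transform recombines them.

```python
import numpy as np
import pywt  # PyWavelets

# Single-level 2D DWT: a low-pass sub-band (LL) and three orthogonal
# high-frequency sub-bands (LH, HL, HH). The Haar wavelet here is an
# illustrative assumption, not necessarily the one used in the thesis.
img = np.random.rand(128, 128).astype(np.float32)  # stand-in grayscale image
LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')

# ...in the W-CycleGANs idea, each sub-band would be processed by its
# own CycleGAN at this point...

# Orthogonality of the transform lets the inverse recombine the
# processed sub-bands without cross-band interference.
recon = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
assert np.allclose(recon, img, atol=1e-5)
```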

Item Type: Thesis (PhD)
Qualification Level: Doctoral
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Colleges/Schools: College of Science and Engineering > School of Computing Science
Supervisor's Name: Lim, Dr. Li Hong Idris and Siebert, Dr. Paul
Date of Award: 2021
Depositing User: Theses Team
Unique ID: glathesis:2021-82400
Copyright: Copyright of this thesis is held by the author.
Date Deposited: 24 Aug 2021 12:56
Last Modified: 24 Aug 2021 12:56
Thesis DOI: 10.5525/gla.thesis.82400
URI: https://theses.gla.ac.uk/id/eprint/82400
