IMAGE DENOISING BASED ON LEAST-SQUARES GENERATIVE ADVERSARIAL NETWORKS

Authors

  • Austin Olom Ogar, Department of Computer Science, ABU Zaria
  • Mustapha Aminu Bagiwa, Department of Computer Science, ABU Zaria
  • Muhammed Abdullahi, Department of Computer Science, ABU Zaria

Abstract

Digital image denoising (noise removal) is a fundamental step in restoring the true image from its contaminated version. Image quality and reliability are vital to investigations, decisions, and judgments across many application domains, such as medical diagnosis, digital evidence in multimedia forensics, and courts of law. Over the last few decades, researchers in image processing and computer vision adapted traditional methods to remove noise from images. More recently, advances in artificial intelligence have led to the adoption and popularity of deep learning methods. The Wasserstein Generative Adversarial Network (WGAN) is one such popular and effective approach. However, a problem with WGAN-based denoising is that, after denoising an image, it introduces a variant of noise that the given (contaminated) image did not originally contain. Against this backdrop, this study addresses image denoising with a two-step framework based on the Least Squares Generative Adversarial Network (LSGAN). A generator model built on the SRResNet architecture was trained to predict the noise distribution over the input noisy images, easing vanishing gradients and loss saturation. The predicted noise patches were then used to create a paired training dataset, which in turn was used to train a deep convolutional neural network (DCNN) for denoising. The least-squares method was adopted as the loss function for the discriminator model.
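The least-squares objective adopted for the discriminator can be sketched as follows. This is a minimal NumPy illustration of the standard LSGAN losses (with target labels 1 for real samples and 0 for fakes), not the authors' exact implementation:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push D(real) toward 1, D(fake) toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push D(G(z)) toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

# Example: a discriminator that perfectly separates real from fake
# incurs zero discriminator loss, while the generator loss is maximal.
d_real = np.array([1.0, 1.0])
d_fake = np.array([0.0, 0.0])
print(lsgan_d_loss(d_real, d_fake))  # 0.0
print(lsgan_g_loss(d_fake))          # 0.5
```

Unlike the cross-entropy GAN loss, these quadratic penalties keep gradients non-vanishing even for samples the discriminator classifies confidently, which is the property the study exploits to ease loss saturation.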
From the results of the study, the proposed model showed improved PSNR values compared to the model of Zhong et al. The top three results (from the ten test images used in the study) were 34.40 dB against 32.50 dB for the baby image, 33.73 dB against 31.12 dB for the woman image, and 32.54 dB against 29.10 dB for the zebra image.
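The PSNR values reported above follow the standard definition, PSNR = 10 · log10(MAX² / MSE). A minimal sketch of this metric (assuming 8-bit images so MAX = 255; this is not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, denoised, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a denoised image."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 1 gray level gives MSE = 1, so PSNR ≈ 48.13 dB.
ref = np.full((8, 8), 100, dtype=np.uint8)
out = np.full((8, 8), 101, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # 48.13
```

Higher PSNR indicates the denoised output is closer to the clean reference, which is why the 1.9–3.4 dB gains reported above indicate improved restoration quality.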

Published

2022-04-01

Section

ARTICLES