Jordanian Journal of Computers and Information Technology (Mar 2024)

CDRSHNET: VARIANCE-GUIDED MULTISCALE AND SELF-ATTENTION FUSION WITH HYBRID LOSS FUNCTION TO RESTORE TRAFFIC-SIGN IMAGES CAPTURED IN ADVERSE CONDITIONS

  • Milind Vijay Parse,
  • Dhanya Pramod

DOI
https://doi.org/10.5455/jjcit.71-1699613114
Journal volume & issue
Vol. 10, no. 1
pp. 74 – 92

Abstract


This paper proposes CDRSHNet (CodecDirtyRainyShadowHazeNetwork), an architecture that fuses self-attention (SA) with a variance-guided multiscale attention (VGMA) mechanism to restore traffic-sign images captured under adverse conditions, including raindrops, shadows, haze, blur from dirty camera lenses, and codec errors. SA captures global dependencies, whereas VGMA enhances the representation by emphasizing informative channels and spatial locations. To further improve image quality, a hybrid loss function is proposed that combines Gradient Magnitude Similarity Deviation (GMSD) and Charbonnier loss. CDRSHNet is trained on a dataset of real and synthesized images, and its performance is evaluated using the average Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) on Test RID (Real Image Dataset) and Test SID (Synthesized Image Dataset). CDRSHNet achieves an average SSIM of 0.978 and an average PSNR of 39.58 dB on Test RID; on Test SID, the average SSIM is 0.963 and the average PSNR is 39.46 dB.
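The abstract does not specify how the GMSD and Charbonnier terms are weighted or implemented; the sketch below is a minimal PyTorch illustration of one plausible formulation, where the function names, the weight `lam`, the Charbonnier `eps`, and the GMSD constant `c` are assumptions for illustration rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def charbonnier_loss(pred, target, eps=1e-3):
    # Smooth L1-like pixel penalty: sqrt((x - y)^2 + eps^2), averaged over all pixels.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

def gmsd_loss(pred, target, c=0.0026):
    # Gradient Magnitude Similarity Deviation on a luminance approximation
    # (channel mean), with Prewitt filters; c assumes inputs scaled to [0, 1].
    prewitt_x = torch.tensor([[1., 0., -1.],
                              [1., 0., -1.],
                              [1., 0., -1.]], device=pred.device).view(1, 1, 3, 3) / 3.0
    prewitt_y = prewitt_x.transpose(2, 3)

    def grad_mag(img):
        gray = img.mean(dim=1, keepdim=True)              # B x 1 x H x W
        gx = F.conv2d(gray, prewitt_x, padding=1)
        gy = F.conv2d(gray, prewitt_y, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

    m_pred, m_tgt = grad_mag(pred), grad_mag(target)
    gms = (2 * m_pred * m_tgt + c) / (m_pred ** 2 + m_tgt ** 2 + c)
    # GMSD is the per-image standard deviation of the similarity map.
    return gms.flatten(1).std(dim=1).mean()

def hybrid_loss(pred, target, lam=0.5):
    # lam balances pixel fidelity (Charbonnier) against gradient/structure
    # consistency (GMSD); 0.5 is an illustrative choice, not the paper's value.
    return charbonnier_loss(pred, target) + lam * gmsd_loss(pred, target)
```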

Keywords