IEEE Access (Jan 2022)
Lighting- and Personal Characteristic-Aware Markov Random Field Model for Facial Image Relighting System
Abstract
A learning-based image relighting framework is proposed for automatically changing the lighting conditions of facial images from one lighting source to another. Given only a 2D unseen facial testing image, the framework automatically infers the highlight or shadow areas in the relighted image in accordance with the specific facial characteristics of the input image using a learned non-parametric Markov random field model of the facial appearance correlation between the source and target lighting conditions. The proposed framework first decomposes the input image into its global and local components, where these components relate mainly to the lighting and detailed facial appearance characteristics of the image, respectively. The two components are then processed independently to ease the problem of insufficient training samples and to properly analyze the local contrast, overall lighting direction effects, and personal feature characteristics of the unseen subject. Specifically, the global and local components are processed by a lighting–aware classifier and a personal characteristic–aware classifier, respectively, in order to determine the semantic factors of the facial region. The semantic factors of the facial region are then used to update the Markov random field model of the facial appearance correlation between the source and target lighting conditions and to produce lighting enhancement matrices for the lighting components and facial characteristic components, respectively. Finally, the lighting enhancement matrices are applied to the original decomposition images, which are then integrated to obtain the final relighted result. The experimental results show that the proposed image relighting framework generates vivid and recognizable results despite the scarcity of training samples. 
A learning-based image relighting framework is proposed for automatically changing the lighting conditions of facial images from one lighting source to another. Given only an unseen 2D facial test image, the framework automatically infers the highlight and shadow areas of the relighted image in accordance with the specific facial characteristics of the input image, using a learned non-parametric Markov random field (MRF) model of the facial appearance correlation between the source and target lighting conditions. The proposed framework first decomposes the input image into global and local components, which relate mainly to the lighting characteristics and the detailed facial appearance of the image, respectively. The two components are then processed independently, both to ease the problem of insufficient training samples and to properly analyze the local contrast, the overall lighting direction, and the personal feature characteristics of the unseen subject. Specifically, the global and local components are processed by a lighting-aware classifier and a personal characteristic-aware classifier, respectively, in order to determine the semantic factors of each facial region. These semantic factors are then used to update the MRF model of the facial appearance correlation between the source and target lighting conditions and to produce lighting enhancement matrices for the lighting and facial characteristic components, respectively. Finally, the enhancement matrices are applied to the original decomposed images, which are then integrated to obtain the final relighted result. The experimental results show that the proposed image relighting framework generates vivid and recognizable results despite the scarcity of training samples.
Furthermore, the relighted results successfully simulate the individual lighting effects produced by the specific personal characteristics of the input image, such as nose and cheek shadows. The effectiveness of the proposed framework is demonstrated by means of face verification tests, in which input images taken under side-lighting conditions are relighted to normal lighting conditions and then matched against a dataset of ground-truth images also taken under normal lighting conditions.
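The decompose-enhance-recombine pipeline summarized above can be illustrated with a minimal sketch. Note the assumptions: the abstract does not specify the decomposition filter, so a simple box blur stands in as the low-pass step, the function names (`decompose`, `relight`) are hypothetical, and scalar gains replace the spatially varying, learned enhancement matrices the framework actually produces.

```python
import numpy as np

def box_blur(img, k=15):
    """Box-filter low-pass stand-in for the global-component extraction
    (hypothetical choice; the paper does not specify the filter)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def decompose(img, k=15):
    """Split an image into a global (lighting) component and a local
    (detailed facial appearance) residual; their sum is the input."""
    global_c = box_blur(img, k)
    local_c = img - global_c
    return global_c, local_c

def relight(img, g_gain, l_gain, k=15):
    """Enhance each component separately, then recombine. In the actual
    framework the scalar gains would be learned enhancement matrices."""
    g, l = decompose(img.astype(float), k)
    return g * g_gain + l * l_gain
```

Because the local component is defined as the residual of the global one, the decomposition is lossless, and relighting with unit gains reproduces the input exactly; non-unit gains then brighten the overall lighting or boost local facial detail independently.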
Keywords