IEEE Access (Jan 2023)
Conditional Generative Adversarial Network Model for Conversion of 2 Dimensional Radiographs Into 3 Dimensional Views
Abstract
The inability of 2-Dimensional techniques to visualize all perspectives of an organ may lead to inaccurate diagnosis of a disease or deformity, which raises the need for 3-Dimensional medical images. However, high expense, exposure to a large volume of harmful radiation, and limited availability of image-capturing machinery restrict the adoption of 3-Dimensional medical imaging for the whole populace. The conversion of 2-Dimensional images into 3-Dimensional images has therefore gained popularity in the field of medical imaging. Although numerous research works address the reconstruction of 3-Dimensional images, none of them provides visualization at all angles of view from 0° to 360° for a 2-Dimensional input image such as an X-ray or a dual-energy X-ray absorptiometry scan, and these techniques also fail to handle noisy and deformed input images. The purpose of this research is to propose a tailored Conditional Generative Adversarial Network model for the translation of 2-Dimensional images of bones into their corresponding 3-Dimensional views. The model is preceded by pre-processing techniques for dataset cleaning, noise removal, and conversion of the dataset into a uniform format. Further, the efficacy of the model is improved by determining optimal values of the model parameters and by employing a customized activation function and optimizers. Additionally, the visual quality of the generated 3-Dimensional images is evaluated to quantify the degree of quality degradation incurred during translation. Experimental results obtained on real-life datasets collected from hospitals across India demonstrate the efficacy of the proposed model in generating 3-Dimensional images: the generated images are similar in quality to the input images and effective in retaining the information available in them.
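For context, conditional adversarial models of the kind described above are conventionally trained with the standard conditional-GAN objective; the sketch below is the generic formulation from the cGAN literature, not necessarily the exact loss used in this paper, and the symbols (input radiograph x, target view y, noise vector z) are illustrative assumptions:

```latex
\min_{G} \max_{D} \; \mathcal{L}_{\mathrm{cGAN}}(G, D)
  = \mathbb{E}_{x,\, y}\left[ \log D(x, y) \right]
  + \mathbb{E}_{x,\, z}\left[ \log\left( 1 - D\left(x, G(x, z)\right) \right) \right]
```

Here the generator G maps a 2-Dimensional input x (plus noise z) to a candidate 3-Dimensional view, while the discriminator D, conditioned on the same input x, learns to distinguish real target views y from generated ones.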
Keywords