IEEE Access (Jan 2020)
Light-Field Raw Data Synthesis From RGB-D Images: Pushing to the Extreme
Abstract
Light-field raw data captured by a state-of-the-art light-field camera is limited in its spatial and angular resolutions by the camera's optical hardware. In this paper, we propose an all-software algorithm to synthesize light-field raw data from a single RGB-D input image, driven largely by the needs of research on light-field data compression. Our synthesis algorithm consists of three key steps: (1) each pixel of the input image is regarded as a point light source that emits directional light rays of equal strength; (2) the optical path of each directional light ray through the camera's main lens and the corresponding micro lens is modeled as accurately as possible; and (3) occlusion among light rays emitted from objects at different distances within the input image is handled using the depth information. The spatial and angular resolutions of our synthesized light-field data scale up as the spatial resolution of the input RGB-D image increases. Meanwhile, for a given input image of fixed size, we pay special attention to how far the parameters involved in our synthesis algorithm can be pushed, such as the number of rays emitted from each pixel, the number of micro lenses, and the number of sensors associated with each micro lens. The usefulness of our synthesized data is validated by refocusing, all-in-focus, and sub-aperture reconstructions. In particular, all-in-focus images are evaluated objectively with the structural similarity (SSIM) index, which allows us to determine how far the parameters mentioned above can be pushed.
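The three steps of the abstract's pipeline can be illustrated with a toy ray-binning simulation: each RGB-D pixel emits equal-strength rays toward the main-lens aperture, each ray is refracted by a thin lens and assigned to the nearest micro lens and a sensor pixel behind it, and occlusion is resolved with a z-buffer so the nearest source wins each sensor pixel. All geometry, dimensions, and parameter names below are illustrative assumptions for a minimal sketch, not the paper's actual optical model:

```python
import numpy as np

def synthesize_lf_raw(rgb, depth, f_main=0.05, d_micro=0.055,
                      n_micro=32, n_per_micro=8, n_rays=64,
                      aperture=0.01, seed=0):
    """Toy light-field raw synthesis from an RGB-D image (hypothetical model).

    rgb   : (H, W, 3) array of pixel colors in [0, 1]
    depth : (H, W) array of distances (m) from pixel to main lens
    Returns an (S, S, 3) raw image, S = n_micro * n_per_micro.
    """
    rng = np.random.default_rng(seed)
    H, W, _ = rgb.shape
    S = n_micro * n_per_micro                     # raw sensor resolution per axis
    raw = np.zeros((S, S, 3))
    zbuf = np.full((S, S), np.inf)                # z-buffer for occlusion handling

    # object-plane coordinates of pixels (assumed 2 cm x 2 cm scene patch)
    xs = (np.arange(W) - W / 2) / W * 0.02
    ys = (np.arange(H) - H / 2) / H * 0.02

    for i in range(H):
        for j in range(W):
            z = depth[i, j]
            # step (1): sample equal-strength rays over the main-lens aperture
            ax = rng.uniform(-aperture / 2, aperture / 2, n_rays)
            ay = rng.uniform(-aperture / 2, aperture / 2, n_rays)
            # step (2): thin-lens model; rays converge at the image point
            zi = 1.0 / (1.0 / f_main - 1.0 / z)   # image distance behind lens
            xi, yi = -xs[j] * zi / z, -ys[i] * zi / z
            # where each ray crosses the micro-lens plane at distance d_micro
            t = d_micro / zi
            mx = ax + (xi - ax) * t
            my = ay + (yi - ay) * t
            # nearest micro lens (plane assumed to span +/- 5 mm)
            u = np.clip(((mx / 0.01 + 0.5) * n_micro).astype(int), 0, n_micro - 1)
            v = np.clip(((my / 0.01 + 0.5) * n_micro).astype(int), 0, n_micro - 1)
            # ray's aperture position selects the sensor pixel under that lens
            su = np.clip(((ax / aperture + 0.5) * n_per_micro).astype(int),
                         0, n_per_micro - 1)
            sv = np.clip(((ay / aperture + 0.5) * n_per_micro).astype(int),
                         0, n_per_micro - 1)
            px, py = u * n_per_micro + su, v * n_per_micro + sv
            # step (3): depth test -- nearest emitting pixel occludes farther ones
            for k in range(n_rays):
                if z < zbuf[py[k], px[k]]:
                    zbuf[py[k], px[k]] = z
                    raw[py[k], px[k]] = rgb[i, j]
    return raw
```

Note how the parameters highlighted in the abstract appear directly: `n_rays` is the number of rays emitted per pixel, `n_micro` the number of micro lenses per axis, and `n_per_micro` the number of sensor pixels behind each micro lens; pushing these to the extreme for a fixed input size is exactly the question the paper studies.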
Keywords