IET Computer Vision (Jun 2024)

A point‐image fusion network for event‐based frame interpolation

  • Chushu Zhang,
  • Wei An,
  • Ye Zhang,
  • Miao Li

DOI
https://doi.org/10.1049/cvi2.12220
Journal volume & issue
Vol. 18, no. 4
pp. 439–447

Abstract

Temporal information in event streams plays a critical role in event‐based video frame interpolation, as it provides temporal context cues complementary to images. Most previous event‐based methods first transform the unstructured event data into a structured format through voxelisation and then employ advanced CNNs to extract temporal information. However, voxelisation inevitably leads to information loss, and processing the sparse voxels introduces severe computational redundancy. To address these limitations, this study proposes a point‐image fusion network (PIFNet). In PIFNet, rich temporal information is extracted directly from the events at the point level. A fusion module is then designed to fuse the complementary cues from points and images for frame interpolation. Extensive experiments on both synthetic and real datasets demonstrate that PIFNet achieves state‐of‐the‐art performance with high efficiency.
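To make the described pipeline concrete, the minimal PyTorch sketch below illustrates the general idea of extracting temporal features directly from raw event points and fusing them with image features, as opposed to voxelising the event stream first. It is not the authors' PIFNet implementation: the PointNet‐style shared MLP, the module names (`PointEventEncoder`, `PointImageFusion`), and all channel sizes are illustrative assumptions.

```python
# Hypothetical sketch of point-level event encoding + point-image fusion.
# NOT the PIFNet architecture from the paper; all design choices are assumptions.
import torch
import torch.nn as nn


class PointEventEncoder(nn.Module):
    """Encode raw event points (x, y, t, polarity) with a shared MLP,
    avoiding voxelisation (PointNet-style assumption)."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, 32), nn.ReLU(inplace=True),
            nn.Linear(32, out_channels), nn.ReLU(inplace=True),
        )

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (B, N, 4) -> per-point features (B, N, C) -> global descriptor (B, C)
        return self.mlp(events).max(dim=1).values


class PointImageFusion(nn.Module):
    """Fuse the point-level temporal descriptor with features from the two
    boundary frames to predict the intermediate frame (illustrative design)."""

    def __init__(self, point_channels: int = 64):
        super().__init__()
        self.img_encoder = nn.Conv2d(6, 64, 3, padding=1)  # two stacked RGB frames
        self.fuse = nn.Sequential(
            nn.Conv2d(64 + point_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),                 # interpolated RGB frame
        )

    def forward(self, frame0, frame1, point_feat):
        img_feat = self.img_encoder(torch.cat([frame0, frame1], dim=1))
        b, c = point_feat.shape
        h, w = img_feat.shape[-2:]
        # Broadcast the global point descriptor over the spatial grid before fusion.
        point_map = point_feat.view(b, c, 1, 1).expand(b, c, h, w)
        return self.fuse(torch.cat([img_feat, point_map], dim=1))


if __name__ == "__main__":
    events = torch.rand(1, 2048, 4)                      # (x, y, t, p) per event
    f0 = torch.rand(1, 3, 128, 128)
    f1 = torch.rand(1, 3, 128, 128)
    mid = PointImageFusion()(f0, f1, PointEventEncoder()(events))
    print(mid.shape)                                      # torch.Size([1, 3, 128, 128])
```

In this sketch, the event encoder operates on the unstructured point set itself, so no information is discarded by binning events into voxels; the fusion stage is where the image cues (spatial texture) and point cues (fine temporal context) are combined.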

Keywords