IEEE Access (Jan 2020)

Foldover Features for Dynamic Object Behaviour Description in Microscopic Videos

  • Xialin Li,
  • Chen Li,
  • Frank Kulwa,
  • Md Mamunur Rahaman,
  • Wenwei Zhao,
  • Xue Wang,
  • Dan Xue,
  • Yudong Yao,
  • Yilin Cheng,
  • Jindong Li,
  • Shouliang Qi,
  • Tao Jiang

DOI
https://doi.org/10.1109/ACCESS.2020.3003993
Journal volume & issue
Vol. 8
pp. 114519 – 114540

Abstract

A behavior description helps analyze tiny objects, visually similar objects, and objects with weak visual information. It plays a fundamental role in the identification and classification of dynamic objects in microscopic videos. To this end, we propose foldover features to describe the behavior of dynamic objects. A foldover is defined as follows: each frame of an object's motion is superimposed on the same spatial plane in the spatio-temporal order of the motion, and the result of this superposition is the foldover of the object's motion. The foldover of an object contains temporal information, spatial information, behavior features, and static features; the features extracted from it are therefore called foldover features. In this work, we first generate a foldover for each object in the microscopic videos in the X, Y, and Z directions, respectively. Then, we extract foldover features from each of these directions with statistical methods. The core of this paper is the construction of the foldovers and the extraction of the foldover features. Through these two steps, the temporal information, spatial information, behavior features, and static features of the object are enhanced and incorporated into the foldover features, strengthening the description of dynamic object behavior. Finally, we use four different classifiers to test the effectiveness of the proposed foldover features. In the experiment, we evaluate the proposed foldover features on a microscopic sperm video dataset containing 1374 sperms of three types, and obtain a highest classification accuracy of 96.5%.
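The superposition idea in the abstract can be sketched in a few lines of NumPy. The snippet below is a minimal illustration, not the authors' implementation: it assumes each frame's object has already been segmented into a binary mask, and it superimposes the (T, H, W) mask stack along one axis by taking a maximum projection. The function name `foldover` and the toy diagonal-motion example are illustrative assumptions.

```python
import numpy as np

def foldover(masks, axis):
    """Superimpose a (T, H, W) stack of binary object masks along one axis.

    Illustrative sketch only: a max projection along the time axis (axis=0)
    collapses the whole motion onto the X-Y plane, producing a single image
    that traces where the object has been. Projections along axis=1 or
    axis=2 give the analogous views for the other two directions.
    """
    return np.max(masks, axis=axis)

# Toy example (assumed data): an object occupying one pixel per frame,
# moving diagonally across a 4x4 image over 3 frames.
masks = np.zeros((3, 4, 4), dtype=np.uint8)
for t in range(3):
    masks[t, t, t] = 1

z_fold = foldover(masks, axis=0)  # (4, 4) trace of the motion path
print(z_fold)
```

Statistical descriptors (e.g., the area or shape of the projected trace) can then be computed from each projection to serve as behavior features.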

Keywords