IEEE Access (Jan 2021)

ContexedNet: Context-Aware Ear Detection in Unconstrained Settings

  • Žiga Emeršič,
  • Diego Sušanj,
  • Blaž Meden,
  • Peter Peer,
  • Vitomir Štruc

DOI
https://doi.org/10.1109/ACCESS.2021.3121792
Journal volume & issue
Vol. 9
pp. 145175–145190

Abstract


Ear detection represents one of the key components of contemporary ear recognition systems. While significant progress has been made in the area of ear detection over recent years, most of the improvements are direct results of advances in the field of visual object detection. Only a limited number of techniques presented in the literature are domain-specific and designed explicitly with ear detection in mind. In this paper, we aim to address this gap and present a novel detection approach that does not rely only on general ear (object) appearance, but also exploits contextual information, i.e., face-part locations, to ensure accurate and robust ear detection for images captured in a wide variety of imaging conditions. The proposed approach is based on a Context-aware Ear Detection Network (ContexedNet) and poses ear detection as a semantic image segmentation problem. ContexedNet consists of two processing paths: i) a context-provider that extracts probability maps corresponding to the locations of facial parts from the input image, and ii) a dedicated ear segmentation model that integrates the computed probability maps into a context-aware, segmentation-based ear detection procedure. ContexedNet is evaluated in rigorous experiments on the AWE and UBEAR datasets and shown to ensure competitive performance when evaluated against state-of-the-art ear detection models from the literature. Additionally, because the proposed contextualization is model agnostic, it can also be utilized with other ear detection techniques to improve performance.
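
The two-path design described in the abstract can be sketched in a few lines of Python (PyTorch). This is an illustration only: the module names, layer choices, number of face-part classes, and the simple concatenation-based fusion below are assumptions made for clarity, not the published ContexedNet architecture.

# Illustrative sketch of the two-path idea from the abstract: a context-provider
# that outputs face-part probability maps, and an ear segmentation path that is
# conditioned on those maps. All specifics here are assumptions.
import torch
import torch.nn as nn


class ContextProvider(nn.Module):
    """Hypothetical stand-in: predicts per-pixel probabilities for K face parts."""

    def __init__(self, num_parts: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_parts, 1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Softmax over the part dimension yields probability maps.
        return torch.softmax(self.net(image), dim=1)


class EarSegmenter(nn.Module):
    """Hypothetical stand-in: segments ears from the image plus context maps."""

    def __init__(self, num_parts: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_parts, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Concatenating the context maps with the RGB input is one simple way
        # to inject facial-part information into the segmentation path.
        x = torch.cat([image, context], dim=1)
        return torch.sigmoid(self.net(x))  # per-pixel ear probability


if __name__ == "__main__":
    image = torch.rand(1, 3, 128, 128)        # dummy RGB input
    context = ContextProvider()(image)        # face-part probability maps
    ear_mask = EarSegmenter()(image, context) # context-aware ear mask
    print(ear_mask.shape)                     # torch.Size([1, 1, 128, 128])

Because the context maps are produced by a separate path, the same conditioning scheme could, in principle, be attached to other segmentation-based ear detectors, which is consistent with the abstract's note that the contextualization is model agnostic.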

Keywords