IEEE Access (Jan 2024)

Translated Pattern-Based Eye-Writing Recognition Using Dilated Causal Convolution Network

  • Zakariyya Abdullahi Bature,
  • Sunusi Bala Abdullahi,
  • Werapon Chiracharit,
  • Kosin Chamnongthai

DOI
https://doi.org/10.1109/ACCESS.2024.3390746
Journal volume & issue
Vol. 12
pp. 59079–59092

Abstract


Recently, eye-writing has been used as a novel language communication method, in which the paths of eye movement are detected for character recognition. However, instability of the eyes causes gaze points to form characters with non-uniform shapes and distinct writing styles across participants. This non-uniformity degrades the performance of recognition algorithms and limits the applicability of eye-writing. In this paper, root translation and dilated causal convolutional (DCC) layers are utilized to model the non-uniformity in eye-writing patterns. Root translation shifts each pattern to a uniform root gaze point by taking the difference between the initial gaze point and each subsequent gaze point. The translated patterns are used to train a temporal convolution network (TCN) with three stacked DCC layers having different filter and dilation factors. The DCC layers extract temporal dependencies in a pattern by convolving each gaze point with previous gaze points within its receptive field. To evaluate the proposed method, a dataset of 36 eye-writing characters, comprising 26 English letters and 10 Arabic numerals, was recorded from 20 participants using a Tobii eye tracker. The evaluation results show that the proposed method achieves an accuracy of 96.20% on our newly designed English-letter and Arabic-numeral dataset. The proposed method also outperforms conventional methods, achieving 98.81%, 97.76%, and 93.51% on the HideMyGaze, Complex Gaze Gesture, and Isolated Japanese Katakana datasets, respectively.
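The two core ideas in the abstract can be illustrated concretely. Root translation, as described, subtracts the initial gaze point from every subsequent gaze point so all patterns share the same origin; and the receptive field of stacked dilated causal convolutions grows with each layer's kernel size and dilation factor. The sketch below is not the authors' implementation — the kernel size and dilation values are illustrative assumptions, since the paper only states that the three DCC layers use different filter and dilation factors:

```python
import numpy as np

def root_translate(gaze_points):
    """Shift a gaze trajectory so all patterns share a common root gaze point.

    Implements the translation described in the abstract: the difference
    between the initial gaze point and each subsequent gaze point.
    """
    pts = np.asarray(gaze_points, dtype=float)
    return pts - pts[0]  # first point becomes (0, 0); the rest are relative offsets

def dcc_receptive_field(kernel_size, dilations):
    """Receptive field (in gaze points) of stacked dilated causal conv layers.

    Each layer extends the receptive field by (kernel_size - 1) * dilation
    past samples, so an output point depends only on the current and
    earlier gaze points (the causal constraint).
    """
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Example (hypothetical values): kernel size 3, dilations 1, 2, 4
# give a receptive field of 1 + 2 + 4 + 8 = 15 gaze points.
trajectory = [[100, 200], [103, 205], [98, 199]]
print(root_translate(trajectory))          # offsets from the initial gaze point
print(dcc_receptive_field(3, [1, 2, 4]))   # 15
```

Translating to a common root removes the absolute screen position of each character, so the network sees only the relative shape of the eye-written stroke, which is what varies between characters rather than between participants.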

Keywords