Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) (Jun 2021)

Comparative Analysis of the MLP and CNN Classification Algorithms on the American Sign Language Dataset

  • Mohammad Farid Naufal,
  • Sesilia Shania,
  • Jessica Millenia,
  • Stefan Axel,
  • Juan Timothy Soebroto,
  • Rizka Febrina P.,
  • Mirella Mercifia

DOI
https://doi.org/10.29207/resti.v5i3.3009
Journal volume & issue
Vol. 5, no. 3
pp. 489–495

Abstract

People who are deaf or have a speech impairment usually communicate using sign language. One of the most basic and flexible forms is alphabet sign language, which spells out the words the signer wants to say. Sign language in general uses hand, finger, and facial movements to express the user's thoughts; alphabet sign language, however, does not use facial expressions, only symbols formed with the fingers and hands. Many people still do not understand sign language, and image classification can make it easier to learn and translate. Classification accuracy is the main challenge in this task. This study compares two image classification algorithms, Convolutional Neural Network (CNN) and Multilayer Perceptron (MLP), for recognizing the American Sign Language (ASL) alphabet, excluding the letters "J" and "Z" because both require motion. The comparison examines the effect of the CNN's convolution and pooling stages on the accuracy and F1 score obtained on the ASL dataset. Based on the comparison, a CNN preceded by Gaussian low-pass filter preprocessing achieves the best results: 96.93% accuracy and an F1 score of 96.97%.
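
The pipeline the abstract describes (Gaussian low-pass filter preprocessing, then a CNN compared against an MLP) can be sketched as follows. This is a minimal illustration in Python, assuming the common Sign Language MNIST format of 28x28 grayscale images with 24 static letter classes; the filter sigma, layer sizes, and layer counts are illustrative assumptions, not the configuration reported in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 24  # A-Y without J and Z, which require motion

def preprocess(images, sigma=1.0):
    """Apply a Gaussian low-pass filter per image, add a channel axis,
    and scale pixel values to [0, 1]. sigma is an assumed value."""
    smoothed = np.stack([gaussian_filter(img, sigma=sigma) for img in images])
    return smoothed.astype("float32")[..., np.newaxis] / 255.0

# Illustrative CNN: convolution + pooling stages feeding a dense classifier.
cnn = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# MLP baseline: a comparable dense classifier, but with no
# convolution or pooling stages, matching the paper's comparison.
mlp = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

for model in (cnn, mlp):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Training both models on identically preprocessed data isolates the contribution of the convolution and pooling stages, which is the effect the study measures via accuracy and F1 score.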

Keywords