Alexandria Engineering Journal (Dec 2017)

A novel Adaptive Fractional Deep Belief Networks for speaker emotion recognition

  • Kasiprasad Mannepalli,
  • Panyam Narahari Sastry,
  • Maloji Suman

Journal volume & issue
Vol. 56, no. 4
pp. 485 – 497

Abstract

With the rapid development of human-computer interaction systems, emotion recognition has become an important but challenging task. Handheld devices such as smartphones, as well as PCs, are used to recognize human emotion from speech. However, emotion recognition is difficult for a human-computer interaction system because emotional expression varies from speaker to speaker. To address this problem, this paper proposes the Adaptive Fractional Deep Belief Network (AFDBN). First, spectral features are extracted from the input speech signal: the tonal power ratio, spectral flux, pitch chroma, and MFCC. The extracted feature set is then fed into the network for classification. The AFDBN is newly designed by combining fractional theory with the Deep Belief Network, and the proposed method finds the optimal weights used to recognize emotion efficiently. Finally, the experimental results are evaluated and performance is analyzed with evaluation metrics, in comparison with existing systems. The proposed method attains 99.17% accuracy on the Berlin database and 97.74% on the Telugu database.

Keywords: Speech signal, Speaker emotion recognition, Classification, MFCC, Deep Belief Networks
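The abstract names four spectral features (tonal power ratio, spectral flux, pitch chroma, MFCC) without giving their formulas. As a rough illustration, the sketch below computes two of them, spectral flux and tonal power ratio, using common textbook definitions; the frame length, hop size, and power threshold are assumptions, and the paper's exact formulations may differ.

```python
import numpy as np

def spectral_flux(signal, frame_len=512, hop=256):
    # Frame the signal, take magnitude spectra, and measure the L2 norm
    # of the change between consecutive frames (one common definition of
    # spectral flux; the paper's exact formulation may differ).
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    mags = np.array([
        np.abs(np.fft.rfft(window * signal[i * hop : i * hop + frame_len]))
        for i in range(n_frames)
    ])
    return np.sqrt((np.diff(mags, axis=0) ** 2).sum(axis=1))

def tonal_power_ratio(signal, frame_len=512, hop=256, threshold=5e-4):
    # Per frame: power in spectral bins above a threshold (treated as
    # "tonal") divided by total frame power -- again an assumed, generic
    # form, not necessarily the one used in the paper.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    ratios = []
    for i in range(n_frames):
        spec = np.abs(np.fft.rfft(window * signal[i * hop : i * hop + frame_len])) ** 2
        total = spec.sum()
        ratios.append(spec[spec > threshold].sum() / total if total > 0 else 0.0)
    return np.array(ratios)

# Usage on a synthetic 1 s, 16 kHz tone (a stand-in for a real utterance).
sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 220 * t)
flux = spectral_flux(speech)
tpr = tonal_power_ratio(speech)
```

In practice, MFCC and pitch chroma would be extracted with a standard audio library and concatenated with these frame-wise statistics to form the feature set fed to the classifier.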
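The abstract does not detail how fractional theory enters the DBN's weight search. As a generic illustration only, the sketch below shows one common way fractional calculus is folded into a gradient-based update: a Grünwald-Letnikov expansion that weights a short history of past weight increments. The coefficients, memory depth, and update form are assumptions, not the paper's AFDBN rule.

```python
import numpy as np

def gl_coeffs(alpha, n):
    # Grünwald-Letnikov coefficients c_k = (-1)^k * C(alpha, k), via the
    # recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k).
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_update(w, grad, history, alpha=0.5, lr=0.01, memory=4):
    # One fractional-order gradient step (illustrative, not the paper's
    # exact AFDBN rule). `history` holds recent increments, newest first.
    coeffs = gl_coeffs(alpha, memory)
    # Fractional "memory" term: weighted sum of past weight increments.
    frac_term = sum(c * h for c, h in zip(coeffs[1:], history))
    delta = -lr * grad - frac_term
    history.insert(0, delta)
    del history[memory - 1:]          # keep only the last few increments
    return w + delta, history

# Usage: minimize f(w) = w^2 (gradient 2w) from w = 1.0.
w, history = 1.0, []
for _ in range(200):
    w, history = fractional_update(w, 2.0 * w, history)
```

Because the Grünwald-Letnikov coefficients beyond c_0 are negative for 0 < alpha < 1, the memory term acts much like momentum, reusing past increments with geometrically decaying influence.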