Convolutional Neural Network Architectures for Gender, Emotional Detection from Speech and Speaker Diarization

Authors

  • Thaer Mufeed Taha, ATISP Research Unit, École Nationale d'Électronique et de Télécommunications de Sfax (ENET'Com), University of Sfax, Tunisia
  • Zaineb Ben Messaoud
  • Mondher Frikha, ‘Advanced Technologies of Image and Signal Processing’ research lab, Higher Institute of Information and Communication Technologies, Carthage University, Tunisia https://orcid.org/0000-0003-2584-5141

DOI:

https://doi.org/10.3991/ijim.v18i03.43013

Keywords:

Deep learning, gender recognition, speaker diarization, voice recognition, emotional speech

Abstract


This paper introduces three system architectures for speaker identification that aim to overcome the limitations of diarization and voice-based biometric systems. Diarization systems use unsupervised algorithms to segment audio along utterance boundaries, but they do not identify the individual speakers. Voice-based biometric systems, on the other hand, can only identify individuals in recordings that contain a single speaker. Identifying speakers in recordings of natural conversations is challenging, especially because emotional shifts alter voice characteristics and can make even gender identification difficult. To address this, the proposed architectures combine gender recognition, emotion recognition, and diarization at either the segment level or the group level. The architectures were evaluated on two speech databases, the VoxCeleb and RAVDESS (Ryerson audio-visual database of emotional speech and song) datasets. The findings reveal that the segment-level strategy outperforms the group-level strategy in terms of recognition results, despite the real-time processing advantage of the latter. The proposed architectures thus effectively address the challenge of identifying multiple speakers in a conversation while accounting for the emotional changes that affect speech. The gender and emotion classification stages of the diarization pipeline each achieve an accuracy of over 98 percent. These results suggest that the proposed speech-based approach can achieve highly accurate speaker identification.
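The abstract is the only description of the models given on this page. As a rough illustration of what a segment-level pipeline of this kind could look like, the sketch below shows a small two-headed PyTorch CNN that takes one spectral patch (e.g., MFCCs or log-mel features) per diarized segment and predicts gender and emotion jointly. All layer sizes, the input shape, the two-head layout, and the class counts are illustrative assumptions, not the authors' published architecture; the eight emotion classes simply mirror the RAVDESS label set.

```python
# Hypothetical sketch of a per-segment gender/emotion CNN; NOT the
# architecture published in the paper, only an illustration of the idea.
import torch
import torch.nn as nn

class SegmentCNN(nn.Module):
    """Small 2-D CNN over one spectral patch from a diarized segment."""
    def __init__(self, n_mels=40, n_frames=128, n_gender=2, n_emotion=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch, 64, 1, 1)
        )
        self.gender_head = nn.Linear(64, n_gender)    # male / female
        self.emotion_head = nn.Linear(64, n_emotion)  # 8 RAVDESS emotions

    def forward(self, x):
        # x: (batch, 1, n_mels, n_frames), one patch per diarized segment
        h = self.features(x).flatten(1)
        return self.gender_head(h), self.emotion_head(h)

# In a segment-level strategy, every segment produced by the diarizer
# would be classified independently like this:
model = SegmentCNN()
dummy_segment = torch.randn(1, 1, 40, 128)
gender_logits, emotion_logits = model(dummy_segment)
```

A group-level variant, by contrast, would pool or aggregate all segments attributed to the same speaker before a single classification pass, trading per-segment robustness for speed.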

Author Biography

Mondher Frikha, ‘Advanced Technologies of Image and Signal Processing’ research lab, Higher Institute of Information and Communication Technologies, Carthage University, Tunisia

Prof. Dr. Mondher Frikha is currently a full professor at the National School of Electronics and Telecommunications, University of Sfax, Tunisia. He is also the director of the ‘Advanced Technologies of Image and Signal Processing’ research lab. His research interests include digital signal and image processing, speech and audio processing, pattern recognition, and AI applications. He received a Master of Applied Science in electrical engineering from the University of Ottawa, Canada, in 1991. He then worked as a project head at the Industrial Land Agency in Tunisia. In 2003, he resumed his graduate research and obtained his PhD degree in 2007 from the National School of Engineering of Sfax, Tunisia. His academic mailing address is mondher.frikha@enetcom.usf.tn.

Downloads

Published

2024-02-09

How to Cite

Taha, T. M., Ben Messaoud, Z., & Frikha, M. (2024). Convolutional Neural Network Architectures for Gender, Emotional Detection from Speech and Speaker Diarization. International Journal of Interactive Mobile Technologies (iJIM), 18(03), pp. 88–103. https://doi.org/10.3991/ijim.v18i03.43013

Issue

Section

Papers