Mobile Application for Continuous Recognition and Classification of Sign Language Images through Deep Learning

Authors

DOI:

https://doi.org/10.3991/ijim.v19i07.52853

Keywords:

Continuous Sign, Sign Language, LSTM, deep learning, mobile application

Abstract


Throughout the world, sign languages (SL) present significant challenges for effective communication in everyday environments and technological applications. In the field of SL recognition (SLR) using artificial intelligence (AI), two approaches have been developed: isolated SLR (ISLR) and continuous SLR (CSLR). To address the limitations of CSLR, we developed a mobile application that integrates an AI-based algorithm in Python, designed to capture and analyze sign sequences through the device's camera. The application supports the creation of a continuous database of 14 dynamic signs, with 240 videos per sign, for a total of 3,360 videos and 50,400 frames (15 frames per video). We trained a neural network based on the long short-term memory (LSTM) architecture to improve accuracy in sign identification and promote inclusive communication in digital environments. The model achieved 99.80% accuracy during training and 99.40% in testing, with overall precision, recall, and F1-score above 99%. These results demonstrate the effectiveness of the mobile application and the LSTM model in recognizing, classifying, and translating basic sign language utterances in real time, showing its ability to generalize without overfitting and contributing to more inclusive and accessible communication.
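The abstract describes an LSTM network that classifies sequences of 15 frames per video into one of 14 dynamic sign classes. As an illustrative sketch only (not the authors' implementation), the forward pass of a single-layer LSTM followed by a softmax classifier can be written in NumPy as below; the per-frame feature size and hidden size are arbitrary assumptions, and the random weights stand in for trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b, h0, c0):
    """Run a single-layer LSTM over a sequence of frame features.

    x_seq: (T, input_dim); W: (4*hidden, input_dim);
    U: (4*hidden, hidden); b: (4*hidden,).
    Gate order in the stacked weights: input, forget, candidate, output.
    """
    h, c = h0, c0
    hidden = h0.shape[0]
    for x_t in x_seq:
        z = W @ x_t + U @ h + b
        i = sigmoid(z[:hidden])            # input gate
        f = sigmoid(z[hidden:2 * hidden])  # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])  # candidate cell state
        o = sigmoid(z[3 * hidden:])        # output gate
        c = f * c + i * g                  # update cell memory
        h = o * np.tanh(c)                 # new hidden state
    return h  # final hidden state summarizes the whole sign sequence

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
# Assumed sizes: 15 frames per video (from the abstract), 126 features
# per frame (hypothetical), 64 hidden units, 14 sign classes.
T, input_dim, hidden, n_classes = 15, 126, 64, 14
W = rng.normal(0.0, 0.1, (4 * hidden, input_dim))
U = rng.normal(0.0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
x_seq = rng.normal(size=(T, input_dim))  # stand-in for extracted frame features

h = lstm_forward(x_seq, W, U, b, np.zeros(hidden), np.zeros(hidden))
W_out = rng.normal(0.0, 0.1, (n_classes, hidden))
probs = softmax(W_out @ h)  # class probabilities over the 14 signs
print(probs.shape)
```

In practice such a model would be trained end to end (e.g., with a deep learning framework) on the labeled video database; the sketch above only shows how the recurrent pass reduces a frame sequence to one class distribution.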


Published

2025-04-11

How to Cite

Briones Cerquín, A. D., Tumay Guevara, J. A., & Ovalle, C. (2025). Mobile Application for Continuous Recognition and Classification of Sign Language Images through Deep Learning. International Journal of Interactive Mobile Technologies (iJIM), 19(07), pp. 4–21. https://doi.org/10.3991/ijim.v19i07.52853

Issue

Section

Papers