Mobile Application for Continuous Recognition and Classification of Sign Language Images through Deep Learning
DOI: https://doi.org/10.3991/ijim.v19i07.52853

Keywords: Continuous Sign, Sign Language, LSTM, deep learning, mobile application

Abstract
Throughout the world, sign languages (SL) present significant challenges for effective communication in everyday environments and technological applications. In the field of SL recognition (SLR) using artificial intelligence (AI), two approaches have been developed: isolated SLR (ISLR) and continuous SLR (CSLR). To address the limitations of CSLR, we developed a mobile application that integrates an AI-based algorithm in Python, designed to capture and analyze sign sequences through the device’s camera. The application facilitates the creation of a continuous database containing 14 dynamic signs, with 240 videos per sign, resulting in a total of 3,360 videos and 50,400 frames. We used a neural network model based on the long short-term memory (LSTM) architecture to improve accuracy in sign identification and promote inclusive communication in digital environments. The model achieved 99.80% accuracy during training and 99.40% in testing, with overall accuracy, recall, and F1-score metrics above 99%. These results demonstrate the effectiveness of the mobile application and the LSTM model in recognizing, classifying, and translating basic SLP utterances in real time, showing the model's ability to generalize without overfitting and contributing to more inclusive and accessible communication.
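To make the described setup concrete, the sketch below shows one way an LSTM sequence classifier could be sized to the dataset reported in the abstract: 14 dynamic signs and 15 frames per video (50,400 frames / 3,360 videos). This is not the authors' code; the per-frame feature dimension, the choice of keypoint-style features, and all hyperparameters are assumptions, since the abstract does not specify how frames are encoded or how the network is configured.

```python
# Minimal sketch (not the published model): an LSTM classifier sized to the
# dataset described in the abstract -- 14 dynamic signs, 15 frames per video.
# FEATURES is a hypothetical per-frame feature size (e.g., pose/hand keypoints);
# the abstract does not state the actual frame encoding.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

NUM_SIGNS = 14         # dynamic signs in the continuous database
FRAMES_PER_VIDEO = 15  # 50,400 frames / 3,360 videos
FEATURES = 1662        # assumed per-frame feature vector length (hypothetical)

model = Sequential([
    # Stacked LSTM layers read the 15-frame sequence of per-frame features
    LSTM(64, return_sequences=True, input_shape=(FRAMES_PER_VIDEO, FEATURES)),
    LSTM(128, return_sequences=False),
    Dropout(0.3),                          # regularization to limit overfitting
    Dense(64, activation="relu"),
    Dense(NUM_SIGNS, activation="softmax"),  # one probability per sign class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# Hypothetical usage: X shaped (num_videos, FRAMES_PER_VIDEO, FEATURES),
# y one-hot encoded with NUM_SIGNS columns.
# model.fit(X, y, epochs=200, validation_split=0.2)
```

A recurrent architecture of this kind fits the continuous-recognition goal because each video is consumed as an ordered sequence of frames, letting the network model the temporal dynamics of a sign rather than classifying frames independently.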
License
Copyright (c) 2025 Angel Diego Briones Cerquín, Johan Alonso Tumay Guevara, Christian Ovalle

This work is licensed under a Creative Commons Attribution 4.0 International License.

