Mobile Application for the Inclusion of People with Hearing Disabilities in Peru Using LSTM and GPT-4

Authors

DOI:

https://doi.org/10.3991/ijoe.v22i01.58105

Keywords:

sign language recognition, mobile application, assistive technology, LSTM, GPT-4, deaf and hard-of-hearing

Abstract
Communication between hearing individuals and deaf and hard-of-hearing individuals is often limited by the general lack of sign language proficiency among the hearing population, leading to social exclusion and a reduced quality of life. This study proposes a mobile application that serves as an assistive technology for sign language recognition. The application also generates contextual responses with GPT-4 to facilitate communication between the two groups. The system was developed in five phases: 1) choice of the recurrent neural network technique, 2) selection of the machine learning model, 3) implementation of the LSTM model, 4) implementation of the LLM model, and 5) construction of the mobile application. Validation involved a control experiment (A) and a test experiment (B). In A, the average response time (RT) was 95.93 s, without achieving communicative clarity (CC) due to confusion during the interaction. In B, the RT was 102.28 s, and CC was achieved, as evidenced by a relaxed body posture. Finally, a survey administered after experiment B revealed acceptance of the system by the hearing participants. These findings confirm the system's feasibility for facilitating communicative inclusion in real-world scenarios.
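To make the recognition stage concrete, the sketch below shows how an LSTM can classify a sequence of hand-keypoint frames into sign glosses. This is not the authors' implementation: the feature dimension, hidden size, gloss labels, and random weights are all illustrative assumptions, and a trained model would learn these weights from labeled sign sequences.

```python
import numpy as np

rng = np.random.default_rng(0)

FEATURES = 42   # assumption: 21 hand landmarks x (x, y) coordinates per frame
HIDDEN = 16     # illustrative hidden-state size
CLASSES = 3     # hypothetical glosses, e.g., "hola", "gracias", "ayuda"

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stacked gate weights [input, forget, cell, output]; randomly initialized
# here, whereas a real system would load trained parameters.
W = rng.normal(0, 0.1, (4 * HIDDEN, FEATURES + HIDDEN))
b = np.zeros(4 * HIDDEN)

def lstm_forward(frames):
    """Run one LSTM layer over a (T, FEATURES) sequence; return the last hidden state."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x_t in frames:
        z = W @ np.concatenate([x_t, h]) + b
        i = sigmoid(z[0 * HIDDEN:1 * HIDDEN])   # input gate
        f = sigmoid(z[1 * HIDDEN:2 * HIDDEN])   # forget gate
        g = np.tanh(z[2 * HIDDEN:3 * HIDDEN])   # candidate cell state
        o = sigmoid(z[3 * HIDDEN:4 * HIDDEN])   # output gate
        c = f * c + i * g                       # update cell memory
        h = o * np.tanh(c)                      # expose gated hidden state
    return h

# Softmax classifier head over the final hidden state
W_out = rng.normal(0, 0.1, (CLASSES, HIDDEN))

def classify(frames):
    logits = W_out @ lstm_forward(frames)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

sequence = rng.normal(size=(30, FEATURES))  # 30 frames of synthetic keypoints
probs = classify(sequence)
print(probs.shape, float(probs.sum()))
```

In the full pipeline described in the abstract, the predicted gloss would then be passed to the LLM stage, which generates a contextual response for the hearing participant.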

Published

2026-01-22

How to Cite

Regalado-Morales, A., Fiestas, A., & Wong, L. (2026). Mobile Application for the Inclusion of People with Hearing Disabilities in Peru Using LSTM and GPT-4. International Journal of Online and Biomedical Engineering (iJOE), 22(01), pp. 114–132. https://doi.org/10.3991/ijoe.v22i01.58105

Issue

Section

Papers