Dynamic Sign Language Recognition Based on Real-Time Videos

Authors

  • Bushra A. Al-Mohimeed Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
  • Hessa O. Al-Harbi Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
  • Ghadah S. Al-Dubayan Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
  • Amal A. Al-Shargabi Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia

DOI:

https://doi.org/10.3991/ijoe.v18i01.27581

Keywords:

Dynamic sign language, recognition, deep learning, convLSTM

Abstract


Sign language is the main communication tool for the deaf and hard of hearing. Without a sign language interpreter, deaf people cannot interact with others. Accordingly, the automation of sign language recognition has become an important application of artificial intelligence and deep learning. In particular, Arabic sign language recognition has been studied using both intelligent and traditional methods. This research presents a system that recognizes dynamic Saudi sign language from real-time videos to address this problem. For the proposed system, we constructed a dataset of Saudi sign language videos. The dataset was then used to train a deep learning model based on convolutional long short-term memory (convLSTM) to recognize dynamic signs. Such a system provides a platform for deaf people to interact with the rest of the world without an interpreter, thereby reducing their isolation in society.
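The abstract describes classifying dynamic signs from video sequences with a convLSTM model. The sketch below illustrates what such a video classifier can look like in TensorFlow/Keras; it is a minimal illustration only, and the frame count, frame resolution, number of sign classes, and layer sizes are assumed placeholders rather than values reported in the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 20        # assumed number of frames sampled per sign video
FRAME_SIZE = (64, 64)  # assumed frame resolution (height, width)
NUM_CLASSES = 10       # assumed number of dynamic Saudi signs

model = models.Sequential([
    # ConvLSTM layers learn spatial features within each frame and
    # temporal dependencies across frames at the same time.
    layers.ConvLSTM2D(16, kernel_size=(3, 3), activation="relu",
                      return_sequences=True,
                      input_shape=(NUM_FRAMES, *FRAME_SIZE, 3)),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),
    layers.ConvLSTM2D(32, kernel_size=(3, 3), activation="relu",
                      return_sequences=False),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per sign class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

In this kind of setup, each training example is a fixed-length stack of video frames labeled with a single sign, and real-time recognition is performed by feeding a sliding window of captured frames through the trained model.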

Published

2022-01-26

How to Cite

Al-Mohimeed, B. A., Al-Harbi, H. O., Al-Dubayan, G. S., & Al-Shargabi, A. A. (2022). Dynamic Sign Language Recognition Based on Real-Time Videos. International Journal of Online and Biomedical Engineering (iJOE), 18(01), pp. 4–14. https://doi.org/10.3991/ijoe.v18i01.27581

Issue

Section

Papers