Bridging the Gap: Real-Time American Sign Language Recognition Using a Somatosensory Glove
DOI: https://doi.org/10.56979/1002/2026/1201

Keywords: American Sign Language (ASL), Wearable Sensors, Somatosensory Glove, Flex and IMU Sensors, Machine Learning, XGBoost

Abstract
Sign Language (SL) is a primary language for millions of Deaf and Hard-of-Hearing (DHH) people, yet a substantial communication barrier persists because most hearing people do not know SL. Vision-based sign language recognition (SLR) methods have advanced considerably, but they still struggle with illumination variation, background clutter, hand occlusion, and privacy concerns, while commercial glove-based devices are often expensive and poorly portable. This paper introduces a wireless somatosensory glove-based ASL recognition system that recognizes both static and dynamic American Sign Language (ASL) gestures through the fusion of flex and inertial sensing. Training data were collected over a wired interface to ensure low-noise, high-fidelity signal acquisition. Two custom datasets covering 19 gestures (15 static and 4 dynamic) were collected from 16 participants, yielding roughly 8,000–9,500 labelled samples in total. Three machine learning models were trained as gesture classifiers: XGBoost, Random Forest (RF), and a Multilayer Perceptron (MLP). Among these, XGBoost delivered the most robust performance, achieving sample-level cross-validated accuracies of 97.6% on static gestures and 99.2% on dynamic gestures, while RF and MLP provided competitive baselines. These results highlight the potential of low-cost wearable sensing combined with machine-learning-based classification and offer a viable, privacy-preserving path toward scalable, near-real-time ASL recognition.
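The abstract describes cross-validated evaluation of XGBoost, RF, and MLP classifiers on flex and IMU sensor features. The sketch below illustrates that kind of pipeline in Python with scikit-learn and xgboost; it is not the authors' code. The feature dimensions, hyperparameters, and randomly generated placeholder data are illustrative assumptions only, standing in for the paper's 19-class glove dataset.

```python
# Minimal sketch of cross-validated gesture classification (not the
# authors' implementation). Assumes a feature matrix X of per-sample
# flex + IMU readings and integer gesture labels y.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))     # placeholder: e.g. 5 flex + 6 IMU channels
y = rng.integers(0, 19, size=1000)  # placeholder: 19 gesture classes (0..18)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="mlogloss"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                         random_state=0),
}
for name, model in models.items():
    # Default scoring for classifiers is accuracy, matching the
    # sample-level cross-validated accuracies reported in the abstract.
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Stratified folds keep the per-gesture class balance consistent across splits, which matters when some gesture classes (e.g., the four dynamic ones) have fewer samples than others.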
License
This is an Open Access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under a CC BY 4.0 International License.