ASLDetect: Arabic sign language detection using ResNet and U-Net like component

Abstract

Sign languages are essential for communication among the more than 430 million deaf and hard-of-hearing individuals worldwide. However, recognizing Arabic Sign Language (ArSL) in real-world settings remains challenging due to background noise, lighting variation, and hand occlusion. These limitations hinder the effectiveness of existing systems in applications such as assistive technology and education. To address these challenges, we propose ASLDetect, a new model for ArSL recognition that leverages ResNet for feature extraction and a U-Net-based architecture for accurate gesture segmentation. Our method includes preprocessing steps such as resizing images to 64 × 64 pixels, normalization, and selective augmentation to improve robustness in diverse environments. We evaluated ASLDetect on two datasets: ArASL2018, which features plain backgrounds, and ArASL2021, which includes more complex and diverse environments. On ArASL2018, ASLDetect achieved an accuracy of 99.35%, surpassing ResNet34 (99.08%), T-SignSys (97.92%), and UrSL-CNN (98%). For ArASL2021, we applied transfer learning from our ArASL2018-trained model, significantly improving performance and reaching 86.84% accuracy, outperforming ResNet34 (82.5%), T-SignSys (58.98%), and UrSL-CNN (49%). These results highlight ASLDetect's accuracy, robustness, and adaptability.
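The preprocessing described above (resizing to 64 × 64 pixels and normalization) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the interpolation method and normalization scheme are not specified in the abstract, so this sketch assumes nearest-neighbor resampling and scaling of pixel values to [0, 1].

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 64) -> np.ndarray:
    """Resize an H x W x C uint8 image to size x size with naive
    nearest-neighbor sampling, then scale pixel values to [0, 1].

    Assumption: the paper's exact resize filter and normalization
    (e.g., per-channel mean/std) are not given in the abstract.
    """
    h, w = image.shape[:2]
    # Map each output row/column to its nearest source row/column.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    # Scale from [0, 255] to [0, 1].
    return resized.astype(np.float32) / 255.0

# Example: a synthetic 128 x 96 RGB frame standing in for a dataset image.
frame = np.random.randint(0, 256, (128, 96, 3), dtype=np.uint8)
out = preprocess(frame)
print(out.shape)  # (64, 64, 3)
```

In practice a library resize (e.g., bilinear interpolation) would typically be used instead of the naive index mapping shown here; the sketch only makes the 64 × 64 target shape and the value scaling explicit.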
