Authors: Kanika Rana, Khushseerat Kaur, Harpuneet Singh, Manpreet Singh, Shashank, Jassandeep Singh
Date of Publication: 11th November 2024
Abstract: Sign language recognition from hand gestures, particularly fingerspelling, is a crucial means of enabling people who are deaf or hard of hearing to interact with the hearing community. This paper develops a vision-based American Sign Language (ASL) fingerspelling recognition system that converts signs into their corresponding text using convolutional neural networks (CNNs). The model identifies 27 symbols, the 26 letters of the ASL alphabet plus a blank symbol, with an accuracy of 98%. The system's pipeline comprises dataset creation to capture customized gestures, conversion of captured frames to grayscale, smoothing with a Gaussian Blur filter, and a two-tiered classification algorithm that separates visually similar gestures. The real-time model is built with Python, TensorFlow, Keras, and OpenCV as machine learning frameworks. An autocorrect feature integrated into the software corrects the recognized text, making the overall communication process smoother. While the system shows promising results against light-coloured backgrounds under moderate lighting, it still falls short under varying environmental conditions. Future enhancements include background subtraction and improved pre-processing for complex environments. The model can be deployed as an affordable and easily accessible tool for deaf and hard-of-hearing people, significantly assisting their real-time communication with hearing people.
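Illustrative sketch (not from the paper): a minimal Python/OpenCV pre-processing routine of the kind the abstract describes, converting a captured frame to grayscale and applying a Gaussian Blur before classification. The (5, 5) kernel size and 128x128 input size are assumptions for illustration, not values reported by the authors.

    import cv2

    def preprocess_frame(frame):
        # Convert the captured BGR frame to grayscale, as described in the pipeline.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Smooth with a Gaussian Blur filter; the (5, 5) kernel is an assumed value.
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        # Resize to the CNN input size; 128x128 is an assumption, not from the paper.
        return cv2.resize(blurred, (128, 128))

    # Usage: read one webcam frame and pre-process it in real time.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        processed = preprocess_frame(frame)
    cap.release()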