Abstract :
Signalyze is a sign language recognition system designed to bridge the communication gap between deaf and mute individuals and the wider community. Using computer vision, machine learning, and natural language processing, Signalyze interprets sign language gestures and translates them into readable text and audible speech. The system comprises modules for video processing, hand detection, gesture classification, emotion detection, and contextual understanding, with BERT (Bidirectional Encoder Representations from Transformers) integrated for nuanced language comprehension. Processing begins when the user uploads a video of sign language gestures. The system extracts frames and detects hand movements using state-of-the-art computer vision techniques, and the detected gestures are then classified into corresponding sign language words or phrases. In parallel, facial expressions are analyzed to detect emotions, adding a further layer of context. BERT then interprets the sequence of gestures and emotions so that the translation accurately reflects the intended meaning. The output is presented in both text and speech form, providing a comprehensive and user-friendly communication tool. Signalyze aims to enhance accessibility and inclusivity by empowering deaf and mute individuals to communicate more effectively in settings such as education, healthcare, and social interactions. The project demonstrates the practical application of AI and NLP to a real-world problem and underscores the importance of technological innovation in promoting equality and inclusivity in communication.
Index Terms: OpenCV, TensorFlow, LabelImg, Gesture recognition.
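The frame-by-frame classification step described in the abstract yields one gesture label per processed frame; producing a word sequence then requires collapsing consecutive identical labels and discarding short runs caused by hand-transition noise. The abstract does not specify this post-processing, so the following is only a minimal sketch of one plausible approach; the function name, the `min_run` threshold, and the example labels are illustrative assumptions, not part of the published system.

```python
def collapse_predictions(frame_labels, min_run=3):
    """Collapse per-frame gesture labels into a word sequence.

    Consecutive identical labels are treated as one held sign; runs
    shorter than min_run frames (likely transition noise between two
    signs) are dropped. Thresholds here are illustrative assumptions.
    """
    words = []
    run_label, run_len = None, 0
    for label in list(frame_labels) + [None]:  # sentinel flushes the last run
        if label == run_label:
            run_len += 1
        else:
            if run_label is not None and run_len >= min_run:
                words.append(run_label)
            run_label, run_len = label, 1
    return words
```

The resulting word list would then be handed to the BERT-based contextual module, which can reorder or disambiguate the literal gesture sequence before the text-to-speech stage.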