VISION-BASED SIGN LANGUAGE INTERPRETATION USING DEEP LEARNING
Rishu Khadka
Abstract

This paper presents an intelligent vision-based sign language interpretation system powered by deep learning and computer vision techniques. The proposed system uses Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to recognize and interpret sign language gestures in real time from video input. The primary objective is to bridge the communication gap between the deaf and hearing communities by providing an accurate, efficient, and accessible translation system. The system processes hand gestures, facial expressions, and body movements to interpret American Sign Language (ASL) and Indian Sign Language (ISL) with high accuracy. This research addresses challenges in gesture recognition, real-time processing, and contextual interpretation, demonstrating significant improvements over existing approaches: 94.7% recognition accuracy and sub-100 ms latency for real-time interpretation.

Keywords: Sign Language Recognition, Computer Vision, Deep Learning, CNN, LSTM, Gesture Recognition, Human-Computer Interaction
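To make the CNN + RNN pipeline named in the abstract concrete, the following is a minimal sketch of such an architecture, assuming a TensorFlow/Keras implementation: a per-frame CNN feature extractor applied across a fixed-length clip, feeding an LSTM (the recurrent variant listed in the keywords) that classifies the whole gesture. The layer sizes, the 16-frame 64x64 clip shape, and the 26-class output are illustrative assumptions, not the paper's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, H, W, C = 16, 64, 64, 3   # assumed clip shape: 16 RGB frames of 64x64
NUM_CLASSES = 26                  # assumed label set (e.g., a static ASL alphabet)

# Per-frame CNN feature extractor.
frame_in = layers.Input(shape=(H, W, C))
x = layers.Conv2D(32, 3, activation="relu")(frame_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
frame_cnn = models.Model(frame_in, x, name="frame_cnn")

# Apply the CNN to every frame, then model the gesture's temporal
# dynamics with an LSTM and classify the whole clip.
clip_in = layers.Input(shape=(FRAMES, H, W, C))
feats = layers.TimeDistributed(frame_cnn)(clip_in)
seq = layers.LSTM(128)(feats)
out = layers.Dense(NUM_CLASSES, activation="softmax")(seq)

model = models.Model(clip_in, out, name="cnn_lstm_sign_classifier")
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In this arrangement the CNN handles the spatial structure of each frame (hand shape, position) while the LSTM captures the motion across frames, which is the division of labor the abstract attributes to the proposed system.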

