International Journal of Computer Applications |
Foundation of Computer Science (FCS), NY, USA |
Volume 187 - Number 25 |
Year of Publication: 2025 |
Authors: Ravikiran V. |
Ravikiran V. Real-Time Sign Language Recognition and Translation using MediaPipe and LSTM-Based Deep Learning. International Journal of Computer Applications. 187, 25 (Jul 2025), 10-14. DOI=10.5120/ijca2025925415
People with speech and hearing impairments often find it difficult to communicate with others. The Sign Language Gesture Recognition and Translation system presented in this paper is designed to help people with speech impairments interact easily with others. The system recognizes six common sign language gestures (Yes, No, I Love You, Thank You, Hello, and OK) and converts them to clear text messages in real time. The core of the system uses a Long Short-Term Memory (LSTM) neural network together with Python, OpenCV, and MediaPipe to provide accurate, fine-grained hand tracking. Using computer vision, the program reads each video frame from the webcam and detects hand movements quickly and accurately. Because the interface requires only hand movements in front of an ordinary webcam, the technology is inexpensive and easy to use. Through this research, we aim to help the speech and hearing impaired communicate more effectively through simple and practical solutions. Given the encouraging results, the system can be deployed in real-world settings and extended to a larger sign vocabulary.
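The pipeline the abstract describes (webcam frames, MediaPipe hand landmarks, a fixed-length sequence fed to an LSTM) can be sketched as follows. This is a minimal illustration of the data flow only, not the authors' implementation: the class and function names are hypothetical, the 30-frame window and the 126-feature vector (2 hands x 21 landmarks x 3 coordinates, the layout MediaPipe Hands produces) are assumed values, and the tracker and trained LSTM themselves are external to the sketch.

```python
from collections import deque
import numpy as np

SEQUENCE_LENGTH = 30   # frames per gesture window (assumed value)
NUM_FEATURES = 126     # 2 hands x 21 landmarks x (x, y, z)
GESTURES = ["Yes", "No", "I Love You", "Thank You", "Hello", "OK"]

def landmarks_to_vector(left_hand, right_hand):
    """Flatten per-hand landmark lists into one fixed-length feature vector.

    Each hand is a list of 21 (x, y, z) tuples, as produced by a tracker
    such as MediaPipe Hands; a missing hand is zero-filled so the vector
    length stays constant for the LSTM input.
    """
    def flatten(hand):
        if hand is None:
            return np.zeros(21 * 3)
        return np.asarray(hand, dtype=float).flatten()

    return np.concatenate([flatten(left_hand), flatten(right_hand)])

class GestureSequenceBuffer:
    """Sliding window of the most recent frames, shaped for an LSTM."""

    def __init__(self, length=SEQUENCE_LENGTH):
        self.frames = deque(maxlen=length)

    def push(self, feature_vector):
        # Oldest frame is dropped automatically once the window is full.
        self.frames.append(feature_vector)

    def ready(self):
        return len(self.frames) == self.frames.maxlen

    def as_batch(self):
        # Shape (1, SEQUENCE_LENGTH, NUM_FEATURES): one sequence per prediction,
        # matching the (batch, timesteps, features) layout an LSTM expects.
        return np.expand_dims(np.stack(self.frames), axis=0)
```

In a live loop, each webcam frame would be passed through the hand tracker, converted with `landmarks_to_vector`, pushed into the buffer, and, once `ready()`, the batch from `as_batch()` would be fed to the trained LSTM, whose argmax over the six classes selects the gesture label to display as text.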