## Features & Objectives

### Features

- **Real-Time Gesture Recognition:** Leverages OpenCV and cvzone for hand detection and tracking with minimal latency.
- **Deep Learning-Based Classification:** Uses a CNN (pretrained model: `trainedModel.h5`) for accurate classification of sign language gestures.
- **Dual Output:** Provides immediate text display and synthesized speech via the pyttsx3 text-to-speech engine.
- **User-Friendly Interface:** An intuitive GUI built with Tkinter displays the live camera feed, recognized characters, and word suggestions.
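A real-time loop like the one described above has to cope with noisy per-frame CNN predictions. One common approach (a sketch, not necessarily what this repository implements) is to accept a character only after the classifier has agreed on it for several consecutive frames. The `GestureStabilizer` class and its `hold_frames` parameter below are illustrative names, not identifiers from the project:

```python
class GestureStabilizer:
    """Accept a predicted character only after it has been the top
    prediction for `hold_frames` consecutive camera frames.

    Hypothetical helper for illustration; the repository's actual
    smoothing logic (if any) may differ.
    """

    def __init__(self, hold_frames=10):
        self.hold_frames = hold_frames
        self.current = None       # label currently being held
        self.count = 0            # consecutive frames it has been seen
        self.last_emitted = None  # last character actually output

    def update(self, label):
        """Feed one per-frame prediction; return a confirmed character,
        or None if no new character is confirmed on this frame."""
        if label == self.current:
            self.count += 1
        else:
            self.current = label
            self.count = 1
        if self.count >= self.hold_frames and label != self.last_emitted:
            self.last_emitted = label
            return label
        return None
```

In the main loop, each frame's CNN prediction would be passed to `update()`; a non-`None` return is appended to the displayed text and could then be spoken via pyttsx3 (`engine.say(...)` followed by `engine.runAndWait()`).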
### Objectives

- **Accurate Conversion:** Develop a system that reliably translates sign language gestures into text and audible speech.
- **Enhanced Accessibility:** Empower individuals with hearing and speech impairments by enabling real-time communication without the need for human interpreters.
- **Modular and Scalable Design:** Ensure the system is maintainable, with clear modules for gesture recognition, text processing, and UI management.
- **Performance Optimization:** Achieve high accuracy and responsiveness under diverse lighting and environmental conditions.
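The text-processing module could back the word suggestions shown in the GUI with simple prefix matching against a dictionary. A minimal sketch in plain Python, where the `suggest` function and the sample word list are hypothetical, not taken from the repository:

```python
def suggest(prefix, vocabulary, limit=3):
    """Return up to `limit` vocabulary words that start with `prefix`
    (case-insensitive), shortest matches first."""
    p = prefix.lower()
    matches = [w for w in vocabulary if w.lower().startswith(p)]
    return sorted(matches, key=len)[:limit]

# Illustrative word list; a real system would load a larger dictionary.
WORDS = ["hello", "help", "helmet", "hero", "world"]
```

As recognized characters accumulate into a partial word, the GUI would call `suggest()` on the current prefix and display the results alongside the live camera feed.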
This Wiki serves as the central documentation hub for the Sign Language to Speech Conversion project.
For updates, discussions, or inquiries:
- Report issues or request features: GitHub Issues
- Join the discussion: GitHub Discussions
- Contribute to the project: Contribution Guidelines
For any additional questions, please contact the project maintainers through the repository.