System Architecture & Design
The system architecture is designed to capture, process, and convert sign language gestures into speech efficiently. It consists of several key components:
- **Data Acquisition Module:** Captures video input through a webcam and preprocesses the frames for gesture detection (see the capture-and-classify sketch after this list).
- **Gesture Recognition Module:** Uses a trained CNN model to classify hand gestures based on extracted image features.
- **Output Generation Module:** Converts recognized gestures into text and feeds that text to a TTS engine for speech output (see the TTS sketch after this list).
- **User Interface Module:** Built with Tkinter, it displays the live video feed, recognized symbols, and word suggestions in real time.
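The following is a minimal sketch of how the Data Acquisition and Gesture Recognition modules could fit together. The specific details are assumptions, not taken from the report: OpenCV for webcam capture, a Keras CNN saved as `cnn_model.h5`, a 128x128 grayscale input, a fixed hand region of interest, and a `LABELS` list mapping class indices to symbols. Adjust these to match the actual model and label set.

```python
# Hypothetical capture-and-classify loop (webcam -> preprocessing -> CNN).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")  # assumed label set
model = load_model("cnn_model.h5")           # assumed model file name

def preprocess(frame, roi=(100, 100, 300, 300), size=(128, 128)):
    """Crop the hand region, convert to grayscale, resize, and normalize."""
    x, y, w, h = roi
    hand = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, size)
    return resized.astype("float32").reshape(1, *size, 1) / 255.0

cap = cv2.VideoCapture(0)  # webcam input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    probs = model.predict(preprocess(frame), verbose=0)[0]
    symbol = LABELS[int(np.argmax(probs))]
    # Overlay the recognized symbol on the live feed.
    cv2.putText(frame, symbol, (30, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("Sign Language to Speech", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```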
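The Output Generation step can be sketched in a similar hedged way. The report only states that recognized text is fed to a TTS engine; the use of pyttsx3 below is an assumption chosen because it works offline.

```python
# Hypothetical TTS step: speak the text assembled from recognized symbols.
import pyttsx3

def speak(text: str) -> None:
    """Convert the recognized text to audible speech (blocking call)."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

speak("HELLO")  # e.g. a word built up from recognized symbols
```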
The project report provides multiple Data Flow Diagrams (DFDs) to illustrate the overall process:
- Level 0 (Context Diagram): Offers a high-level view of system interactions with external entities.
- Level 1 and Level 2 DFDs: Break down internal processes, detailing how input data is processed through various modules and eventually converted into output.
For detailed diagrams, please refer to the associated images in the repository.
This Wiki serves as the central documentation hub for the Sign Language to Speech Conversion project.
For updates, discussions, or inquiries:
- Report issues or request features: GitHub Issues
- Join the discussion: GitHub Discussions
- Contribute to the project: Contribution Guidelines
For any additional questions, please contact the project maintainers through the repository.