Sign Language To Text and Speech Converter & Vice Versa
The Sign Language to Text and Speech Converter and Vice Versa uses the MediaPipe library for real-time gesture recognition, together with oneAPI libraries such as oneDNN and OpenVINO to optimize the system's performance. This enables accurate and fast translation of sign language gestures into text and speech.
Project status: Under Development
Groups
Student Developers for oneAPI
Intel Technologies
oneAPI
Overview / Usage
The Sign Language to Text and Speech & Vice Versa project is built on Long Short-Term Memory (LSTM) networks and MediaPipe. The goal of the project is to develop a system that recognizes sign language gestures and converts them into text and speech, and vice versa. Such a system could be a valuable tool for individuals who are deaf or hard of hearing, as well as for those learning sign language.
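The pipeline pairs MediaPipe's hand-landmark detection with an LSTM sequence classifier. A minimal sketch of the preprocessing step, assuming MediaPipe Hands' 21 (x, y, z) landmarks per frame and a hypothetical 30-frame gesture window (the window length and function names are illustrative, not taken from the repository):

```python
NUM_LANDMARKS = 21  # MediaPipe Hands outputs 21 (x, y, z) landmarks per hand
SEQ_LEN = 30        # hypothetical number of frames per gesture clip

def landmarks_to_feature(landmarks):
    """Flatten one frame's 21 (x, y, z) landmarks into a 63-value vector."""
    return [coord for point in landmarks for coord in point]

def frames_to_sequence(frames, seq_len=SEQ_LEN):
    """Pad or truncate per-frame features to a fixed (seq_len, 63) sequence,
    the (timesteps, features) shape an LSTM layer expects as input."""
    feats = [landmarks_to_feature(f) for f in frames[:seq_len]]
    pad = [[0.0] * (NUM_LANDMARKS * 3)] * (seq_len - len(feats))
    return feats + pad
```

Each padded sequence would then be fed to the LSTM, which classifies the gesture; the predicted label is rendered as text and passed to a text-to-speech engine.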
Methodology / Approach
To build the Sign Language to Text and Speech & Vice Versa project, the Intel oneAPI Base Toolkit can be used. This toolkit includes the Intel Distribution of OpenVINO, which optimizes the performance of MediaPipe models on Intel hardware, while the Intel oneAPI Deep Neural Network Library (oneDNN) accelerates the LSTM model's training and inference.
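As a sketch of how those two optimizations might be wired in (the script and model names are hypothetical, and the exact converter command depends on the OpenVINO release):

```shell
# oneDNN-optimized kernels inside TensorFlow for LSTM training
# (oneDNN optimizations are enabled by default in recent TensorFlow releases).
export TF_ENABLE_ONEDNN_OPTS=1
python train_lstm.py            # hypothetical training script

# Convert the trained TensorFlow SavedModel to OpenVINO IR for fast inference.
# Older OpenVINO releases use the `mo` (Model Optimizer) command instead.
ovc saved_model --output_model sign_lstm
```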
Furthermore, the project could use the Intel DevCloud for oneAPI to develop and test the system. The DevCloud provides access to Intel hardware, such as CPUs and GPUs, which can be used to accelerate the system's performance.
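On the DevCloud, workloads are typically submitted to compute nodes through the PBS `qsub` command; a hedged example, where the node properties and script name are assumptions rather than the project's actual configuration:

```shell
# Submit a hypothetical training script to a DevCloud compute node.
# `-l nodes=1:gpu:ppn=2` requests one GPU-equipped node; available node
# properties vary by queue, and `-d .` runs the job from the current directory.
qsub -l nodes=1:gpu:ppn=2 -d . run_training.sh

# Check the status of submitted jobs.
qstat
```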
Overall, by using oneAPI tools and the DevCloud for development and testing, the Sign Language to Text and Speech & Vice Versa project can improve its performance and accuracy, making it a valuable tool for individuals who are deaf or hard of hearing.
Technologies Used
Python
oneDNN
MediaPipe
TensorFlow
oneVPL
OpenVINO
Documents and Presentations
Repository
https://github.com/MayurdhvajsinhJadeja/Sign-to-Text-Converter