ATHUL A V
Bengaluru, Karnataka
Project status: Under Development
oneAPI, Artificial Intelligence, Cloud
Intel Technologies
oneAPI
The technology that we are proposing is a system that converts sign language to speech, allowing those who are deaf or hard of hearing to communicate more effectively with those who do not understand sign language. The system would recognize and interpret the user's signs using computer vision algorithms and then produce spoken-language output in real time. This entails building a user-friendly interface that lets people input signs quickly, as well as a machine learning model that can reliably recognize a variety of sign language gestures. The ultimate objective of this technology is to enhance accessibility and communication for those who use sign language as their primary form of communication.
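The project description does not name specific libraries, so the following is only a minimal sketch of how such a sign-to-speech loop could be wired together, assuming OpenCV for webcam capture, MediaPipe Hands for landmark extraction, a hypothetical classify_landmarks stub standing in for the trained gesture model, and pyttsx3 for offline text-to-speech. These are illustrative assumptions, not the project's confirmed stack.

```python
# Minimal sketch of a sign-to-speech loop (illustrative assumptions only):
# OpenCV captures frames, MediaPipe Hands extracts landmarks, a stub
# classifier stands in for the project's trained model, pyttsx3 speaks.
import cv2
import mediapipe as mp
import pyttsx3

mp_hands = mp.solutions.hands


def classify_landmarks(hand_landmarks):
    """Hypothetical placeholder for the trained gesture classifier.

    A real implementation would feed the 21 (x, y, z) landmark coordinates
    into a model trained on a sign-language dataset and return a word.
    """
    return None  # no prediction in this stub


def main():
    engine = pyttsx3.init()      # offline text-to-speech engine
    cap = cv2.VideoCapture(0)    # default webcam

    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break

            # MediaPipe expects RGB; OpenCV delivers BGR frames.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            results = hands.process(rgb)

            if results.multi_hand_landmarks:
                word = classify_landmarks(results.multi_hand_landmarks[0])
                if word:
                    engine.say(word)   # speak the recognized sign
                    engine.runAndWait()

            cv2.imshow("Sign to Speech (sketch)", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

In practice the stub classifier would be replaced by a model trained on a sign-language dataset (such as the one linked below), and the per-frame prediction would typically be smoothed over several frames before speaking, to avoid repeating words while a sign is held.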
https://github.com/anilaHannah/dataset