MerrikiLingua

Anila Hannah

Bengaluru, Karnataka

The technology that we are proposing is a system that converts sign language to speech, allowing people who are deaf or hard of hearing to communicate more effectively with those who do not understand sign language.

Project status: Under Development

oneAPI, Artificial Intelligence, Cloud

Intel Technologies
oneAPI

Code Samples [1]

Overview / Usage

The technology that we are proposing is a system that converts sign language to speech, allowing people who are deaf or hard of hearing to communicate more effectively with those who do not understand sign language. The system would recognize user-made signs using computer vision algorithms and then produce spoken-language output in real time. This entails building a user-friendly interface that lets users input signs quickly, along with a machine learning model that can reliably recognize a wide variety of sign language gestures. The ultimate objective of this technology is to improve accessibility and communication for people who use sign language as their primary form of communication.
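As a rough illustration of how such a pipeline could fit together, the sketch below is a minimal end-to-end loop, not the project's published code: it captures webcam frames with OpenCV, extracts hand landmarks with MediaPipe, passes them to a placeholder classifier (predict_sign is a hypothetical stand-in for a trained model), and speaks the recognized word with the offline pyttsx3 engine.

import cv2
import mediapipe as mp
import pyttsx3

def predict_sign(landmarks):
    # Hypothetical placeholder: a trained model would map the 21 (x, y, z)
    # hand landmarks to a sign label here.
    return "hello"

engine = pyttsx3.init()                      # offline text-to-speech engine
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)                    # default webcam
last_word = None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV captures frames in BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        word = predict_sign(landmarks)
        if word != last_word:                # speak only when the sign changes
            engine.say(word)
            engine.runAndWait()
            last_word = word
    cv2.imshow("MerrikiLingua", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break

cap.release()
cv2.destroyAllWindows()

In a working system, the placeholder classifier would be replaced by the trained model described under Methodology / Approach.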

Methodology / Approach

The approach has three main stages. First, computer vision algorithms capture the user's hand movements from video as signs are made. Second, a machine learning model classifies these movements, mapping each recognized gesture to its corresponding word; the model must be trained to recognize a wide variety of sign language movements reliably. Third, the recognized output is converted to spoken language in real time. Around these stages, a user-friendly interface lets people input signs quickly and hear the spoken output without delay.
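A minimal sketch of the recognition-model training step follows, under stated assumptions: the landmark features are assumed to have already been exported to a CSV file (landmarks.csv and its column layout are illustrative, not the project's actual dataset format), and a scikit-learn random forest stands in for whatever classifier the project ultimately adopts. The patch_sklearn() call comes from Intel Extension for Scikit-learn, a oneAPI-based library, and is shown as one way the oneAPI dependency listed above could accelerate this step.

# Accelerate stock scikit-learn estimators with the oneAPI-based
# Intel Extension for Scikit-learn (optional; whether the project
# uses it is an assumption).
from sklearnex import patch_sklearn
patch_sklearn()

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed dataset layout: one row per captured sign, flattened landmark
# coordinates as features, and a "label" column with the sign name.
data = pd.read_csv("landmarks.csv")
X = data.drop(columns=["label"]).values
y = data["label"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

A model trained this way would supply the predictions consumed by the real-time speech loop sketched under Overview / Usage.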

Technologies Used

Intel oneAPI
Artificial Intelligence: a machine learning model for classifying sign language gestures
Computer vision for recognizing user-made signs from video
Text-to-speech for real-time spoken-language output
Cloud

Repository

https://github.com/anilaHannah/dataset

Collaborators

3 collaborators
