Blind's Eye
Divy Shah
Ahmedabad, Gujarat
“Blind’s Eye” is an Android application that describes the surrounding environment in plain English. The application is developed for visually impaired people.
Project status: Under Development
Overview / Usage
Blind people face many challenges every day in the crowded world around them, such as walking along a busy street or shopping in a vegetable market; in many such situations they have to depend on sighted colleagues to cross roads or purchase items in supermarkets. This system helps by describing the surrounding environment in plain English: it tells the user what is around them and guides them when crossing roads or buying things in a supermarket.
This application gives visually impaired people a better understanding of their surroundings through simple, plain-English speech output. It makes their lives easier by providing easy-to-understand hints about what is around them.
Methodology / Approach
The workflow of our system is given below.
1 - First, we train the model on 2,000 randomly selected images from the Flickr30k dataset (which contains about 30,000 images in total).
2 - After building the model, we test it on random images to check its accuracy.
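The random subset selection in step 1 can be sketched as follows; the image file names here are placeholders, not the actual Flickr30k IDs:

```python
import random

# Illustrative: pick 2,000 of Flickr30k's ~30,000 images for training.
# File names are placeholders standing in for the real dataset files.
all_images = [f"{i}.jpg" for i in range(30000)]
random.seed(42)  # fixed seed so the subset is reproducible
train_images = random.sample(all_images, 2000)  # no duplicates
```

`random.sample` draws without replacement, so each training image appears only once in the subset.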
At runtime, the application captures images of the outside world; these images serve as input to our auto image captioning model, which generates a caption for each one. The generated caption is then passed to the text-to-speech library, which produces the audio output that allows blind users to understand their surrounding environment.
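The runtime flow above can be sketched as a two-stage pipeline. This is a minimal sketch only: the captioning model and the speech step are stubbed with placeholders, since the project's trained model is not shown here.

```python
# Minimal sketch of the image -> caption -> speech pipeline.
# Both stages are placeholders for the real components.

def generate_caption(image_path: str) -> str:
    """Stand-in for the CNN auto image captioning model's prediction."""
    return "a person is walking down a crowded street"

def text_to_speech(caption: str) -> bytes:
    """Stand-in for the text-to-speech step (gTTS in this project)."""
    return caption.encode("utf-8")  # placeholder for real audio bytes

def describe_surroundings(image_path: str) -> bytes:
    caption = generate_caption(image_path)  # stage 1: image -> caption
    return text_to_speech(caption)          # stage 2: caption -> audio
```

The key design point is that the two stages are decoupled: the captioning model can be retrained or swapped without touching the speech output code.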
Technologies Used
Algorithm - CNN-based deep learning auto image captioning
Dataset - Flickr30k
Library - gTTS (Google Text-to-Speech)
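A typical gTTS call for the speech-output step looks like the sketch below. The function name and output path are illustrative; the call is guarded because gTTS needs the `gtts` package installed and network access to Google's TTS service.

```python
def caption_to_mp3(caption: str, path: str = "caption.mp3"):
    """Convert a generated caption to an English MP3 file via gTTS."""
    try:
        from gtts import gTTS
        gTTS(text=caption, lang="en").save(path)  # writes the MP3 to disk
        return path
    except Exception:
        # gtts not installed, or no network: nothing is written
        return None
```

On an Android device the saved MP3 would then be handed to the platform's media player for playback.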