Mind-Approx


This project intends to extract the driver's facial features, body data (heart rate and respiration rate), and eye movements from a camera in order to measure driver attention while driving, to learn where the driver focuses in particularly crowded situations, and to learn what types of movement grab that attention. I will then use this model to train a reinforcement learning agent to behave similarly in those situations.

Project status: Under Development

Internet of Things, Artificial Intelligence

Groups
Student Developers for AI, DeepLearning, Movidius™ Neural Compute Group, Artificial Intelligence India

Intel Technologies
AI DevCloud / Xeon

Overview / Usage

I aim to predict the driver's attention region and the movements and shapes that drivers respond to most quickly. The goal of this project is to estimate whether a person is paying attention while driving and to gather data that can help in the development of self-driving cars.

This model will support the development of self-driving cars by providing a driver-monitoring system. Self-driving cars can learn a great deal from the state of the driver and can eventually learn to take quick action in emergencies, absorbing not only the physical aspects of driving but the moral aspects as well. The system will be integrated into cars to detect anomalies in a driver's behavior under similar conditions and alert the driver. The architecture can also find use in many applications where gauging a person's attentiveness, and what causes it, is necessary for better operation.

Methodology / Approach

I propose a computer vision model based on a multi-branch deep architecture that integrates three sources of information: facial features, heart rate, and respiration rate. The model will classify whether the driver is paying attention to the intricacies of driving. A driver may be looking at the road without actually paying attention to the surroundings, and this can cause an accident. To tackle this problem, the model takes the driver's heart rate and respiration rate into account to predict whether the driver is under stress while driving or is slipping into a state of microsleep. The system will extract all of these features without any special medical equipment, using only the camera. After extraction, each feature stream (facial features, heart rate, respiration rate) is fed into its own LSTM network, and the outputs of these networks are blended to produce a classification of the driver's attention.
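As a rough illustration of the fusion step, here is a minimal sketch in PyTorch (the framework is an assumption; the project names only Intel-optimized libraries). Each feature sequence runs through its own LSTM, and the final hidden states are blended by a small classifier head. All class names, layer sizes, and dimensions are illustrative placeholders.

```python
import torch
import torch.nn as nn

class AttentionFusionNet(nn.Module):
    """Three-branch LSTM fusion sketch: each modality sequence
    (facial features, heart rate, respiration rate) gets its own
    LSTM, and the final hidden states are blended by a classifier
    head into an attentive/inattentive prediction."""

    def __init__(self, face_dim=128, hr_dim=1, resp_dim=1, hidden=64):
        super().__init__()
        self.face_lstm = nn.LSTM(face_dim, hidden, batch_first=True)
        self.hr_lstm = nn.LSTM(hr_dim, hidden, batch_first=True)
        self.resp_lstm = nn.LSTM(resp_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # attentive vs. inattentive
        )

    def forward(self, face_seq, hr_seq, resp_seq):
        # Summarize each branch by its last hidden state.
        _, (h_face, _) = self.face_lstm(face_seq)
        _, (h_hr, _) = self.hr_lstm(hr_seq)
        _, (h_resp, _) = self.resp_lstm(resp_seq)
        fused = torch.cat([h_face[-1], h_hr[-1], h_resp[-1]], dim=1)
        return self.head(fused)  # logits over attention states

# Example: a batch of 4 clips, 30 time steps per modality.
model = AttentionFusionNet()
logits = model(torch.randn(4, 30, 128),  # facial-feature sequences
               torch.randn(4, 30, 1),    # heart-rate sequences
               torch.randn(4, 30, 1))    # respiration sequences
```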

I will also use a Siamese network to map the broad road scene to the attention span and region of focus that the driver typically has in that kind of environment.
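A minimal sketch of that idea, assuming a standard contrastive setup: two road scenes pass through a shared CNN encoder, and scenes from similar driving environments are pulled close in embedding space, so the attention pattern observed in one scene can be associated with similar scenes. The encoder layers, sizes, and loss margin are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneSiamese(nn.Module):
    """Siamese encoder sketch for road scenes: both inputs share
    one CNN, and the pairwise distance between their embeddings
    measures how similar the two driving environments are."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, scene_a, scene_b):
        za, zb = self.encoder(scene_a), self.encoder(scene_b)
        return F.pairwise_distance(za, zb)

def contrastive_loss(distance, same_env, margin=1.0):
    # same_env = 1.0 for scenes of the same environment type, else 0.0:
    # similar pairs are pulled together, dissimilar pairs pushed apart.
    return (same_env * distance.pow(2)
            + (1 - same_env) * F.relu(margin - distance).pow(2)).mean()
```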

The full model for learning to drive will again be a multi-branch model that classifies different driving environments from visual input together with the driver's attention. One branch of this model will be the attention-prediction model; the other will be the environment-classification model.
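Continuing the same hypothetical PyTorch sketches, one way to wire the two branches together is a thin wrapper that returns both the attention logits and the environment logits. Everything here (the class name, the environment-classifier layers, the number of environment classes) is a placeholder.

```python
import torch
import torch.nn as nn

class DriverContextNet(nn.Module):
    """Two-branch wrapper sketch: one branch predicts driver
    attention (e.g. the fusion network above), the other classifies
    the driving environment from the road-facing camera frame."""

    def __init__(self, attention_branch, num_envs=5):
        super().__init__()
        self.attention_branch = attention_branch
        self.env_branch = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_envs),
        )

    def forward(self, driver_inputs, road_frame):
        # driver_inputs: (face_seq, hr_seq, resp_seq) for the fusion net.
        attn_logits = self.attention_branch(*driver_inputs)
        env_logits = self.env_branch(road_frame)
        return attn_logits, env_logits
```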

This trained model will then serve as a base for developing a reinforcement learning agent that mimics how humans drive a car, or any other vehicle, on that type of road. The agent will be able to learn the patterns that help humans make moral and logical decisions while driving. The best part is that the agent can rule out the cases in which a driver is most prone to making rash decisions and causing an accident. Input of this kind should make self-driving cars, and other autonomous agents, much more human-like, and perhaps even better.

The RL agent will be based on an actor-critic model that tries to mimic human concentration, shape-recognition, and situation-gauging abilities, combining them with visual inputs that are less prone to leading to accidents. It is something like transfer learning from the human brain to a smart autonomous agent.
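For reference, here is a bare-bones actor-critic skeleton (again assuming PyTorch; the state vector, action space, and sizes are placeholders). A shared trunk feeds a policy head that outputs action logits and a value head whose one-step advantage weights the policy update.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Actor-critic sketch: the state would combine the visual,
    attention, and environment features above; the action space
    (e.g. discrete steering/braking choices) is a placeholder."""

    def __init__(self, state_dim=128, n_actions=5, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)  # policy logits
        self.critic = nn.Linear(hidden, 1)         # state-value estimate

    def forward(self, state):
        h = self.trunk(state)
        return self.actor(h), self.critic(h)

def actor_critic_loss(logits, value, action, reward, next_value, gamma=0.99):
    # Single unbatched transition, kept minimal for clarity.
    # One-step advantage: how much better the outcome was than expected.
    advantage = reward + gamma * next_value - value
    log_prob = torch.log_softmax(logits, dim=-1)[action]
    policy_loss = -log_prob * advantage.detach()  # reinforce good actions
    value_loss = advantage.pow(2)                 # improve the critic
    return policy_loss + value_loss
```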

This project can also find broad application in brain-computer interfaces; the only difference would be using an EEG machine instead of a camera.

Technologies Used

Intel-optimized libraries, Intel DevCloud
