Analysis and reconstruction of music perceived by the brain using EEG signals


The project aims to study and analyze how our brain perceives the music we are listening to, and to extract meaningful information from the brain signals (EEG). The music will also be reconstructed from these EEG signals using machine learning models.

Project status: Under Development

Artificial Intelligence

Groups
Student Developers for AI, DeepLearning, Artificial Intelligence India

Intel Technologies
Intel Opt ML/DL Framework, Intel Python, Movidius NCS


Overview / Usage

The project aims to study and analyze how our brain perceives music. Music influences the human mind in many ways: people listen to music to calm down, and it is used to set the tone of different scenarios, for example suspenseful music in films. But how does our brain actually perceive this music? That is the big question.

Brain signals can be recorded using techniques like EEG, fMRI, etc. These signals can then be used to extract meaningful information about what is going on in the brain as we perceive our environment. In this project, I aim to analyze these EEG signals and try to reconstruct the music a subject is listening to from them. EEG is chosen over other techniques like fMRI because audio data has rich temporal structure and EEG offers high temporal resolution, whereas fMRI and other brain imaging techniques mainly capture spatial information, so EEG is the best choice here.
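As a rough illustration of what extracting information from the continuous recording looks like in practice, the sketch below cuts a multi-channel EEG recording into epochs around stimulus-onset markers. The sampling rate, marker times, and array shapes are assumptions for illustration only, not the actual experimental setup:

import numpy as np

# Assumed recording parameters (illustrative only)
fs = 500                                           # EEG sampling rate in Hz
n_channels = 32                                    # 32-channel EEG system
eeg = np.random.randn(n_channels, 5 * 60 * fs)     # placeholder for a 5-minute recording

# Hypothetical stimulus-onset times (in seconds) for the music segments
onsets_s = [10.0, 70.0, 130.0]
epoch_len_s = 30.0                                 # length of each music segment

# Cut the continuous recording into (epoch, channel, sample) windows
epochs = np.stack([
    eeg[:, int(t * fs): int((t + epoch_len_s) * fs)]
    for t in onsets_s
])
print(epochs.shape)                                # (3, 32, 15000)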

This work has many potential applications and would be a step towards a better understanding of the brain. Possible applications are:

  1. Studying how the mind perceives stimuli in a certain mood or while experiencing a certain emotion, such as anger or happiness.
  2. Detecting emotions by analyzing the reconstructed signal.
  3. Assessing perception and memory, for example in people with neurological conditions such as dementia, ADHD, or autism.
  4. Supporting the detection of such conditions using data from these subjects, and aiding prevention through early detection.
  5. Extending the approach to speech data, to analyze how the brain perceives language.

Methodology / Approach

Data collection will be done using a 32-channel EEG system. The subject will listen to a short piece of music (without lyrics), followed by silence, and then another short piece of music, with EEG recorded throughout the whole process. To minimize noise, the subject will keep their eyes closed, which avoids eye-blink artifacts; other noise will be removed with various pre-processing techniques.
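As one concrete pre-processing example, a common first pass is to band-pass filter each channel and reject windows whose amplitude exceeds a threshold (a simple heuristic for residual blink or movement artifacts). The band edges, threshold, and sampling rate below are assumptions, not the final pipeline parameters:

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, low=1.0, high=40.0, order=4):
    # Zero-phase band-pass filter applied independently to each EEG channel
    b, a = butter(order, [low, high], btype='band', fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

def reject_noisy_epochs(epochs, threshold_uv=100.0):
    # Drop epochs whose peak absolute amplitude exceeds the threshold (microvolts)
    peak = np.abs(epochs).max(axis=(1, 2))
    return epochs[peak < threshold_uv]

fs = 500                                        # assumed sampling rate
epochs = np.random.randn(3, 32, 30 * fs)        # placeholder (epoch, channel, sample) data
clean = reject_noisy_epochs(bandpass(epochs, fs))
print(clean.shape)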

Previous research has shown a high correlation between audio signals and the recorded EEG (https://arxiv.org/pdf/1712.08336.pdf), which indicates that the brain's response tracks the audio. As evidence that EEG signals can be used to understand and reconstruct a stimulus, an experiment has been done reconstructing the images seen by a subject (http://crcv.ucf.edu/papers/camera_ready_acmmm_BNI08.pdf) with highly satisfactory results. Combining these findings, music reconstruction from EEG should be feasible.
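A simple way to check this audio-EEG relationship on our own recordings is to correlate the audio amplitude envelope with a pre-processed EEG channel after bringing both to a common sampling rate. The sketch below is only an illustration of that check; the sampling rates, durations, and channel choice are assumptions:

import numpy as np
from scipy.signal import hilbert, resample

fs_audio, fs_eeg = 44100, 500
duration_s = 30
audio = np.random.randn(fs_audio * duration_s)        # placeholder audio waveform
eeg_channel = np.random.randn(fs_eeg * duration_s)    # placeholder pre-processed EEG channel

# Amplitude envelope of the audio, resampled to the EEG sampling rate
envelope = np.abs(hilbert(audio))
envelope = resample(envelope, fs_eeg * duration_s)

def xcorr(a, b, max_lag):
    # Correlation coefficient over a range of lags (circular shift used for simplicity)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.corrcoef(a, np.roll(b, lag))[0, 1] for lag in lags])

lags, corr = xcorr(envelope, eeg_channel, max_lag=fs_eeg // 2)   # lags up to +/- 0.5 s
print(lags[np.argmax(np.abs(corr))], corr.max())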

Machine learning will be used for the reconstruction: generative models such as GANs and variational autoencoders, combined with recurrent models such as RNNs and LSTMs, which give excellent results on temporal data.
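As a starting point, the sketch below is a simplified recurrent baseline rather than the final GAN/VAE architecture: an LSTM model that maps a window of EEG samples to the corresponding audio feature frames (for example, mel-spectrogram bins), trained as a regression problem. The layer sizes, window length, and feature dimensions are placeholders:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_timesteps = 250      # EEG samples per training window (assumed)
n_channels = 32        # EEG channels
n_audio_feats = 128    # audio feature bins per frame, e.g. mel-spectrogram (assumed)

# LSTM regressor: EEG window -> aligned audio feature frames
model = tf.keras.Sequential([
    layers.LSTM(256, return_sequences=True, input_shape=(n_timesteps, n_channels)),
    layers.LSTM(128, return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_audio_feats)),
])
model.compile(optimizer='adam', loss='mse')

# Placeholder training data: aligned (EEG window, audio feature frames) pairs
x = np.random.randn(64, n_timesteps, n_channels).astype('float32')
y = np.random.randn(64, n_timesteps, n_audio_feats).astype('float32')
model.fit(x, y, epochs=2, batch_size=8)

A generative model (GAN or variational autoencoder) can later replace or wrap this regressor once the basic EEG-to-audio mapping works.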

Technologies Used

Intel Optimized Python and Intel Optimized TensorFlow will be used for development. The final prototype will be deployed on an Intel Movidius Neural Compute Stick to make it portable and affordable.

Repository

https://github.com/alishdipani/Music-Reconstruction-using-EEG-signals
