Emotion Recognition Model


Visually impaired people lead a tough life, and the inability to see is itself the least of their problems. There are many experiences a visually impaired person never gets to witness: the colors, the imagery, the faces of loved ones, and more. Many visually impaired people go through the course of their lives without ever seeing a smile or the happiness in the eyes of those they love. Encountering the smiling face of a loved one brings out an inner joy in every being's soul; the warmth of a smile and the glee of happiness can only truly be felt when shared with a loved one. In an effort to help visually impaired people feel this warmth and experience a more fulfilled version of life, a template emotion detection system is proposed.

Project status: Under Development

RealSense™, Artificial Intelligence

Intel Technologies
OpenVINO, BigDL

Overview / Usage

A method for communication based on emotion recognition is presented; emotion recognition has been the main theme of several research efforts in the past. As an emotion detection dataset is not readily available, the IADS dataset is used, which consists of 60 audio stimuli eliciting three categories of emotion: neutral, pleasant, and unpleasant.

Our greatest inspiration for this idea is that seeing a smile on a loved one's face is how one experiences genuine happiness. The originality of the work lies in its novel model. It helps communicate feelings of happiness to visually challenged people, for whom this is otherwise difficult because they lack sight. The steps of emotion detection, classification, and processing detect pleasant valence values in the surroundings and communicate them to the visually challenged, thereby transferring those emotions. The result is a system that bridges the gap for the differently-abled: one that 'sends' pleasant emotions so that they are not deprived of 'happy' emotions in spite of being unsighted.
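As a rough illustration of the detect-classify-communicate flow described above, here is a minimal Python sketch. All names, thresholds, and return values are hypothetical placeholders, not the project's actual implementation.

```python
# Hypothetical sketch of the three-stage flow; the numbers returned by
# detect() are dummies standing in for a real EEG-based estimate.

def detect(eeg_window):
    # Placeholder: a real detector would estimate these from EEG features.
    return {"valence": 0.62, "arousal": 0.41}

def classify(state):
    # Positive valence is treated as pleasant, negative as unpleasant.
    if state["valence"] > 0.2:
        return "pleasant"
    if state["valence"] < -0.2:
        return "unpleasant"
    return "neutral"

def communicate(category):
    # Stand-in for the channel (e.g. an audio cue) that relays the emotion.
    print(f"Detected a {category} emotion nearby.")

communicate(classify(detect(eeg_window=None)))  # -> pleasant
```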

Methodology / Approach

Following are the algorithmic steps:
A. Emotion Recognition
Emotions are the essence of human beings and play a huge role in our lives. With the advent of AI, automatic emotion recognition has become the need of the hour, which serves as our motivation to progress in this direction. By standard methods, emotions are inferred from text, speech, gesture, or facial expression. But since all of these are under the control of our Central Nervous System (CNS), they can be deliberately controlled by humans, leading to misinterpretation of those same emotions. Our work addresses this issue by focusing on the 'core' emotions, which can be read from electroencephalography (EEG) signals. Among the small pool of candidate models, the Arousal-Valence dimensional model proves to be the most effective in our case. EEG signals are picked up from the scalp and assessed in response to stimuli from the IADS dataset.
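To make this concrete, below is a minimal sketch of how arousal and valence might be estimated from EEG band powers. It assumes two frontal channels and uses two common proxies (frontal alpha asymmetry for valence, the beta/alpha power ratio for arousal); these proxies, the sampling rate, and all function names are illustrative assumptions, not necessarily the exact features used in this work.

```python
import numpy as np
from scipy.signal import welch

FS = 128  # sampling rate in Hz (assumed; depends on the EEG headset)

def band_power(signal, fs, lo, hi):
    # Power of `signal` in the [lo, hi] Hz band, via Welch's PSD estimate.
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

def arousal_valence(left_frontal, right_frontal, fs=FS):
    # Valence proxy: frontal alpha (8-13 Hz) asymmetry, right minus left.
    alpha_l = band_power(left_frontal, fs, 8, 13)
    alpha_r = band_power(right_frontal, fs, 8, 13)
    valence = np.log(alpha_r + 1e-12) - np.log(alpha_l + 1e-12)
    # Arousal proxy: beta (13-30 Hz) to alpha power ratio over both channels.
    beta = band_power(left_frontal, fs, 13, 30) + band_power(right_frontal, fs, 13, 30)
    arousal = beta / (alpha_l + alpha_r + 1e-12)
    return arousal, valence

# Synthetic noise stands in for a 4-second, two-channel EEG recording.
rng = np.random.default_rng(0)
a, v = arousal_valence(rng.standard_normal(FS * 4), rng.standard_normal(FS * 4))
print(f"arousal={a:.3f}, valence={v:.3f}")
```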

B. Emotion Classification
Emotion classification can be viewed from two different perspectives: dimensional and discrete. Discrete emotions are the basic emotions such as anger, sadness, happiness, pleasure, and anticipation; all complex emotions can then be formed as a mixture of these basic emotions. For example, grief is a mixture of anger and sadness. This theory suggests that any complex emotion can be expressed as a combination of six basic emotions [16]: happiness, sadness, anger, fear, disgust, and surprise.
Emotion classification is also done using the well-known Arousal-Valence model [Fig. 1] proposed by James Russell. In a dimensional model, emotions are placed along dimensions based on human responses to specific events, giving a fuller description of emotions; discrete models, by contrast, fail to quantify every emotion with fixed classes or labels. The dimensional model used here [Fig. 1] is a two-dimensional model that considers arousal (intensity) and the level of valence.
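A minimal sketch of that mapping is shown below: each quadrant of the two-dimensional arousal/valence plane gets a coarse discrete label. The four labels are illustrative; Russell's circumplex arranges many more emotions around the plane.

```python
# Sketch of the two-dimensional Arousal-Valence model [Fig. 1]: each
# quadrant is mapped to a representative discrete label (illustrative only).

def quadrant_label(valence: float, arousal: float) -> str:
    """Map a point in [-1, 1] x [-1, 1] to a coarse emotion label."""
    if valence >= 0 and arousal >= 0:
        return "happy/excited"   # pleasant, high intensity
    if valence >= 0:
        return "calm/content"    # pleasant, low intensity
    if arousal >= 0:
        return "angry/afraid"    # unpleasant, high intensity
    return "sad/bored"           # unpleasant, low intensity

print(quadrant_label(valence=0.7, arousal=0.5))    # -> happy/excited
print(quadrant_label(valence=-0.4, arousal=-0.6))  # -> sad/bored
```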

C. Emotion Processing
Once emotions are detected in the previous step, processing follows. Here, the International Affective Digitized Sounds (IADS) dataset is used to differentiate between unpleasant, pleasant, and neutral emotions. After thorough neurophysiological analysis [17], features have been extracted [18] for these three types of emotion, and further processing is carried out based on those feature extraction methodologies. For example, if a laughter emotion is detected, it is simply placed in the pleasant category. But there can be more complex scenarios in which multiple emotions are on display at once, indistinguishable even with the help of EEG, as in the case of sarcasm; in such cases, the arousal and valence values of all three categories differ only marginally.
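The sketch below illustrates this processing step under assumed thresholds: valence is bucketed into the three IADS categories, and near-threshold values (the marginal-difference situation described for sarcasm) are flagged as ambiguous rather than forced into a class. The threshold and margin values are illustrative assumptions.

```python
# Hypothetical processing step: bucket valence into the three IADS
# categories, flagging near-ties as ambiguous instead of guessing.

PLEASANT_T = 0.2
UNPLEASANT_T = -0.2
MARGIN = 0.05  # minimum distance from a threshold to commit to a class

def bucket_valence(valence: float) -> str:
    near_upper = abs(valence - PLEASANT_T) < MARGIN
    near_lower = abs(valence - UNPLEASANT_T) < MARGIN
    if near_upper or near_lower:
        return "ambiguous"  # e.g. sarcasm: multiple emotions on display
    if valence > PLEASANT_T:
        return "pleasant"
    if valence < UNPLEASANT_T:
        return "unpleasant"
    return "neutral"

for v in (0.8, 0.22, 0.0, -0.5):
    print(f"valence={v:+.2f} -> {bucket_valence(v)}")
```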

Technologies Used

OpenVINO Toolkit, BigDL architecture.
