Subject independent emotion recognition from EEG using VMD and deep learning




Journal Pre-proofs
Subject Independent Emotion Recognition from EEG using VMD and Deep Learning
Pallavi Pandey, K.R. Seeja
PII: S1319-1578(19)30999-1
DOI:
Reference: JKSUCI 699

To appear in: Journal of King Saud University - Computer and Information Sciences

Received Date: 30 July 2019
Revised Date: 27 September 2019
Accepted Date: 5 November 2019

Please cite this article as: Pandey, P., Seeja, K.R., Subject Independent Emotion recognition from EEG using VMD and Deep Learning, Journal of King Saud University - Computer and Information Sciences (2019), doi: https://

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2019 Production and hosting by Elsevier B.V. on behalf of King Saud University.


Title: Subject Independent Emotion Recognition from EEG using VMD and Deep Learning

Author Details

1. Pallavi Pandey (First Author)
   Department of Computer Science & Engineering
   Indira Gandhi Delhi Technical University for Women, Kashmere Gate, Delhi-110006
   Mob No: +91 9711673119
   Email Address: [email protected]
   ORCID: 0000-0002-1294-2949

2. Seeja K. R (Corresponding Author)
   Department of Computer Science & Engineering
   Indira Gandhi Delhi Technical University for Women, Kashmere Gate, Delhi-110006
   Mob No: +91 9968217403
   Email Address: [email protected]
   ORCID: 0000-0001-6618-6758

Conflict of interest:


Subject Independent Emotion Recognition from EEG using VMD and Deep Learning


Abstract

Emotion recognition from Electroencephalography (EEG) has proved to be a good choice, as EEG cannot be mimicked the way speech signals or facial expressions can. EEG responses to emotions are not unique; they vary from person to person, as each person responds differently to the same stimuli. EEG signals are therefore subject dependent and have proved effective for subject dependent emotion recognition. However, subject independent emotion recognition plays an important role in situations such as recognizing the emotions of a paralyzed person or a person with facial burns, where EEG recordings of the subject's emotions from before the incident are not available to build the recognition model. Hence there is a need to identify common EEG patterns corresponding to each emotion, independent of the subject. In this paper, a subject independent emotion recognition technique from EEG signals is proposed, using Variational Mode Decomposition (VMD) as the feature extraction technique and a Deep Neural Network as the classifier. Performance evaluation of the proposed method on the benchmark DEAP dataset shows that the combination of VMD and a Deep Neural Network performs better than the state of the art techniques for subject independent emotion recognition from EEG.

Keywords: Variational Mode Decomposition; Valence-Arousal model; Deep Neural Network; Affective computing; Intrinsic mode functions

1. Introduction

Emotion recognition belongs to the field of affective computing, the study of how computers process and recognize emotions. Emotions play a major part in daily human life, in the management of attention (Dolan, 2002) and in decision making (Lerner et al., 2015); an emotionally imbalanced person may be less able to perform daily tasks, so it is clear that emotions play an important role in one's life. EEG recording is a non-invasive method, and hence researchers use it to study the neural activity of the brain related to emotional responses. Emotion recognition from EEG has already been explored successfully in the subject dependent case (Liu and Sourina, 2014). In subject dependent approaches, EEG data from the same user is used both to train and to test the emotion recognition system. Such systems work well for applications like online gaming, where EEG data of the player can be captured to train the model, which is later used to evaluate the emotion/mood of that user while playing. However, recognizing the emotions of people whose face is burned or paralyzed with a subject dependent approach would require EEG recorded from the person before the incident; most of the time, the need for EEG-based emotion recognition arises only after the incident, when it becomes difficult to capture their facial expressions. In this situation a subject independent EEG-based emotion recognition system is required. Likewise, if a patient with depression comes for diagnosis and no data from when he was healthy is available, a subject independent approach can be adopted. There are various applications of emotion detection, such as stress management (Kalas and Momin, 2016), anger management (Mohamed et al., 2012) and depression detection (Cai et al., 2018).
Similarly, driver state detection determines whether the driver is in an angry/stressed, sleepy or calm state (Dabbu et al., 2017); if the driver is not calm, an alarm can be generated. Another application is fear detection (Masood and Farooq, 2019) for ATM users: if the user is in a fear state, this indicates a threat, and the ATM will not dispense the money. In the literature, not only EEG but also other physiological signals, such as the electromyogram, facial muscle tension, blood volume pressure and skin conductance (Picard et al., 2001; Chanel et al., 2011), are used to detect emotions. Various techniques such as the Short Time Fourier Transform (STFT) (Lin et al., 2010; Ackermann et al., 2016), multifractal detrended fluctuation analysis (Paul et al., 2015) and Empirical Mode Decomposition (EMD) (Zhuang et al., 2017; Mert and Akan, 2018) are used to extract the various frequency bands from EEG. Several other features, such as the power spectral density (PSD) (Lin et al., 2010; Ackermann et al., 2016) and the discrete wavelet transform (Aydin et al., 2016; Pandey and Seeja, 2019a; Pandey and Seeja, 2019b), are also extracted from the EEG bands for emotion recognition. In a comparative study (Jatupaiboon et al., 2013) of various EEG features for classifying two emotions, happy and unhappy, PSD performed best compared to features such as spectral power asymmetry, higher order spectra, higher order crossings, common spatial patterns and asymmetric spatial patterns. The authors reported that the beta and gamma frequency bands and channels T7 and T8 gave better results than the other bands and electrode pairs.

Various classifiers (Alarcao and Fonseca, 2017) have been used for classifying emotions from EEG data, of which the Support Vector Machine (SVM) (Wang et al., 2011; Shahabi and Moghimi, 2016) is the most reported in the literature; most of these are subject dependent emotion classifiers. In some subject dependent approaches, a linear SVM (Wang et al., 2014) performed well, while in other studies the RBF SVM (Atkinson and Campos, 2016; Chanel et al., 2009), polynomial SVM (Lan et al., 2016) and adaptive SVM (Liu et al., 2013) gave good results. The K-Nearest Neighbor (KNN) classifier (Xu and Plataniotis, 2012; Mohammadi et al., 2017) has also been found effective for emotion classification with EEG data. There have also been a few emotion classification attempts with deep learning (Jirayucharoensak et al., 2014; Zeng and Lu, 2015).

Most of the existing work on subject independent emotion classification (Petrantonakis and Hadjileontiadis, 2010; Soleymani et al., 2012; Lin et al., 2014) is on self-produced databases, and hence it is very difficult to compare their performance. (Petrantonakis and Hadjileontiadis, 2010) used higher order crossings to extract features on a self-produced database with 16 subjects; they classified six basic emotions using an SVM classifier and obtained an accuracy of 83.33%. In another study (Soleymani et al., 2012), an accuracy of 62.1% on three levels of arousal and 50% on three levels of valence was obtained on a self-created database using SVM. (Lin et al., 2014) created an EEG database with 26 subjects and worked on both the subject dependent and subject independent approaches. For the subject independent case, they obtained an accuracy of 61.09% for valence and 57.33% for arousal with SVM.

Most of the emotion detection research on the DEAP database is subject dependent. (Zhuang et al., 2017) proposed a subject dependent emotion recognition method using data from eight electrodes and achieved an accuracy of 69.10% for valence and 71.99% for arousal on the DEAP database. (Atkinson and Campos, 2016) used various statistical features of EEG with an SVM classifier on the DEAP database and achieved an accuracy of 73.14% for valence and 73.16% for arousal; with three-level valence and arousal classification, they obtained 60.7% for arousal and 62.33% for valence. (Zhang et al., 2016) performed both subject dependent and subject independent classification of four emotions with SVM, using data of 16 subjects from 30 electrodes of the DEAP database; they obtained an accuracy of 62.59% for subject dependent and 58.75% for subject independent classification. (Li et al., 2018) worked on two databases, DEAP and SEED: on DEAP they achieved an accuracy of 59.06% for positive and negative emotions, and on SEED they obtained 83.33% with SVM.

In the proposed method, Empirical Mode Decomposition (EMD) and Variational Mode Decomposition (VMD) are used to obtain intrinsic mode functions (IMFs) from the EEG data. For each IMF, two features, namely the peak value of the power spectral density and the first difference of the signal, are calculated, and these features are fed into a Deep Neural Network for classification.


2. Materials and Methods


2.1. EEG Data

EEG is a noninvasive procedure used to track the electrical activity of the brain along the scalp, and hence it is the first choice of researchers for several applications (Read and Innis, 2017). Human emotion recognition is one of those applications. A representative EEG signal has an amplitude of approximately 10 to 100 microvolts and a frequency range of approximately 1 Hz to 100 Hz. There are five main EEG frequency bands:

- Delta: below 4 Hz
- Theta: from 4 Hz to below 8 Hz
- Alpha: from 8 Hz up to 14 Hz
- Beta: above 14 Hz and below 40 Hz
- Gamma: 40 Hz and above

EEG signals can be recorded in two ways: mono-polar recording and bipolar recording. A mono-polar recording measures the voltage difference between an electrode placed on the scalp and a reference electrode at the ear lobe. In a bipolar recording, the voltage difference between two scalp electrodes is recorded. The subject wears an electrode cap while watching the stimuli for a fixed duration, and the EEG is recorded with EEG recording software. The electrodes in the cap are placed as suggested by the 10/20 international electrode placement system (Acharya et al., 2016), shown in figure 1. The numbers 10/20 refer to the constraint on the distances between neighboring electrodes: contiguous electrodes must be separated by either ten percent or twenty percent of the total front-to-back or left-to-right distance of the skull. The head is partitioned into various lobes, and the lobe positions are represented by letters.
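The band boundaries above can be captured in a small helper. A minimal sketch; note that the exact boundary conventions vary slightly across the literature, and the values below follow this paper's ranges:

```python
def eeg_band(freq_hz):
    """Map a frequency (Hz) to its EEG band, following the ranges above."""
    if freq_hz < 4:
        return "delta"
    if freq_hz < 8:
        return "theta"
    if freq_hz <= 14:
        return "alpha"
    if freq_hz < 40:
        return "beta"
    return "gamma"

# e.g. eeg_band(10) -> "alpha"
```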

Figure 1. 10-20 Electrode-Placement system (Acharya et al.,2016)


2.2. Emotion Representation

Emotion can be represented using either a categorical model or a dimensional model (Mauss and Robinson, 2009). In the categorical model, emotions carry labels such as 'surprised' or 'anger'. In the dimensional model (Lang, 1995), emotions are expressed in terms of several dimensions such as Valence, Arousal, Dominance and Liking/Disliking. The two dimensional model using Valence and Arousal is shown in figure 2. Emotions are labeled as discrete points in this valence-arousal space: the valence axis goes from negative to positive and the arousal axis goes from passive to active. In this model, the space is divided into four quadrants, and valence and arousal values are rated on a scale from 1 to 9. If the valence rating is greater than 5 and the arousal rating is greater than 5, the emotion could be 'Excited' or 'Happy' and falls in the first quadrant. If valence is less than 5 and arousal is greater than 5, the emotion could be 'Angry' or 'Afraid' and falls in the second quadrant, and so on.
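The quadrant rule described above can be written directly as a function of the 1-9 ratings. A sketch; the label strings are the quadrant labels from the valence-arousal model in figure 2:

```python
def quadrant_emotion(valence, arousal):
    """Map 1-9 valence/arousal ratings to a quadrant of the valence-arousal space."""
    if valence > 5:
        # positive valence: first quadrant if active, fourth if passive
        return "Happy/Excited" if arousal > 5 else "Calm/Content"
    # negative valence: second quadrant if active, third if passive
    return "Angry/Afraid" if arousal > 5 else "Sad/Depressed"
```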

Figure 2. Valence-Arousal model of emotions (Jirayucharoensak et al., 2014)


2.3. DEAP Database

To the best of our knowledge, the DEAP database (Koelstra et al., 2012) used in this work is the publicly available EEG emotion database with the largest number of subjects. SEED and MAHNOB are other publicly available EEG emotion databases, containing data from 15 and 30 subjects respectively. The DEAP database contains data from 16 male and 16 female subjects, so the data is not gender biased. The data was collected by having the participants watch one-minute videos while wearing an electrode cap and asking them to rate their emotions on the valence and arousal scales. In this way, the EEG was recorded while the subjects watched the videos, and the subjects rated the valence and arousal they felt on a scale of 1 to 9 using the Self-Assessment Manikin (Morris, 1995) shown in figure 3.

Figure 3. Self-Assessment Manikin used to rate videos (Morris, 1995).

There were forty videos, and for every video the database holds readings of forty electrodes. The authors have already preprocessed the data: EEG was recorded at 512 Hz with 32 active AgCl electrodes placed on the subject's scalp according to the international 10/20 electrode placement system, and then down-sampled to 128 Hz. The statistics of the database are as follows:

- Number of electrodes: 40 (32 to record EEG, 8 to record other physiological signals)
- Number of files: 32 .mat files (one per subject) and a metadata file containing the participants' ratings
- Data dimension: 40 × 40 × 8064 per subject (first dimension: video, second: electrode position, third: voltages)
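The 40 × 40 × 8064 layout can be indexed as follows. A sketch using a synthetic array in place of a real per-subject .mat file; the electrode-to-index mapping is defined in the DEAP documentation and is not assumed here:

```python
import numpy as np

# Synthetic stand-in for one subject's preprocessed recording:
# 40 videos x 40 channels x 8064 samples (63 s at the down-sampled 128 Hz rate,
# i.e. a 3 s pre-trial baseline plus the 60 s video).
data = np.zeros((40, 40, 8064))

video, channel = 0, 0          # first video, first recorded channel
signal = data[video, channel]  # one single-channel trace for that trial
print(signal.shape)            # (8064,)
```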


2.4. Feature Extraction

Intrinsic mode functions (IMFs) of an EEG signal provide meaningful time-frequency information about the signal. When a signal is decomposed into its various oscillatory components, these components are called IMFs. In the proposed method, the EMD and VMD techniques are used to compute the IMFs of the EEG signal. After obtaining the IMFs of the EEG data using EMD or VMD, two features, namely the peak value of the PSD and the first difference of the signal, are calculated.

2.4.1 Empirical Mode Decomposition (EMD): In EMD (Huang et al., 1998), the IMFs of a signal s(t) are obtained by a repetitive process called sifting. Each IMF must satisfy two conditions: first, the number of zero crossings and the number of extrema must be equal or differ by at most one; second, the mean of the upper envelope defined by the local maxima and the lower envelope defined by the local minima must be zero. The complete process to find the IMFs involves the following steps:

1) Find all minima and maxima of s(t).
2) Using interpolation, obtain the upper envelope env_max(t) and the lower envelope env_min(t) by connecting the maxima and minima respectively.
3) Compute the mean of the envelopes: m(t) = (env_max(t) + env_min(t)) / 2
4) Subtract the mean from the original signal to get the detail: d(t) = s(t) − m(t)
5) Check whether d(t) satisfies the two basic conditions of an IMF.
6) Repeat steps 1 to 5 on d(t) until it satisfies the two IMF conditions; the resulting detail signal is the first IMF, i.e. I(t) = d(t).
7) For the next IMF, calculate the residue x(t) = s(t) − I(t), treat this residue as a new signal, and repeat the steps above.
8) Continue the whole process until the residue signal satisfies some stopping criterion (say, it becomes constant).

The original signal in terms of its decomposed IMF components is shown in Eqn. 1:

s(t) = Σ_{i=1}^{k} I_i(t) + x_k(t)    (1)

In the above equation, k represents the number of IMFs obtained and I_i(t) is the ith IMF. A sample of six IMFs obtained using EMD is shown in figure 4.
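The sifting procedure above can be sketched in a few lines of numpy. This is a didactic sketch, not the implementation used in the paper: it uses linear interpolation for the envelopes (cubic splines are the standard choice) and a fixed number of sifting iterations in place of a formal convergence test:

```python
import numpy as np

def envelope_mean(s):
    """Mean of the upper and lower envelopes (step 3), or None if s has
    too few extrema to continue sifting."""
    d = np.diff(s)
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None
    t = np.arange(len(s))
    # Linear envelopes through the extrema, with endpoints pinned to the signal.
    upper = np.interp(t, np.r_[0, maxima, len(s) - 1], np.r_[s[0], s[maxima], s[-1]])
    lower = np.interp(t, np.r_[0, minima, len(s) - 1], np.r_[s[0], s[minima], s[-1]])
    return (upper + lower) / 2.0

def emd(s, max_imfs=6, sift_iters=10):
    """Decompose s into IMFs plus a residue: s = sum(IMFs) + residue (Eqn. 1)."""
    imfs, residue = [], np.asarray(s, dtype=float).copy()
    for _ in range(max_imfs):
        if envelope_mean(residue) is None:   # residue nearly monotonic: stop (step 8)
            break
        d = residue.copy()
        for _ in range(sift_iters):          # sifting (steps 1-6)
            m = envelope_mean(d)
            if m is None:
                break
            d = d - m
        imfs.append(d)                       # next IMF
        residue = residue - d                # step 7
    return imfs, residue
```

By construction the extracted IMFs and the final residue sum back to the original signal, mirroring Eqn. 1.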

Figure 4. First six IMFs using EMD (electrode fp1 of subject 1 for high valence).

2.4.2 Variational Mode Decomposition (VMD): VMD (Dragomiretskiy and Zosso, 2014) is a time-frequency decomposition approach designed to overcome two limitations of EMD: (i) EMD uses a recursive approach that does not allow backward error correction, and (ii) EMD is unable to handle noise properly. Instead of a recursive approach, VMD extracts the intrinsic mode functions from the signal concurrently. It is an adaptive method that decomposes the signal into k IMFs, giving a set of modes u_k with their respective centre frequencies w_k, such that the sum of these modes represents the original signal. VMD is less susceptible to noise than EMD and does not leave residual noise (Jiang et al., 2019). In VMD, the identification of the IMFs is posed as an optimization problem: minimize the sum of the bandwidths of the IMFs subject to the constraint that the sum of all the u_k equals the main signal. This optimization problem can be formulated (Dragomiretskiy and Zosso, 2014) as in equation 2:

min_{u_k, w_k} { Σ_k || ∂_t [ (δ(t) + j/(πt)) * u_k(t) ] e^{−j w_k t} ||_2^2 }    (2)

such that Σ_k u_k = s, where s is the original signal to be decomposed into k IMFs and * denotes convolution.

A sample of three IMFs obtained using VMD is shown in figure 5.
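The bandwidth term inside equation 2 can be evaluated numerically for a single candidate mode, which makes the objective concrete: form the analytic signal of u_k, demodulate it by the candidate centre frequency, and measure the energy of the gradient. The sketch below (frequencies in Hz rather than rad/s) only illustrates the objective; it is not a full VMD solver:

```python
import numpy as np

def analytic_signal(u):
    """FFT-based analytic signal, i.e. (delta(t) + j/(pi t)) * u(t)."""
    n = len(u)
    spec = np.fft.fft(u)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def vmd_bandwidth(u, w_hz, fs):
    """Squared norm minimised in equation 2 for one mode u with candidate
    centre frequency w_hz: || d/dt [ analytic(u) * e^{-j 2 pi w t} ] ||_2^2."""
    t = np.arange(len(u)) / fs
    demod = analytic_signal(u) * np.exp(-2j * np.pi * w_hz * t)
    grad = np.gradient(demod, 1.0 / fs)
    return float(np.sum(np.abs(grad) ** 2) / fs)
```

For a pure 50 Hz tone, the objective is smallest when the candidate centre frequency matches the tone, which is exactly the behaviour the VMD optimisation exploits when it places the w_k.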

Figure 5. First three IMFs using VMD (electrode fp1 of subject 1 for high valence class)

2.4.3 Power Spectral Density (PSD): The power spectral density describes how the power of a signal is distributed over frequency. In this research, the Welch method (Rahi and Mehra, 2014) is used to calculate the PSD of each IMF.

Welch method:
1. Partition the signal s[0], s[1], ..., s[N−1] into K segments of size M, where L is the number of points shifted between consecutive segments.
2. For each segment k = 1 to K, compute a windowed discrete Fourier transform at frequency v = i/M, where −(M/2 − 1) ≤ i ≤ M/2, as shown in equation 3:

S_k(v) = Σ_m s[m] w[m] exp(−j2πvm)    (3)

where m varies from (k−1)L to M + (k−1)L − 1 and w[m] is the window function.
3. For each segment, compute the modified periodogram value using equation 4:

P_k(v) = (1/W) |S_k(v)|^2, where W = Σ_{m=0}^{M−1} w^2[m]    (4)

4. Estimate the power spectral density by averaging the obtained periodogram values, using equation 5:

L_s(v) = (1/K) Σ_{k=1}^{K} P_k(v)    (5)

The number of points common to two adjacent segments is (M − L), i.e. adjacent segments overlap by (M − L) points. After obtaining the power spectral density, its peak value is selected as a feature. Figures 6(a) and 6(b) show a sample PSD computed with the Welch method for the low valence class and the high valence class respectively.

Figure 6. PSD of the first IMF (method: VMD, electrode fp1, subject 1): (a) low valence class, (b) high valence class. Comparing the two plots, it is clear that the peak value of the PSD of the IMFs can discriminate the two classes, and hence it is selected as a feature for emotion recognition.
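The steps above translate directly into numpy. A minimal sketch with a Hann window; the window choice, segment size and overlap are assumptions here, since the paper does not state them:

```python
import numpy as np

def welch_psd(s, seg_len=256, overlap=128):
    """Welch PSD estimate following the steps above."""
    w = np.hanning(seg_len)
    W = np.sum(w ** 2)                       # window energy, Eq. 4
    step = seg_len - overlap                 # L points shifted between segments
    periodograms = []
    for start in range(0, len(s) - seg_len + 1, step):
        seg = s[start:start + seg_len] * w   # windowed segment
        Sk = np.fft.rfft(seg)                # windowed DFT, Eq. 3
        periodograms.append(np.abs(Sk) ** 2 / W)  # modified periodogram, Eq. 4
    freqs = np.fft.rfftfreq(seg_len)         # normalised frequencies v = i/M
    return freqs, np.mean(periodograms, axis=0)   # average, Eq. 5

def peak_psd(s):
    """The feature used in this work: the peak value of the Welch PSD."""
    return float(welch_psd(s)[1].max())
```

For a sinusoid at normalised frequency 0.05, the estimated PSD peaks at the bin nearest 0.05, as expected.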

2.4.4 First Difference of IMF: The first difference represents the change from one time period to the next. If y_t denotes the value of the time series y at time t, then

First Difference = y_t − y_{t−1}

Since this research analyses the voltage variation of the EEG over a period of time (i.e., during various emotional stimuli), the first difference of each IMF is also selected as a feature.
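To use the first difference as a single scalar per IMF, some reduction over time is needed; the sketch below uses the mean absolute first difference, which is an assumption, since the text defines the difference itself but not the aggregation:

```python
import numpy as np

def first_difference(imf):
    """Mean absolute first difference y_t - y_{t-1} of an IMF.
    (Averaging the absolute differences is an assumed reduction.)"""
    return float(np.mean(np.abs(np.diff(imf))))

# e.g. first_difference(np.array([1.0, 3.0, 2.0])) -> 1.5
```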


2.5. Support Vector Machine

SVM is a machine learning technique that classifies data by drawing a maximal-margin hyperplane that separates the classes well. For non-separable data, it finds the hyperplane that maximizes the margin while minimizing misclassification, by introducing a penalty term for misclassified points, and it maps the data using a kernel function. As described in section 1, various studies have suggested SVM as a strong classifier for EEG-based emotion classification tasks, and hence it has been selected in this research for performance comparison with the DNN.
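A sketch of the kernel comparison with scikit-learn, which is assumed here as the toolkit (the paper does not name its SVM implementation), on synthetic stand-in features:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 24))     # stand-in for 24 IMF features per trial
y = (X[:, 0] > 0).astype(int)          # synthetic high/low labels

X_train, y_train = X[:120], y[:120]
X_test, y_test = X[120:], y[120:]

# The three kernel families examined in this work: linear, polynomial, RBF.
scores = {k: SVC(kernel=k).fit(X_train, y_train).score(X_test, y_test)
          for k in ("linear", "poly", "rbf")}
```

With real EEG features the kernel choice matters; section 3 reports RBF as the best fit for the VMD-based features.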


2.6. Deep Neural Network

A deep neural network is a machine learning technique that can learn features as well as perform classification tasks. A major attraction of DNNs is their ability to work well with large amounts of data. If a neural network consists of more than three layers, including the input and output layers, it comes under the category of deep neural networks. A simple deep neural network with three hidden layers is shown in figure 7; the number of hidden layers and the number of neurons in each hidden layer can be varied. A DNN also has the capability to learn stronger features from the data, and hence it is selected in this research for the emotion recognition task.

Figure 7. Deep Neural Network
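A numpy forward-pass sketch of such a network with three hidden layers (ReLU) and a softmax output, using the (12, 24, 12) topology reported for valence in Table 2 and a 24-dimensional input; the weights here are random stand-ins, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def init_layers(sizes):
    """He-initialised (weights, bias) pairs for consecutive layer sizes."""
    return [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W, b in layers[:-1]:
        x = relu(x @ W + b)          # ReLU at the hidden layers
    W, b = layers[-1]
    return softmax(x @ W + b)        # softmax at the output layer

# 24 input features -> hidden layers (12, 24, 12) -> 2 classes (high/low)
layers = init_layers([24, 12, 24, 12, 2])
probs = forward(layers, rng.standard_normal((5, 24)))
```

The output rows are valid probability distributions over the two classes, as expected from the softmax layer.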


2.7. Proposed Methodology

The schematic diagram of the proposed emotion recognition methodology using EEG signals is shown in figure 8. The subjects wear an electrode cap while watching videos; in this way emotions are induced and the corresponding EEG is recorded. The signals are then preprocessed and artifacts are removed, and finally a database of preprocessed EEG data for emotion recognition is created. In this study the benchmark DEAP database, which is already preprocessed, is used. Features are then extracted from the EEG using the EMD and VMD techniques, and the obtained features are fed into the deep neural network to classify high/low valence and high/low arousal. These high/low valence and arousal indicators are used to obtain the emotion label as described in section 2.2.

Figure 8. Schematic diagram of the proposed method for emotion recognition from EEG.

The proposed methodology of subject independent emotion classification is outlined as follows:

1. Data Collection: EEG data corresponding to the electrodes fp1, fp2, F3 and F4 for all 40 videos are selected from the DEAP database. Out of the 32 subjects, the data of 30 subjects are selected for training and 2 subjects for testing, in order to make it a purely subject independent approach.

2. Feature Extraction: Two features are extracted from the EEG signals, the peak value of the PSD and the first difference, both calculated from the IMFs of the EEG signal. For each EEG signal, the IMFs are extracted using EMD or VMD. The PSD of each IMF is then calculated using the Welch method, and its peak value is selected as a feature. The first difference of each IMF is calculated as another feature, as described in section 2.4.4.

3. Classification: The two features calculated from the top IMFs are combined to form the feature vector. The DNN classifier is trained with the feature vectors of the 30 training subjects and then tested with the feature vectors of the 2 test subjects.
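Steps 2 and 3 imply a 24-dimensional feature vector per trial (4 electrodes × 3 top IMFs × 2 features). A sketch of the assembly, where `peak_psd_of` and `first_diff_of` are simplified stand-ins for the section 2.4.3 and 2.4.4 routines:

```python
import numpy as np

def peak_psd_of(imf):
    # Stand-in: peak of a plain periodogram instead of the Welch estimate.
    return float((np.abs(np.fft.rfft(imf)) ** 2).max())

def first_diff_of(imf):
    # Stand-in: mean absolute first difference (assumed reduction).
    return float(np.mean(np.abs(np.diff(imf))))

def feature_vector(imfs_per_electrode):
    """imfs_per_electrode: 4 entries (fp1, fp2, F3, F4), each the top 3 IMFs."""
    feats = []
    for imfs in imfs_per_electrode:
        for imf in imfs[:3]:                 # top three IMFs only
            feats.extend([peak_psd_of(imf), first_diff_of(imf)])
    return np.array(feats)

rng = np.random.default_rng(0)
fv = feature_vector([[rng.standard_normal(8064) for _ in range(3)]
                     for _ in range(4)])
print(fv.shape)   # (24,)
```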


3. Experiments and Results

The proposed methodology has been implemented in Python on the TensorFlow platform. A gradient-descent based optimization algorithm is used, the activation function in the hidden layers is ReLU, and the softmax function is used at the output layer. The DEAP database contains EEG recordings from 32 electrodes, corresponding to 40 videos and 32 subjects. We selected the EEG recordings from four electrodes, namely fp1, fp2, F3 and F4 (Petrantonakis and Hadjileontiadis, 2010), since these electrode positions have already been identified as emotion specific. A valence/arousal rating greater than 5 is considered High, and a rating less than or equal to 5 is considered Low.

3.1. Experiment 1: EMD based features

In this experiment, EEG signals were decomposed into several IMFs using EMD. Depending on the stopping criterion (i.e., when the residue becomes constant), the total number of IMFs obtained was 11 for some EEG signals and 12 for others. The top IMFs were then selected for feature calculation; for each selected IMF, the peak value of the PSD and the first difference were calculated. The total number of features obtained for one electrode is 6 (i.e., 3 × 2), and thus for 4 electrodes (fp1, fp2, F3 and F4) it is 24 (i.e., 4 × 3 × 2). The features of the selected IMFs were then given to the two selected classifiers, SVM and DNN. The number of top IMFs per EEG signal was selected based on exhaustive experimentation: after trying different options (the top one, top two, top three IMFs and so on), the best results were obtained with the top three IMFs, as given in Table 1.

Table 1. SVM vs. DNN (EMD based features)

[Table 1 columns: Arousal/Valence; No. of IMFs selected for each signal; train data size (data from 30 subjects, 1200 × 24); test data size (data from 2 subjects); classifier (SVM with RBF or polynomial kernel, and Deep Neural Network); kernel function; accuracy on train data (%); accuracy on test data (%). The individual cell values are not recoverable from this extraction.]

As shown in Table 1, the first three IMFs are highly correlated with the EEG signal, and hence only the top three IMFs are considered for feature extraction in the second experiment.

3.2. Experiment 2: VMD based features

In this experiment, EEG signals were decomposed into several IMFs using VMD, and the top three IMFs of each signal were selected for feature extraction. The peak value of the PSD and the first difference of each selected IMF were calculated as features, and the resulting feature vector was fed into both SVM and DNN. For the SVM, linear, polynomial and radial basis function (RBF) kernels were examined, and RBF was found to be best suited to the VMD based features. Similarly, different DNN topologies were examined by varying the number of hidden layers and the number of neurons per hidden layer. The best results were obtained with three hidden layers, as shown in Table 2.

Table 2. Accuracy at different DNN topologies

No. of hidden layers | Neurons at hidden layers (Valence) | Neurons at hidden layers (Arousal) | Accuracy (%), Arousal | Accuracy (%), Valence
3 | (12,24,12) | (7,14,7) | 61.25 | 62.50
4 | (12,24,12,3) | (12,24,12,3) | 59.75 | 55.00
5 | (12,24,12,21,12) | (12,24,12,21,12) | 56.75 | 60.00
6 | (100,200,100,200,100,200) | (100,200,100,200,100,200) | - | -

Table 3 shows the performance comparison of the classifiers (SVM with RBF kernel and DNN with three hidden layers) with VMD based features. The experiment was also run with data from only 2 electrodes (fp1 and fp2) in order to reduce the number of features, as shown in Table 3; however, the best results were obtained with features from all four electrodes. (Training data is from 30 subjects and test data from 2 subjects in all cases.)

Table 3. SVM vs. DNN (VMD based features)

Valence/Arousal | No. of electrodes | Classifier | Neurons at 3 hidden layers | Accuracy on test data (%)
Arousal | 4 (fp1,fp2,F3,F4) | DNN | (7,14,7) | 61.25
Arousal | 4 (fp1,fp2,F3,F4) | SVM (RBF) | - | 57.50
Valence | 4 (fp1,fp2,F3,F4) | DNN | (12,24,12) | 62.50
Valence | 4 (fp1,fp2,F3,F4) | SVM (RBF) | - | 58.75
Arousal | 2 (fp1,fp2) | DNN | (4,8,4) | 57.50
Arousal | 2 (fp1,fp2) | SVM (RBF) | - | 55.00
Valence | 2 (fp1,fp2) | DNN | (10,20,10) | 60.00
Valence | 2 (fp1,fp2) | SVM (RBF) | - | 56.25


From the experiments, it is found that:

1. Features of the IMFs, such as the peak value of the PSD and the first difference, are effective for emotion classification.
2. Features of the top IMFs, especially the first three IMFs, are the most informative for the emotion recognition task.
3. EEG recordings from electrodes fp1, fp2, F3 and F4 are related to emotions, as found in the literature.
4. VMD based features with DNN perform best for both arousal and valence classification.

The comparison of EMD and VMD based features in terms of classification accuracy for arousal and valence is shown in figure 9.

Figure 9. Comparison of the different methods with respect to accuracy (left: low vs. high arousal classification; right: low vs. high valence classification).

In the DEAP database, valence and arousal are rated on a 1-9 (low to high) scale. With this valence-arousal model (Russell, 1980), emotions can be classified as shown in Table 4. Thus, with the proposed binary classification model, four different emotions can be identified.

Table 4. Emotion classification based on Valence-Arousal

Valence | Arousal | Emotion
Positive (6-9) | Passive (1-5) | Calm/Content
Positive (6-9) | Active (6-9) | Happy/Excited
Negative (1-5) | Passive (1-5) | Sad/Depressed
Negative (1-5) | Active (6-9) | Angry/Afraid

Various attempts have been made to develop human emotion recognition systems, as explained in section 1. However, most of them are subject dependent approaches and were evaluated on self-created databases with few subjects. We have compared our results with the state of the art subject independent techniques found in the literature on the benchmark DEAP database, as shown in Table 5.

Table 5. Performance comparison with state of the art techniques

Article | Approach | Feature extraction | Classifier | Accuracy (%)
(Li et al., 2018) | Subject independent | - | SVM | 59.06 (positive and negative emotions)
(Lan et al., 2019) | Subject independent | Differential entropy | Domain adaptation technique | 48.93 (three-level valence)
(Rayatdoost and Soleymani, 2018) | Subject independent | Spectral topography maps of different bands | Convolutional Neural Network | Arousal: 55.70, Valence: 59.22
Proposed method | Subject independent | VMD based features | Deep Neural Network | Arousal: 61.25, Valence: 62.50

5. Conclusion

In this paper, a subject independent emotion recognition system based on EEG signals is proposed. In the proposed methodology, two features, namely the peak value of the PSD and the first difference, derived from the IMFs, are used for emotion classification. The IMFs are obtained from the EEG signals using two methods, EMD and VMD. The experimental evaluation of the proposed methodology on the benchmark DEAP dataset shows that VMD based features from the top three IMFs are better than EMD based features. It is also found that the Deep Neural Network classifier performs better than the SVM classifiers for subject independent emotion recognition. In the proposed method, the EEG data of the subjects used for building and testing the model are disjoint, so the result is a generalized emotion recognition system: it can be used for emotion recognition from the EEG of any person whose data was not used to train the classifier.

References Acharya, J. N., Hani, A. J., Cheek, J., Thirumala, P., & Tsuchida, T. N. (2016). American Clinical Neurophysiology Society guideline 2: guidelines for standard electrode position nomenclature. The Neurodiagnostic Journal, 56(4), 245-252. Ackermann, P., Kohlschein, C., Bitsch, J. A., Wehrle, K., & Jeschke, S., 2016, September. EEG-based automatic emotion recognition: Feature extraction, selection and classification methods. In e-Health Networking, Applications and Services (Healthcom), 2016 IEEE 18th International Conference on (pp. 1-6). IEEE Alarcao, S. M., & Fonseca, M.J., 2017. Emotions recognition using EEG signals: A survey. IEEE Transactions on Affective Computing. Atkinson, J., & Campos, D., 2016. Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers. Expert Systems with Applications, 47, 35-41. Aydin, S. G., Kaya, T., & Guler, H., 2016. Wavelet-based study of valence–arousal model of emotions on EEG signals with LabVIEW. Brain informatics, 3(2), 109-117. Cai, H., Han, J., Chen, Y., Sha, X., Wang, Z., Hu, B., ... & Gutknecht, J., 2018. A pervasive approach to EEG-based depression detection. Complexity, 2018. Chanel, G., Kierkels, J. J., Soleymani, M., & Pun T., 2009. Short-term emotion assessment in a recall paradigm. International Journal of Human-Computer Studies, 67(8), 607-627. Chanel, G., Rebetez, C., Bétrancourt, M., & Pun T., 2011. Emotion assessment from physiological signals for adaptation of game difficulty. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 41(6), 1052-1063.

Dabbu, S., Malini, M., Reddy, B. R., & Vyza, Y. S. R., 2017. ANN based joint time and frequency analysis of EEG for detection of driver drowsiness. Defence Life Science Journal, 2(4), 406-415.

Dolan, R. J., 2002. Emotion, cognition, and behavior. Science, 298(5596), 1191-1194.

Dragomiretskiy, K., & Zosso, D., 2014. Variational mode decomposition. IEEE Transactions on Signal Processing, 62(3), 531-544.

Huang, N. E., Shen, Z., Long, S. R., Wu, M. C., Shih, H. H., Zheng, Q., & Liu, H. H., 1998. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 454(1971), 903-995.

Jiang, L., Zhou, X., Che, L., Rong, S., & Wen, H., 2019. Feature extraction and reconstruction by using 2D-VMD based on carrier-free UWB radar application in human motion recognition. Sensors, 19(9), 1962.

Jatupaiboon, N., Pan-ngum, S., & Israsena, P., 2013. Real-time EEG-based happiness detection system. The Scientific World Journal, 2013.

Jirayucharoensak, S., Pan-Ngum, S., & Israsena, P., 2014. EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation. The Scientific World Journal, 2014.

Kalas, M. S., & Momin, B. F., 2016. Stress detection and reduction using EEG signals. In 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT) (pp. 471-475).

[Database] Koelstra, S., Muhl, C., Soleymani, M., Lee, J. S., Yazdani, A., Ebrahimi, T., & Patras, I., 2012. DEAP: a database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing, 3(1), 18-31.

Lan, Z., Sourina, O., Wang, L., & Liu, Y., 2016. Real-time EEG-based emotion monitoring using stable features. The Visual Computer, 32(3), 347-358.

Lan, Z., Sourina, O., Wang, L., Scherer, R., & Müller-Putz, G. R., 2019. Domain adaptation techniques for EEG-based emotion recognition: a comparative study on two public datasets. IEEE Transactions on Cognitive and Developmental Systems, 11(1), 85-94.

Lang, P. J., 1995. The emotion probe: studies of motivation and attention. American Psychologist, 50(5), 372.

Lerner, J. S., Li, Y., Valdesolo, P., & Kassam, K. S., 2015. Emotion and decision making. Annual Review of Psychology, 66, 799-823.

Li, X., Song, D., Zhang, P., Zhang, Y., Hou, Y., & Hu, B., 2018. Exploring EEG features in cross-subject emotion recognition. Frontiers in Neuroscience, 12, 162.

Lin, Y. P., Wang, C. H., Jung, T. P., Wu, T. L., Jeng, S. K., Duann, J. R., & Chen, J. H., 2010. EEG-based emotion recognition in music listening. IEEE Transactions on Biomedical Engineering, 57(7), 1798-1806.

Lin, Y. P., Yang, Y. H., & Jung, T. P., 2014. Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening. Frontiers in Neuroscience, 8, 94.

Liu, Y., & Sourina, O., 2014. Real-time subject-dependent EEG-based emotion recognition algorithm. In Transactions on Computational Science XXIII (pp. 199-223). Springer, Berlin, Heidelberg.

Liu, Y. H., Wu, C. T., Kao, Y. H., & Chen, Y. T., 2013. Single-trial EEG-based emotion recognition using kernel eigen-emotion pattern and adaptive support vector machine. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 4306-4309).

Masood, N., & Farooq, H., 2019. Investigating EEG patterns for dual-stimuli induced human fear emotional state. Sensors, 19(3), 522.

Mauss, I. B., & Robinson, M. D., 2009. Measures of emotion: a review. Cognition and Emotion, 23(2), 209-237.

Mert, A., & Akan, A., 2018. Emotion recognition from EEG signals by using multivariate empirical mode decomposition. Pattern Analysis and Applications, 21(1), 81-89.

Mohamed, M., Quan, L. R., Ahmad, I. L., Chuan, L. C., & bt Hamid, S. H., 2012. Determination of angry condition based on EEG, speech and heartbeat. International Journal on Computer Science and Engineering, 4(12), 1897.

Mohammadi, Z., Frounchi, J., & Amiri, M., 2017. Wavelet-based emotion recognition system using EEG signal. Neural Computing and Applications, 28(8), 1985-1990.

Morris, J. D., 1995. Observations: SAM: the Self-Assessment Manikin; an efficient cross-cultural measurement of emotional response. Journal of Advertising Research, 35(6), 63-68.

Pandey, P., & Seeja, K. R., 2019a. Emotional state recognition with EEG signals using subject independent approach. In: Mishra, D., Yang, X. S., Unal, A. (eds) Data Science and Big Data Analytics. Lecture Notes on Data Engineering and Communications Technologies, vol 16. Springer, Singapore.

Pandey, P., & Seeja, K. R., 2019b. Subject-independent emotion detection from EEG signals using deep neural network. In: Bhattacharyya, S., Hassanien, A., Gupta, D., Khanna, A., Pan, I. (eds) International Conference on Innovative Computing and Communications. Lecture Notes in Networks and Systems, vol 56. Springer, Singapore.

Paul, S., Mazumder, A., Ghosh, P., Tibarewala, D. N., & Vimalarani, G., 2015. EEG based emotion recognition system using MFDFA as feature extractor. In 2015 International Conference on Robotics, Automation, Control and Embedded Systems (RACE) (pp. 1-5).

Petrantonakis, P. C., & Hadjileontiadis, L. J., 2010. Emotion recognition from EEG using higher order crossings. IEEE Transactions on Information Technology in Biomedicine, 14(2), 186-197.

Picard, R. W., Vyzas, E., & Healey, J., 2001. Toward machine emotional intelligence: analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10), 1175-1191.

Rahi, P. K., & Mehra, R., 2014. Analysis of power spectrum estimation using Welch method for various window techniques. International Journal of Emerging Technologies and Engineering, 2(6), 106-109.

Rayatdoost, S., & Soleymani, M., 2018. Cross-corpus EEG-based emotion recognition. In 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP) (pp. 1-6).

Read, G. L., & Innis, I. J., 2017. Electroencephalography (EEG). The International Encyclopedia of Communication Research Methods, 1-18.

Russell, J. A., 1980. A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178.

Shahabi, H., & Moghimi, S., 2016. Toward automatic detection of brain responses to emotional music through analysis of EEG effective connectivity. Computers in Human Behavior, 58, 231-239.

Soleymani, M., Pantic, M., & Pun, T., 2012. Multimodal emotion recognition in response to videos. IEEE Transactions on Affective Computing, 3(2), 211-223.

Wang, X. W., Nie, D., & Lu, B. L., 2011. EEG-based emotion recognition using frequency domain features and support vector machines. In International Conference on Neural Information Processing (pp. 734-743). Springer, Berlin, Heidelberg.

Wang, X. W., Nie, D., & Lu, B. L., 2014. Emotional state classification from EEG data using machine learning approach. Neurocomputing, 129, 94-106.

Xu, H., & Plataniotis, K. N., 2012. Affect recognition using EEG signal. In 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP) (pp. 299-304).

Zhang, J., Chen, M., Zhao, S., Hu, S., Shi, Z., & Cao, Y., 2016. ReliefF-based EEG sensor selection methods for emotion recognition. Sensors, 16(10), 1558.

Zheng, W. L., & Lu, B. L., 2015. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 7(3), 162-175.

Zhuang, N., Zeng, Y., Tong, L., Zhang, C., Zhang, H., & Yan, B., 2017. Emotion recognition from EEG signals using multidimensional information in EMD domain. BioMed Research International, 2017.