Discovery of Shared Feature Mapping for EEG-based Emotion Recognition by Multi-Task Learning Approach

Document Type: Persian Original Article

Authors

1 Sadjad University of Technology, Mashhad, Iran

2 Computer and Information Technology Engineering Department, Sadjad University of Technology, Mashhad, Iran

Abstract

Investigations have revealed that human emotions result from internal neural activity. The neural response associated with each emotion reaches the scalp surface, where it can be recorded and processed as a signal; these brain signals are captured with an electroencephalography (EEG) setup. In recent years, researchers have applied a variety of methods for signal acquisition and pre-processing, feature selection, dimensionality reduction, and classification of brain signals. However, the number and type of extracted features play a key role in classification performance. Since it is not known in advance which features are most effective, and since the number of features used is typically large and person-dependent, many researchers have focused on reducing the number of features and improving classifier performance. The purpose of this article is to present a multi-task learning method that reduces the feature dimensionality and yields a shared feature space that describes the emotions of different subjects well. To demonstrate the effectiveness of the proposed method, three well-known datasets are used: DEAP, SEED, and DREAMER. Experiments are carried out in two settings. In the first, each channel is evaluated separately and the channels with the highest performance are selected. In the second, channels corresponding to different regions of the brain (frontal, occipital, left hemisphere, right hemisphere) are considered together. The highest accuracy is about 80% in the first setting and about 84% in the second. Experimental results show that the proposed method achieves higher accuracy than the compared methods.
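To make the idea of a shared feature mapping concrete, the sketch below shows one common multi-task formulation: each subject is treated as a task, and an L2,1 (row-sparsity) penalty on the stacked weight matrix encourages all subjects to rely on the same subset of EEG features. This is a minimal illustrative example, assuming a squared loss, synthetic data, and a proximal-gradient solver; it is not the article's exact formulation, and every name in it is a placeholder.

```python
# Minimal sketch: multi-task joint feature selection with an L2,1-regularized
# least-squares objective, solved by proximal gradient descent (ISTA).
# Each "task" stands in for one subject; rows of W that stay non-zero act as a
# feature subset common to all subjects. Illustrative only, not the article's method.
import numpy as np

def l21_prox(W, threshold):
    """Row-wise group soft-thresholding (proximal operator of the L2,1 norm)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)                 # (d, 1) row norms
    scale = np.maximum(0.0, 1.0 - threshold / np.maximum(norms, 1e-12))
    return W * scale

def multitask_feature_selection(Xs, ys, lam=0.1, step=None, n_iter=500):
    """Xs, ys: per-subject design matrices (n_t, d) and targets (n_t,)."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    if step is None:
        # Conservative step size from the largest per-task Lipschitz constant.
        step = 1.0 / max(np.linalg.norm(X, 2) ** 2 for X in Xs)
    for _ in range(n_iter):
        grad = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            grad[:, t] = X.T @ (X @ W[:, t] - y)                     # squared-loss gradient
        W = l21_prox(W - step * grad, step * lam)                    # gradient + proximal step
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 40, 5                                                     # 40 features, 5 truly shared
    Xs = [rng.standard_normal((120, d)) for _ in range(3)]           # 3 synthetic "subjects"
    w_true = np.zeros(d)
    w_true[:k] = 1.0
    ys = [X @ w_true + 0.1 * rng.standard_normal(120) for X in Xs]
    W = multitask_feature_selection(Xs, ys, lam=5.0)
    shared = np.where(np.linalg.norm(W, axis=1) > 1e-3)[0]
    print("features selected across all subjects:", shared)
```

The rows of W left non-zero define a feature subset shared by all subjects, which is the kind of common, subject-independent space the article aims for; the per-channel and per-brain-region experiments would then repeat such a procedure on the features of the corresponding channel groups.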

Keywords

