Poster No:
2375
Submission Type:
Abstract Submission
Authors:
Taeseong Kim1, Won Hee Lee1
Institutions:
1Kyung Hee University, Yongin-si, Republic of Korea
First Author:
Taeseong Kim
Kyung Hee University
Yongin-si, Republic of Korea
Co-Author:
Won Hee Lee
Kyung Hee University
Yongin-si, Republic of Korea
Introduction:
Emotion recognition using electroencephalography (EEG) has gained significant research attention because EEG is non-invasive, user-friendly, and cost-effective. However, recognizing emotions from raw EEG signals remains challenging because of their low signal-to-noise ratio (SNR) and susceptibility to noise during data collection, making effective feature extraction crucial for accurate EEG-based emotion recognition. While various studies have employed deep learning approaches such as DGCNN (Song et al., 2018), STRNN (Zhang et al., 2018), and BiLSTM (Joshi & Ghongade, 2021) for emotion-related feature extraction, they often neglect the anatomical organization of the brain. In this study, we propose a transformer-based deep learning model that captures patterns inherent in the brain's anatomical organization to enhance EEG-based emotion recognition.
Methods:
We utilized the SJTU Emotion EEG Dataset (SEED), comprising 62-channel EEG signals from 15 subjects (Zheng & Lu, 2015). EEG signals were recorded during 3 sessions per subject, with 15 movie clips per session. For our analysis, we focused on 48 channels, excluding 14 channels that were difficult to assign to an anatomical region (Figure 1A). For feature extraction, we computed the differential entropy (DE) every second for five frequency bands: delta (1-3 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (31-50 Hz). These features were stacked over 10-second intervals. Our proposed model comprises a spectral-channel attention module (SCAM) and two layers of the transformer encoder (Figure 1B). SCAM, based on the squeeze-and-excitation (SE) block (Hu et al., 2018), adaptively updates features in the spectral and EEG channel domains by assigning weights that prioritize crucial information in each domain, and is applied to each brain region independently (Figure 1C). The SCAM-generated embedding vectors for each brain region are then input into the transformer encoder. Within the transformer encoder, we incorporate anatomy-aware self-attention in place of conventional self-attention (Figure 1D). This anatomy-aware self-attention employs a masking mechanism that restricts attention to embedding vectors corresponding to EEG channels within the same brain region. The transformer encoder outputs are flattened and processed through a fully-connected layer for emotion classification (neutral, positive, and negative). We conducted subject-dependent experiments on the SEED dataset: for each subject and session, the model was trained on the first 9 trials and tested on the remaining 6 trials. Model performance was assessed by averaging accuracy and macro F1-scores across all subjects and sessions.
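To make the feature-extraction step concrete, the sketch below computes band-wise DE over one-second windows under the common assumption that each band-limited EEG segment is approximately Gaussian, in which case DE reduces to 0.5·ln(2πeσ²). The sampling rate, filter order, and array shapes are illustrative assumptions rather than the exact pipeline used here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Frequency bands from the Methods text (Hz).
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (31, 50)}

def differential_entropy(segment):
    """DE of a band-limited segment under a Gaussian assumption:
    0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(segment))

def extract_de_features(eeg, fs=200, win_sec=1):
    """eeg: (n_channels, n_samples) array.
    Returns DE features of shape (n_windows, n_channels, n_bands),
    i.e., one DE value per second, per channel, per band.
    fs=200 Hz and the 4th-order filter are assumed settings."""
    n_ch, n_samp = eeg.shape
    win = fs * win_sec
    n_win = n_samp // win
    feats = np.zeros((n_win, n_ch, len(BANDS)))
    for b, (lo, hi) in enumerate(BANDS.values()):
        # Band-pass filter the signal for the current frequency band.
        bb, aa = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(bb, aa, eeg, axis=1)
        for w in range(n_win):
            seg = filtered[:, w * win:(w + 1) * win]
            feats[w, :, b] = [differential_entropy(ch) for ch in seg]
    return feats
```

Stacking ten consecutive one-second DE vectors then yields the 10-second inputs described above.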

Figure 1. (A) Partition of EEG channels according to the anatomy of the brain. (B) Model architecture. (C) Embedding methods. (D) Transformer encoder and anatomy-aware self-attention.
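As a minimal sketch of the anatomy-aware self-attention in Figure 1D, the code below masks attention scores so that each embedding vector attends only to vectors assigned to the same brain region; cross-region scores are set to -inf before the softmax. The region labels, tensor shapes, and single-head formulation are illustrative assumptions, not the exact configuration of the proposed model.

```python
import torch
import torch.nn.functional as F

def anatomy_mask(region_ids):
    """region_ids: (n_tokens,) tensor assigning each embedding vector to a brain
    region. Returns a boolean (n_tokens, n_tokens) mask that is True only where
    both tokens belong to the same region."""
    return region_ids.unsqueeze(0) == region_ids.unsqueeze(1)

def anatomy_aware_attention(q, k, v, region_ids):
    """Scaled dot-product attention restricted to within-region token pairs.
    q, k, v: (n_tokens, d) tensors."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5            # (n_tokens, n_tokens)
    scores = scores.masked_fill(~anatomy_mask(region_ids), float("-inf"))
    return F.softmax(scores, dim=-1) @ v                   # cross-region weights are zero

# Toy usage: 6 embedding vectors grouped into 3 hypothetical brain regions.
region_ids = torch.tensor([0, 0, 1, 1, 2, 2])
x = torch.randn(6, 32)
out = anatomy_aware_attention(x, x, x, region_ids)         # (6, 32)
```

Setting the mask to all True recovers conventional self-attention, which is the comparison used in the ablations reported below.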
Results:
Figure 2 shows the subject-dependent emotion recognition performance with and without SCAM and anatomy-aware self-attention. The highest accuracy of 90.63% was achieved when both SCAM and anatomy-aware self-attention were incorporated. Accuracy decreased by 1.36% when SCAM was removed and by 1.23% when anatomy-aware self-attention was removed. When both SCAM and anatomy-aware self-attention were removed, classification accuracy decreased by 2.07%.

Figure 2. Subject-dependent emotion recognition performance, demonstrating the impact of SCAM and anatomy-aware self-attention on emotion recognition.
Conclusions:
The results demonstrated that both SCAM and anatomy-aware self-attention improved emotion recognition accuracy compared to the model without these components. SCAM helped capture more relevant information in both the spectral and EEG channel domains, while anatomy-aware self-attention leveraged the anatomical structure of the brain to extract meaningful patterns. The proposed transformer-based deep learning model shows promise for EEG-based emotion recognition, with SCAM and anatomy-aware self-attention playing crucial roles in improving classification accuracy.
Modeling and Analysis Methods:
Classification and Predictive Modeling 2
Novel Imaging Acquisition Methods:
EEG 1
Keywords:
Electroencephalography (EEG)
Emotions
Machine Learning
Other - Transformer
1|2 Indicates the priority used for review
References:
Song, T., Zheng, W., Song, P., & Cui, Z. (2018). EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Transactions on Affective Computing, 11(3), 532-541.
Zhang, T., Zheng, W., Cui, Z., Zong, Y., & Li, Y. (2018). Spatial–temporal recurrent neural network for emotion recognition. IEEE Transactions on Cybernetics, 49(3), 839-847.
Joshi, V. M., & Ghongade, R. B. (2021). EEG based emotion detection using fourth order spectral moment and deep learning. Biomedical Signal Processing and Control, 68, 102755.
Zheng, W. L., & Lu, B. L. (2015). Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 7(3), 162-175.
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7132-7141).
Yu, F., & Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.