Electroencephalography-targeted Graph Attention Network Model to Extract Spatiotemporal Features

Poster No:

1677 

Submission Type:

Abstract Submission 

Authors:

Jae-eon Kang1, Changha Lee1, Jong-Hwan Lee1

Institutions:

1Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea, Republic of

First Author:

Jae-eon Kang  
Department of Brain and Cognitive Engineering, Korea University
Seoul, Korea, Republic of

Co-Author(s):

Changha Lee  
Department of Brain and Cognitive Engineering, Korea University
Seoul, Korea, Republic of
Jong-Hwan Lee  
Department of Brain and Cognitive Engineering, Korea University
Seoul, Korea, Republic of

Introduction:

Brain-computer interfaces (BCIs) enhance our lives by bridging brains to external devices and deepen our understanding of the human brain. Utilizing electroencephalography (EEG), BCI studies have strived to develop algorithms that improve daily life while unraveling the characteristics of brain activity across various tasks. Within this realm, deep learning-based BCI studies have demonstrated remarkable performance [1], albeit with challenges in interpreting the underlying brain mechanisms. To address this interpretability gap, we propose a novel deep learning model, the EEG-Graph Attention Network (EEGAT), which leverages the graph structure and attention mechanism of the Graph Attention Network (GAT) [2] to enhance interpretability.

Methods:

Fig. 1 illustrates the architecture of the EEGAT model, which comprises a Feature Extraction Block and a Graph Learning Block. The Feature Extraction Block incorporates both temporal and spatial convolution kernels: temporal convolution kernels first discern temporal features from the raw EEG signals, after which spatial convolution kernels extract spatial features. These spatiotemporal features serve as node features in the subsequent graph construction. In the Graph Learning Block, the graph is constructed with the k-nearest neighbor (kNN) algorithm, and GAT modules with residual connections update the node features while their attention weights support interpretability. Following the GAT layers, the features are flattened and passed through a linear layer with a softmax activation, producing the final output of the EEGAT model.
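To make the data flow concrete, the following is a minimal PyTorch sketch of this forward pass, assuming torch_geometric for the GAT and kNN components. The kernel counts and lengths, the pooling, the per-sample graph construction, and the reading of each spatial feature map as one graph node are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv, knn_graph

class EEGATSketch(nn.Module):
    def __init__(self, n_channels=30, n_samples=512, n_classes=2,
                 f_temp=8, f_spat=16, k=4, heads=4):
        super().__init__()
        # Feature Extraction Block: temporal, then spatial, convolution
        self.temporal = nn.Conv2d(1, f_temp, (1, 64), padding="same", bias=False)
        self.spatial = nn.Conv2d(f_temp, f_spat, (n_channels, 1), bias=False)
        self.bn = nn.BatchNorm2d(f_spat)
        self.pool = nn.AvgPool2d((1, 8))
        node_dim = n_samples // 8              # feature length per node after pooling
        # Graph Learning Block: GAT layers with residual connections
        self.gat1 = GATConv(node_dim, node_dim, heads=heads, concat=False)
        self.gat2 = GATConv(node_dim, node_dim, heads=heads, concat=False)
        self.k = k
        self.fc = nn.Linear(f_spat * node_dim, n_classes)

    def forward(self, x):                      # x: (batch, 1, channels, time)
        h = self.pool(F.elu(self.bn(self.spatial(self.temporal(x)))))
        h = h.squeeze(2)                       # (batch, f_spat, T'): one node per feature map
        logits = []
        for g in h:                            # per-sample kNN graph over node features
            edge_index = knn_graph(g.detach(), k=self.k)
            z = F.elu(self.gat1(g, edge_index)) + g    # residual connection
            z = F.elu(self.gat2(z, edge_index)) + z
            logits.append(self.fc(z.flatten()))
        return torch.stack(logits)             # softmax is folded into the cross-entropy loss

out = EEGATSketch()(torch.randn(4, 1, 30, 512))    # -> (4, n_classes) class scores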

Our study employed three distinct datasets for evaluation: the fatigue dataset [3], DEAP [4], and BCI Competition IV 2a [5]. The fatigue dataset comprises EEG signals from 27 subjects engaged in a 90-minute virtual reality driving task. Preprocessing of the recorded 30 EEG channels included band-pass filtering (1-50 Hz), eye-blink removal, and downsampling to 128 Hz. Fatigue levels, determined from reaction times during lane-departure events, were labeled as fatigue and non-fatigue states. DEAP, a multimodal dataset of human emotional states, used 40 emotional music videos to evoke emotions in 32 subjects. The EEG signals (32 channels) underwent downsampling to 128 Hz, EOG removal, band-pass filtering (4-45 Hz), and re-referencing. Trials were segmented into nonoverlapping 4-second windows for model training, and valence ratings were thresholded to define the classes (at 5 for two classes; at 3 and 6 for three classes). The BCI Competition IV 2a dataset comprises nine subjects performing four motor imagery tasks: left hand (class 1), right hand (class 2), both feet (class 3), and tongue (class 4). The EEG signals (22 electrodes) were preprocessed with band-pass filtering (4-40 Hz), downsampling to 128 Hz, and exponential moving standardization; data were recorded across two sessions, each consisting of six runs of 48 trials.
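As an illustration of this style of preprocessing, a DEAP-like pipeline (band-pass filtering, downsampling to 128 Hz, and nonoverlapping 4-second windows) can be sketched as follows; the filter order, the zero-phase filtering, and the synthetic input are our assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess(eeg, fs_in, fs_out=128, band=(4.0, 45.0), win_sec=4):
    """eeg: (n_channels, n_samples) raw recording."""
    sos = butter(4, band, btype="bandpass", fs=fs_in, output="sos")
    eeg = sosfiltfilt(sos, eeg, axis=-1)               # zero-phase band-pass (4-45 Hz)
    eeg = resample_poly(eeg, fs_out, fs_in, axis=-1)   # downsample to 128 Hz
    win = fs_out * win_sec                             # 4 s nonoverlapping windows
    n_win = eeg.shape[-1] // win
    return eeg[:, :n_win * win].reshape(eeg.shape[0], n_win, win).swapaxes(0, 1)

# e.g., a 60 s, 32-channel recording at 512 Hz -> (15, 32, 512) windows
windows = preprocess(np.random.randn(32, 512 * 60), fs_in=512)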

Leave-one-subject-out (LOSO) validation was performed on all three datasets to evaluate model performance and demonstrate the reliability of the learned features. Additionally, we compared EEGAT against EEGNet [6] models with 4 and 8 temporal kernels to assess the effectiveness of the proposed model.
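Schematically, the LOSO protocol amounts to the loop below; build_model, fit, and evaluate are placeholder callables standing in for the actual training and test routines, which the abstract does not specify.

import numpy as np

def loso(X, y, build_model, fit, evaluate):
    """X[s]: (trials, 1, channels, time) and y[s]: (trials,) for subject s."""
    accs = []
    for test_subj in range(len(X)):
        train = [s for s in range(len(X)) if s != test_subj]
        X_tr = np.concatenate([X[s] for s in train])
        y_tr = np.concatenate([y[s] for s in train])
        model = build_model()
        fit(model, X_tr, y_tr)          # train on all remaining subjects
        accs.append(evaluate(model, X[test_subj], y[test_subj]))
    return np.mean(accs), np.std(accs)  # mean/std accuracy across held-out subjects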
Supporting Image: fig1_EEGAT.jpg
 

Results:

Fig. 2 summarizes the overall performance of the models. EEGAT achieved higher accuracy than the EEGNet models on all three datasets, and on the fatigue dataset and DEAP the improvement was statistically significant (p < 0.05). The attention weights revealed relationships among the trained weights of the temporal and spatial kernels, indicating specific spatiotemporal patterns captured by the model.
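For reference, torch_geometric's GATConv can return the per-edge attention coefficients on which such an analysis relies; the random node features and kNN graph below are stand-ins for trained EEGAT features.

import torch
from torch_geometric.nn import GATConv, knn_graph

x = torch.randn(16, 64)                    # 16 nodes (e.g., feature maps), 64 features each
edge_index = knn_graph(x, k=4)             # kNN graph, as in the Graph Learning Block
gat = GATConv(64, 64, heads=4, concat=False)
_, (edges, alpha) = gat(x, edge_index, return_attention_weights=True)
# alpha: (n_edges, heads) attention coefficients; averaging over heads gives one
# weight per directed edge, which can be inspected for spatiotemporal patterns
print(edges[:, :5], alpha.mean(dim=1)[:5])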
Supporting Image: fig2_performance.png
 

Conclusions:

Our proposed EEGAT outperforms previous models in accuracy, particularly on the fatigue dataset and DEAP. In addition, the graph structure and the per-node attention weights provide insight into the complex relationships among spatiotemporal patterns across diverse BCI tasks, thereby improving model interpretability.

Modeling and Analysis Methods:

Classification and Predictive Modeling 2
EEG/MEG Modeling and Analysis 1
Methods Development
Multivariate Approaches
Other Methods

Keywords:

Computational Neuroscience
Computing
Data analysis
Electroencephalography (EEG)
Emotions
Modeling
Multivariate
Other - Brain-computer interface (BCI); Graph attention network (GAT); Graph neural networks (GNN)

1|2 indicates the priority used for review

References:

[1] Khademi Z, Ebrahimi F and Kordy H M 2023 A review of critical challenges in MI-BCI: From conventional to deep learning methods J. Neurosci. Methods 383 109736
[2] Veličković P, Cucurull G, Casanova A, Romero A, Liò P and Bengio Y 2017 Graph attention networks arXiv preprint arXiv:1710.10903
[3] Cao Z, Chuang C-H, King J-K and Lin C-T 2019 Multi-channel EEG recordings during a sustained-attention driving task Sci. Data 6 19
[4] Koelstra S, Muhl C, Soleymani M, Lee J-S, Yazdani A, Ebrahimi T, Pun T, Nijholt A and Patras I 2011 DEAP: A database for emotion analysis using physiological signals IEEE Trans. Affect. Comput. 3 18–31
[5] Brunner C, Leeb R, Müller-Putz G, Schlögl A and Pfurtscheller G 2008 BCI Competition 2008–Graz data set A Inst. Knowl. Discov. Lab. Brain-Comput. Interfaces Graz Univ. Technol. 16 1–6
[6] Lawhern V J, Solon A J, Waytowich N R, Gordon S M, Hung C P and Lance B J 2018 EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces J. Neural Eng. 15 056013

Acknowledgment: This work was supported by the National Research Foundation (NRF) grant funded by the Korea government (MSIT) (NRF-2021M3E5D2A01022515, No. RS-2023-00218987), and in part by the Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government (23ZS1100, Core Technology Research for Self-Improving Integrated Artificial Intelligence System).