Poster No:
2052
Submission Type:
Abstract Submission
Authors:
Ji-Hun Jo1, Dere Deji2, Boreom Lee2
Institutions:
1Gwangju Institute of Science and Technology (GIST), Gwangju-si, Jeollanam-do, 2Gwangju Institute of Science and Technology (GIST), Gwangju-si, Jeollanam-do
First Author:
Ji-Hun Jo
Gwangju Institute of Science and Technology (GIST)
Gwangju-si, Jeollanam-do
Co-Author(s):
Dere Deji
Gwangju Institute of Science and Technology (GIST)
Gwangju-si, Jeollanam-do
Boreom Lee
Gwangju Institute of Science and Technology (GIST)
Gwangju-si, Jeollanam-do
Introduction:
Many studies have reported on brain-machine interfaces (BMIs) using biosignals such as the electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG). Many BMI systems were developed for rehabilitation, but at the same time they have brought improvements in monitoring and analysis technologies [1].
In a previous study, we developed a hand gesture classification system using multimodal biosignals with a field-programmable gate array (FPGA) that implements a deep learning model [2]. The system showed high classification accuracy when using EMG alone and when combining EMG and EEG; only when using EEG alone was the accuracy lower than in other reports.
This study investigates whether classification accuracy can be improved when additional EEG characteristics are applied in the feature extraction layers of the hand gesture classification system. In addition, comparing event-related potential (ERP) characteristics across hand gestures for different grasp types can help determine a modified experimental protocol or model in future studies.
Methods:
This study used data from our previous work to prepare a hand gesture classification system consisting of biosignal sensors and a processing board containing an FPGA chip that implements a convolutional neural network (CNN)-based deep learning model. Each trial consisted of a cue and performance period (5 s) and a rest period (3 s). Ten healthy subjects participated (three females and seven males, 25 ± 5.5 years; IRB approval 20211005-HR-63-01-02).
The gestures (large/medium-diameter grasp, three-finger grasp, sphere grasp, prismatic pinch grasp, power flat grasp, cut, and rest) were chosen to improve performance on the Toronto Rehabilitation Institute Hand Function Test (TRI-HFT) and the Jebsen-Taylor Hand Function Test.
The system was connected to the EMG and EEG sensors and acquired signals in an on-device format.
An Ultracortex Mark IV headset and a Cyton board (OpenBCI Inc.) were used to acquire 8-channel EEG data (F3, F4, C3, Cz, C4, P3, Pz, and P4) at a 250 Hz sampling frequency. The signal is preprocessed by an Ultra96-V2 board, which contains an Arm Cortex multiprocessor system-on-chip (MPSoC) and a Xilinx Zynq-series FPGA. The implemented deep learning model is based on a convolutional neural network (CNN), which extracts features from the time-series data. The CNN is followed by a downstream multilayer perceptron (MLP) layer to learn nonlinearities for gesture classification.
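The CNN-plus-MLP arrangement can be sketched as follows. This is a minimal illustrative model for 8-channel, 250 Hz windows; the layer counts, kernel sizes, and hidden widths are assumptions for illustration, not the deployed FPGA model.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Illustrative 1-D CNN + downstream MLP for multichannel
    biosignal windows. Sizes are placeholder assumptions."""
    def __init__(self, n_channels=8, n_classes=7, window=250):
        super().__init__()
        # Convolutional feature extractor over the time axis
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        feat_len = 32 * (window // 16)  # length after two 4x poolings
        # Downstream MLP learns nonlinearities for classification
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_len, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x))

model = GestureCNN()
logits = model(torch.randn(2, 8, 250))  # two 1-s windows at 250 Hz
print(logits.shape)  # torch.Size([2, 7])
```

The convolutional stage plays the role of the feature extraction layers discussed in the Introduction; replacing or augmenting its input with other EEG characteristics is the modification this study examines.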
EEG input sources were band-pass filtered at 5-50 Hz with a 60 Hz notch, and the ERPs were then analyzed for peak and latency components in the mu rhythm (8-12 Hz) range [3]. ERP signals were measured at electrode C3 for the seven hand gestures.
Results:
ERP signals were measured at the C3 area for the seven grasp gestures. Each signal is an average over the 10 subjects for all gestures except rest. Maximum peak components were observed at approximately 500 ms. Statistical analyses were performed with SPSS 25 (IBM Inc.) using repeated-measures analysis of variance (ANOVA) and one-way ANOVA with grasp type as the factor; no significant differences in maximum peak amplitude were found (p = 0.139). However, the latency components showed significant differences (p = 0.042), and in Bonferroni post-hoc analysis the latency of the large-diameter grasp was significantly shorter than that of the medium-diameter grasp (p < 0.05).
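The analyses above were run in SPSS; an equivalent one-way ANOVA on per-subject latencies could be sketched in Python as below. The latency values here are synthetic placeholders, not the study's measurements, and the pairwise test shown is an ordinary t-test with a manual Bonferroni correction rather than SPSS's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-subject peak latencies (s) for three grasp types;
# placeholder values only, not the study's data.
large_diameter = rng.normal(0.48, 0.03, size=10)
medium_diameter = rng.normal(0.53, 0.03, size=10)
sphere_grasp = rng.normal(0.50, 0.03, size=10)

# One-way ANOVA with grasp type as the factor
f_stat, p_value = stats.f_oneway(large_diameter, medium_diameter, sphere_grasp)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Bonferroni-corrected pairwise comparison (3 possible pairs)
t, p_pair = stats.ttest_ind(large_diameter, medium_diameter)
p_corrected = min(p_pair * 3, 1.0)
print("large vs. medium, corrected p =", round(p_corrected, 4))
```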
Conclusions:
This study investigated the possibility of improved classification accuracy when additional EEG features are applied to the system. Most ERP patterns were similar to general gesture results [4], and the statistical significance observed in latency indicates that the model should also consider the time points of the peaks in each EEG recording, not only simple time-series features such as root mean square (RMS). In the future, these results can help develop a more effective rehabilitation orthosis.
Modeling and Analysis Methods:
EEG/MEG Modeling and Analysis
Motor Behavior:
Brain Machine Interface 1
Novel Imaging Acquisition Methods:
EEG 2
Keywords:
Data analysis
Electroencephalography (EEG)
Motor
1|2 Indicates the priority used for review
References:
[1] J. M. Veerbeek, A. C. Langbroek-Amersfoort, E. E. Van Wegen, C. G. Meskers, and G. Kwakkel, "Effects of robot-assisted therapy for the upper limb after stroke: a systematic review and meta-analysis," Neurorehabilitation and Neural Repair, vol. 31, no. 2, pp. 107-121, 2017, doi: 10.1177/154596831666695.
[2] M. D. Dere, J. -H. Jo and B. Lee, "Event-Driven Edge Deep Learning Decoder for Real-Time Gesture Classification and Neuro-Inspired Rehabilitation Device Control," in IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1-12, 2023, Art no. 4011612, doi: 10.1109/TIM.2023.3323962.
[3] J. Westerholz, T. Schack, and D. Koester, "Event-Related Brain Potentials for Goal-Related Power Grips," PLOS ONE, vol. 8, no. 7, p. e68501, 2013, doi: 10.1371/journal.pone.0068501.
[4] M. E. Cabrera, K. Novak, D. Foti, R. Voyles and J. P. Wachs, "What Makes a Gesture a Gesture? Neural Signatures Involved in Gesture Recognition," 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), Washington, DC, USA, 2017, pp. 748-753, doi: 10.1109/FG.2017.93.