Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/144350
Title: Multi-view graph fusion of self-weighted EEG feature representations for speech imagery decoding
Authors: Zhao, Zhenye
Li, Yibing
Peng, Yong
Camilleri, Kenneth P.
Kong, Wanzeng
Keywords: Electroencephalography
Brain -- Diseases -- Diagnosis
Brain-computer interfaces
User interfaces (Computer systems)
Human-computer interaction
Speech disorders -- Patients -- Means of communication
Issue Date: 2025
Publisher: Elsevier B.V.
Citation: Zhao, Z., Li, Y., Peng, Y., Camilleri, K. P., & Kong, W. (2025). Multi-view graph fusion of self-weighted EEG feature representations for speech imagery decoding. Journal of Neuroscience Methods, 418, 110413.
Abstract: Background: Electroencephalogram (EEG)-based speech imagery is an emerging brain–computer interface paradigm that enables people with speech disabilities to communicate naturally and intuitively with external devices or other people. Decoding performance in current speech imagery research remains limited, in part because there is still no consensus on which domain features are the most discriminative. New method: To adaptively capture the complementary information carried by different domain features, we treat each domain as a view and propose a multi-view graph fusion of self-weighted EEG feature representations (MVGSF) model, which learns a consensus graph from multi-view EEG features and uses it to decode imagery intentions effectively. Because the EEG features within each view differ in discriminative ability, MVGSF incorporates a view-dependent feature importance exploration strategy. Results: (1) MVGSF exhibits outstanding performance on two public speech imagery datasets; (2) the consensus graph learned from multi-view features effectively characterizes the relationships among EEG samples in a progressive manner; (3) task-related insights are explored, including the feature importance-based identification of critical EEG channels and frequency bands in speech imagery decoding. Comparison with existing methods: We compared MVGSF with single-view counterparts, other multi-view models, and state-of-the-art models. MVGSF achieved the highest accuracy, with average accuracies of 78.93% on the 2020IBCIC3 dataset and 53.85% on the KaraOne dataset. Conclusions: MVGSF effectively integrates features from multiple domains to enhance decoding capability. Furthermore, through the learned feature importance, MVGSF contributes to identifying the EEG spatial-frequency patterns involved in speech imagery decoding.
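The core idea described in the abstract — building one similarity graph per feature view and fusing them into a consensus graph with automatically learned view weights — can be illustrated with a minimal sketch. This is not the authors' MVGSF implementation: the Gaussian-kernel graph, the inverse-distance auto-weighting rule, and all function names below are assumptions chosen to show the general multi-view graph fusion pattern.

```python
import numpy as np

def view_graph(X, sigma=1.0):
    """Gaussian-kernel similarity graph from one view's features (n_samples x n_features)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fuse_views(views, n_iter=10):
    """Fuse per-view graphs into a consensus graph with self-learned view weights.

    Each view's weight is inversely related to its distance from the current
    consensus graph -- a common auto-weighting heuristic in multi-view learning,
    used here as a stand-in for the paper's actual objective.
    """
    graphs = [view_graph(X) for X in views]
    S = np.mean(graphs, axis=0)                     # start from the uniform average
    for _ in range(n_iter):
        dists = np.array([np.linalg.norm(S - G) for G in graphs])
        w = 1.0 / (2.0 * dists + 1e-12)             # closer views get larger weights
        w = w / w.sum()
        S = sum(wi * G for wi, G in zip(w, graphs)) # re-fuse with updated weights
    return S, w

# Usage: two hypothetical views (e.g. time-domain and frequency-domain features)
rng = np.random.default_rng(0)
views = [rng.normal(size=(8, 4)), rng.normal(size=(8, 6))]
S, w = fuse_views(views)
```

The consensus graph `S` is symmetric and the view weights `w` sum to one; in MVGSF the fused graph would then drive the classification of imagery intentions, and per-feature weights (omitted here) would expose channel/band importance.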
URI: https://www.um.edu.mt/library/oar/handle/123456789/144350
Appears in Collections:Scholarly Works - FacEngSCE

Files in This Item:
File: Multi_view_graph_fusion_of_self_weighted_EEG_feature_representations_for_speech_imagery_decoding_2025.pdf
Description: Restricted Access
Size: 3.44 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.