Author
Listed:
- Fan Xiong
- Mengzhao Fan
- Xu Yang
- Chenxiao Wang
- Jinli Zhou
Abstract
Emotion recognition plays a significant role in artificial intelligence and human-computer interaction. Electroencephalography (EEG) signals, because they directly reflect brain activity, have become an essential tool in emotion recognition research. However, the low dimensionality of sparse-channel EEG data makes it difficult to extract effective features. This paper proposes a sparse-channel EEG emotion recognition method based on the CNN-KAN-F2CA network to address limited feature extraction and cross-subject variability. Through a feature mapping strategy, the method maps Differential Entropy (DE), Power Spectral Density (PSD), Emotion Valence Index (EVI), and Asymmetry Index (ASI) features to pseudo-RGB images, integrating frequency-domain and spatial information from the sparse channels and providing multi-dimensional input for CNN feature extraction. By combining the KAN module with a fast Fourier transform-based F2CA attention mechanism, the model fuses frequency-domain and spatial features for accurate classification of complex emotional signals. Experimental results show that the CNN-KAN-F2CA model performs comparably to multi-channel models while using only four EEG channels. Training on short time segments reduces the impact of individual differences and significantly improves generalization in cross-subject emotion recognition. Extensive experiments on the SEED and DEAP datasets demonstrate the proposed method's superior performance in emotion classification. In the merged-dataset experiments, accuracy reached 97.985% on the SEED three-class task and 91.718% on the DEAP four-class task. In the subject-dependent experiments, the average accuracy was 97.45% on the SEED three-class task and 89.16% on the DEAP four-class task.
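To make the feature-to-image mapping concrete, the sketch below shows one way band-wise PSD and DE features from a few EEG channels could be stacked into a pseudo-RGB array for a CNN. The sampling rate, band edges, channel ordering, and the asymmetry computation are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch: mapping band-wise DE and PSD features from a few
# EEG channels into a pseudo-RGB "image" for a CNN. Channel names, band
# edges, and the grid layout are assumptions, not the authors' method.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(segment, fs=FS):
    """Per-channel, per-band PSD (Welch) and Gaussian differential entropy."""
    freqs, pxx = welch(segment, fs=fs, nperseg=fs)   # segment: (n_channels, n_samples)
    psd, de = [], []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        band_power = pxx[:, mask].mean(axis=1)       # mean PSD in the band
        psd.append(band_power)
        # DE of a zero-mean Gaussian: 0.5 * ln(2 * pi * e * variance);
        # band power is used as a variance proxy here (a common convention).
        de.append(0.5 * np.log(2 * np.pi * np.e * band_power))
    return np.stack(psd), np.stack(de)               # each: (n_bands, n_channels)

def to_pseudo_rgb(segment):
    """Stack PSD, DE, and a simple asymmetry map into a 3-plane image."""
    psd, de = band_features(segment)
    # Illustrative asymmetry: left-right DE difference broadcast over channels,
    # assuming the channels are ordered [left, left, right, right].
    asi = de[:, :2].mean(axis=1, keepdims=True) - de[:, 2:].mean(axis=1, keepdims=True)
    asi = np.repeat(asi, segment.shape[0], axis=1)
    img = np.stack([psd, de, asi])                   # (3, n_bands, n_channels)
    # Per-plane min-max normalisation so the CNN sees comparable value ranges.
    mins = img.min(axis=(1, 2), keepdims=True)
    maxs = img.max(axis=(1, 2), keepdims=True)
    return (img - mins) / (maxs - mins + 1e-8)

# Example: one 1-second segment from 4 channels of random data.
rgb = to_pseudo_rgb(np.random.randn(4, FS))
print(rgb.shape)                                     # (3, 4, 4)
```

Each plane of the resulting array plays the role of one colour channel, which is what allows a standard image CNN to consume sparse-channel EEG features.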
Suggested Citation
Fan Xiong & Mengzhao Fan & Xu Yang & Chenxiao Wang & Jinli Zhou, 2025.
"Research on emotion recognition using sparse EEG channels and cross-subject modeling based on CNN-KAN-F2CA model,"
PLOS ONE, Public Library of Science, vol. 20(5), pages 1-21, May.
Handle:
RePEc:plo:pone00:0322583
DOI: 10.1371/journal.pone.0322583
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0322583. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.