
Improving classification and reconstruction of imagined images from EEG signals

Author

Listed:
  • Hirokatsu Shimizu
  • Ramesh Srinivasan

Abstract

Decoding brain activity related to specific tasks, such as imagining something, is important for brain-computer interface (BCI) control. While decoding of brain signals, such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) signals, recorded while observing or imagining visual images has been reported previously, the goal of this study was to further develop methods for improving training, performance, and interpretation of brain data. We applied a Sinc-EEGNet to decode brain activity during perception and imagination of visual stimuli, and added an attention module to extract the importance of each electrode or frequency band. We also reconstructed images from brain activity using a generative adversarial network (GAN). By combining the EEG recorded during a visual (perception) task and an imagination task, we boosted the accuracy of classifying EEG data in the imagination task and improved the quality of the GAN reconstructions. Our results indicate that the brain activity evoked during the visual task is also present in the imagination task and can be used for better classification of the imagined image. Using the attention module, we can derive the spatial weights in each frequency band and contrast spatial or frequency importance between tasks. Imagination tasks are classified by low-frequency EEG signals over temporal cortex, while perception tasks are classified by high-frequency EEG signals over occipital and frontal cortex. Combining the data sets in training yields a balanced model that improves classification of the imagination task without significantly changing performance on the visual task. Our approach not only improves performance and interpretability but also potentially reduces the training burden, since the accuracy of classifying a relatively hard, high-variability task (imagination) can be improved by combining its data with that of the relatively easy task of observing visual images.
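
As a rough illustration of the classification pipeline described in the abstract, the sketch below shows a Sinc-EEGNet-style EEG classifier with a simple per-band electrode attention module, written in PyTorch. It is not the authors' code: the channel count (64), epoch length (256 samples), number of bands (8), number of classes (4), and the use of an ordinary temporal convolution in place of Sinc-EEGNet's parameterized sinc filters are all assumptions made to keep the example self-contained.

    # Hypothetical sketch, not the authors' implementation. Assumes 64-channel
    # EEG epochs of 256 samples and 4 stimulus classes; an ordinary temporal
    # convolution stands in for Sinc-EEGNet's parameterized sinc filters.
    import torch
    import torch.nn as nn

    class SincEEGNetSketch(nn.Module):
        def __init__(self, n_channels=64, n_samples=256, n_bands=8, n_classes=4):
            super().__init__()
            # Temporal filtering front end: one learned frequency band per output channel.
            self.temporal = nn.Conv2d(1, n_bands, kernel_size=(1, 65),
                                      padding=(0, 32), bias=False)
            # Attention module: a learnable weight for each (band, electrode) pair.
            # After training, these weights can be inspected to contrast spatial
            # and frequency importance between perception and imagination tasks.
            self.attention = nn.Parameter(torch.ones(n_bands, n_channels))
            # Depthwise spatial convolution: one spatial filter per band,
            # collapsing the electrode dimension.
            self.spatial = nn.Conv2d(n_bands, n_bands,
                                     kernel_size=(n_channels, 1),
                                     groups=n_bands, bias=False)
            self.bn = nn.BatchNorm2d(n_bands)
            self.act = nn.ELU()
            self.pool = nn.AvgPool2d(kernel_size=(1, 8))
            self.classify = nn.Linear(n_bands * (n_samples // 8), n_classes)

        def forward(self, x):
            # x: (batch, n_channels, n_samples) EEG epochs
            x = x.unsqueeze(1)                          # (batch, 1, C, T)
            x = self.temporal(x)                        # (batch, bands, C, T)
            w = torch.softmax(self.attention, dim=-1)   # electrode weights per band
            x = x * w.unsqueeze(0).unsqueeze(-1)        # weight electrodes per band
            x = self.act(self.bn(self.spatial(x)))      # (batch, bands, 1, T)
            x = self.pool(x)                            # (batch, bands, 1, T//8)
            return self.classify(x.flatten(1))

    # Example: classify a random batch of 16 epochs.
    logits = SincEEGNetSketch()(torch.randn(16, 64, 256))

In a sketch like this, the combined training described in the abstract would simply mean pooling perception-task and imagination-task epochs into a single training set; the GAN used for image reconstruction would be a separate model conditioned on features from a classifier of this kind.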

Suggested Citation

  • Hirokatsu Shimizu & Ramesh Srinivasan, 2022. "Improving classification and reconstruction of imagined images from EEG signals," PLOS ONE, Public Library of Science, vol. 17(9), pages 1-16, September.
  • Handle: RePEc:plo:pone00:0274847
    DOI: 10.1371/journal.pone.0274847

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0274847
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0274847&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0274847?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Guohua Shen & Tomoyasu Horikawa & Kei Majima & Yukiyasu Kamitani, 2019. "Deep image reconstruction from human brain activity," PLOS Computational Biology, Public Library of Science, vol. 15(1), pages 1-23, January.
    2. Hojin Jang & Frank Tong, 2024. "Improved modeling of human vision by incorporating robustness to blur in convolutional neural networks," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    3. Serra E. Favila & Brice A. Kuhl & Jonathan Winawer, 2022. "Perception and memory have distinct spatial tuning properties in human visual cortex," Nature Communications, Nature, vol. 13(1), pages 1-21, December.
    4. Benjamin Lahner & Kshitij Dwivedi & Polina Iamshchinina & Monika Graumann & Alex Lascelles & Gemma Roig & Alessandro Thomas Gifford & Bowen Pan & SouYoung Jin & N. Apurva Ratan Murty & Kendrick Kay & …, 2024. "Modeling short visual events through the BOLD moments video fMRI dataset and metadata," Nature Communications, Nature, vol. 15(1), pages 1-26, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0274847. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.