Printed from https://ideas.repec.org/a/gam/jmathe/v9y2021i12p1437-d578457.html

Multi-Output Learning Based on Multimodal GCN and Co-Attention for Image Aesthetics and Emotion Analysis

Authors

Listed:
  • Haotian Miao

    (School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China)

  • Yifei Zhang

    (School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China)

  • Daling Wang

    (School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China)

  • Shi Feng

    (School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China)

Abstract

With the development of social networks and intelligent terminals, sharing and acquiring images has become increasingly convenient. The massive growth in the number of social images raises the demand for automatic image processing, especially from the aesthetic and emotional perspectives. Both aesthetics assessment and emotion recognition require the computer to simulate high-level visual perception and understanding, which belongs to the field of image processing and pattern recognition. However, existing methods often ignore the prior knowledge of images and the intrinsic relationship between the aesthetic and emotional perspectives. Recently, machine learning and deep learning have become powerful tools for solving computational problems such as image processing and pattern recognition: both images and abstract concepts can be converted into numerical matrices, and the mapping relations between them can then be established mathematically. In this work, we propose an end-to-end multi-output deep learning model based on a multimodal Graph Convolutional Network (GCN) and co-attention for joint aesthetic and emotion analysis. In our model, a stacked multimodal GCN encodes features under the guidance of a correlation matrix, and a co-attention module helps the aesthetic and emotion feature representations learn from each other interactively. Experimental results indicate that the proposed model achieves competitive performance on the IAE dataset. Promising results on the AVA and ArtPhoto datasets also demonstrate the generalization ability of our model.
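The pipeline described in the abstract (graph-convolutional feature encoding guided by a correlation matrix, followed by co-attention between the aesthetic and emotion branches) can be sketched in miniature. The dependency-free Python below is an illustrative assumption about how such components typically operate, not the paper's implementation: the correlation matrix `A_corr`, the identity weights `W`, the toy feature matrices, and the residual co-attention update are all hypothetical.

```python
import math

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def gcn_layer(A, X, W):
    """One graph-convolution step, H = ReLU(D^-1 A X W), where A is a
    (hypothetical) correlation matrix over feature nodes."""
    deg = [sum(r) for r in A]
    A_norm = [[v / d if d else 0.0 for v in r] for r, d in zip(A, deg)]
    H = matmul(matmul(A_norm, X), W)
    return [[max(0.0, v) for v in r] for r in H]

def co_attention(F_a, F_e):
    """Toy co-attention: each branch attends over the other branch's
    feature nodes and adds the attended context back (residual update)."""
    C = matmul(F_a, transpose(F_e))           # affinity matrix, shape (n_a, n_e)
    A_a = [softmax(r) for r in C]             # aesthetic -> emotion attention
    A_e = [softmax(r) for r in transpose(C)]  # emotion -> aesthetic attention
    ctx_a = matmul(A_a, F_e)
    ctx_e = matmul(A_e, F_a)
    F_a_new = [[x + c for x, c in zip(fr, cr)] for fr, cr in zip(F_a, ctx_a)]
    F_e_new = [[x + c for x, c in zip(fr, cr)] for fr, cr in zip(F_e, ctx_e)]
    return F_a_new, F_e_new

# Toy data: 2 aesthetic feature nodes and 3 emotion feature nodes, dim 2.
F_a = [[1.0, 0.0], [0.0, 1.0]]
F_e = [[0.5, 0.5], [1.0, -1.0], [0.0, 2.0]]
A_corr = [[1.0, 0.5], [0.5, 1.0]]  # invented correlation matrix among aesthetic nodes
W = [[1.0, 0.0], [0.0, 1.0]]       # identity weights stand in for learned parameters

H_a = gcn_layer(A_corr, F_a, W)    # GCN-encoded aesthetic features
out_a, out_e = co_attention(H_a, F_e)
print(len(out_a), len(out_e))      # 2 3
```

In the actual model the weights would be learned end-to-end, the GCN would be stacked over multimodal features, and each enriched representation would feed its own output head, giving the multi-output objective the title refers to.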

Suggested Citation

  • Haotian Miao & Yifei Zhang & Daling Wang & Shi Feng, 2021. "Multi-Output Learning Based on Multimodal GCN and Co-Attention for Image Aesthetics and Emotion Analysis," Mathematics, MDPI, vol. 9(12), pages 1-17, June.
  • Handle: RePEc:gam:jmathe:v:9:y:2021:i:12:p:1437-:d:578457
    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/9/12/1437/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/9/12/1437/
    Download Restriction: no

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Xiaodan Zhang & Qiao Song & Gang Liu, 2022. "Multimodal Image Aesthetic Prediction with Missing Modality," Mathematics, MDPI, vol. 10(13), pages 1-19, July.
    2. Xiaodan Zhang & Xun Zhang & Yuan Xiao & Gang Liu, 2022. "Theme-Aware Semi-Supervised Image Aesthetic Quality Assessment," Mathematics, MDPI, vol. 10(15), pages 1-18, July.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:9:y:2021:i:12:p:1437-:d:578457. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.