
A Masked Self-Supervised Pretraining Method for Face Parsing

Authors

Listed:
  • Zhuang Li

    (Department of Control Science and Engineering, Tongji University, Shanghai 201804, China; Ant Group, Hangzhou 310013, China)

  • Leilei Cao

    (Ant Group, Hangzhou 310013, China)

  • Hongbin Wang

    (Ant Group, Hangzhou 310013, China)

  • Lihong Xu

    (Department of Control Science and Engineering, Tongji University, Shanghai 201804, China)

Abstract

Face parsing aims to partition the face into different semantic parts, which can be applied to many downstream tasks, e.g., face make-up, face swapping, and face animation. With the ubiquity of cameras, facial images are easy to acquire; however, pixel-wise manual labeling is time-consuming and labor-intensive, which motivates us to exploit unlabeled data. In this paper, we present a self-supervised learning method that makes full use of unlabeled facial images for face parsing. In particular, we randomly mask some patches in the central area of facial images, and the model is required to reconstruct the masked patches. This self-supervised pretraining enables the model to capture facial feature representations from the unlabeled data. After the self-supervised pretraining, the model is fine-tuned on a small amount of labeled data for the face parsing task. Experimental results show that the model achieves better face parsing performance with the help of the self-supervised pretraining, which greatly decreases the labeling cost. Our approach achieves 74.41 mIoU on the LaPa test set when fine-tuned on only 0.2% of the labeled training data, surpassing the directly trained model by a large margin of +5.02 mIoU. In addition, our approach achieves a new state of the art on the LaPa and CelebAMask-HQ test sets.
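The abstract is specific enough to sketch the pretraining step it describes: patchify an image, mask a random subset of patches in the central face area, and train the network to reconstruct the masked pixels. The following PyTorch sketch illustrates one such training step under stated assumptions; the masking ratio, central window, patch size, model dimensions, and all module names are hypothetical and do not reproduce the authors' implementation.

```python
# Minimal sketch of the masked pretraining objective described in the
# abstract: mask random patches in the central face region, reconstruct
# them, and compute the loss only on the masked positions. All sizes,
# ratios, and module names are illustrative assumptions, not the
# authors' code.
import torch
import torch.nn as nn

def central_patch_mask(n_side, central_frac=0.5, mask_ratio=0.4):
    """Boolean mask over an (n_side x n_side) patch grid; True = masked.

    Only patches inside a central window (where the face typically lies)
    are masking candidates. The window size and ratio are assumptions.
    """
    lo = int(n_side * (1 - central_frac) / 2)
    hi = n_side - lo
    mask = torch.zeros(n_side, n_side, dtype=torch.bool)
    central = [(r, c) for r in range(lo, hi) for c in range(lo, hi)]
    picks = torch.randperm(len(central))[: int(len(central) * mask_ratio)]
    for i in picks.tolist():
        mask[central[i]] = True
    return mask.flatten()  # shape: (n_side * n_side,)

class TinyMaskedAutoencoder(nn.Module):
    """Patchify -> mask central patches -> encode -> reconstruct pixels.

    Positional embeddings and other details are omitted for brevity.
    """
    def __init__(self, img_size=224, patch=16, dim=256):
        super().__init__()
        self.patch, self.n_side = patch, img_size // patch
        pdim = 3 * patch * patch                       # pixels per patch
        self.embed = nn.Linear(pdim, dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(dim, pdim)

    def patchify(self, x):
        b, c, _, _ = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)          # B,C,gh,gw,p,p
        return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)

    def forward(self, imgs):
        patches = self.patchify(imgs)                  # B, N, pdim
        mask = central_patch_mask(self.n_side).to(imgs.device)
        tokens = self.embed(patches)
        tokens[:, mask] = self.mask_token              # hide masked content
        recon = self.decode(self.encoder(tokens))
        # Reconstruction loss is computed on the masked patches only.
        return ((recon - patches) ** 2)[:, mask].mean()

# Usage: one pretraining step on a batch of (stand-in) unlabeled faces.
model = TinyMaskedAutoencoder()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(torch.rand(2, 3, 224, 224))
loss.backward()
opt.step()
print(f"masked-reconstruction loss: {loss.item():.4f}")
```

Restricting the loss to masked positions forces the encoder to infer facial structure from the visible context rather than copy pixels through. For the downstream task, one would presumably discard the reconstruction head and attach a pixel-wise segmentation head to the pretrained encoder before fine-tuning on the labeled parsing data.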

Suggested Citation

  • Zhuang Li & Leilei Cao & Hongbin Wang & Lihong Xu, 2022. "A Masked Self-Supervised Pretraining Method for Face Parsing," Mathematics, MDPI, vol. 10(12), pages 1-13, June.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:12:p:2002-:d:835590

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/12/2002/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/12/2002/
    Download Restriction: no

