Printed from https://ideas.repec.org/a/plo/pcbi00/1010878.html

Unsupervised learning reveals interpretable latent representations for translucency perception

Authors
  • Chenxi Liao
  • Masataka Sawayama
  • Bei Xiao

Abstract

Humans constantly assess the appearance of materials to plan actions, such as stepping on icy roads without slipping. Visual inference of materials is important but challenging because a given material can appear dramatically different across scenes. This problem especially stands out for translucent materials, whose appearance strongly depends on lighting, geometry, and viewpoint. Despite this, humans can still distinguish between different materials, and it remains an open question how to systematically discover the visual features pertinent to material inference from natural images. Here, we develop an unsupervised style-based image generation model to identify perceptually relevant dimensions of translucent material appearance from photographs. We find that our model, with its layer-wise latent representation, can synthesize images of diverse and realistic materials. Importantly, without supervision, human-understandable scene attributes, including the object’s shape, material, and body color, spontaneously emerge in the model’s layer-wise latent space in a scale-specific manner. By embedding an image into the learned latent space, we can manipulate specific layers’ latent codes to modify the appearance of the object in the image. Specifically, we find that manipulating the early layers (coarse spatial scale) transforms the object’s shape, while manipulating the later layers (fine spatial scale) modifies its body color. The middle layers of the latent space selectively encode translucency features, and manipulating these layers coherently modifies the translucency appearance without changing the object’s shape or body color. Moreover, we find that the middle layers of the latent space can successfully predict human translucency ratings, suggesting that translucent impressions are established from mid-to-low spatial scale features. This layer-wise latent representation allows us to systematically discover perceptually relevant image features for human translucency perception. Together, our findings reveal that learning the scale-specific statistical structure of natural images might be crucial for humans to efficiently represent material properties across contexts.

Author summary: Translucency is an essential visual phenomenon that facilitates our interactions with the environment. Perception of translucent materials (i.e., materials that transmit light) is challenging to study due to the high perceptual variability of their appearance across different scenes. We present the first image-computable model that can predict human translucency judgments based on unsupervised learning from natural photographs of translucent objects. We train a deep image generation network to synthesize realistic translucent appearances from unlabeled data and learn a layer-wise latent representation that captures the statistical structure of images at multiple spatial scales. By manipulating specific layers of the latent representation, we can independently modify certain visual attributes of the generated object, such as its shape, material, and color, without affecting the others. In particular, we find that the middle layers of the latent space, which represent mid-to-low spatial scale features, can predict human perception. In contrast, pixel-based embeddings from dimensionality reduction methods (e.g., t-SNE) do not correlate with perception. Our results suggest that a scale-specific representation of visual information might be crucial for humans to perceive materials. We provide a systematic framework for discovering perceptually relevant image features from natural stimuli for perceptual inference tasks, which is valuable for understanding both human and computer vision.
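The layer-wise manipulation the abstract describes can be sketched in miniature. This is not the authors' code: it assumes a StyleGAN-style W+ latent (a stack of per-layer codes, here an 18×512 layout) and hypothetical layer groupings; it only illustrates the mechanism by which swapping the middle-layer codes would transfer material-related features while leaving shape (early layers) and body color (later layers) untouched.

```python
import numpy as np

NUM_LAYERS, LATENT_DIM = 18, 512  # W+ layout typical of StyleGAN-style models (assumed)

def mix_layers(w_source, w_target, layers):
    """Return a copy of w_source whose codes at `layers` come from w_target.

    w_source, w_target: (NUM_LAYERS, LATENT_DIM) layer-wise latent codes.
    layers: layer indices to swap (e.g., the middle layers for material).
    """
    w_mixed = w_source.copy()
    w_mixed[list(layers)] = w_target[list(layers)]
    return w_mixed

# Hypothetical scale-specific layer groups, following the paper's coarse/middle/fine split.
SHAPE_LAYERS = list(range(0, 4))      # early layers: object shape (coarse scale)
MATERIAL_LAYERS = list(range(4, 10))  # middle layers: translucency / material
COLOR_LAYERS = list(range(10, 18))    # later layers: body color (fine scale)

rng = np.random.default_rng(0)
w_a = rng.standard_normal((NUM_LAYERS, LATENT_DIM))  # embedding of image A
w_b = rng.standard_normal((NUM_LAYERS, LATENT_DIM))  # embedding of image B

# Transfer B's material onto A: only the middle-layer codes change.
w_ab = mix_layers(w_a, w_b, MATERIAL_LAYERS)
assert np.allclose(w_ab[SHAPE_LAYERS], w_a[SHAPE_LAYERS])        # shape preserved
assert np.allclose(w_ab[COLOR_LAYERS], w_a[COLOR_LAYERS])        # body color preserved
assert np.allclose(w_ab[MATERIAL_LAYERS], w_b[MATERIAL_LAYERS])  # material swapped
```

In the paper, the embeddings come from inverting real photographs into the generator's latent space, and the mixed code is decoded back into an image; the toy arrays above stand in for those embeddings.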

Suggested Citation

  • Chenxi Liao & Masataka Sawayama & Bei Xiao, 2023. "Unsupervised learning reveals interpretable latent representations for translucency perception," PLOS Computational Biology, Public Library of Science, vol. 19(2), pages 1-31, February.
  • Handle: RePEc:plo:pcbi00:1010878
    DOI: 10.1371/journal.pcbi.1010878

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010878
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1010878&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1010878?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1010878. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.