Authors
Listed:
- Christian Jarvers
- Heiko Neumann
Abstract
Deep neural networks have been remarkably successful as models of the primate visual system. One crucial problem, however, is that they fail to account for the strong shape-dependence of primate vision. Whereas humans base their judgements of category membership to a large extent on shape, deep networks rely much more strongly on other features such as color and texture. While this problem has been widely documented, the underlying reasons remain unclear. We design simple, artificial image datasets in which shape, color, and texture features can be used to predict the image class. By training networks from scratch to classify images with single features and feature combinations, we show that some network architectures are unable to learn to use shape features, whereas others are able to use shape in principle but are biased towards the other features. We show that this bias can be explained by the interactions between the weight updates for many images in mini-batch gradient descent. This suggests that different learning algorithms with sparser, more local weight changes are required to make networks more sensitive to shape and to improve their capability to describe human vision.
Author summary
When humans recognize objects, the cue they rely on most is shape. In contrast, deep neural networks mostly use local features like color and texture to classify images. We investigated how this difference arises, using images of simple shapes like rectangles and the letters L and T, combined with color and texture features. By testing different feature combinations, we show that some networks are unable to learn about shape at all, whereas others can learn to recognize shapes in isolation but ignore shape if another feature is present. We show that this bias for color and texture arises from the way in which networks are trained: by averaging the learning signal over many images, the training algorithm favors simple features that are relatively similar across many images and removes sparser, more varied shape features. These insights can help build networks that are more sensitive to shape and work better as models of human vision.
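The averaging argument above can be illustrated with a small, self-contained sketch. The NumPy example below is not the authors' code or data; the linear model, squared loss, and the encoding of "color" as one consistent input dimension versus "shape" as a signal that moves between positions are illustrative assumptions. It only demonstrates why averaging per-example gradients over a mini-batch concentrates the learning signal on features that are similar across many images.

    # Illustrative sketch (hypothetical, not the paper's code): a linear model on a
    # toy batch where the "color" feature is identical for every image of a class,
    # while the "shape" feature is equally predictive but sits at a different input
    # position in each image. Averaging gradients over the batch favors color.
    import numpy as np

    rng = np.random.default_rng(0)
    batch_size, n_shape_positions = 256, 32

    # Labels in {-1, +1}.
    y = rng.choice([-1.0, 1.0], size=batch_size)

    # Feature 0 ("color"): the same value for every image of a class.
    color = y.copy()

    # Features 1..P ("shape"): informative, but located at a random position per image,
    # so the signal is spread across P input dimensions.
    shape = np.zeros((batch_size, n_shape_positions))
    pos = rng.integers(0, n_shape_positions, size=batch_size)
    shape[np.arange(batch_size), pos] = y

    X = np.column_stack([color, shape])   # (batch, 1 + P)
    w = np.zeros(X.shape[1])              # linear model trained with squared loss

    # One mini-batch gradient step: grad = mean_i (w . x_i - y_i) * x_i
    residual = X @ w - y
    grad = (residual[:, None] * X).mean(axis=0)

    print("|grad| on the color weight:       ", abs(grad[0]))
    print("max |grad| over the shape weights:", np.abs(grad[1:]).max())
    # The averaged gradient on the color dimension is roughly P times larger,
    # so gradient descent latches onto color long before it can exploit shape.

With the weights at zero, the color dimension receives a consistent contribution from every image in the batch, while each shape dimension receives one from only about batch_size / P images, so its averaged gradient is roughly P times smaller.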
Suggested Citation
Christian Jarvers & Heiko Neumann, 2024.
"Teaching deep networks to see shape: Lessons from a simplified visual world,"
PLOS Computational Biology, Public Library of Science, vol. 20(11), pages 1-32, November.
Handle:
RePEc:plo:pcbi00:1012019
DOI: 10.1371/journal.pcbi.1012019