Abstract
Object recognition in real-world environments requires dealing with considerable ambiguity, yet the human visual system is highly robust to noisy viewing conditions. Here, we investigated the role of perceptual learning in the acquisition of robustness in both humans and deep neural networks (DNNs). Specifically, we sought to determine whether perceptual training with object images in Gaussian noise, drawn from certain animate or inanimate categories, would lead to category-specific or category-general improvements in human robustness. Moreover, might DNNs provide viable models of human perceptual learning? Both before and after training, we evaluated the noise threshold required for accurate recognition using novel object images. Human observers were quite robust to noise before training, but showed additional category-specific improvement after training with only a few hundred noisy object examples. In comparison, standard DNNs initially lacked robustness, then showed both category-general and category-specific learning after training with the same noisy examples. We further evaluated DNN models that were pre-trained with moderately noisy images to match human pre-training accuracy. Notably, these models only showed category-specific improvement, matching the overall pattern of learning exhibited by human observers. A layer-wise analysis of DNN responses revealed that category-general learning effects emerged in the lower layers, whereas category-specific improvements emerged in the higher layers. 
Our findings support the notion that robustness to noisy visual conditions arises through learning: humans likely acquire robustness from everyday encounters with real-world noise, and the additional category-specific improvements exhibited by humans and DNNs involve learning at higher levels of visual representation.
Author summary
We explored how humans and artificial neural networks learn to recognize objects under noisy and ambiguous conditions, a skill that is crucial for making sense of complex, real-world environments. Humans are naturally adept at identifying objects even when visibility is poor, such as on a rainy or snowy day, or when objects are partially hidden. We asked: if humans or neural networks are trained with very noisy images of objects, do they get better at the task? And if they are trained specifically with animate or inanimate object images, does recognition improve in general or only for the trained category? We found that humans became better at recognizing new object images in noisy conditions, but only for the categories they were trained on. Artificial networks initially struggled with noisy images but showed some general improvement from training, plus further benefits for the trained category. Interestingly, networks that were pre-trained to mimic the initial robustness of human observers showed only category-specific benefits of training, mirroring the effects of training in humans. Our findings highlight how humans adapt to challenging visual conditions, suggesting that learning plays an important role in understanding and navigating noisy, real-world settings.
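The evaluation paradigm described in the abstract — perturbing object images with additive Gaussian noise and measuring the highest noise level at which recognition stays accurate — can be sketched as a small simulation. This is a hedged illustration, not the authors' actual code: the `classify` callable, the noise levels, and the accuracy criterion are hypothetical stand-ins for whatever model and thresholding procedure the study used.

```python
import numpy as np

def add_gaussian_noise(image, noise_sd, rng=None):
    """Perturb an image (pixel values in [0, 1]) with additive Gaussian noise."""
    rng = np.random.default_rng(rng)
    noisy = image + rng.normal(0.0, noise_sd, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def noise_threshold(classify, image, label, noise_levels, n_trials=20, criterion=0.5):
    """Return the highest noise SD at which accuracy stays at or above `criterion`.

    `classify` is any callable mapping an image array to a predicted label
    (e.g., a human observer's response or a DNN's argmax output).
    """
    threshold = 0.0
    for sd in sorted(noise_levels):
        correct = sum(
            classify(add_gaussian_noise(image, sd, rng=trial)) == label
            for trial in range(n_trials)
        )
        if correct / n_trials >= criterion:
            threshold = sd  # still recognizable at this noise level
        else:
            break  # accuracy fell below criterion; stop the sweep
    return threshold
```

Under this sketch, a more robust observer or model is simply one whose measured threshold rises after training, and category-specific learning would appear as a threshold increase for trained categories only.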
Suggested Citation
Hojin Jang & Frank Tong, 2025.
"Category-specific perceptual learning of robust object recognition modelled using deep neural networks,"
PLOS Computational Biology, Public Library of Science, vol. 21(9), pages 1-19, September.
Handle: RePEc:plo:pcbi00:1013529
DOI: 10.1371/journal.pcbi.1013529