Authors:
- Hiroki Kojima
- Asako Toyama
- Shinsuke Suzuki
- Yuichi Yamashita
Abstract
Food preferences differ among individuals, and these variations reflect underlying personalities or mental tendencies. However, capturing and predicting these individual differences remains challenging. Here, we propose a novel method to predict individual food preferences using CLIP (Contrastive Language-Image Pre-Training), which captures both visual and semantic features of food images. Applying this method to food image rating data obtained from human subjects, we demonstrated our method's predictive capability, which achieved higher accuracy than methods using pixel-based embeddings or label-text-based embeddings. Our method can also be used to characterize individual traits as characteristic vectors in the embedding space. By analyzing these individual trait vectors, we found a systematic bias in the trait vectors of the high picky-eater group. In contrast, the group with relatively high levels of general psychopathology showed no bias in the distribution of trait vectors, but their preferences were significantly less well represented by a single trait vector per individual. Our results demonstrate that CLIP embeddings, which integrate both visual and semantic features, not only effectively predict food image preferences but also provide valuable representations of individual trait characteristics, suggesting potential applications for understanding and addressing food preference patterns in both research and clinical contexts.
Author summary
Food preferences vary greatly among individuals and can provide insights into personality traits and mental health patterns. Traditional approaches to understanding these preferences have been limited by their inability to capture the complex interplay between what we see and what we know about food.
In this study, we developed a new computational method using CLIP (Contrastive Language-Image Pre-Training), an artificial intelligence model that can analyze both visual features and semantic meaning simultaneously. We tested our approach on food rating data from 199 participants who evaluated 896 food images. Our method successfully predicted individual food preferences and revealed distinct patterns in people with different eating behaviors and mental health characteristics. Notably, individuals with picky eating tendencies showed preference patterns that systematically avoided healthy foods, while those with higher mental health symptom scores had less consistent preference patterns overall. These findings demonstrate that combining visual and semantic information provides a powerful tool for understanding food preferences, with potential applications in personalized nutrition, clinical assessment, and treatment of eating disorders.
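The core idea described above — mapping image embeddings to one person's ratings, with the resulting weight vector serving as that person's "trait vector" in embedding space — can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: real food-image embeddings from CLIP are replaced here by random stand-in vectors, ridge regression is one plausible choice of linear predictor, and all sizes and names are hypothetical.

```python
import numpy as np

# Stand-in "CLIP" image embeddings: in the actual study each food image
# would be embedded with a CLIP model; here random vectors play that role.
rng = np.random.default_rng(0)
n_images, dim = 200, 64                      # toy sizes (the study used 896 images)
X = rng.normal(size=(n_images, dim))         # one embedding per food image

def fit_trait_vector(X, ratings, lam=1.0):
    """Ridge-regression weights mapping embeddings to one person's ratings.

    The weight vector lives in the embedding space, so it can be read as a
    per-individual 'trait vector' (an assumption of this sketch, not
    necessarily the paper's exact estimator).
    """
    d = X.shape[1]
    # Closed-form ridge solution: (X^T X + lam*I) w = X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ ratings)

# Two simulated raters whose preferences share a common underlying direction.
true_w = rng.normal(size=dim)
ratings_a = X @ true_w + 0.1 * rng.normal(size=n_images)
ratings_b = X @ true_w + 0.1 * rng.normal(size=n_images)

w_a = fit_trait_vector(X, ratings_a)
w_b = fit_trait_vector(X, ratings_b)

# Comparing trait vectors (e.g., by cosine similarity) is one way to ask
# whether two individuals' preference structures point the same way.
cos = w_a @ w_b / (np.linalg.norm(w_a) * np.linalg.norm(w_b))
print(f"trait-vector cosine similarity: {cos:.2f}")
```

Under this reading, a group-level bias (as reported for picky eaters) would appear as trait vectors clustering in a shared direction, while poor fit of a single trait vector per individual (as reported for the high-psychopathology group) would appear as high residual error in the ridge fit.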
Suggested Citation
Hiroki Kojima, Asako Toyama, Shinsuke Suzuki & Yuichi Yamashita, 2025.
"Predicting individual food valuation via vision-language embedding model,"
PLOS Digital Health, Public Library of Science, vol. 4(10), pages 1-18, October.
Handle: RePEc:plo:pdig00:0001044
DOI: 10.1371/journal.pdig.0001044