Author
Listed:
- Daniela Onita
(Department of Computer Science and Engineering, “1 Decembrie 1918” University of Alba Iulia, 5, Gabriel Bethlen, 515900 Alba Iulia, Romania
These authors contributed equally to this work.)
- Matei-Vasile Căpîlnaș
(Department of Computer Science and Engineering, “1 Decembrie 1918” University of Alba Iulia, 5, Gabriel Bethlen, 515900 Alba Iulia, Romania)
- Adriana Baciu (Birlutiu)
(Department of Computer Science and Engineering, “1 Decembrie 1918” University of Alba Iulia, 5, Gabriel Bethlen, 515900 Alba Iulia, Romania
These authors contributed equally to this work.)
Abstract
Recent advances in vision-language models such as BLIP-2 have made AI-generated image descriptions increasingly fluent and difficult to distinguish from human-authored texts. This paper investigates whether such differences can still be reliably detected by introducing a novel bilingual dataset of English and Romanian captions. The English subset was derived from the T4SA dataset, while AI-generated captions were produced with BLIP-2 and translated into Romanian using MarianMT; human-written Romanian captions were collected via manual annotation. We analyze the problem from two perspectives: (i) semantic alignment, using CLIP similarity, and (ii) supervised classification with both traditional and transformer-based models. Our results show that BERT achieves over 95% cross-validation accuracy (F1 = 0.95, ROC AUC = 0.99) in distinguishing AI from human texts, while simpler classifiers such as Logistic Regression also reach competitive scores (F1 ≈ 0.88). Beyond classification, semantic and linguistic analyses reveal systematic cross-lingual differences: English captions are significantly longer and more verbose, whereas Romanian texts—often more concise—exhibit higher alignment with visual content. Romanian was chosen as a representative low-resource language, where studying such differences provides insights into multilingual AI detection and challenges in vision-language modeling. These findings emphasize the novelty of our contribution: a publicly available bilingual dataset and the first systematic comparison of human vs. AI-generated captions in both high- and low-resource languages.
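The semantic-alignment perspective described in the abstract reduces to comparing an image embedding with a caption embedding in CLIP's shared space, typically via cosine similarity. The sketch below illustrates that scoring step only; the vectors are small placeholder lists standing in for actual CLIP encoder outputs, and the caption labels are hypothetical examples, not data from the paper.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Placeholder vectors standing in for CLIP image/text embeddings:
image_emb = [0.2, 0.8, 0.1, 0.5]
caption_a = [0.1, 0.9, 0.2, 0.4]  # e.g., a well-aligned, concise caption
caption_b = [0.7, 0.1, 0.8, 0.2]  # e.g., a poorly aligned caption

score_a = cosine_similarity(image_emb, caption_a)
score_b = cosine_similarity(image_emb, caption_b)
assert score_a > score_b  # the better-aligned caption scores higher
```

In a real pipeline, the embeddings would come from a pretrained CLIP model's image and text encoders, with captions in each language scored against the same image to compare cross-lingual alignment.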
Suggested Citation
Daniela Onita & Matei-Vasile Căpîlnaș & Adriana Baciu (Birlutiu), 2025.
"Distinguishing Human- and AI-Generated Image Descriptions Using CLIP Similarity and Transformer-Based Classification,"
Mathematics, MDPI, vol. 13(19), pages 1-19, October.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:19:p:3228-:d:1766935