Fine-tuning image-to-text models on Liechtenstein tourist attractions

Authors

  • Pejman Ebrahimi (University of Liechtenstein)
  • Johannes Schneider (University of Liechtenstein)

Abstract

Adapting pre-trained artificial intelligence models to domain-specific tasks is essential for many business applications, yet domain-specific data is often scarce and expensive to collect. Moreover, fine-tuning on small datasets is challenging, as it carries risks of overfitting and catastrophic forgetting. This paper systematically investigates the effectiveness of fine-tuning pre-trained image-to-text models for domain-specific applications, emphasizing how model performance scales with dataset size. We compare two state-of-the-art architectures, Generative Image-to-Text (GIT) and Florence-2, using a small and a large dataset of Liechtenstein tourist attractions. Our analysis reveals a nuanced relationship between model architecture and data efficiency. On the small dataset, GIT outperformed Florence-2 in BLEU score (0.71 vs. 0.03). With the larger dataset, however, Florence-2 surpassed GIT by 33–37%. Similarly, CIDEr scores improved dramatically, from 0.00 to 0.97 for GIT and from 0.33 to 0.95 for Florence-2, underscoring the critical importance of data volume. Our results suggest that fine-tuned models can generate contextually accurate captions that capture architectural details, historical context, and geographical information about tourist attractions, and that the approach may also benefit other domains such as cultural heritage preservation and education. Our methodology emphasizes computational efficiency, requiring less than 3 GB of GPU memory for both GIT and Florence-2, which makes these approaches accessible to organizations with limited resources. This research contributes both theoretical insights into model scaling properties and practical guidance on selecting architectures based on available data resources. The results demonstrate that while fine-tuning can enable reasonable performance even with limited domain-specific data, architecture selection should be informed by anticipated data availability, and evaluating multiple models is highly recommended.
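
This listing does not include the authors' code or data. To make the workflow described in the abstract concrete, the following is a minimal Python sketch of that kind of pipeline: load a pre-trained GIT checkpoint from Hugging Face, generate a caption for one image, and score it against a reference caption with BLEU. The checkpoint name (microsoft/git-base), the image file, and the reference sentence are illustrative assumptions, not the authors' actual setup; a fine-tuned or Florence-2 checkpoint would be loaded the same way (Florence-2 additionally requires trust_remote_code=True).

    # Minimal sketch (not the authors' code): caption an image with a
    # pre-trained GIT checkpoint and score it against one reference with BLEU.
    # Assumes transformers, evaluate, torch, and Pillow are installed;
    # the image path and reference caption below are hypothetical.
    from PIL import Image
    import evaluate
    from transformers import AutoProcessor, AutoModelForCausalLM

    processor = AutoProcessor.from_pretrained("microsoft/git-base")
    model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

    image = Image.open("vaduz_castle.jpg")  # hypothetical input image
    pixel_values = processor(images=image, return_tensors="pt").pixel_values

    # Generate a caption for the image.
    generated_ids = model.generate(pixel_values=pixel_values, max_new_tokens=40)
    caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # BLEU against a single hand-written reference caption (illustrative).
    bleu = evaluate.load("bleu")
    result = bleu.compute(
        predictions=[caption],
        references=[["Vaduz Castle overlooks the capital of Liechtenstein."]],
    )
    print(caption, result["bleu"])

The base GIT checkpoint used in this sketch is small enough that inference runs comfortably within the sub-3 GB GPU budget the abstract reports, though the paper's memory figures refer to its own fine-tuning configuration rather than this sketch.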

Suggested Citation

  • Pejman Ebrahimi & Johannes Schneider, 2025. "Fine-tuning image-to-text models on Liechtenstein tourist attractions," Electronic Markets, Springer; IIM University of St. Gallen, vol. 35(1), pages 1-25, December.
  • Handle: RePEc:spr:elmark:v:35:y:2025:i:1:d:10.1007_s12525-025-00806-7
    DOI: 10.1007/s12525-025-00806-7

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s12525-025-00806-7
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s12525-025-00806-7?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source where your library subscription provides access to this item

    As access to this document is restricted, you may want to search for a different version of it.

    More about this item

    Keywords

    Image-to-text models; Fine-tuning; Domain-specific applications; Evaluation metrics (BLEU, CIDEr, ROUGE); Liechtenstein tourist attractions; Data scaling

    JEL classification:

    • C02 - Mathematical and Quantitative Methods - - General - - - Mathematical Economics
    • O3 - Economic Development, Innovation, Technological Change, and Growth - - Innovation; Research and Development; Technological Change; Intellectual Property Rights
    • R10 - Urban, Rural, Regional, Real Estate, and Transportation Economics - - General Regional Economics - - - General
    • Y8 - Miscellaneous Categories - - Related Disciplines
    • Z3 - Other Special Topics - - Tourism Economics
