Author
Listed:
- Benedikt Gloria
- Johannes Melsbach
- Sven Bienert
- Detlef Schoder
Abstract
Large language models (LLMs) such as ChatGPT and Llama have recently gained significant attention. These models demonstrate a remarkable ability to solve complex tasks, but they draw primarily on generalized knowledge rather than niche subject areas. Consequently, demand has grown for domain-specific LLMs tailored to the social and natural sciences, such as BloombergGPT or BioGPT. In this study, we present a domain-specific LLM for real estate: Real-GPT. The model is built by applying the parameter-efficient fine-tuning technique known as quantized low-rank adaptation (QLoRA) to Mistral 7B. To create a comprehensive fine-tuning corpus, we compiled a curated self-instruction dataset of 21,000 samples sourced from 670 scientific papers, market research reports, scholarly articles, and real estate books. To assess the efficacy of Real-GPT, we devised a set of 1,046 multiple-choice questions to gauge the models' real estate knowledge. Furthermore, we tested Real-GPT in a real investment scenario. Despite its compact size, our model significantly outperforms GPT-3.5 and the base Mistral 7B. The model not only delivers superior performance but also demonstrates its capacity to support investment decisions and interpret current market data, illustrating the potential of LLMs to transform real estate analysis and decision-making.
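The abstract names the technique but not its setup. For orientation, the following is a minimal sketch of how a QLoRA fine-tune of Mistral 7B on a self-instruction corpus is commonly configured with the Hugging Face transformers/peft/bitsandbytes stack. The dataset file name (real_estate_instructions.jsonl), the LoRA rank and target modules, and all training hyperparameters are illustrative assumptions, not values reported by the authors.

```python
# Sketch: QLoRA fine-tuning of Mistral 7B on a JSONL self-instruction dataset
# with a "text" field. File name and hyperparameters are assumptions for
# illustration only; they are not taken from the Real-GPT paper.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "mistralai/Mistral-7B-v0.1"

# 4-bit quantization (the "Q" in QLoRA): NF4 weights, bf16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters (the "LoRA" part): only a small fraction of weights train.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hypothetical self-instruction corpus; one instruction-response pair per line.
dataset = load_dataset("json", data_files="real_estate_instructions.jsonl",
                       split="train")
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(
        output_dir="real-gpt-qlora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50,
    ),
)
trainer.train()
model.save_pretrained("real-gpt-qlora-adapter")  # saves only the LoRA adapter
```

Because only the low-rank adapter weights are trained on top of a 4-bit quantized base model, a 7B-parameter model of this kind can typically be fine-tuned on a single consumer or workstation GPU, which is the practical appeal of the approach described in the abstract.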
Suggested Citation
Benedikt Gloria & Johannes Melsbach & Sven Bienert & Detlef Schoder, 2025.
"Real-GPT: Efficiently Tailoring LLMs for Informed Decision-Making in the Real Estate Industry,"
Journal of Real Estate Portfolio Management, Taylor & Francis Journals, vol. 31(1), pages 56-72, January.
Handle:
RePEc:taf:repmxx:v:31:y:2025:i:1:p:56-72
DOI: 10.1080/10835547.2024.2372748
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.