Author
Listed:
- Pavel Kireyev
(LSE - London School of Economics and Political Science)
- Brendon Rhodes
(INSEAD-Sorbonne Université Behavioural Lab - SU - Sorbonne Université - INSEAD - Institut Européen d'administration des Affaires)
- Cathy Yang
(HEC Paris - Ecole des Hautes Etudes Commerciales)
- Abhishek Borah
(INSEAD-Sorbonne Université Behavioural Lab - SU - Sorbonne Université - INSEAD - Institut Européen d'administration des Affaires)
Abstract
Firms that use crowdsourcing to gather advertising and product ideas often rely on internal experts to screen submissions manually, a costly and time-consuming process in which experts rate thousands of ideas to identify a small set of promising ones for further review. We evaluate how large language models (LLMs), combined with a machine learning model trained on historical expert ratings and final client selections, can improve the efficiency of this screening. Using data from a platform that engaged experts to evaluate 74,436 ideas across 153 contests for major advertisers, we show that evaluation effort can be reduced by 28.4% compared to the status quo. Of this reduction, 3.8% is directly attributable to the LLM output, while the remainder comes from better weighting expert scores to align with sponsor preferences. Notably, incorporating LLMs could make 5 out of 10 experts redundant, compared to 3 with machine learning alone. Importantly, the experts whose judgments are most replicable by the LLM are not necessarily the poorest performers. These findings offer a practical framework for integrating LLMs into idea-screening pipelines and underscore their potential to streamline expert evaluation while maintaining alignment with client goals.
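The pipeline the abstract describes can be illustrated with a small sketch: each idea receives ratings from several experts plus a score from an LLM, a screening model weights those inputs (in the paper, weights are learned from historical expert ratings and final client selections), and only the top-ranked ideas are forwarded for full review. Everything below is hypothetical for illustration; the weights, data, and cutoff fraction are invented, not taken from the paper.

```python
# Illustrative sketch (NOT the paper's actual model): combine per-expert
# ratings and an LLM score via fixed weights, rank ideas, and forward only
# the top fraction for full expert review. All values are synthetic.
import random

random.seed(0)

N_EXPERTS = 10
N_IDEAS = 200
REVIEW_FRACTION = 0.3  # hypothetical share of ideas forwarded for full review

# Simulated submissions: ten expert ratings (1-5) plus one LLM score (0-1).
ideas = [
    {
        "id": i,
        "expert": [random.uniform(1, 5) for _ in range(N_EXPERTS)],
        "llm": random.uniform(0, 1),
    }
    for i in range(N_IDEAS)
]

# Weights a screening model might learn from historical ratings and final
# client selections; here they are set by hand purely for illustration.
expert_weights = [0.05, 0.15, 0.02, 0.20, 0.01, 0.10, 0.03, 0.18, 0.04, 0.12]
llm_weight = 0.6

def screening_score(idea):
    """Weighted combination of expert ratings and the LLM score."""
    expert_part = sum(w * s for w, s in zip(expert_weights, idea["expert"]))
    return expert_part + llm_weight * idea["llm"]

# Rank all ideas and keep only the top fraction for detailed review.
ranked = sorted(ideas, key=screening_score, reverse=True)
shortlist = ranked[: int(REVIEW_FRACTION * N_IDEAS)]

print(f"forwarded {len(shortlist)} of {N_IDEAS} ideas for full review")
```

In this toy setup, screening effort falls because experts only need to re-examine the shortlisted fraction; the paper's reported gains come from fitting such weights to actual client selections rather than fixing them by hand.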
Suggested Citation
Pavel Kireyev & Brendon Rhodes & Cathy Yang & Abhishek Borah, 2025.
"Large Language Models Augment or Substitute Human Experts in Idea Screening,"
Working Papers
hal-05562668, HAL.
Handle:
RePEc:hal:wpaper:hal-05562668
DOI: 10.2139/ssrn.5634331
Download full text from publisher
To our knowledge, this item is not available for download. To find whether it is available, there are three options:
1. Check below whether another version of this item is available online.
2. Check on the provider's web page whether it is in fact available.
3. Perform a search for a similarly titled item that may be available.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hal:wpaper:hal-05562668. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: CCSD (email available below). General contact details of provider: https://hal.archives-ouvertes.fr/ .
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.