Printed from https://ideas.repec.org/p/osf/socarx/gjvcf_v1.html

Scaling Open-Ended Survey Coding: An LLM Pipeline Where Definitions Do the Heavy Lifting

Author

  • Soria, Chris

Abstract

As large language model (LLM)–based text classification becomes routine in the social sciences, researchers confront dozens of competing models, inconsistent advice on prompting, and little standardized tooling with evidence‑based defaults. CatLLM, an open‑source Python and R package, addresses this gap with a three‑stage pipeline—exploration, extraction, classification—for coding open‑ended survey responses. The package offers a provider‑agnostic interface that supports multi‑model ensembles, batch processing, and fully local deployment via open‑weight models, and can be operated through a conversational interface by researchers with no programming experience. CatLLM’s defaults are calibrated by a systematic empirical study evaluating 21 LLMs across three capability tiers, six providers, and four survey questions, benchmarked against sociologist‑coded ground truth. This validation reveals a consistent problem: all models over‑classify, with precision lagging 40–50 percentage points behind sensitivity, implying that default LLM configurations may substantially overstate category prevalence. CatLLM encodes empirically grounded mitigations as defaults: verbose category definitions with explicit inclusion and exclusion criteria, unanimous multi‑model ensembling, and an automatic “Other” escape‑valve category, while disabling advanced prompting strategies that show no reliable benefit. Ensembles of inexpensive open‑weight models outperform the best individual cloud model, enabling fully local deployment without transmitting survey data to external servers. These findings replicate on two independent public datasets spanning political and emotional text, and an applied example linking tool‑coded “move reasons” to respondent demographics uncovers distinct life‑course patterns in residential mobility.
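Two of the abstract's key ideas lend themselves to a short sketch: the unanimous multi-model ensemble with an "Other" escape-valve category, and the precision-versus-sensitivity gap that signals over-classification. The code below is an illustration of those concepts in plain Python, not CatLLM's actual API; the function names, the example categories ("work", "family", "housing"), and the toy data are all hypothetical.

```python
def unanimous_label(votes, fallback="Other"):
    """Unanimous ensembling: keep a category only when every model in the
    ensemble independently assigned it; otherwise route the response to an
    explicit 'Other' escape-valve category instead of forcing a match."""
    first = votes[0]
    return first if all(v == first for v in votes) else fallback


def precision_sensitivity(predicted, truth, category):
    """Per-category precision and sensitivity (recall) against
    human-coded ground-truth labels."""
    tp = sum(p == category and t == category for p, t in zip(predicted, truth))
    fp = sum(p == category and t != category for p, t in zip(predicted, truth))
    fn = sum(p != category and t == category for p, t in zip(predicted, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    return precision, sensitivity


# Unanimity: three hypothetical model outputs for one open-ended response.
print(unanimous_label(["housing", "housing", "housing"]))  # -> housing
print(unanimous_label(["housing", "family", "housing"]))   # -> Other

# Over-classification: a model that labels too many responses "work"
# achieves perfect sensitivity while its precision collapses.
truth = ["work", "family", "housing", "family", "work"]
pred  = ["work", "work",   "work",    "family", "work"]
p, s = precision_sensitivity(pred, truth, "work")
print(p, s)  # -> 0.5 1.0
```

The toy numbers mirror the paper's diagnosis in miniature: every true "work" response is recovered (sensitivity 1.0), but half of the "work" labels are false positives (precision 0.5), so naive counts of the category would overstate its prevalence.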

Suggested Citation

  • Soria, Chris, 2026. "Scaling Open-Ended Survey Coding: An LLM Pipeline Where Definitions Do the Heavy Lifting," SocArXiv gjvcf_v1, Center for Open Science.
  • Handle: RePEc:osf:socarx:gjvcf_v1
    DOI: 10.31219/osf.io/gjvcf_v1

    Download full text from publisher

    File URL: https://osf.io/download/69bd7a620e33cbebb4a6f7d7/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/gjvcf_v1?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.