Printed from https://ideas.repec.org/a/plo/pdig00/0001294.html

Participatory-informed preference optimization (PiPrO): A reinforcement learning simulation study

Authors

  • Tara Templin
  • Shuyi Song
  • Sophia Fort
  • Nasa Sinnott-Armstrong

Abstract

Artificial intelligence (AI) has transformative potential in public health, but its impact is limited by models that implicitly prioritize a single stakeholder perspective and do not make explicit and tunable trade-offs between community and clinician endorsement. To address this gap, we introduce Participatory-informed Preference Optimization (PiPrO), a large language model embedding-based calibration framework that generates a single clinical outcome prediction while explicitly accounting for differences between community and physician interpretations of the same scenario. PiPrO takes as input two embeddings derived from a large language model, representing a community-facing context and a physician-facing context. It then applies a shared lightweight feedforward predictor to produce per-stakeholder scores, which are mixed using a single global mixing weight (alpha). Alpha controls how strongly the final prediction reflects the community versus physician responses and is learned through a policy-gradient update driven by abundant but noisy community text and sparse, biased physician text. PiPrO reliably learned stable alpha values and a consistent reward signal. Alpha shifts systematically toward physician weighting as community feedback becomes noisier and toward community weighting as physician feedback becomes more biased. Our results suggest PiPrO's potential to produce more transparent and context-sensitive AI-driven healthcare recommendations. Future research should validate this approach using real-world community inputs to ensure generalizability and practical impact.

Author summary: Artificial intelligence tools are increasingly adopted in medicine and public health, but they are often trained to reflect only one viewpoint. In practice, community members and physicians can interpret the same clinical situation differently, and those differences can matter for recommendations that affect care.
In this study, we developed a method called Participatory-informed Preference Optimization to help a prediction model account for both perspectives while still producing one final prediction. We tested the method in a simulation study using community-facing and physician-facing versions of the same scenario, and we varied how reliable each source of feedback was. We found that the model learned a stable balance between the two perspectives. It shifted toward physician input when community feedback became less reliable, and toward community input when physician feedback became more biased. These results suggest that health-related artificial intelligence can be designed to make trade-offs between stakeholder perspectives more transparent.

Suggested Citation

  • Tara Templin & Shuyi Song & Sophia Fort & Nasa Sinnott-Armstrong, 2026. "Participatory-informed preference optimization (PiPrO): A reinforcement learning simulation study," PLOS Digital Health, Public Library of Science, vol. 5(3), pages 1-18, March.
  • Handle: RePEc:plo:pdig00:0001294
    DOI: 10.1371/journal.pdig.0001294

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001294
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0001294&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0001294?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0001294. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.