Printed from https://ideas.repec.org/a/eee/ejores/v324y2025i1p168-182.html

The multi-armed bandit problem under the mean-variance setting

Author

Listed:
  • Hu, Hongda
  • Charpentier, Arthur
  • Ghossoub, Mario
  • Schied, Alexander

Abstract

The classical multi-armed bandit problem involves a learner and a collection of arms with unknown reward distributions. At each round, the learner selects an arm and receives new information. The learner faces a tradeoff between exploiting the current information and exploring all arms. The objective is to maximize the expected cumulative reward over all rounds. Such an objective does not involve a risk-reward tradeoff, which is fundamental in many areas of application. In this paper, we build upon Sani et al. (2012)’s extension of the classical problem to a mean–variance setting. We relax their assumptions of independent arms and bounded rewards, and we consider sub-Gaussian arms. We introduce the Risk-Aware Lower Confidence Bound algorithm to solve the problem, and study some of its properties. We perform numerical simulations to demonstrate that, in both independent and dependent scenarios, our approach outperforms the algorithm suggested by Sani et al. (2012).
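The exact Risk-Aware Lower Confidence Bound index is specified in the paper itself; as a rough illustration of the general idea, the sketch below implements a generic mean-variance lower-confidence-bound bandit in the spirit of Sani et al. (2012). Each arm is scored by its empirical mean-variance (variance minus rho times mean, lower is better) shifted by an optimistic confidence width. The function name and the parameters rho and b are illustrative choices, not the paper's notation.

```python
import numpy as np

def mv_lcb_bandit(arms, rho=1.0, horizon=2000, b=1.0, seed=0):
    """Illustrative mean-variance LCB bandit (generic sketch, not the
    authors' exact Risk-Aware LCB algorithm).

    Each arm i is scored by its empirical mean-variance
    MV_i = var_i - rho * mean_i (lower is better), minus a confidence
    width that shrinks as the arm is pulled more often.
    """
    rng = np.random.default_rng(seed)
    K = len(arms)
    counts = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    sq_sums = np.zeros(K)
    rewards = []

    for t in range(horizon):
        if t < K:
            i = t  # pull each arm once to initialise the estimates
        else:
            means = sums / counts
            variances = sq_sums / counts - means**2
            mv = variances - rho * means                  # empirical mean-variance
            width = b * np.sqrt(np.log(t + 1) / counts)   # confidence width
            i = int(np.argmin(mv - width))                # optimistic lower bound
        x = arms[i](rng)  # sample a reward from the chosen arm
        counts[i] += 1
        sums[i] += x
        sq_sums[i] += x * x
        rewards.append(x)
    return counts, np.array(rewards)

# Two Gaussian (hence sub-Gaussian) arms: higher mean but high variance
# vs. slightly lower mean with much lower variance.
arms = [lambda rng: rng.normal(0.6, 1.0), lambda rng: rng.normal(0.5, 0.1)]
counts, rewards = mv_lcb_bandit(arms, rho=0.5)
```

For rho = 0.5 the low-variance arm has the smaller mean-variance (0.01 - 0.25 = -0.24 versus 1.0 - 0.30 = 0.70), so a risk-aware learner should concentrate its pulls there, whereas a purely mean-maximizing algorithm would favour the noisier arm.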

Suggested Citation

  • Hu, Hongda & Charpentier, Arthur & Ghossoub, Mario & Schied, Alexander, 2025. "The multi-armed bandit problem under the mean-variance setting," European Journal of Operational Research, Elsevier, vol. 324(1), pages 168-182.
  • Handle: RePEc:eee:ejores:v:324:y:2025:i:1:p:168-182
    DOI: 10.1016/j.ejor.2025.03.011

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221725002085
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ejor.2025.03.011?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Malekipirbazari, Milad & Çavuş, Özlem, 2024. "Index policy for multiarmed bandit problem with dynamic risk measures," European Journal of Operational Research, Elsevier, vol. 312(2), pages 627-640.
    2. Lagos, Felipe & Pereira, Jordi, 2024. "Multi-armed bandit-based hyper-heuristics for combinatorial optimization problems," European Journal of Operational Research, Elsevier, vol. 312(1), pages 70-91.
    3. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    4. Xu, Jianyu & Chen, Lujie & Tang, Ou, 2021. "An online algorithm for the risk-aware restless bandit," European Journal of Operational Research, Elsevier, vol. 290(2), pages 622-639.
    5. Preil, Deniz & Krapp, Michael, 2022. "Bandit-based inventory optimisation: Reinforcement learning in multi-echelon supply chains," International Journal of Production Economics, Elsevier, vol. 252(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Teymourian, Ehsan & Yang, Jian, 2025. "Simple fixes that accommodate switching costs in multi-armed bandits," European Journal of Operational Research, Elsevier, vol. 320(3), pages 616-627.
    2. Agrawal, Priyank & Tulabandhula, Theja & Avadhanula, Vashist, 2023. "A tractable online learning algorithm for the multinomial logit contextual bandit," European Journal of Operational Research, Elsevier, vol. 310(2), pages 737-750.
    3. Hongda Hu & Arthur Charpentier & Mario Ghossoub & Alexander Schied, 2022. "Multiarmed Bandits Problem Under the Mean-Variance Setting," Papers 2212.09192, arXiv.org, revised May 2024.
    4. Dingding Qi & Yingjun Zhao & Zhengjun Wang & Wei Wang & Li Pi & Longyue Li, 2024. "Joint Approach for Vehicle Routing Problems Based on Genetic Algorithm and Graph Convolutional Network," Mathematics, MDPI, vol. 12(19), pages 1-18, October.
    5. Li, Jin & Chen, Yanan & Liao, Yi & Shi, Victor & Zhang, Haixia, 2025. "Managing strategic inventories in a three-echelon supply chain of durable goods," Omega, Elsevier, vol. 131(C).
    6. Jung, Seung Hwan & Yang, Yunsi, 2023. "On the value of operational flexibility in the trailer shipment and assignment problem: Data-driven approaches and reinforcement learning," International Journal of Production Economics, Elsevier, vol. 264(C).
    7. Abada, Ibrahim & Belkhouja, Mustapha & Ehrenmann, Andreas, 2025. "On the valuation of legacy power production in liberalized markets via option-pricing," European Journal of Operational Research, Elsevier, vol. 322(3), pages 1005-1024.
    8. Malekipirbazari, Milad, 2025. "Optimizing sequential decision-making under risk: Strategic allocation with switching penalties," European Journal of Operational Research, Elsevier, vol. 321(1), pages 160-176.
    9. José Niño-Mora, 2023. "Markovian Restless Bandits and Index Policies: A Review," Mathematics, MDPI, vol. 11(7), pages 1-27, March.
    10. Park, Hyungjun & Choi, Dong Gu & Min, Daiki, 2023. "Adaptive inventory replenishment using structured reinforcement learning by exploiting a policy structure," International Journal of Production Economics, Elsevier, vol. 266(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:ejores:v:324:y:2025:i:1:p:168-182. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/eor .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.