
Robust Data Sampling in Machine Learning: A Game-Theoretic Framework for Training and Validation Data Selection

Authors
  • Zhaobin Mo

    (Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, NY 10027, USA)

  • Xuan Di

    (Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, NY 10027, USA
    Data Science Institute, Columbia University, New York, NY 10027, USA)

  • Rongye Shi

    (Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, NY 10027, USA)

Abstract

How to sample training/validation data is an important question for machine learning models, especially when the dataset is heterogeneous and skewed. In this paper, we propose a data sampling method that robustly selects training/validation data. We formulate the sampling process as a two-player game: a trainer samples training data so as to minimize the test error, while a validator adversarially samples validation data that increases the test error. Robust sampling is achieved at the equilibrium of this game. To accelerate the search, we adopt reinforcement-learning-aided Monte Carlo tree search (MCTS). We apply our method to car-following modeling, a complicated scenario with heterogeneous and random human driving behavior. Real-world data from the Next Generation SIMulation (NGSIM) program are used to validate the method, and experimental results demonstrate the robustness of the sampling and, thereby, the model's out-of-sample performance.
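The two-player formulation above can be summarized, in our own notation (the paper's symbols may differ), as a minimax problem over index sets: the trainer chooses a training sample S_tr and the validator chooses a validation sample S_val, and the equilibrium solves

    min_{S_tr} max_{S_val}  L( f_{S_tr}, S_val )

where f_{S_tr} is the model fitted on S_tr and L is its test error on S_val. The sketch below makes the two roles concrete on synthetic data. It is a minimal illustration, not the paper's method: a crude alternating best-response search over random candidate subsets stands in for the reinforcement-learning-aided MCTS the authors use, and every name in it (test_error, the Ridge model, the subset sizes) is our own illustrative choice.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Toy heterogeneous dataset: linear signal plus noise.
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + 0.5 * rng.normal(size=200)

    def test_error(train_idx, val_idx):
        """Fit on the trainer's sample, evaluate MSE on the validator's sample."""
        model = Ridge().fit(X[train_idx], y[train_idx])
        resid = model.predict(X[val_idx]) - y[val_idx]
        return float(np.mean(resid ** 2))

    def random_subsets(n_total, size, k):
        """k random candidate index sets: the 'moves' available to a player."""
        return [rng.choice(n_total, size=size, replace=False) for _ in range(k)]

    # Alternating best responses: the trainer resamples to reduce the error
    # on the current validation set; the validator resamples to increase the
    # error under the current training set.
    train_idx = rng.choice(len(X), size=100, replace=False)
    val_idx = rng.choice(len(X), size=50, replace=False)
    for _ in range(20):
        train_idx = min(random_subsets(len(X), 100, 10),
                        key=lambda s: test_error(s, val_idx))
        val_idx = max(random_subsets(len(X), 50, 10),
                      key=lambda s: test_error(train_idx, s))

    print("approximate equilibrium test error:", test_error(train_idx, val_idx))

A training sample that still scores well against the validator's worst-case pick is robust in this sense; in the paper, the much larger search space of real trajectory data is what motivates replacing the random candidate moves used here with MCTS guided by a learned policy.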

Suggested Citation

  • Zhaobin Mo & Xuan Di & Rongye Shi, 2023. "Robust Data Sampling in Machine Learning: A Game-Theoretic Framework for Training and Validation Data Selection," Games, MDPI, vol. 14(1), pages 1-13, January.
  • Handle: RePEc:gam:jgames:v:14:y:2023:i:1:p:13-:d:1051349

    Download full text from publisher

    File URL: https://www.mdpi.com/2073-4336/14/1/13/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2073-4336/14/1/13/
    Download Restriction: no

    References listed on IDEAS

    1. Sharma, Anshuman & Zheng, Zuduo & Bhaskar, Ashish, 2019. "Is more always better? The impact of vehicular trajectory completeness on car-following model calibration and validation," Transportation Research Part B: Methodological, Elsevier, vol. 120(C), pages 49-75.
2. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui & Laurent Sifre & George van den Driessche & Thore Graepel & Demis Hassabis, 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
    2. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    3. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    4. Ostheimer, Julia & Chowdhury, Soumitra & Iqbal, Sarfraz, 2021. "An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles," Technology in Society, Elsevier, vol. 66(C).
    5. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    6. Zhou, Yuhao & Wang, Yanwei, 2022. "An integrated framework based on deep learning algorithm for optimizing thermochemical production in heavy oil reservoirs," Energy, Elsevier, vol. 253(C).
    7. Mandal, Ankit & Tiwari, Yash & Panigrahi, Prasanta K. & Pal, Mayukha, 2022. "Physics aware analytics for accurate state prediction of dynamical systems," Chaos, Solitons & Fractals, Elsevier, vol. 164(C).
    8. Bossert, Leonie & Hagendorff, Thilo, 2021. "Animals and AI. The role of animals in AI research and application – An overview and ethical evaluation," Technology in Society, Elsevier, vol. 67(C).
    9. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
    10. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    11. Jun Li & Wei Zhu & Jun Wang & Wenfei Li & Sheng Gong & Jian Zhang & Wei Wang, 2018. "RNA3DCNN: Local and global quality assessments of RNA 3D structures using 3D deep convolutional neural networks," PLOS Computational Biology, Public Library of Science, vol. 14(11), pages 1-18, November.
    12. Keller, Alexander & Dahm, Ken, 2019. "Integral equations and machine learning," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 161(C), pages 2-12.
    13. Canhoto, Ana Isabel & Clear, Fintan, 2020. "Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential," Business Horizons, Elsevier, vol. 63(2), pages 183-193.
    14. Zhang, Guangming & Zhang, Chao & Wang, Wei & Cao, Huan & Chen, Zhenyu & Niu, Yuguang, 2023. "Offline reinforcement learning control for electricity and heat coordination in a supercritical CHP unit," Energy, Elsevier, vol. 266(C).
    15. Haoran Wang & Shi Yu, 2021. "Robo-Advising: Enhancing Investment with Inverse Optimization and Deep Reinforcement Learning," Papers 2105.09264, arXiv.org.
    16. Yang, Kaiyuan & Huang, Houjing & Vandans, Olafs & Murali, Adithya & Tian, Fujia & Yap, Roland H.C. & Dai, Liang, 2023. "Applying deep reinforcement learning to the HP model for protein structure prediction," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 609(C).
    17. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    18. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    19. Xueqing Yan & Yongming Li, 2023. "A Novel Discrete Differential Evolution with Varying Variables for the Deficiency Number of Mahjong Hand," Mathematics, MDPI, vol. 11(9), pages 1-21, May.
    20. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
