
gym-flp: A Python Package for Training Reinforcement Learning Algorithms on Facility Layout Problems

Authors

  • Benjamin Heinbach (University of Siegen)
  • Peter Burggräf (University of Siegen)
  • Johannes Wagner (University of Siegen)

Abstract

Reinforcement learning (RL) algorithms have proven to be useful tools for combinatorial optimisation, yet they remain underutilised in facility layout problems (FLPs). At the same time, RL research relies on standardised benchmarks such as the Arcade Learning Environment. To address both issues, we present gym-flp, an open-source Python package built on the OpenAI Gym toolkit, which is specifically designed for developing and comparing RL algorithms. The package offers one discrete and three continuous problem-representation environments with customisable state and action spaces. In addition, it provides 138 discrete and 61 continuous problem instances commonly used in the FLP literature and supports submitting custom problem sets. Users can choose between numerical and visual output of observations, depending on the RL approach being used. The package aims to facilitate experimentation with different algorithms in a reproducible manner and to advance the use of RL in factory planning.
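Since the package exposes its environments through the standard OpenAI Gym interface, an agent interacts with a layout problem via the usual make/reset/step/render loop. The sketch below illustrates that loop with a random policy. The environment ID ('qap-v0' for the discrete representation), the instance name ('P6'), and the instance/mode keyword arguments are illustrative assumptions inferred from the abstract and common Gym conventions, not confirmed details of the package.

    # Minimal interaction sketch using the classic OpenAI Gym API.
    # Hypothetical details (not verified against gym-flp): the env ID
    # 'qap-v0', the 'instance' and 'mode' keyword arguments, and the
    # instance name 'P6'.
    import gym
    import gym_flp  # importing the package registers its environments with Gym

    # Discrete representation: facilities are assigned to candidate locations.
    env = gym.make('qap-v0', instance='P6', mode='human')

    obs = env.reset()
    for _ in range(100):  # cap the episode length for this demo
        action = env.action_space.sample()  # random policy stands in for a trained agent
        obs, reward, done, info = env.step(action)
        env.render()  # visual output; numerical observations are also supported
        if done:
            break
    env.close()

Swapping the random action for the prediction of a trained agent (for example, one trained with an off-the-shelf RL library) turns the same loop into an evaluation workflow.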

Suggested Citation

  • Benjamin Heinbach & Peter Burggräf & Johannes Wagner, 2024. "gym-flp: A Python Package for Training Reinforcement Learning Algorithms on Facility Layout Problems," SN Operations Research Forum, Springer, vol. 5(1), pages 1-26, March.
  • Handle: RePEc:spr:snopef:v:5:y:2024:i:1:d:10.1007_s43069-024-00301-3
    DOI: 10.1007/s43069-024-00301-3

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s43069-024-00301-3
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s43069-024-00301-3?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Benedikt Finnah, 2022. "Optimal bidding functions for renewable energies in sequential electricity markets," OR Spectrum: Quantitative Approaches in Management, Springer;Gesellschaft für Operations Research e.V., vol. 44(1), pages 1-27, March.
    2. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    3. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    4. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    5. Feng, Jie & Ran, Lun & Wang, Zhiyuan & Zhang, Mengling, 2024. "Optimal energy scheduling of virtual power plant integrating electric vehicles and energy storage systems under uncertainty," Energy, Elsevier, vol. 309(C).
    6. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    7. Zhimian Chen & Yizeng Wang & Hao Hu & Zhipeng Zhang & Chengwei Zhang & Shukun Zhou, 2024. "Investigating Autonomous Vehicle Driving Strategies in Highway Ramp Merging Zones," Mathematics, MDPI, vol. 12(23), pages 1-22, December.
    8. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    9. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    10. Finnah, Benedikt & Gönsch, Jochen & Ziel, Florian, 2022. "Integrated day-ahead and intraday self-schedule bidding for energy storage systems using approximate dynamic programming," European Journal of Operational Research, Elsevier, vol. 301(2), pages 726-746.
    11. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    12. Golpîra, Hêriş & Khan, Syed Abdul Rehman, 2019. "A multi-objective risk-based robust optimization approach to energy management in smart residential buildings under combined demand and supply uncertainty," Energy, Elsevier, vol. 170(C), pages 1113-1129.
    13. Collath, Nils & Cornejo, Martin & Engwerth, Veronika & Hesse, Holger & Jossen, Andreas, 2023. "Increasing the lifetime profitability of battery energy storage systems through aging aware operation," Applied Energy, Elsevier, vol. 348(C).
    14. Jesús Fernández-Villaverde & Galo Nuño & Jesse Perla, 2024. "Taming the Curse of Dimensionality: Quantitative Economics with Deep Learning," NBER Working Papers 33117, National Bureau of Economic Research, Inc.
    15. Emilio Ghiani & Alessandro Serpi & Virginia Pilloni & Giuliana Sias & Marco Simone & Gianluca Marcialis & Giuliano Armano & Paolo Attilio Pegoraro, 2018. "A Multidisciplinary Approach for the Development of Smart Distribution Networks," Energies, MDPI, vol. 11(10), pages 1-29, September.
    16. Donghun Lee & Hyeongwon Kang & Dongjin Lee & Jeonwoo Lee & Kwanho Kim, 2023. "Deep Reinforcement Learning-Based Scheduler on Parallel Dedicated Machine Scheduling Problem towards Minimizing Total Tardiness," Sustainability, MDPI, vol. 15(4), pages 1-14, February.
    17. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    18. Hamed Khalili, 2024. "Deep Learning Pricing of Processing Firms in Agricultural Markets," Agriculture, MDPI, vol. 14(5), pages 1-14, April.
    19. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    20. Chengmin Zhou & Bingding Huang & Pasi Fränti, 2022. "A review of motion planning algorithms for intelligent robots," Journal of Intelligent Manufacturing, Springer, vol. 33(2), pages 387-424, February.
