
Deep reinforcement learning for optimizing the thermoacoustic core in a supercritical CO2 thermoacoustic engine

Author

Listed:
  • Yang, Junjiao
  • Hu, Zhan-Chao

Abstract

Thermoacoustic engines (TAEs) are promising energy-conversion technologies because they have no moving parts and are flexible and environmentally friendly. The driver of such an engine is the thermoacoustic core (TAC). In this study, we propose a framework that integrates CFD simulations, a surrogate model based on an artificial neural network (ANN), and deep reinforcement learning (DRL) to optimize the channel shape in the TAC of a supercritical CO2 TAE. CFD simulations generate a dataset for the surrogate model. The surrogate model demonstrates exceptional generalization capability (R² = 0.992) and computational efficiency (under 3.8 ms per prediction), enabling fast reward evaluation during the DRL optimization. The TD3 algorithm is employed to explore the continuous design space. The optimized channel achieves a pressure amplitude of 0.663 MPa, an 8.51% improvement over the original straight channel, attributable to better heat-transfer matching between the hot heat exchanger and the ambient one. This study demonstrates the potential of combining ANN-based surrogate models with DRL for optimizing thermoacoustic devices. The proposed framework is adaptable to other thermal systems and sheds light on integrating artificial intelligence with physical modeling for engineering optimization.
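To make the surrogate-plus-TD3 pattern from the abstract concrete, here is a minimal sketch, not the authors' implementation: an ANN stands in for the CFD-trained surrogate, and a one-step environment turns channel-shape optimization into reward maximization for TD3. The number of design variables (N_PARAMS), the network sizes, the bandit-style episode formulation, and the gymnasium/stable-baselines3 stack are all illustrative assumptions not stated in the paper.

```python
# Sketch only: the paper's channel parameterization and surrogate
# architecture are not reproduced here; every name below is hypothetical.
import numpy as np
import torch
import torch.nn as nn
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

N_PARAMS = 6  # hypothetical number of channel-shape design variables


class SurrogateANN(nn.Module):
    """Maps normalized design variables to a predicted pressure amplitude.
    In the paper, such a network is trained on a CFD-generated dataset."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PARAMS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)


surrogate = SurrogateANN()  # assume weights already fit to CFD data


class ChannelDesignEnv(gym.Env):
    """Single-step design environment: the action is a candidate channel
    shape; the reward is the surrogate-predicted pressure amplitude."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Box(-1.0, 1.0, shape=(N_PARAMS,), dtype=np.float32)
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(N_PARAMS,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(N_PARAMS, dtype=np.float32), {}

    def step(self, action):
        with torch.no_grad():
            reward = float(surrogate(torch.as_tensor(action, dtype=torch.float32)))
        # Each episode is one design evaluation (a bandit-style formulation).
        return action.astype(np.float32), reward, True, False, {}


env = ChannelDesignEnv()
noise = NormalActionNoise(mean=np.zeros(N_PARAMS), sigma=0.1 * np.ones(N_PARAMS))
agent = TD3("MlpPolicy", env, action_noise=noise, verbose=0)
agent.learn(total_timesteps=10_000)

# Query the learned policy for its best design candidate.
best_design, _ = agent.predict(np.zeros(N_PARAMS, dtype=np.float32), deterministic=True)
```

The point of this structure is the one noted in the abstract: because the millisecond-scale surrogate replaces a full CFD run at every reward evaluation, TD3 can afford the thousands of design queries a continuous search requires.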

Suggested Citation

  • Yang, Junjiao & Hu, Zhan-Chao, 2025. "Deep reinforcement learning for optimizing the thermoacoustic core in a supercritical CO2 thermoacoustic engine," Energy, Elsevier, vol. 325(C).
  • Handle: RePEc:eee:energy:v:325:y:2025:i:c:s0360544225015920
    DOI: 10.1016/j.energy.2025.135950

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544225015920
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2025.135950?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Wang, Xin & Xu, Jingyuan & Wu, Zhanghua & Luo, Ercang, 2022. "A thermoacoustic refrigerator with multiple-bypass expansion cooling configuration for natural gas liquefaction," Applied Energy, Elsevier, vol. 313(C).
    2. Wang, Kaixin & Hu, Zhan-Chao, 2023. "Experimental investigation of a novel standing-wave thermoacoustic engine based on PCHE and supercritical CO2," Energy, Elsevier, vol. 282(C).
    3. Adrien Ecoffet & Joost Huizinga & Joel Lehman & Kenneth O. Stanley & Jeff Clune, 2021. "First return, then explore," Nature, Nature, vol. 590(7847), pages 580-586, February.
    4. Chen, Geng & Wang, Yufan & Tang, Lihua & Wang, Kai & Yu, Zhibin, 2020. "Large eddy simulation of thermally induced oscillatory flow in a thermoacoustic engine," Applied Energy, Elsevier, vol. 276(C).
    5. Chen, Geng & Tang, Lihua & Mace, Brian & Yu, Zhibin, 2021. "Multi-physics coupling in thermoacoustic devices: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 146(C).
6. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    7. Jurriath-Azmathi Mumith & Tassos Karayiannis & Charalampos Makatsoris, 2016. "Design and optimization of a thermoacoustic heat engine using reinforcement learning," International Journal of Low-Carbon Technologies, Oxford University Press, vol. 11(3), pages 431-439.
8. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & et al., 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    9. S. Backhaus & G. W. Swift, 1999. "A thermoacoustic Stirling heat engine," Nature, Nature, vol. 399(6734), pages 335-338, May.
10. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman & et al., 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    11. Zhou, Jianhao & Xue, Siwu & Xue, Yuan & Liao, Yuhui & Liu, Jun & Zhao, Wanzhong, 2021. "A novel energy management strategy of hybrid electric vehicle via an improved TD3 deep reinforcement learning," Energy, Elsevier, vol. 224(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    2. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
3. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer; Society for Computational Economics, vol. 63(2), pages 529-576, February.
    4. Wang, Xin & Liu, Shuo & Yu, Yifan & Yue, Shengzhi & Liu, Ying & Zhang, Fumin & Lin, Yuanshan, 2023. "Modeling collective motion for fish schooling via multi-agent reinforcement learning," Ecological Modelling, Elsevier, vol. 477(C).
    5. Guo, Lixian & Zhao, Dan & Cheng, Li & Dong, Xu & Xu, Jingyuan, 2024. "Enhancing energy conversion performances in standing-wave thermoacoustic engine with externally forcing periodic oscillations," Energy, Elsevier, vol. 292(C).
    6. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    7. János Kramár & Tom Eccles & Ian Gemp & Andrea Tacchetti & Kevin R. McKee & Mateusz Malinowski & Thore Graepel & Yoram Bachrach, 2022. "Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy," Nature Communications, Nature, vol. 13(1), pages 1-15, December.
    8. Jin, Jiahuan & Cui, Tianxiang & Bai, Ruibin & Qu, Rong, 2024. "Container port truck dispatching optimization using Real2Sim based deep reinforcement learning," European Journal of Operational Research, Elsevier, vol. 315(1), pages 161-175.
    9. Chen, Geng & Tang, Lihua & Mace, Brian & Yu, Zhibin, 2021. "Multi-physics coupling in thermoacoustic devices: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 146(C).
    10. Zhang, Qin & Liu, Yu & Xiang, Yisha & Xiahou, Tangfan, 2024. "Reinforcement learning in reliability and maintenance optimization: A tutorial," Reliability Engineering and System Safety, Elsevier, vol. 251(C).
    11. Benjamin Heinbach & Peter Burggräf & Johannes Wagner, 2024. "gym-flp: A Python Package for Training Reinforcement Learning Algorithms on Facility Layout Problems," SN Operations Research Forum, Springer, vol. 5(1), pages 1-26, March.
    12. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    13. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    14. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    15. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    16. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    17. Zhimian Chen & Yizeng Wang & Hao Hu & Zhipeng Zhang & Chengwei Zhang & Shukun Zhou, 2024. "Investigating Autonomous Vehicle Driving Strategies in Highway Ramp Merging Zones," Mathematics, MDPI, vol. 12(23), pages 1-22, December.
    18. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    19. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    20. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:325:y:2025:i:c:s0360544225015920. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu. General contact details of provider: http://www.journals.elsevier.com/energy .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.