Physics aware analytics for accurate state prediction of dynamical systems
DOI: 10.1016/j.chaos.2022.112670
References listed on IDEAS
- Bethany Lusch & J. Nathan Kutz & Steven L. Brunton, 2018. "Deep learning for universal linear embeddings of nonlinear dynamics," Nature Communications, Nature, vol. 9(1), pages 1-10, December.
- David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan, 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
Citations
Cited by:
- Qin, Bo & Zhang, Ying, 2024. "Comprehensive analysis of the mechanism of sensitivity to initial conditions and fractal basins of attraction in a novel variable-distance magnetic pendulum," Chaos, Solitons & Fractals, Elsevier, vol. 183(C).
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Bo Hu & Jiaxi Li & Shuang Li & Jie Yang, 2019. "A Hybrid End-to-End Control Strategy Combining Dueling Deep Q-network and PID for Transient Boost Control of a Diesel Engine with Variable Geometry Turbocharger and Cooled EGR," Energies, MDPI, vol. 12(19), pages 1-15, September.
- Chao, Xiangrui & Ran, Qin & Chen, Jia & Li, Tie & Qian, Qian & Ergu, Daji, 2022. "Regulatory technology (Reg-Tech) in financial stability supervision: Taxonomy, key methods, applications and future directions," International Review of Financial Analysis, Elsevier, vol. 80(C).
- Qingyan Li & Tao Lin & Qianyi Yu & Hui Du & Jun Li & Xiyue Fu, 2023. "Review of Deep Reinforcement Learning and Its Application in Modern Renewable Power System Control," Energies, MDPI, vol. 16(10), pages 1-23, May.
- Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
- Michael Curry & Alexander Trott & Soham Phade & Yu Bai & Stephan Zheng, 2022. "Analyzing Micro-Founded General Equilibrium Models with Many Agents using Deep Reinforcement Learning," Papers 2201.01163, arXiv.org, revised Feb 2022.
- Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
- Minkyu Shin & Jin Kim & Bas van Opheusden & Thomas L. Griffiths, 2023. "Superhuman Artificial Intelligence Can Improve Human Decision Making by Increasing Novelty," Papers 2303.07462, arXiv.org, revised Apr 2023.
- Wang, Xuan & Wang, Rui & Jin, Ming & Shu, Gequn & Tian, Hua & Pan, Jiaying, 2020. "Control of superheat of organic Rankine cycle under transient heat source based on deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
- Zequn Lin & Zhaofan Lu & Zengru Di & Ying Tang, 2024. "Learning noise-induced transitions by multi-scaling reservoir computing," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
- Yunping Bai & Yifu Xu & Shifan Chen & Xiaotian Zhu & Shuai Wang & Sirui Huang & Yuhang Song & Yixuan Zheng & Zhihui Liu & Sim Tan & Roberto Morandotti & Sai T. Chu & Brent E. Little & David J. Moss, 2025. "TOPS-speed complex-valued convolutional accelerator for feature extraction and inference," Nature Communications, Nature, vol. 16(1), pages 1-13, December.
- Kalmykov, N.I. & Zagidullin, R. & Rogov, O.Y. & Rykovanov, S. & Dylov, D.V., 2024. "Suppressing modulation instability with reinforcement learning," Chaos, Solitons & Fractals, Elsevier, vol. 186(C).
- Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
- Danielsson, Jon & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," LSE Research Online Documents on Economics 111601, London School of Economics and Political Science, LSE Library.
- Zhang, Xi & Wang, Qin & Bi, Xiaowen & Li, Donghong & Liu, Dong & Yu, Yuanjin & Tse, Chi Kong, 2024. "Mitigating cascading failure in power grids with deep reinforcement learning-based remedial actions," Reliability Engineering and System Safety, Elsevier, vol. 250(C).
- Sichen Ding & Gaiyun Liu & Li Yin & Jianzhou Wang & Zhiwu Li, 2024. "Detection of Cyber-Attacks in a Discrete Event System Based on Deep Learning," Mathematics, MDPI, vol. 12(17), pages 1-21, August.
- Zhou, Jun & Jia, Yubin & Sun, Changyin, 2025. "Flywheel energy storage system controlled using tube-based deep Koopman model predictive control for wind power smoothing," Applied Energy, Elsevier, vol. 381(C).
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Gong, Xun & Wang, Xiaozhe & Cao, Bo, 2023. "On data-driven modeling and control in modern power grids stability: Survey and perspective," Applied Energy, Elsevier, vol. 350(C).
- Yanfei Kang & Rob J Hyndman & Feng Li, 2018. "Efficient generation of time series with diverse and controllable characteristics," Monash Econometrics and Business Statistics Working Papers 15/18, Monash University, Department of Econometrics and Business Statistics.
- Ostheimer, Julia & Chowdhury, Soumitra & Iqbal, Sarfraz, 2021. "An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles," Technology in Society, Elsevier, vol. 66(C).
- Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
More about this item
Keywords
Neural networks; Hamiltonian Neural Networks; Lagrangian Neural Networks; Physics-inspired learning; Action-angle variables; Poisson bracket.
Handle: RePEc:eee:chsofr:v:164:y:2022:i:c:s0960077922008499