
Analysis of Transportation Systems for Colonies on Mars

Author

Listed:
  • J. de Curtò

    (Department of Computer Applications in Science & Engineering, Barcelona Supercomputing Center, 08034 Barcelona, Spain
    Informatik und Mathematik, Goethe University Frankfurt am Main, 60323 Frankfurt am Main, Germany
    Estudis d’Informàtica, Multimèdia i Telecomunicació, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
    Escuela Técnica Superior de Ingeniería (ICAI), Universidad Pontificia Comillas, 28015 Madrid, Spain)

  • I. de Zarzà

    (Informatik und Mathematik, Goethe University Frankfurt am Main, 60323 Frankfurt am Main, Germany
    Estudis d’Informàtica, Multimèdia i Telecomunicació, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
    Escuela Politécnica Superior, Universidad Francisco de Vitoria, Pozuelo de Alarcón, 28223 Madrid, Spain)

Abstract

The colonization of Mars poses unprecedented challenges in developing sustainable and efficient transportation systems to support inter-settlement connectivity and resource distribution. This study conducts a comprehensive evaluation of two proposed transportation systems for Martian colonies: a ground-based magnetically levitated (maglev) train and a low-orbital spaceplane. Through simulation models, we assess the energy consumption, operational and construction costs, and environmental impacts of each system. Monte Carlo simulations further provide insights into the cost variability and financial risk associated with each option over a decade. Our findings reveal that while the spaceplane system offers lower average costs and reduced financial risk, the maglev train provides greater scalability and potential for integration with Martian infrastructural development. Despite its higher initial cost, the maglev system emerges as a strategic asset for long-term colony expansion and sustainability, highlighting the need for balanced investment in transportation technologies that align with the goals of Martian colonization. Extending this analysis, the study also examines alternative transportation technologies, including hyperloop systems, drones, and rovers, incorporating dynamic environmental modeling of Mars and reinforcement learning for autonomous navigation. To enhance the realism and complexity of the Mars navigation simulation, we introduce several significant improvements: dynamic atmospheric conditions, terrain-specific obstacles such as craters and rocks, and a swarm intelligence approach for navigating multiple drones simultaneously. This analysis serves as a foundational framework for future research and strategic planning in Martian transportation infrastructure.
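
The Monte Carlo cost analysis described in the abstract lends itself to a compact illustration. The Python sketch below is a minimal, hypothetical version of such a comparison: the normal cost distributions and every parameter value (construction costs, mean annual operating costs, volatilities, all in arbitrary units) are assumptions made for illustration, not figures from the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_costs(construction, mean_annual, annual_sd, years=10, runs=10_000):
        # Total cost over the horizon = construction cost plus the sum of noisy
        # annual operating costs; draws are clipped at zero so no year is "free".
        annual = rng.normal(mean_annual, annual_sd, size=(runs, years))
        return construction + np.clip(annual, 0.0, None).sum(axis=1)

    # Hypothetical parameters: the maglev is assumed to have a high build cost
    # but cheaper, steadier operations; the spaceplane is assumed the reverse.
    maglev = simulate_costs(construction=5_000, mean_annual=120, annual_sd=40)
    spaceplane = simulate_costs(construction=2_000, mean_annual=200, annual_sd=90)

    for name, totals in [("maglev", maglev), ("spaceplane", spaceplane)]:
        lo, hi = np.percentile(totals, [5, 95])
        print(f"{name}: mean={totals.mean():,.0f}, 5th-95th percentile=({lo:,.0f}, {hi:,.0f})")

The spread between the 5th and 95th percentiles is one simple way to read financial risk off such a simulation; the paper's actual cost model and distributions may of course differ.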
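
The swarm intelligence approach to multi-drone navigation mentioned in the abstract can be sketched in the same spirit. The snippet below is a toy boids-style controller with an assumed rotating-gust wind term standing in for dynamic atmospheric conditions; the update rules, gains, and disturbance model are illustrative choices, not the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)
    N, STEPS, DT = 12, 400, 0.1          # drones, simulation steps, step size
    GOAL = np.array([100.0, 100.0])      # shared navigation target

    pos = rng.uniform(0.0, 10.0, size=(N, 2))  # initial drone positions
    vel = np.zeros((N, 2))                     # initial drone velocities

    def wind(t):
        # Toy dynamic atmosphere: a gust vector that slowly rotates over time.
        return 0.5 * np.array([np.cos(0.05 * t), np.sin(0.05 * t)])

    for t in range(STEPS):
        center = pos.mean(axis=0)
        separation = np.zeros_like(pos)
        for i in range(N):
            diff = pos[i] - pos                    # vectors from others to drone i
            dist = np.linalg.norm(diff, axis=1)
            close = (dist > 0) & (dist < 2.0)      # neighbors within 2 units
            if close.any():
                separation[i] = (diff[close] / dist[close][:, None] ** 2).sum(axis=0)
        # Three classic swarm terms: seek the goal, stay cohesive, avoid collisions.
        accel = 0.05 * (GOAL - pos) + 0.02 * (center - pos) + 0.5 * separation
        vel = 0.9 * vel + DT * (accel + wind(t))
        pos = pos + DT * vel

    print("mean distance to goal:", np.linalg.norm(GOAL - pos, axis=1).mean())

Terrain-specific obstacles such as craters and rocks could be added as further repulsive terms of the same form, which is one plausible reading of how the paper's enhancements compose.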

Suggested Citation

  • J. de Curtò & I. de Zarzà, 2024. "Analysis of Transportation Systems for Colonies on Mars," Sustainability, MDPI, vol. 16(7), pages 1-28, April.
  • Handle: RePEc:gam:jsusta:v:16:y:2024:i:7:p:3041-:d:1370745

Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/16/7/3041/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/16/7/3041/
    Download Restriction: no

References listed on IDEAS

    1. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    2. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman et al., 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    Full references (including those not matched with items on IDEAS)

Most related items

These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    2. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    3. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    4. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    5. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    6. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    7. Mahmoud Mahfouz & Tucker Balch & Manuela Veloso & Danilo Mandic, 2021. "Learning to Classify and Imitate Trading Agents in Continuous Double Auction Markets," Papers 2110.01325, arXiv.org, revised Oct 2021.
    8. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    9. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    10. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    11. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    12. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    13. Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
    14. Georgios D. Kontes & Georgios I. Giannakis & Víctor Sánchez & Pablo De Agustin-Camacho & Ander Romero-Amorrortu & Natalia Panagiotidou & Dimitrios V. Rovas & Simone Steiger & Christopher Mutschler et al., 2018. "Simulation-Based Evaluation and Optimization of Control Strategies in Buildings," Energies, MDPI, vol. 11(12), pages 1-23, December.
    15. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    16. Chanjuan Liu & Jinmiao Cong & Tianhao Zhao & Enqiang Zhu, 2023. "Improving Agent Decision Payoffs via a New Framework of Opponent Modeling," Mathematics, MDPI, vol. 11(14), pages 1-15, July.
    17. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    18. Guan, Xiaoshu & Xiang, Zhengliang & Bao, Yuequan & Li, Hui, 2022. "Structural dominant failure modes searching method based on deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 219(C).
    19. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    20. Yuchao Dong, 2022. "Randomized Optimal Stopping Problem in Continuous time and Reinforcement Learning Algorithm," Papers 2208.02409, arXiv.org, revised Sep 2023.

Corrections

All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:16:y:2024:i:7:p:3041-:d:1370745. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

Please note that corrections may take a couple of weeks to filter through the various RePEc services.

IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.