
Telepresence Robot with DRL Assisted Delay Compensation in IoT-Enabled Sustainable Healthcare Environment

Authors

Listed:
  • Fawad Naseer

    (Electrical Engineering Department, The University of Lahore, Lahore 54590, Pakistan)

  • Muhammad Nasir Khan

    (Electrical Engineering Department, The University of Lahore, Lahore 54590, Pakistan)

  • Ali Altalbe

    (Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia)

Abstract

Telepresence robots became popular during the COVID-19 era, when quarantine measures required people to limit face-to-face contact. They are useful in many scenarios, such as healthcare, academia, and the exploration of otherwise unreachable territories. The Internet of Things (IoT) provides a sensor-rich environment in which robots acquire more precise information about their surroundings, so a remote telepresence robot can draw on IoT sensor data to compute its actions more effectively. While navigating a distant IoT-enabled healthcare environment, however, the control signals from a teleoperator may arrive delayed. We propose a human-cooperative telecontrol robotic system for an IoT-sensed healthcare environment. A deep reinforcement learning (DRL) method, the deep deterministic policy gradient (DDPG), improves control of the telepresence robot by assisting the teleoperator whenever the communication control signals are delayed. The proposed approach stabilizes the system on the teleoperator's behalf by factoring the delayed-signal term out of the main control framework and relying on the sensed IoT infrastructure. In a dynamic IoT-enabled healthcare context, the proposed method for operating the telepresence robot can effectively manage control signals delayed by 30 s. Simulations and physical experiments with human teleoperators in a real-time healthcare environment demonstrate the proposed method.
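To make the delay-compensation idea concrete, the Python sketch below (assuming PyTorch; this is not the authors' implementation) pairs a DDPG actor and critic with a hypothetical select_command helper that shifts control toward the learned policy as the teleoperator's command grows stale, up to the 30 s horizon mentioned in the abstract. The state and action dimensions, network sizes, and the linear blending rule are all illustrative assumptions; the paper's actual mechanism factors the delayed-signal term out of the control framework, and the blend here is only a stand-in for that idea. The critic is included only to complete the DDPG pair; training is omitted.

    # Illustrative sketch only; all names, dimensions, and the blending
    # rule are assumptions, not the authors' code.
    import torch
    import torch.nn as nn

    STATE_DIM = 12   # assumed IoT-sensed state (e.g., pose + range sensors)
    ACTION_DIM = 2   # assumed wheel velocities of the telepresence robot

    class Actor(nn.Module):
        """Deterministic policy mu(s) -> a, as in DDPG."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, ACTION_DIM), nn.Tanh(),  # actions in [-1, 1]
            )
        def forward(self, state):
            return self.net(state)

    class Critic(nn.Module):
        """Action value Q(s, a); trained against a Bellman target in DDPG."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )
        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    def select_command(state, operator_cmd, delay_s, actor, max_delay_s=30.0):
        """Blend teleoperator and policy commands by signal staleness.

        A fresh operator command passes through unchanged; as the delay
        approaches max_delay_s, weight shifts to the learned policy.
        The linear blend is an assumption for illustration.
        """
        w = min(delay_s / max_delay_s, 1.0)   # 0 = fresh, 1 = fully stale
        with torch.no_grad():
            policy_cmd = actor(state)
        return (1.0 - w) * operator_cmd + w * policy_cmd

    # Example: a 12-second-old operator command in a random sensed state.
    actor = Actor()
    state = torch.randn(STATE_DIM)
    operator_cmd = torch.tensor([0.5, -0.2])
    print(select_command(state, operator_cmd, delay_s=12.0, actor=actor))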

Suggested Citation

  • Fawad Naseer & Muhammad Nasir Khan & Ali Altalbe, 2023. "Telepresence Robot with DRL Assisted Delay Compensation in IoT-Enabled Sustainable Healthcare Environment," Sustainability, MDPI, vol. 15(4), pages 1-15, February.
  • Handle: RePEc:gam:jsusta:v:15:y:2023:i:4:p:3585-:d:1069452

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/15/4/3585/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/15/4/3585/
    Download Restriction: no

    References listed on IDEAS

    1. George E. Monahan, 1982. "State of the Art---A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms," Management Science, INFORMS, vol. 28(1), pages 1-16, January.
    2. Ahmad F. Subahi & Osamah Ibrahim Khalaf & Youseef Alotaibi & Rajesh Natarajan & Natesh Mahadev & Timmarasu Ramesh, 2022. "Modified Self-Adaptive Bayesian Algorithm for Smart Heart Disease Prediction in IoT System," Sustainability, MDPI, vol. 14(21), pages 1-20, October.

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Abdullah Addas & Muhammad Tahir & Najma Ismat, 2023. "Enhancing Precision of Crop Farming towards Smart Cities: An Application of Artificial Intelligence," Sustainability, MDPI, vol. 16(1), pages 1-18, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wooseung Jang & J. George Shanthikumar, 2002. "Stochastic allocation of inspection capacity to competitive processes," Naval Research Logistics (NRL), John Wiley & Sons, vol. 49(1), pages 78-94, February.
    2. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2009. "Protocols with No Acknowledgment," Operations Research, INFORMS, vol. 57(4), pages 905-915, August.
    3. Kazmi, Hussain & Suykens, Johan & Balint, Attila & Driesen, Johan, 2019. "Multi-agent reinforcement learning for modeling and control of thermostatically controlled loads," Applied Energy, Elsevier, vol. 238(C), pages 1022-1035.
    4. Williams, Byron K., 2009. "Markov decision processes in natural resources management: Observability and uncertainty," Ecological Modelling, Elsevier, vol. 220(6), pages 830-840.
    5. Xin Jin, 2021. "Can we imitate the principal investor's behavior to learn option price?," Papers 2105.11376, arXiv.org, revised Jan 2022.
    6. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    7. Churlzu Lim & J. Neil Bearden & J. Cole Smith, 2006. "Sequential Search with Multiattribute Options," Decision Analysis, INFORMS, vol. 3(1), pages 3-15, March.
    8. Anyan Qi & Hyun-Soo Ahn & Amitabh Sinha, 2017. "Capacity Investment with Demand Learning," Operations Research, INFORMS, vol. 65(1), pages 145-164, February.
    9. Chiel van Oosterom & Lisa M. Maillart & Jeffrey P. Kharoufeh, 2017. "Optimal maintenance policies for a safety‐critical system and its deteriorating sensor," Naval Research Logistics (NRL), John Wiley & Sons, vol. 64(5), pages 399-417, August.
    10. Ciriaco Valdez‐Flores & Richard M. Feldman, 1989. "A survey of preventive maintenance models for stochastically deteriorating single‐unit systems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 36(4), pages 419-446, August.
    11. Grosfeld-Nir, Abraham, 2007. "Control limits for two-state partially observable Markov decision processes," European Journal of Operational Research, Elsevier, vol. 182(1), pages 300-304, October.
    12. Paul L Fackler & Krishna Pacifici & Julien Martin & Carol McIntyre, 2014. "Efficient Use of Information in Adaptive Management with an Application to Managing Recreation near Golden Eagle Nesting Sites," PLOS ONE, Public Library of Science, vol. 9(8), pages 1-14, August.
    13. Malek Ebadi & Raha Akhavan-Tabatabaei, 2021. "Personalized Cotesting Policies for Cervical Cancer Screening: A POMDP Approach," Mathematics, MDPI, vol. 9(6), pages 1-20, March.
    14. Tianhu Deng & Zuo-Jun Max Shen & J. George Shanthikumar, 2014. "Statistical Learning of Service-Dependent Demand in a Multiperiod Newsvendor Setting," Operations Research, INFORMS, vol. 62(5), pages 1064-1076, October.
    15. Zong-Zhi Lin & James C. Bean & Chelsea C. White, 2004. "A Hybrid Genetic/Optimization Algorithm for Finite-Horizon, Partially Observed Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 16(1), pages 27-38, February.
    16. Kıvanç, İpek & Özgür-Ünlüakın, Demet & Bilgiç, Taner, 2022. "Maintenance policy analysis of the regenerative air heater system using factored POMDPs," Reliability Engineering and System Safety, Elsevier, vol. 219(C).
    17. T Sloan, 2010. "First, do no harm? A framework for evaluating new versus reprocessed medical devices," Journal of the Operational Research Society, Palgrave Macmillan;The OR Society, vol. 61(2), pages 191-201, February.
    18. Yanling Chang & Alan Erera & Chelsea White, 2015. "A leader–follower partially observed, multiobjective Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 103-128, December.
    19. İ. Esra Büyüktahtakın & Robert G. Haight, 2018. "A review of operations research models in invasive species management: state of the art, challenges, and future directions," Annals of Operations Research, Springer, vol. 271(2), pages 357-403, December.
    20. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:15:y:2023:i:4:p:3585-:d:1069452. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.