Printed from https://ideas.repec.org/a/gam/jsusta/v13y2021i24p13686-d699920.html

Deep Reinforcement Learning-Based Robotic Grasping in Clutter and Occlusion

Authors

Listed:
  • Marwan Qaid Mohammed

    (Faculty of Engineering and Technology, Multimedia University (MMU), Ayer Keroh 75450, Melaka, Malaysia)

  • Lee Chung Kwek

    (Faculty of Engineering and Technology, Multimedia University (MMU), Ayer Keroh 75450, Melaka, Malaysia)

  • Shing Chyi Chua

    (Faculty of Engineering and Technology, Multimedia University (MMU), Ayer Keroh 75450, Melaka, Malaysia)

  • Abdulaziz Salamah Aljaloud

    (College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia)

  • Arafat Al-Dhaqm

    (School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia)

  • Zeyad Ghaleb Al-Mekhlafi

    (College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia)

  • Badiea Abdulkarem Mohammed

    (College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia)

Abstract

In robotic manipulation, object grasping is a basic yet challenging task. Dexterous grasping requires intelligent visual observation of the target objects, and spatial equivariance is central to learning the grasping policy. This paper addresses two significant challenges of robotic grasping in clutter and occlusion scenarios. The first is the coordination of push and grasp actions: in a well-ordered object scenario, the robot may occasionally fail to disrupt the arrangement of the objects, whereas in a randomly cluttered scenario, pushing may be less efficient because many objects are likely to be pushed out of the workspace. The second is the avoidance of occlusion, which occurs when the camera is entirely or partially occluded during a grasping action. To overcome these two problems, this paper proposes a multi-view change observation-based approach (MV-COBA). The approach consists of two parts: (1) multiple cameras provide multiple views to address the occlusion issue; and (2) visual change observation based on pixel depth differences coordinates the push and grasp actions. In simulation experiments, the proposed approach achieved average grasp success rates of 83.6%, 86.3%, and 97.8% in the cluttered, well-ordered object, and occlusion scenarios, respectively.
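The two ideas in the abstract can be sketched in a few lines: picking the least-occluded camera view among several, and flagging whether an action changed the scene via per-pixel depth differences. This is a minimal illustrative sketch only; the function names, thresholds, and the zero-as-invalid depth convention are assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_least_occluded_view(depth_maps, invalid_value=0.0):
    """Return the index of the camera whose depth map has the fewest
    invalid (occluded/missing) pixels. Hypothetical multi-view heuristic."""
    occlusion_ratios = [float(np.mean(d == invalid_value)) for d in depth_maps]
    return int(np.argmin(occlusion_ratios))

def scene_changed(depth_before, depth_after,
                  diff_threshold=0.01, pixel_ratio=0.005):
    """Flag whether an action meaningfully disturbed the scene by counting
    pixels whose depth changed by more than diff_threshold (in metres).
    Both thresholds are illustrative, not taken from the paper."""
    changed = np.abs(depth_after - depth_before) > diff_threshold
    return float(changed.mean()) > pixel_ratio
```

A policy along these lines could, for example, treat a push that triggers `scene_changed` as useful (the arrangement was disrupted) and otherwise attempt a grasp from the view chosen by `select_least_occluded_view`.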

Suggested Citation

  • Marwan Qaid Mohammed & Lee Chung Kwek & Shing Chyi Chua & Abdulaziz Salamah Aljaloud & Arafat Al-Dhaqm & Zeyad Ghaleb Al-Mekhlafi & Badiea Abdulkarem Mohammed, 2021. "Deep Reinforcement Learning-Based Robotic Grasping in Clutter and Occlusion," Sustainability, MDPI, vol. 13(24), pages 1-27, December.
  • Handle: RePEc:gam:jsusta:v:13:y:2021:i:24:p:13686-:d:699920

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/13/24/13686/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/13/24/13686/
    Download Restriction: no


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com .


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.