IDEAS home Printed from https://ideas.repec.org/a/gam/jagris/v12y2022i10p1650-d937430.html

A Deep-Learning Extraction Method for Orchard Visual Navigation Lines

Author

Listed:
  • Jianjun Zhou

    (College of Information Engineering, Beijing Institute of Petrochemical Technology, Beijing 102617, China)

  • Siyuan Geng

    (Beijing Electro-Mechanical Engineering Institute, Beijing 100074, China)

  • Quan Qiu

    (Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China)

  • Yang Shao

    (College of Information Engineering, Beijing Institute of Petrochemical Technology, Beijing 102617, China)

  • Man Zhang

    (Key Laboratory of Smart Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing 100083, China)

Abstract

Autonomous navigation of orchard machinery helps improve the efficiency of fruit production and reduce labor costs. Path planning is one of the core technologies of autonomous navigation for orchard machinery. Since fruit trees are normally planted in straight, parallel rows, they serve as natural landmarks that provide suitable cues for intelligent orchard machinery. This paper presents a novel method for path planning based on computer vision technologies. We combine deep learning and the least-squares (DL-LS) algorithm to build a new navigation-line extraction algorithm for orchard scenarios. First, a large number of actual orchard images are collected and processed to train a YOLO V3 model; after training, the model's mean average precision (mAP) for trunk and tree detection reaches 92.11%. Second, the reference-point coordinates of the fruit trees are calculated from the coordinates of the trunk bounding boxes. Third, the reference lines of the fruit-tree rows on both sides are fitted by the least-squares method, and the navigation line for the orchard machinery is determined from the two reference lines. Experimental results show that the trained YOLO V3 network identifies tree trunks and fruit trees accurately and that the navigation line between fruit-tree rows can be extracted effectively; the accuracy of orchard centerline extraction is 90.00%.

Suggested Citation

  • Jianjun Zhou & Siyuan Geng & Quan Qiu & Yang Shao & Man Zhang, 2022. "A Deep-Learning Extraction Method for Orchard Visual Navigation Lines," Agriculture, MDPI, vol. 12(10), pages 1-13, October.
  • Handle: RePEc:gam:jagris:v:12:y:2022:i:10:p:1650-:d:937430

    Download full text from publisher

    File URL: https://www.mdpi.com/2077-0472/12/10/1650/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2077-0472/12/10/1650/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jagris:v:12:y:2022:i:10:p:1650-:d:937430. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.