
A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

Author

Listed:
  • Amirhossein Aghamohammadi
  • Mei Choo Ang
  • Elankovan A. Sundararajan
  • Ng Kok Weng
  • Marzieh Mogharrebi
  • Seyed Yashar Banihashem

Abstract

Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variations, which are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to handle appearance variations in aerial videos; among them, spatiotemporal saliency detection has reported promising results for moving-target detection. However, it loses accuracy when visual tracking is performed under appearance variations. In this study, a visual tracking method based on spatiotemporal saliency and discriminative online learning is proposed to handle appearance variations. Temporal saliency, which represents moving-target regions, is extracted by frame differencing combined with the Sauvola local adaptive thresholding algorithm. Spatial saliency, which represents the target's appearance details within the candidate moving regions, is detected by computing the feature uniqueness and spatial compactness of saliency measurements derived from SLIC superpixel segmentation together with color and moment features. Because saliency detection is time-consuming, a parallel algorithm was developed to distribute the saliency detection workload across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm generates a sample model from the spatiotemporal saliency; this model is updated incrementally so that the target can still be detected under appearance variations. Experiments on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared with state-of-the-art methods.
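
The temporal-saliency step, frame differencing followed by Sauvola local adaptive thresholding, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the `window_size` and `k` values are assumptions.

```python
# Temporal saliency sketch: frame difference + Sauvola adaptive threshold.
# Hypothetical function; window_size and k are illustrative values.
import cv2
import numpy as np
from skimage.filters import threshold_sauvola

def temporal_saliency(prev_frame, curr_frame, window_size=25, k=0.2):
    """Binary mask of candidate moving regions between two BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)             # frame difference
    thresh = threshold_sauvola(diff, window_size=window_size, k=k)
    return (diff > thresh).astype(np.uint8)              # 1 = candidate motion
```

Sauvola's method derives a local threshold from the mean and standard deviation inside each window, which tolerates the uneven illumination and noise typical of aerial footage better than a single global threshold.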
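
The spatial-saliency step, SLIC superpixels with color and moment features scored by feature uniqueness and spatial compactness, together with its distribution over multiple processors, could look roughly like the sketch below. The paper's exact features, weights, and work partitioning are not reproduced here; the Gaussian weighting, the `sigma` values, and the per-region parallel split are assumptions.

```python
# Spatial saliency sketch: SLIC superpixels, per-region color/position
# features, uniqueness and compactness scores. Per-region feature extraction
# is farmed out to a process pool to mimic the paper's parallel step.
# (Call from under `if __name__ == "__main__":` on spawn-based platforms.)
import numpy as np
from multiprocessing import Pool
from skimage.color import rgb2lab
from skimage.segmentation import slic

def _region_features(args):
    """Mean LAB color and centroid (first spatial moment) of one superpixel."""
    lab, labels, idx = args
    ys, xs = np.nonzero(labels == idx)
    return lab[ys, xs].mean(axis=0), np.array([ys.mean(), xs.mean()])

def spatial_saliency(rgb, n_segments=300, sigma_p=0.25, sigma_c=20.0, workers=4):
    lab = rgb2lab(rgb)
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    ids = np.unique(labels)
    with Pool(workers) as pool:          # distribute per-region work
        feats = pool.map(_region_features, [(lab, labels, i) for i in ids])
    colors = np.array([c for c, _ in feats])
    pos = np.array([p for _, p in feats]) / max(rgb.shape[:2])
    dc = np.linalg.norm(colors[:, None] - colors[None], axis=2)  # color dist.
    dp = np.linalg.norm(pos[:, None] - pos[None], axis=2)        # spatial dist.
    # Uniqueness: color contrast with other regions, down-weighted by distance.
    wp = np.exp(-dp**2 / (2 * sigma_p**2))
    wp /= wp.sum(axis=1, keepdims=True)
    uniqueness = (wp * dc).sum(axis=1)
    # Compactness: spatial spread of each region's color among similar regions.
    wc = np.exp(-dc**2 / (2 * sigma_c**2))
    wc /= wc.sum(axis=1, keepdims=True)
    mu = wc @ pos
    spread = (wc * np.linalg.norm(pos[None] - mu[:, None], axis=2)**2).sum(axis=1)
    compactness = np.exp(-spread / (spread.max() + 1e-9))
    sal = uniqueness * compactness
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)
    return sal[labels]                   # per-region scores -> per-pixel map
```

Splitting the work by superpixel keeps each task independent, so the pool needs no synchronization; in practice one would ship only region indices to each worker rather than full copies of `lab` and `labels`.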
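
Finally, the two saliencies are fused and a discriminative model is updated incrementally. In the sketch below an SGD-trained logistic regression stands in for the paper's discriminative online learner, and the feature representation of candidate regions is left abstract; both are assumptions.

```python
# Fusion + online discriminative update sketch. The classifier and the
# feature vectors are placeholders for the paper's sample model.
import numpy as np
from sklearn.linear_model import SGDClassifier

def spatiotemporal_saliency(temporal_mask, spatial_map):
    """Keep spatial saliency only inside candidate moving regions."""
    return spatial_map * temporal_mask

class OnlineTracker:
    def __init__(self):
        self.clf = SGDClassifier(loss="log_loss")   # online logistic regression
        self._initialized = False

    def update(self, features, labels):
        """Incremental refit on new target (1) / background (0) samples."""
        if not self._initialized:
            self.clf.partial_fit(features, labels, classes=np.array([0, 1]))
            self._initialized = True
        else:
            self.clf.partial_fit(features, labels)

    def score(self, features):
        """Confidence that each candidate region contains the target."""
        return self.clf.predict_proba(features)[:, 1]
```

Using `partial_fit` rather than refitting from scratch is what makes the model incrementally updated: each frame's high-saliency samples nudge the decision boundary, letting the tracker follow gradual appearance changes.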

Suggested Citation

  • Amirhossein Aghamohammadi & Mei Choo Ang & Elankovan A. Sundararajan & Ng Kok Weng & Marzieh Mogharrebi & Seyed Yashar Banihashem, 2018. "A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos," PLOS ONE, Public Library of Science, vol. 13(2), pages 1-19, February.
  • Handle: RePEc:plo:pone00:0192246
    DOI: 10.1371/journal.pone.0192246

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0192246
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0192246&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0192246?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wei Zhu & Jing Lou & Longtao Chen & Qingyuan Xia & Mingwu Ren, 2017. "Scene text detection via extremal region based double threshold convolutional network classification," PLOS ONE, Public Library of Science, vol. 12(8), pages 1-17, August.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0192246. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.