
MAMNet: Lightweight Multi-Attention Collaborative Network for Fine-Grained Cropland Extraction from Gaofen-2 Remote Sensing Imagery

Author

Listed:
  • Jiayong Wu

    (School of Information Science and Technology, Yunnan Normal University, Kunming 650500, China)

  • Xue Ding

    (School of Information Science and Technology, Yunnan Normal University, Kunming 650500, China
    Key Laboratory of Resources and Environmental Remote Sensing for Universities in Yunnan, Kunming 650500, China
    Center for Geospatial Information Engineering and Technology of Yunnan Province, Kunming 650500, China
    Department of Geography, Yunnan Normal University, Kunming 650500, China)

  • Jinliang Wang

    (Key Laboratory of Resources and Environmental Remote Sensing for Universities in Yunnan, Kunming 650500, China
    Center for Geospatial Information Engineering and Technology of Yunnan Province, Kunming 650500, China
    Department of Geography, Yunnan Normal University, Kunming 650500, China)

  • Jiya Pan

    (Key Laboratory of Resources and Environmental Remote Sensing for Universities in Yunnan, Kunming 650500, China
    Center for Geospatial Information Engineering and Technology of Yunnan Province, Kunming 650500, China
    School of Economics, Yunnan Normal University, Kunming 650500, China)

Abstract

To address the high computational complexity and boundary feature loss encountered when extracting farmland information from high-resolution remote sensing images, this study proposes an innovative CNN–Transformer hybrid network, MAMNet. The framework integrates a lightweight encoder, a global–local Transformer decoder, and a bidirectional attention architecture to achieve efficient and accurate farmland extraction. First, the ResNet-18 backbone is reconstructed with depthwise separable convolutions, reducing computational complexity while preserving feature representation capability. Second, the global–local Transformer block (GLTB) decoder uses multi-head self-attention to dynamically fuse multi-scale features across layers, effectively restoring the topological structure of fragmented farmland boundaries. Third, a novel bidirectional attention architecture is proposed: the Detail Improvement Module (DIM) uses channel attention to transfer semantic features to geometric features, while the Context Enhancement Module (CEM) uses spatial attention to achieve dynamic geometric–semantic fusion, quantitatively distinguishing farmland textures from mixed ground cover. A positional attention mechanism (PAM) enhances the continuity of linear features by strengthening spatial correlations in the skip connections. By cascading a front-end feature module (FEM) to expand the receptive field and combining it with an adaptive feature reconstruction head (FRH), the method improves information integrity in fragmented areas. Evaluated on a 2022 Gaofen-2 (GF-2) imagery dataset of Chenggong District, Kunming City, MAMNet achieves an mIoU of 86.68% (improvements of 1.66% and 2.44% over UNetFormer and BANet, respectively) and an F1-score of 92.86% with only 12 million parameters. This method provides new technical insights for plot-level farmland monitoring in precision agriculture.
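
The abstract's two core lightweighting and attention ideas can be illustrated with a minimal sketch. The PyTorch snippet below is not the authors' implementation: the layer widths, the squeeze-and-excitation-style channel gate (shown in the spirit of the DIM's channel attention), and the test shapes are illustrative assumptions only.

```python
# Minimal sketch (assumed layout, not the MAMNet code): a depthwise separable
# convolution block of the kind used to lighten a ResNet-18 backbone, plus a
# simple channel-attention gate.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: global pooling -> MLP -> sigmoid."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweight each feature channel by its learned importance.
        return x * self.fc(self.pool(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)              # dummy feature map
    block = DepthwiseSeparableConv(64, 128, stride=2)
    gate = ChannelAttention(128)
    y = gate(block(x))
    print(y.shape)                                # torch.Size([1, 128, 64, 64])
```

The usual motivation for this substitution is cost: a 3x3 depthwise plus 1x1 pointwise pair requires roughly (1/out_ch + 1/9) of the multiply-adds of a standard 3x3 convolution, which is how a ResNet-style encoder can be made substantially lighter while keeping its receptive-field structure.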

Suggested Citation

  • Jiayong Wu & Xue Ding & Jinliang Wang & Jiya Pan, 2025. "MAMNet: Lightweight Multi-Attention Collaborative Network for Fine-Grained Cropland Extraction from Gaofen-2 Remote Sensing Imagery," Agriculture, MDPI, vol. 15(11), pages 1-23, May.
  • Handle: RePEc:gam:jagris:v:15:y:2025:i:11:p:1152-:d:1665915

    Download full text from publisher

    File URL: https://www.mdpi.com/2077-0472/15/11/1152/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2077-0472/15/11/1152/
    Download Restriction: no

