
Towards Robust Chain-of-Thought Prompting with Self-Consistency for Remote Sensing VQA: An Empirical Study Across Large Multimodal Models

Authors

Listed:
  • Fatema Tuj Johora Faria

    (Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka 1208, Bangladesh)

  • Laith H. Baniata

    (Department of Computing, Gachon University, Seongnam 13120, Republic of Korea)

  • Ahyoung Choi

    (Department of Computing, Gachon University, Seongnam 13120, Republic of Korea)

  • Sangwoo Kang

    (Department of Computing, Gachon University, Seongnam 13120, Republic of Korea)

Abstract

Remote sensing visual question answering (RSVQA) involves interpreting complex geospatial information captured in satellite imagery to answer natural language questions, making it a vital tool for observing and analyzing Earth’s surface without direct contact. Although numerous studies have addressed RSVQA, most have focused primarily on answer accuracy, often overlooking the underlying reasoning capabilities required to interpret spatial and contextual cues in satellite imagery. To address this gap, this study presents a comprehensive evaluation of four large multimodal models (LMMs): GPT-4o, Grok 3, Gemini 2.5 Pro, and Claude 3.7 Sonnet. We used a curated subset of the EarthVQA dataset consisting of 100 rural images with 29 question–answer pairs each and 100 urban images with 42 pairs each. We developed three task-specific frameworks: (1) Zero-GeoVision, which employs zero-shot prompting with problem-specific prompts that elicit direct answers from the model’s pretrained knowledge without fine-tuning; (2) CoT-GeoReason, which augments this setup with chain-of-thought prompting, guiding the model through explicit steps of feature detection, spatial analysis, and answer synthesis; and (3) Self-GeoSense, which extends this approach by stochastically decoding five independent reasoning chains for each remote sensing question. Rather than merging these chains, it tallies their final answers, selects the majority choice, and returns a single complete reasoning chain whose conclusion aligns with that majority. Additionally, we designed the Geo-Judge framework to employ a two-stage evaluation process. In Stage 1, a GPT-4o-mini-based LMM judge assesses reasoning coherence and answer correctness using the input image, task type, reasoning steps, generated model answer, and ground truth. In Stage 2, blinded human experts independently review the LMM’s reasoning and answer, providing unbiased validation through careful reassessment. With Self-GeoSense, Grok 3 achieves the strongest performance: 94.69% accuracy in Basic Judging, 93.18% in Basic Counting, 89.42% in Reasoning-Based Judging, 83.29% in Reasoning-Based Counting, 77.64% in Object Situation Analysis, and 65.29% in Comprehensive Analysis, alongside RMSE values of 0.9102 in Basic Counting and 1.0551 in Reasoning-Based Counting.
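The Self-GeoSense procedure described above (sample several stochastic reasoning chains, then majority-vote their final answers) is an instance of self-consistency decoding. Below is a minimal Python sketch of that voting step; query_lmm and the "FINAL ANSWER:" marker convention are hypothetical stand-ins, not the paper's actual API or prompts.

    from collections import Counter

    def parse_final_answer(text: str) -> str:
        # Normalize the text after the last "FINAL ANSWER:" marker
        # (a hypothetical output convention, not the paper's prompt format).
        return text.rsplit("FINAL ANSWER:", 1)[-1].strip().lower()

    def self_geosense(image, question, n_chains=5, temperature=0.7):
        """Sample n_chains independent chain-of-thought responses, majority-vote
        the final answers, and return one chain that supports the winner."""
        chains = []
        for _ in range(n_chains):
            # query_lmm is a placeholder for whichever LMM client is used;
            # temperature > 0 yields the stochastic decoding described above.
            reasoning = query_lmm(
                image=image,
                prompt=(f"Question: {question}\n"
                        "Think step by step, then write 'FINAL ANSWER: <answer>'."),
                temperature=temperature,
            )
            chains.append((parse_final_answer(reasoning), reasoning))

        # Vote over the final answers only; the reasoning chains are never merged.
        majority = Counter(answer for answer, _ in chains).most_common(1)[0][0]
        # Return one complete chain whose conclusion matches the majority answer.
        supporting_chain = next(r for a, r in chains if a == majority)
        return majority, supporting_chain

Returning a full sampled chain that agrees with the vote, rather than a merged summary, keeps the reported reasoning faithful to a single trajectory, which is what the Geo-Judge stages then assess for coherence.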

Suggested Citation

  • Fatema Tuj Johora Faria & Laith H. Baniata & Ahyoung Choi & Sangwoo Kang, 2025. "Towards Robust Chain-of-Thought Prompting with Self-Consistency for Remote Sensing VQA: An Empirical Study Across Large Multimodal Models," Mathematics, MDPI, vol. 13(18), pages 1-28, September.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:18:p:3046-:d:1754666

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/18/3046/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/18/3046/
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:18:p:3046-:d:1754666. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.