
Multi-Modal Artificial Intelligence for Smart Cities: Experimental Integration of Textual and Sensor Data

Author

Listed:
  • Nouf Alkhater

    (Department of Computer Science and Engineering, College of Computer Science and Engineering, University of Hafr Al Batin, Hafr Al Batin 39524, Saudi Arabia)

Abstract

Smart city decision-making increasingly relies on heterogeneous urban data sources. Dense traffic sensor streams provide continuous quantitative measurements, while citizen-generated textual reports offer event-driven contextual information. However, integrating these modalities remains challenging due to temporal misalignment, textual sparsity, and semantic noise. This paper investigates multi-modal learning for traffic congestion severity prediction through an experimental integration of open traffic sensor data (METR-LA: Los Angeles, USA) and citizen-generated textual reports (NYC 311: New York City, USA). Congestion severity is formulated as a four-class classification task derived from traffic speed measurements. We propose an end-to-end framework that combines: (i) sensor time-series encoding using a GRU-based temporal encoder, (ii) textual representation learning using a BERT-based encoder, (iii) a symmetric time-window alignment strategy (±Δ) to associate irregular reports with sensor time steps, and (iv) multiple fusion architectures, including early fusion, late fusion, and a cross-attention module for cross-modal interaction modeling. Experiments on publicly available datasets show that multi-modal early fusion achieves the best overall performance (Accuracy = 0.8283, Macro-F1 = 0.8231) compared to uni-modal baselines. In the studied cross-city setting with sparse and weakly aligned textual signals, the proposed cross-attention fusion does not outperform the strong sensor-only baseline, suggesting that the sensor modality dominates when cross-modal signal strength is limited. These results highlight both the potential and the practical constraints of multi-modal fusion in heterogeneous smart-city environments, emphasizing the importance of alignment design, modality relevance, and transparent experimental validation.
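Two steps of the pipeline described above lend themselves to a minimal sketch: discretizing traffic speed into four congestion-severity classes, and the symmetric ±Δ window that attaches irregular citizen reports to regular sensor time steps. The speed cut-points and the Δ = 5 min half-width below are illustrative assumptions, not values taken from the paper.

```python
from datetime import datetime, timedelta

def severity_class(speed_mph: float) -> int:
    """Map a speed reading to one of four severity classes.
    The paper derives classes from traffic speed; these specific
    cut-points (in mph) are assumptions for illustration only."""
    if speed_mph >= 55:
        return 0  # free flow
    if speed_mph >= 40:
        return 1  # light congestion
    if speed_mph >= 25:
        return 2  # moderate congestion
    return 3      # severe congestion

def align_reports(sensor_steps, reports, delta):
    """Symmetric +/-delta alignment: attach each irregularly timed
    textual report to every sensor time step within delta of it."""
    aligned = {t: [] for t in sensor_steps}
    for ts, text in reports:
        for t in sensor_steps:
            if abs((t - ts).total_seconds()) <= delta.total_seconds():
                aligned[t].append(text)
    return aligned

# Four 5-minute sensor steps and one report filed at 08:03.
steps = [datetime(2026, 3, 1, 8, 0) + timedelta(minutes=5 * i) for i in range(4)]
reports = [(datetime(2026, 3, 1, 8, 3), "stalled truck on ramp")]
aligned = align_reports(steps, reports, delta=timedelta(minutes=5))
# With delta = 5 min, the 08:03 report attaches to the 08:00 and 08:05
# steps (within the window) but not to 08:10 or 08:15.
```

With a wider Δ, more sensor steps receive each report, trading alignment precision for textual coverage — the design tension the abstract highlights for sparse, weakly aligned text.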

Suggested Citation

  • Nouf Alkhater, 2026. "Multi-Modal Artificial Intelligence for Smart Cities: Experimental Integration of Textual and Sensor Data," Future Internet, MDPI, vol. 18(3), pages 1-24, March.
  • Handle: RePEc:gam:jftint:v:18:y:2026:i:3:p:136-:d:1879187
    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/18/3/136/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/18/3/136/
    Download Restriction: no



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.