Abstract
The increasing prevalence of online education for children underscores the need for intelligent systems capable of recognizing and responding to learners' emotional states in real time. Emotional fluctuations, such as boredom, frustration, or confusion, have been empirically linked to cognitive disengagement and poor learning outcomes, particularly in younger learners. However, existing educational platforms lack reliable mechanisms for detecting such states and initiating timely pedagogical interventions. This study proposes a multimodal emotion recognition and intervention framework tailored for children's online learning environments. The proposed system integrates facial expression analysis, speech emotion recognition, and behavioral signal tracking to detect six core emotional states commonly observed in child learners. A hierarchical fusion architecture, combining CNN-LSTM visual encoders and Transformer-based cross-modal attention modules, enables robust emotion classification even under modality loss or environmental noise. In parallel, a rule-enhanced policy engine maps detected emotions to personalized intervention strategies, including task scaffolding, verbal encouragement, and content pacing adjustments. The framework is evaluated on a newly curated multimodal dataset of primary school students engaged in online learning tasks, demonstrating superior performance over unimodal and early-fusion baselines across multiple metrics. In real-world deployment trials, the system significantly improves learners' task completion rates and emotional stability. Furthermore, ablation studies and statistical significance tests confirm the contributions of each modality and fusion mechanism. The results suggest that incorporating multimodal affective computing into online learning platforms offers a promising pathway toward emotionally adaptive and child-centric digital education. This work contributes both a scalable technical solution and empirical evidence supporting the integration of emotional monitoring and intervention in intelligent tutoring systems.
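The abstract does not include implementation details, but the described pipeline (a CNN-LSTM visual encoder plus Transformer-style cross-modal attention fusing visual, speech, and behavioral signals into six emotion classes) can be illustrated with a minimal sketch. The following is an assumption-laden PyTorch example; all module names, layer sizes, and the handling of a missing audio modality are hypothetical and are not taken from the authors' system.

```python
# Illustrative sketch only (PyTorch assumed): a hierarchical fusion model with a
# CNN-LSTM visual encoder and cross-modal attention, classifying six emotional
# states. Names, dimensions, and the modality-dropout handling are hypothetical.
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """CNN over per-frame face crops, followed by an LSTM over the frame sequence."""
    def __init__(self, d_model=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(32, d_model, batch_first=True)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)  # (B*T, 32)
        out, _ = self.lstm(feats.view(b, t, -1))            # (B, T, d_model)
        return out

class HierarchicalFusionClassifier(nn.Module):
    """Visual tokens attend to audio/behavior tokens via multi-head attention,
    then a pooled representation is classified into six emotion categories."""
    def __init__(self, d_model=128, audio_dim=40, behav_dim=8, n_classes=6):
        super().__init__()
        self.visual = VisualEncoder(d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.behav_proj = nn.Linear(behav_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, frames, audio, behavior, audio_missing=False):
        v = self.visual(frames)                    # (B, Tv, d)
        context = [self.behav_proj(behavior)]      # behavioral signals as tokens
        if not audio_missing:                      # crude stand-in for modality loss
            context.append(self.audio_proj(audio))
        ctx = torch.cat(context, dim=1)            # (B, Tc, d)
        fused, _ = self.cross_attn(v, ctx, ctx)    # visual queries attend to context
        return self.classifier(fused.mean(dim=1))  # (B, 6) emotion logits

# Usage with random tensors standing in for a short face-crop clip, log-mel audio
# frames, and clickstream features; all shapes are illustrative.
model = HierarchicalFusionClassifier()
logits = model(
    frames=torch.randn(2, 8, 3, 64, 64),
    audio=torch.randn(2, 20, 40),
    behavior=torch.randn(2, 10, 8),
)
print(logits.shape)  # torch.Size([2, 6])
```

A rule-enhanced policy engine of the kind described could then map the argmax of these logits (e.g., frustration or boredom) to an intervention such as task scaffolding or pacing adjustment; that mapping is not shown here because the abstract does not specify its rules.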