Authors:
- Murali Palani (Capitol Technology University, Laurel, Maryland, USA)
- Atif Farid Mohammad (Capitol Technology University, Laurel, Maryland, USA)
- Malathy Muthu (Capitol Technology University, Laurel, Maryland, USA)
Abstract
This study examines how explainable artificial intelligence can support responsible decision-making in socio-ecological systems by analyzing tree bioelectrical responses to geomagnetic variability as a global environmental case study. Using 309,660 hourly observations collected from 21 international monitoring stations between 2023 and 2024, we compare traditional machine learning and deep learning approaches to model bioelectrical circadian rhythms under varying geomagnetic and environmental conditions. Nine AI architectures were evaluated, including Random Forest, Gradient Boosting, XGBoost, LSTM networks, and Transformer models. Results indicate that traditional machine learning methods outperform deep learning approaches in both predictive accuracy and interpretability, with Random Forest achieving the highest performance (R² = 0.936), exceeding the best deep learning model by 18.7%. Geomagnetic storm conditions were associated with a 143.9% increase in signal amplitude and a three-hour phase delay in tree circadian rhythms, demonstrating measurable environmental sensitivity to electromagnetic variability. SHAP-based explainability analysis identified tree ground voltage as the dominant predictor, followed by key meteorological variables such as humidity, temperature, and wind speed. Beyond predictive performance, the findings highlight critical social and institutional implications of AI model selection. Traditional machine learning approaches offer greater transparency, lower computational barriers, and higher stakeholder interpretability, factors essential for environmental governance, policy compliance, and public trust in AI-driven monitoring systems. By positioning explainable AI as a socio-technical tool rather than a purely computational solution, this research contributes to interdisciplinary discussions on responsible AI deployment, environmental decision support, and the role of transparent analytics in managing complex human–environment interactions.
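The workflow the abstract describes — fitting a tree-ensemble regressor to environmental predictors and then ranking feature contributions — can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the data are synthetic placeholders, the feature names simply mirror those mentioned in the abstract, and permutation importance (from scikit-learn) stands in for the SHAP analysis the study actually performed.

```python
# Illustrative sketch of the abstract's workflow: a Random Forest regressor
# plus a feature-attribution step. Synthetic data only; permutation
# importance is used here as a simpler stand-in for SHAP values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ground_voltage = rng.normal(size=n)
humidity = rng.normal(size=n)
temperature = rng.normal(size=n)
wind_speed = rng.normal(size=n)
# Synthetic target in which ground voltage dominates, echoing (not
# reproducing) the abstract's reported ranking of predictors.
y = 3.0 * ground_voltage + 1.0 * humidity + 0.5 * temperature \
    + rng.normal(scale=0.3, size=n)
X = np.column_stack([ground_voltage, humidity, temperature, wind_speed])
names = ["tree_ground_voltage", "humidity", "temperature", "wind_speed"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Rank predictors by mean drop in held-out score when each is permuted.
result = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
ranking = sorted(zip(names, result.importances_mean), key=lambda t: -t[1])
top_feature = ranking[0][0]
print(top_feature)
```

In the study itself, SHAP values would replace the permutation step (e.g. a tree-specific explainer over the fitted forest), yielding per-observation attributions rather than a single global ranking.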
Handle: RePEc:smo:raiswp:0634
Provider: http://rais.education/