Printed from https://ideas.repec.org/a/bjc/journl/v12y2025i3p1081-1090.html

Optimized Machine Learning Models for Poverty Detection: A Scientific Review of Multidimensional Approaches

Authors
  • Abdulrehman Mohamed

    (Institute of Computing and Informatics, Technical University of Mombasa, Tom Mboya Street Tudor, Mombasa)

  • Fullgence Mwakondo

    (Institute of Computing and Informatics, Technical University of Mombasa, Tom Mboya Street Tudor, Mombasa)

  • Kelvin Tole

    (Institute of Computing and Informatics, Technical University of Mombasa, Tom Mboya Street Tudor, Mombasa)

  • Mvurya Mgala

    (Institute of Computing and Informatics, Technical University of Mombasa, Tom Mboya Street Tudor, Mombasa)

Abstract

This paper enhances the discussion on machine learning (ML) models for poverty detection by introducing empirical validation, comparative performance analysis, and practical deployment strategies. We validate the proposed Optimized Machine Learning Model (OMLM) through experiments on real-world datasets. A comparative study against existing poverty detection models, including logistic regression, decision trees, and convolutional neural networks (CNNs), highlights OMLM’s superior adaptability and accuracy. The paper further explores data limitations, computational efficiency, and regional performance variations. Finally, a novel optimization technique, combining Genetic Algorithms (GA) with Reinforcement Learning (RL), is introduced to refine predictive accuracy and real-time adaptability. Practical implementation details, including data processing pipelines, cloud-based deployment, and integration into governmental policy frameworks, are discussed to enhance the model’s real-world applicability. This study contributes to advancing ML applications in poverty detection, reinforcing its role in data-driven policymaking and targeted socio-economic interventions.
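The abstract names a hybrid of Genetic Algorithms (GA) and Reinforcement Learning (RL) but this page does not specify the formulation. As an illustration only, the sketch below shows one common way such a hybrid is built: a genetic algorithm searches model hyperparameters, and a toy reward function stands in for the score an RL evaluation loop would normally return. All function names, parameter ranges, and the reward surface here are assumptions, not the authors' method.

```python
import random

random.seed(0)

def reward(params):
    """Stand-in for an RL evaluation: higher is better.
    Arbitrary illustrative optimum at learning_rate=0.1, depth=6."""
    lr, depth = params
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 6) ** 2

def mutate(params):
    """Perturb a candidate: Gaussian noise on the rate, +/-1 on the depth."""
    lr, depth = params
    return (max(1e-4, lr + random.gauss(0, 0.02)),
            max(1, depth + random.choice([-1, 0, 1])))

def crossover(a, b):
    """Single-point crossover: rate from one parent, depth from the other."""
    return (a[0], b[1])

def genetic_search(pop_size=20, generations=30):
    # Random initial population of (learning_rate, depth) candidates.
    pop = [(random.uniform(0.001, 0.5), random.randint(1, 12))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=reward, reverse=True)
        elite = pop[: pop_size // 4]          # keep the top quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return max(pop, key=reward)

best = genetic_search()
```

In a full GA + RL pipeline, `reward` would be replaced by an episode of policy evaluation, which is what makes the combination expensive but adaptive; the elitist selection loop itself is unchanged.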

Suggested Citation

  • Abdulrehman Mohamed & Fullgence Mwakondo & Kelvin Tole & Mvurya Mgala, 2025. "Optimized Machine Learning Models for Poverty Detection: A Scientific Review of Multidimensional Approaches," International Journal of Research and Scientific Innovation, International Journal of Research and Scientific Innovation (IJRSI), vol. 12(3), pages 1081-1090, March.
  • Handle: RePEc:bjc:journl:v:12:y:2025:i:3:p:1081-1090

    Download full text from publisher

    File URL: https://www.rsisinternational.org/journals/ijrsi/digital-library/volume-12-issue-3/1081-1090.pdf
    Download Restriction: no

    File URL: https://rsisinternational.org/journals/ijrsi/articles/optimized-machine-learning-models-for-poverty-detection-a-scientific-review-of-multidimensional-approaches/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
    2. Bo Hu & Jiaxi Li & Shuang Li & Jie Yang, 2019. "A Hybrid End-to-End Control Strategy Combining Dueling Deep Q-network and PID for Transient Boost Control of a Diesel Engine with Variable Geometry Turbocharger and Cooled EGR," Energies, MDPI, vol. 12(19), pages 1-15, September.
    3. Li, Hao & Misra, Siddharth, 2021. "Reinforcement learning based automated history matching for improved hydrocarbon production forecast," Applied Energy, Elsevier, vol. 284(C).
    4. Tian Zhu & Merry H. Ma, 2022. "Deriving the Optimal Strategy for the Two Dice Pig Game via Reinforcement Learning," Stats, MDPI, vol. 5(3), pages 1-14, August.
    5. Xiaoyue Li & John M. Mulvey, 2023. "Optimal Portfolio Execution in a Regime-switching Market with Non-linear Impact Costs: Combining Dynamic Program and Neural Network," Papers 2306.08809, arXiv.org.
    6. Baoyu Liang & Yuchen Wang & Chao Tong, 2025. "AI Reasoning in Deep Learning Era: From Symbolic AI to Neural–Symbolic AI," Mathematics, MDPI, vol. 13(11), pages 1-42, May.
    7. Feng, Cong & Zhang, Jie & Zhang, Wenqi & Hodge, Bri-Mathias, 2022. "Convolutional neural networks for intra-hour solar forecasting based on sky image sequences," Applied Energy, Elsevier, vol. 310(C).
    8. Anthony Coache & Sebastian Jaimungal & 'Alvaro Cartea, 2022. "Conditionally Elicitable Dynamic Risk Measures for Deep Reinforcement Learning," Papers 2206.14666, arXiv.org, revised May 2023.
    9. Pedro Afonso Fernandes, 2024. "Forecasting with Neuro-Dynamic Programming," Papers 2404.03737, arXiv.org.
    10. Hao, Peng & Wei, Zhensong & Bai, Zhengwei & Barth, Matthew J., 2020. "Developing an Adaptive Strategy for Connected Eco-Driving Under Uncertain Traffic and Signal Conditions," Institute of Transportation Studies, Working Paper Series qt2fv5063b, Institute of Transportation Studies, UC Davis.
    11. Guangyuan Li & Baobao Song & Harinder Singh & V. B. Surya Prasath & H. Leighton Grimes & Nathan Salomonis, 2023. "Decision level integration of unimodal and multimodal single cell data with scTriangulate," Nature Communications, Nature, vol. 14(1), pages 1-16, December.
    12. Tambet Matiisen & Aqeel Labash & Daniel Majoral & Jaan Aru & Raul Vicente, 2022. "Do Deep Reinforcement Learning Agents Model Intentions?," Stats, MDPI, vol. 6(1), pages 1-17, December.
    13. Nathan Companez & Aldeida Aleti, 2016. "Can Monte-Carlo Tree Search learn to sacrifice?," Journal of Heuristics, Springer, vol. 22(6), pages 783-813, December.
    14. Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
    15. Benjamin Heinbach & Peter Burggräf & Johannes Wagner, 2024. "gym-flp: A Python Package for Training Reinforcement Learning Algorithms on Facility Layout Problems," SN Operations Research Forum, Springer, vol. 5(1), pages 1-26, March.
    16. Zhang, Yiwen & Ren, Yifan & Liu, Ziyun & Li, Haoqin & Jiang, Huaiguang & Xue, Ying & Ou, Junhui & Hu, Renzong & Zhang, Jun & Gao, David Wenzhong, 2025. "Federated deep reinforcement learning for varying-scale multi-energy microgrids energy management considering comprehensive security," Applied Energy, Elsevier, vol. 380(C).
    17. Elsisi, Mahmoud & Amer, Mohammed & Dababat, Alya’ & Su, Chun-Lien, 2023. "A comprehensive review of machine learning and IoT solutions for demand side energy management, conservation, and resilient operation," Energy, Elsevier, vol. 281(C).
    18. Lu Wang & Wenqing Ai & Tianhu Deng & Zuo‐Jun M. Shen & Changjing Hong, 2020. "Optimal production ramp‐up in the smartphone manufacturing industry," Naval Research Logistics (NRL), John Wiley & Sons, vol. 67(8), pages 685-704, December.
    19. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    20. Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:bjc:journl:v:12:y:2025:i:3:p:1081-1090. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Dr. Renu Malsaria (email available below). General contact details of provider: https://rsisinternational.org/journals/ijrsi/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.