
Empowering Interpretable, Explainable Machine Learning Using Bayesian Network Classifiers

In: Machine Learning for Data Science Handbook

Author

Listed:
  • Boaz Lerner (Ben-Gurion University of the Negev)

Abstract

Even before the deep learning era, the machine learning (ML) community commonly believed that while decision trees, neural networks (NNs), support vector machines, and ensemble (bagging and boosting) methods are the ultimate tools for highly accurate classification, graphical models and their flagship Bayesian networks (BNs) are only appropriate for knowledge representation. This chapter challenges the belief that the unsupervised graphical model is inferior to the supervised classifier and provides evidence to the contrary. Moreover, it demonstrates how the graphical models’ knowledge representation capability promotes a level of interpretability and explainability that is not found in conventional ML classifiers. The chapter further challenges the ML community to invest even 1% of the effort currently devoted to increasing the accuracy of deep and non-deep ML classifiers and to equipping them with means for visualization and interpretation in instead developing BN learning algorithms that would allow graphical models to complement and integrate with these classifiers to foster interpretability and explainability. One example could be to utilize the natural interpretability provided by conditional (in)dependencies among the nodes and causal pathways in the BN classifier to visualize, interpret, and explain deep NN results and the important interactions among network units, layers, and activities that may be responsible for correct and incorrect classification decisions made by the network. Another example could be the development of graphical user interface tools that encourage, promote, and support human–machine interaction, by which users’ inquiries help manipulate and extend the learned BN model to better address these and further inquiries, and the tools in turn inspire users’ curiosity to investigate the model further and enrich their understanding of the domain. Such efforts will further contribute to the ML community’s attempts not only to increase its impact on advancing and supporting the many fields that strive for innovation, but also to meet growing criticism concerning the lack of explainability, transparency, and accountability in AI, criticism that may undermine and hinder the tremendous societal benefits that ML can bring.
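As an illustration of the kind of interpretability the abstract refers to, the following is a minimal sketch (not the chapter's own code or method) of a Bayesian network classifier whose conditional (in)dependence statements can be read directly off its structure. It assumes the pgmpy library is available; the variable names and toy data are hypothetical.

    # A minimal, illustrative sketch: a Naive-Bayes-structured Bayesian network
    # classifier whose conditional (in)dependence statements are read directly
    # off the graph and can be shown to a user as an explanation.
    # Assumes the pgmpy library; data and variable names are hypothetical.
    import pandas as pd
    from pgmpy.models import BayesianNetwork
    from pgmpy.estimators import MaximumLikelihoodEstimator
    from pgmpy.inference import VariableElimination

    # Toy discrete data: class variable C and two binary features X1, X2.
    data = pd.DataFrame({
        "C":  [0, 0, 0, 1, 1, 1, 0, 1],
        "X1": [0, 0, 1, 1, 1, 0, 0, 1],
        "X2": [1, 0, 0, 1, 0, 1, 1, 1],
    })

    # Structure: the class is the parent of every feature, so features are
    # conditionally independent given the class (the naive Bayes assumption).
    model = BayesianNetwork([("C", "X1"), ("C", "X2")])
    model.fit(data, estimator=MaximumLikelihoodEstimator)

    # The (in)dependence statements follow from the graph alone and double as
    # a human-readable account of what the classifier assumes, e.g. (X1 _|_ X2 | C).
    print(model.get_independencies())

    # Classification is posterior inference over C given observed features;
    # the full posterior distribution, not just a label, is available for explanation.
    inference = VariableElimination(model)
    print(inference.query(variables=["C"], evidence={"X1": 1, "X2": 0}))

In a larger, learned BN classifier (for instance, a tree-augmented naive Bayes or a general BN), the same calls expose a richer (in)dependence structure; this graph-level readability is the interpretability the abstract contrasts with conventional black-box classifiers.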

Suggested Citation

  • Boaz Lerner, 2023. "Empowering Interpretable, Explainable Machine Learning Using Bayesian Network Classifiers," Springer Books, in: Lior Rokach & Oded Maimon & Erez Shmueli (ed.), Machine Learning for Data Science Handbook, edition 0, pages 111-142, Springer.
  • Handle: RePEc:spr:sprchp:978-3-031-24628-9_7
    DOI: 10.1007/978-3-031-24628-9_7

    Download full text from publisher

    To our knowledge, this item is not available for download. To find out whether it is available, there are three options:
    1. Check below whether another version of this item is available online.
    2. Check on the provider's web page whether it is in fact available.
    3. Perform a search for a similarly titled item that would be available.
