dc.contributor.author: Somani, Ayush
dc.contributor.author: Horsch, Ludwig Alexander
dc.contributor.author: Bopardikar, Ajit
dc.contributor.author: Prasad, Dilip Kumar
dc.date.accessioned: 2024-11-05T13:20:14Z
dc.date.available: 2024-11-05T13:20:14Z
dc.date.issued: 2024-08-19
dc.description.abstract: In the rapidly evolving landscape of deep learning (DL), understanding the inner workings of neural networks remains a significant challenge. The need for transparency and accountability in DL models grows as they become more prevalent in decision-making processes, and interpreting these models is key to addressing this challenge. This paper offers a comprehensive overview of interpretability methods for neural networks, particularly convolutional networks. The focus is on gradient-based propagation techniques that provide insight into the mechanisms behind neural network predictions. Through a systematic review, we classify gradient-based interpretability approaches, examine the theory behind notable methods, and compare their strengths and weaknesses. Furthermore, we investigate evaluation metrics for interpretable systems, often generalized under the term eXplainable Artificial Intelligence (XAI), and highlight their importance in assessing the faithfulness, robustness, localization, complexity, randomization, and axiomatic adherence of XAI methods. Our objective is to help researchers and practitioners advance toward artificial intelligence whose workings are more deeply understood, thereby providing the desired transparency and accuracy. To this end, we offer a comprehensive summary of the latest advances in the field. [en_US]
dc.identifier.citation: Somani A, Horsch A, Bopardikar A, Prasad DK. Propagating Transparency: A Deep Dive into the Interpretability of Neural Networks. Nordic Machine Intelligence (NMI). 2024;4(2):1-18 [en_US]
dc.identifier.cristinID: FRIDAID 2303684
dc.identifier.doi: 10.5617/nmi.10755
dc.identifier.issn: 2703-9196
dc.identifier.uri: https://hdl.handle.net/10037/35446
dc.language.iso: eng [en_US]
dc.publisher: Universitetet i Oslo [en_US]
dc.relation.journal: Nordic Machine Intelligence (NMI)
dc.relation.projectID: info:eu-repo/grantAgreement/EC/H2020/964800/EU/OrganVision: Technology for real-time visualizing and modelling of fundamental process in living organoids towards new insights into organ-specific health, disease, and recovery/OrganVision/ [en_US]
dc.relation.projectID: info:eu-repo/grantAgreement/EC/European Research Council/101123485/EU/Sperm filtration for improved success rate of assisted reproduction technology/Spermotile/ [en_US]
dc.relation.uri: https://journals.uio.no/NMI/article/view/10755/9682
dc.rights.accessRights: openAccess [en_US]
dc.rights.holder: Copyright 2024 The Author(s) [en_US]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0 [en_US]
dc.rights: Attribution 4.0 International (CC BY 4.0) [en_US]
dc.title: Propagating Transparency: A Deep Dive into the Interpretability of Neural Networks [en_US]
dc.type.version: publishedVersion [en_US]
dc.type: Journal article [en_US]
dc.type: Tidsskriftartikkel (Norwegian: Journal article) [en_US]
dc.type: Peer reviewed [en_US]
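
The abstract above centers on gradient-based attribution techniques and on evaluation metrics such as faithfulness for judging explanations. As a rough, self-contained sketch of the simplest method in that family, the snippet below computes a vanilla gradient saliency map and runs a crude deletion-style faithfulness probe. It assumes PyTorch and torchvision; the model choice, the random stand-in input, and the parameter k are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of vanilla gradient saliency, the simplest of the
# gradient-based attribution methods the survey classifies.
# Assumes PyTorch + torchvision; model and input are illustrative.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image

logits = model(x)
target = logits.argmax(dim=1).item()  # class whose prediction we explain
score = logits[0, target]
score.backward()                      # gradient of the class score w.r.t. input pixels

# Saliency map: per-pixel gradient magnitude, max over colour channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)

# Crude deletion-style faithfulness probe: zero out the k most salient
# pixels and measure how much the class score drops. A sharper drop than
# for random deletion suggests the attribution is faithful.
k = 1000  # illustrative choice
top = saliency.flatten().topk(k).indices
mask = torch.ones(224 * 224)
mask[top] = 0.0
x_masked = x.detach() * mask.view(1, 1, 224, 224)  # broadcast mask over channels

with torch.no_grad():
    drop = score.item() - model(x_masked)[0, target].item()
print(f"score drop after deleting top-{k} pixels: {drop:.3f}")
```

More refined members of the same family, such as Integrated Gradients or SmoothGrad, replace the raw gradient with path-integrated or noise-averaged gradients, and the other evaluation axes named in the abstract (robustness, localization, complexity, randomization, axiomatic adherence) each come with analogous quantitative probes.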

