Show simple item record

dc.contributor.author  Hicks, Steven Alexander
dc.contributor.author  Riegler, Michael
dc.contributor.author  Pogorelov, Konstantin
dc.contributor.author  Ånonsen, Kim Vidar
dc.contributor.author  de Lange, Thomas
dc.contributor.author  Johansen, Dag
dc.contributor.author  Jeppsson, Mattis
dc.contributor.author  Randel, Kristin Ranheim
dc.contributor.author  Eskeland, Sigrun Losada
dc.contributor.author  Halvorsen, Pål
dc.date.accessioned  2019-02-05T12:35:05Z
dc.date.available  2019-02-05T12:35:05Z
dc.date.issued  2018-07-23
dc.description.abstract  Neural networks, in the context of deep learning, show much promise in becoming an important tool for assisting medical doctors in disease detection during patient examinations. However, the current state of deep learning is something of a "black box", making it very difficult to understand which internal processes lead to a given result. This is true not only for non-technical users but for experts as well. This lack of understanding has led to hesitation in adopting these methods in mission-critical fields, with many placing interpretability ahead of actual performance. Motivated by increasing the acceptance of and trust in these methods, and by the need to make qualified decisions, we present a system that allows this black box to be partially opened. This includes an investigation of what the neural network sees when making a prediction, both to improve algorithmic understanding and to gain intuition into which pre-processing steps may lead to better image classification performance. Furthermore, a significant part of a medical expert's time is spent preparing reports after medical examinations, and if we already have a system for dissecting the analysis done by the network, the same tool can be used for automatic examination documentation through content suggestions. In this paper, we present a system that can look into the layers of a deep neural network and present the network's decision in a way that medical doctors may understand. Furthermore, we present and discuss how this information can possibly be used for automatic reporting. Our initial results are very promising.  en_US
dc.description  Source at: https://doi.org/10.1109/CBMS.2018.00070  en_US
dc.identifier.citation  Hicks, S.A., Riegler, M., Pogorelov, K., Ånonsen, K.V., de Lange, T., Johansen, D., ... Halvorsen, P. (2018). Dissecting deep neural networks for better medical image classification and classification understanding. IEEE International Symposium on Computer-Based Medical Systems, 363-368. https://doi.org/10.1109/CBMS.2018.00070  en_US
dc.identifier.cristinID  FRIDAID 1616225
dc.identifier.doi  10.1109/CBMS.2018.00070
dc.identifier.issn  2372-9198
dc.identifier.uri  https://hdl.handle.net/10037/14620
dc.language.iso  eng  en_US
dc.publisher  IEEE  en_US
dc.relation.journal  IEEE International Symposium on Computer-Based Medical Systems
dc.rights.accessRights  openAccess  en_US
dc.subject  VDP::Medical disciplines: 700::Clinical medical disciplines: 750::Radiology and diagnostic imaging: 763  en_US
dc.subject  VDP::Medisinske Fag: 700::Klinisk medisinske fag: 750::Radiologi og bildediagnostikk: 763  en_US
dc.subject  Medical diagnostic imaging  en_US
dc.subject  Neural networks  en_US
dc.subject  Machine learning  en_US
dc.subject  Tools  en_US
dc.subject  Medical services  en_US
dc.subject  Visualization  en_US
dc.title  Dissecting deep neural networks for better medical image classification and classification understanding  en_US
dc.type  Journal article  en_US
dc.type  Tidsskriftartikkel  en_US
dc.type  Peer reviewed  en_US
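
The abstract above describes inspecting what individual layers of a deep neural network respond to when classifying an image. As a minimal sketch of that kind of layer dissection, and not the paper's actual implementation (the model choice, layer names, and dummy input below are illustrative assumptions), intermediate activations of a convolutional network can be captured with forward hooks in PyTorch:

    import torch
    from torchvision import models

    # Any CNN works here; an untrained ResNet-18 keeps the sketch self-contained
    # (swap in pretrained weights for meaningful feature maps).
    model = models.resnet18(weights=None)
    model.eval()

    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            # Detach so the stored tensors do not keep the autograd graph alive.
            activations[name] = output.detach()
        return hook

    # Register hooks on the layers whose feature maps we want to inspect.
    model.layer1.register_forward_hook(save_activation("layer1"))
    model.layer4.register_forward_hook(save_activation("layer4"))

    # A random batch stands in for a real medical image.
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        model(x)

    for name, act in activations.items():
        print(name, tuple(act.shape))  # layer1: (1, 64, 56, 56); layer4: (1, 512, 7, 7)

The captured feature maps can then be rendered per channel as heat maps, which is one common way to show what image regions a layer responds to.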

