Dissecting deep neural networks for better medical image classification and classification understanding
File(s) with restricted access are under embargo until 2020-06-23
Authors: Hicks, Steven Alexander; Riegler, Michael; Pogorelov, Konstantin; Ånonsen, Kim Vidar; de Lange, Thomas; Johansen, Dag; Jeppsson, Mattis; Randel, Kristin Ranheim; Eskeland, Sigrun Losada; Halvorsen, Pål
Neural networks, in the context of deep learning, show much promise in becoming an important tool for assisting medical doctors in disease detection during patient examinations. However, the current state of deep learning is something of a "black box", making it very difficult to understand what internal processes lead to a given result. This is true not only for non-technical users but for experts as well. This lack of understanding has led to hesitation in adopting these methods in mission-critical fields, with many prioritizing interpretability over raw performance. Motivated by increasing the acceptance of and trust in these methods, and by enabling qualified decisions, we present a system that allows for the partial opening of this black box. This includes an investigation into what the neural network sees when making a prediction, both to improve algorithmic understanding and to gain intuition into which pre-processing steps may lead to better image classification performance. Furthermore, a significant part of a medical expert's time is spent preparing reports after medical examinations, and if we already have a system for dissecting the analysis done by the network, the same tool can be used for automatic examination documentation through content suggestions. In this paper, we present a system that can look into the layers of a deep neural network and present the network's decision in a way that medical doctors may understand. Furthermore, we present and discuss how this information can possibly be used for automatic reporting. Our initial results are very promising.
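The core idea of "partially opening the black box" is to capture a network's intermediate activations during a forward pass so they can be visualized and interpreted. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of the technique using a hypothetical two-layer network with made-up weights, where each layer's output is stored for later inspection (e.g. rendering as a heat map).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class InspectableNet:
    """Tiny fully-connected network that records every layer's activation.

    Layer sizes and weights are hypothetical; the point is only to show how
    intermediate outputs can be captured alongside the final prediction.
    """

    def __init__(self, sizes=(16, 8, 2)):
        self.weights = [rng.standard_normal((m, n)) * 0.1
                        for m, n in zip(sizes[:-1], sizes[1:])]
        self.activations = []  # per-layer outputs saved for inspection

    def forward(self, x):
        self.activations = [x]  # layer 0: the input itself
        for i, w in enumerate(self.weights):
            x = x @ w
            # ReLU on hidden layers, softmax on the output layer
            x = relu(x) if i < len(self.weights) - 1 else softmax(x)
            self.activations.append(x)
        return x

net = InspectableNet()
probs = net.forward(rng.standard_normal(16))
# Each stored activation could now be visualized for a medical expert,
# alongside the class probabilities that produced the prediction.
for i, a in enumerate(net.activations):
    print(f"layer {i}: shape={a.shape}")
```

In a real system the same pattern applies to a trained convolutional network: hooking each layer and saving its feature maps, rather than only the final class scores.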
Source at: https://doi.org/10.1109/CBMS.2018.00070
Citation: Hicks, S.A., Riegler, M., Pogorelov, K., Ånonsen, K.V., de Lange, T., Johansen, D., ... Halvorsen, P. (2018). Dissecting deep neural networks for better medical image classification and classification understanding. IEEE International Symposium on Computer-Based Medical Systems, 363-368. https://doi.org/10.1109/CBMS.2018.00070
Related items (by title, author, creator and subject):
Prognostic Impacts of Angiopoietins in NSCLC Tumor Cells and Stroma: VEGF-A Impact Is Strongly Associated with Ang-2. Andersen, Sigve; Dønnem, Tom; Al-Shibli, Khalid Ibrahim; Al-Saad, Samer; Stenvold, Helge; Busund, Lill-Tove; Bremnes, Roy M. (Journal article; Peer reviewed, 2011). Angiopoietins and their receptor Tie-2 are, in concert with VEGF-A, key mediators in angiogenesis. This study evaluates the prognostic impact of all known human angiopoietins (Ang-1, Ang-2 and Ang-4) and their receptor Tie-2, as well as their relation to the prognostic expression of VEGF-A. 335 unselected stage I-IIIA NSCLC patients were included and tissue samples of respective tumor cells and ...
Humant papillomavirus : en litteraturstudie om HPV, dets relasjon til cancer og tiltak mot videre spredning av virus. Gabrielsen, Endre (Master thesis, 2012-06-01). In 1983, zur Hausen discovered the link between human papillomavirus (HPV) and cervical cancer. At that time, it was not known that HPV was the reason HeLa cells could survive in vitro. Newer research relates HPV to a number of other cancer types. A large proportion of anal, oropharyngeal, penile, vaginal, and vulvar cancers are caused by HPV. HPV has also been detected in tumor tissue from the esophagus, larynx, lung, ...