Show simple item record

dc.contributor.author: Jha, Debesh
dc.contributor.author: Sharma, Vanshali
dc.contributor.author: Banik, Debapriya
dc.contributor.author: Bhattacharya, Debayan
dc.contributor.author: Roy, Kaushiki
dc.contributor.author: Hicks, Steven
dc.contributor.author: Tomar, Nikhil Kumar
dc.contributor.author: Thambawita, Vajira L B
dc.contributor.author: Krenzer, Adrian
dc.contributor.author: Ji, Ge-Peng
dc.contributor.author: Poudel, Sahadev
dc.contributor.author: Batchkala, George
dc.contributor.author: Alam, Saruar
dc.contributor.author: Ahmed, Awadelrahman M.A.
dc.contributor.author: Trinh, Quoc-Huy
dc.contributor.author: Khan, Zeshan
dc.contributor.author: Nguyen, Tien-Phat
dc.contributor.author: Shrestha, Shruti
dc.contributor.author: Nathan, Sabari
dc.contributor.author: Gwak, Jeonghwan
dc.contributor.author: Jha, Ritika Kumari
dc.contributor.author: Zhang, Zheyuan
dc.contributor.author: Schlaefer, Alexander
dc.contributor.author: Bhattacharjee, Debotosh
dc.contributor.author: Bhuyan, M.K.
dc.contributor.author: Das, Pradip K.
dc.contributor.author: Fan, Deng-Ping
dc.contributor.author: Parasa, Sravanthi
dc.contributor.author: Ali, Sharib
dc.contributor.author: Riegler, Michael Alexander
dc.contributor.author: Halvorsen, Pål
dc.contributor.author: de Lange, Thomas
dc.contributor.author: Bagci, Ulas
dc.date.accessioned: 2024-11-18T13:18:55Z
dc.date.available: 2024-11-18T13:18:55Z
dc.date.issued: 2025-09-05
dc.description.abstract: Automatic analysis of colonoscopy images has been an active field of research motivated by the importance of early detection of precancerous polyps. However, detecting polyps during a live examination can be challenging due to various factors such as variation in skill and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss rate. Therefore, there is a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving the accuracy of diagnosis and enhancing treatment. In addition to the algorithm's accuracy, transparency and interpretability are crucial for explaining why and how the algorithm arrives at its predictions. Further, conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, with closed-source or proprietary software, and lack reproducibility. Therefore, to promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary and analyze each contribution, highlight the strengths of the best-performing methods, and discuss the potential for clinical translation of such methods. Our analysis revealed that the participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps), which are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union (IoU) of 0.9364. For the transparency task, a multi-disciplinary team, including expert gastroenterologists, assessed each submission and evaluated the teams based on open-source practices, failure case analysis, ablation studies, and the usability and understandability of their evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through the comprehensive analysis of the challenge, we not only highlight the advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of the methods, with the goal of reducing the cancer burden and improving patient care. [en_US]
dc.identifier.citation: Jha, Sharma, Banik, Bhattacharya, Roy, Hicks, Tomar, Thambawita, Krenzer, Ji, Poudel, Batchkala, Alam, Ahmed, Trinh, Khan, Nguyen, Shrestha, Nathan, Gwak, Jha, Zhang, Schlaefer, Bhattacharjee, Bhuyan, Das, Fan, Parasa, Ali, Riegler, Halvorsen, de Lange, Bagci. Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges. Medical Image Analysis. 2025;99 [en_US]
dc.identifier.cristinID: FRIDAID 2318510
dc.identifier.doi: 10.1016/j.media.2024.103307
dc.identifier.issn: 1361-8415
dc.identifier.issn: 1361-8423
dc.identifier.uri: https://hdl.handle.net/10037/35752
dc.language.iso: eng [en_US]
dc.publisher: Elsevier [en_US]
dc.relation.journal: Medical Image Analysis
dc.rights.accessRights: openAccess [en_US]
dc.rights.holder: Copyright 2025 The Author(s) [en_US]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0 [en_US]
dc.rights: Attribution 4.0 International (CC BY 4.0) [en_US]
dc.title: Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [en_US]
dc.type.version: publishedVersion [en_US]
dc.type: Journal article [en_US]
dc.type: Tidsskriftartikkel [en_US]
dc.type: Peer reviewed [en_US]
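
The abstract above reports segmentation performance as a Dice coefficient (0.8607 in 2020, 0.8993 in 2021) and a mean Intersection over Union (IoU) of 0.9364. As a minimal illustrative sketch only, assuming binary NumPy masks and not the challenges' official evaluation scripts, these overlap metrics can be computed as follows:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union (Jaccard index) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 masks (hypothetical data, not taken from the challenge datasets).
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(f"Dice: {dice_coefficient(pred, target):.4f}")  # 2*3/(4+3) ~= 0.8571
print(f"IoU:  {iou(pred, target):.4f}")               # 3/4 = 0.7500
```

The smoothing term `eps` only avoids division by zero when both masks are empty; the challenges' own evaluation may handle that edge case differently.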

