dc.contributor.author | Jha, Debesh | |
dc.contributor.author | Sharma, Vanshali | |
dc.contributor.author | Banik, Debapriya | |
dc.contributor.author | Bhattacharya, Debayan | |
dc.contributor.author | Roy, Kaushiki | |
dc.contributor.author | Hicks, Steven | |
dc.contributor.author | Tomar, Nikhil Kumar | |
dc.contributor.author | Thambawita, Vajira L B | |
dc.contributor.author | Krenzer, Adrian | |
dc.contributor.author | Ji, Ge-Peng | |
dc.contributor.author | Poudel, Sahadev | |
dc.contributor.author | Batchkala, George | |
dc.contributor.author | Alam, Saruar | |
dc.contributor.author | Ahmed, Awadelrahman M.A. | |
dc.contributor.author | Trinh, Quoc-Huy | |
dc.contributor.author | Khan, Zeshan | |
dc.contributor.author | Nguyen, Tien-Phat | |
dc.contributor.author | Shrestha, Shruti | |
dc.contributor.author | Nathan, Sabari | |
dc.contributor.author | Gwak, Jeonghwan | |
dc.contributor.author | Jha, Ritika Kumari | |
dc.contributor.author | Zhang, Zheyuan | |
dc.contributor.author | Schlaefer, Alexander | |
dc.contributor.author | Bhattacharjee, Debotosh | |
dc.contributor.author | Bhuyan, M.K. | |
dc.contributor.author | Das, Pradip K. | |
dc.contributor.author | Fan, Deng-Ping | |
dc.contributor.author | Parasa, Sravanthi | |
dc.contributor.author | Ali, Sharib | |
dc.contributor.author | Riegler, Michael Alexander | |
dc.contributor.author | Halvorsen, Pål | |
dc.contributor.author | de Lange, Thomas | |
dc.contributor.author | Bagci, Ulas | |
dc.date.accessioned | 2024-11-18T13:18:55Z | |
dc.date.available | 2024-11-18T13:18:55Z | |
dc.date.issued | 2025-09-05 | |
dc.description.abstract | Automatic analysis of colonoscopy images has been an active field of research, motivated by the importance of early detection of precancerous polyps. However, detecting polyps during a live examination can be challenging: variation in skill and experience among endoscopists, lack of attentiveness, and fatigue all contribute to a high polyp miss rate. There is therefore a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving the accuracy of diagnosis and enhancing treatment. Beyond accuracy, transparency and interpretability are crucial for explaining why and how an algorithm makes its predictions, since conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, as closed-source or proprietary software, and the methods lack reproducibility. To promote the development of efficient and transparent methods, we organized the “Medico automatic polyp segmentation (Medico 2020)” and “MedAI: Transparency in Medical Image Segmentation (MedAI 2021)” competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the prospects of translating such methods into the clinic. Our analysis revealed that participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps) that are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union of 0.9364. For the transparency task, a multi-disciplinary team including expert gastroenterologists assessed each submission and evaluated the teams on open-source practices, failure case analysis, ablation studies, and the usability and understandability of their evaluations, to gain a deeper understanding of the models’ credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis of the challenges, we not only highlight advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of the methods, reduce the cancer burden, and improve patient care. | en_US |
dc.identifier.citation | Jha, Sharma, Banik, Bhattacharya, Roy, Hicks, Tomar, Thambawita, Krenzer, Ji, Poudel, Batchkala, Alam, Ahmed, Trinh, Khan, Nguyen, Shrestha, Nathan, Gwak, Jha, Zhang, Schlaefer, Bhattacharjee, Bhuyan, Das, Fan, Parasa, Ali, Riegler, Halvorsen, de Lange, Bagci. Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges. Medical Image Analysis. 2025;99 | en_US |
dc.identifier.cristinID | FRIDAID 2318510 | |
dc.identifier.doi | 10.1016/j.media.2024.103307 | |
dc.identifier.issn | 1361-8415 | |
dc.identifier.issn | 1361-8423 | |
dc.identifier.uri | https://hdl.handle.net/10037/35752 | |
dc.language.iso | eng | en_US |
dc.publisher | Elsevier | en_US |
dc.relation.journal | Medical Image Analysis | |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2025 The Author(s) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0 | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | en_US |
dc.title | Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges | en_US |
dc.type.version | publishedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |