On Measures of Uncertainty in Classification
Permanent link
https://hdl.handle.net/10037/32185
Date
2023-10-12
Type
Journal article
Peer reviewed
Abstract
Uncertainty is unavoidable in classification tasks and may originate from the data (e.g., noise or wrong labels) or from the model (e.g., erroneous assumptions). Providing an assessment of the uncertainty associated with each outcome is of paramount importance for evaluating the reliability of classification algorithms, especially on unseen data. In this work, we propose two measures of uncertainty in classification. The first is developed from a geometrical perspective and quantifies a classifier's distance from a random guess. The second is homophily-based, since it takes into account the similarity between the classes; accordingly, it reflects which classes are mistaken for one another. The proposed measures are not aggregated, i.e., they provide an uncertainty assessment for each data point. Moreover, they do not require label information. Using several datasets, we demonstrate the proposed measures' differences and merits in assessing uncertainty in classification. The source code is available at github.com/pioui/uncertainty.
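To make the two ideas concrete, the following is a minimal sketch, not the authors' implementation (which is available at github.com/pioui/uncertainty). It assumes the geometric measure can be illustrated as the normalized distance of a predicted class-probability vector from the uniform (random-guess) distribution, and the homophily-based measure as an expected class dissimilarity under the predicted probabilities given a user-supplied class-similarity matrix. Function names, normalization, and the exact formulas are illustrative assumptions; the paper's definitions may differ.

```python
import numpy as np

def geometric_uncertainty(probs):
    """Illustrative per-sample uncertainty as closeness to a random guess.

    probs: array of shape (..., C) with class probabilities.
    Returns values in [0, 1]: 1 for a uniform (random-guess) prediction,
    0 for a one-hot (fully confident) prediction.
    """
    probs = np.asarray(probs, dtype=float)
    n_classes = probs.shape[-1]
    uniform = np.full(n_classes, 1.0 / n_classes)
    # Largest possible distance from the uniform vector (attained by a one-hot vector).
    max_dist = np.linalg.norm(np.eye(n_classes)[0] - uniform)
    return 1.0 - np.linalg.norm(probs - uniform, axis=-1) / max_dist

def homophily_uncertainty(probs, class_similarity):
    """Illustrative per-sample uncertainty weighted by class similarity.

    class_similarity: (C, C) matrix with 1 on the diagonal; confusion spread
    over similar classes contributes less than confusion over dissimilar ones.
    """
    probs = np.asarray(probs, dtype=float)
    dissimilarity = 1.0 - np.asarray(class_similarity, dtype=float)
    # Expected dissimilarity between two independent draws from the predicted distribution.
    return np.einsum('...i,ij,...j->...', probs, dissimilarity, probs)
```

Neither function uses ground-truth labels, mirroring the label-free, per-data-point character of the proposed measures; for example, geometric_uncertainty([0.5, 0.5]) returns 1.0, while geometric_uncertainty([1.0, 0.0]) returns 0.0.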
Publisher
IEEE
Citation
Chlaily S, Ratha D, Lozou P, Marinoni A. On Measures of Uncertainty in Classification. IEEE Transactions on Signal Processing. 2023;71:3710-3725
Copyright 2023 The Author(s)