Show simple item record

dc.contributor.author: Torpmann-Hagen, Birk Sebastian Frostelid
dc.contributor.author: Riegler, Michael
dc.contributor.author: Halvorsen, Pål
dc.contributor.author: Johansen, Dag
dc.date.accessioned: 2024-09-26T08:39:10Z
dc.date.available: 2024-09-26T08:39:10Z
dc.date.issued: 2024-04-24
dc.description.abstract: Deep Neural Networks have been shown to perform poorly or even fail altogether when deployed in real-world settings, despite exhibiting excellent performance on initial benchmarks. This typically occurs due to changes in the nature of the production data, often referred to as distributional shifts. In an attempt to increase the transparency, trustworthiness, and overall utility of deep learning systems, a growing body of work has been dedicated to developing distributional shift detectors. In this work, we investigate distributional shift detectors that apply statistical tests to neural-network-based representations of data. We show that these methods are prone to failure under sample-bias, which we argue is unavoidable in most practical machine learning systems. To mitigate this, we implement a novel distributional shift detection framework which explicitly accounts for sample-bias via a simple sample-selection procedure. In particular, we show that the effect of sample-bias can be significantly reduced by performing statistical tests against the most similar data in the training set, as opposed to the training set as a whole. We find that this improves the stability and accuracy of a variety of distributional shift detection methods on both covariate and semantic shifts, with improvements to balanced accuracy typically ranging between 0.1 and 0.2, and false-positive rates often being eliminated altogether under bias.
dc.identifier.citation: Torpmann-Hagen, Riegler, Halvorsen, Johansen. A Robust Framework for Distributional Shift Detection Under Sample-Bias. IEEE Access. 2024;12:59598-59611
dc.identifier.cristinID: FRIDAID 2271569
dc.identifier.doi: 10.1109/ACCESS.2024.3393296
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://hdl.handle.net/10037/34876
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.journal: IEEE Access
dc.rights.accessRights: openAccess
dc.rights.holder: Copyright 2024 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.title: A Robust Framework for Distributional Shift Detection Under Sample-Bias
dc.type.version: publishedVersion
dc.type: Journal article
dc.type: Peer reviewed
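The sample-selection procedure described in the abstract above can be sketched in a few lines. The fragment below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the representations are feature vectors extracted from a trained network, and the k-nearest-neighbour selection with a per-dimension two-sample Kolmogorov-Smirnov test (Bonferroni-corrected) are illustrative choices. The names detect_shift, train_feats, and batch_feats are hypothetical.

    # Minimal sketch of bias-aware shift detection. The k-NN sample selection
    # and per-dimension KS test are illustrative assumptions, not the authors'
    # exact procedure.
    import numpy as np
    from scipy.stats import ks_2samp

    def detect_shift(train_feats, batch_feats, k=5, alpha=0.05):
        """Return True if the batch appears distributionally shifted.

        Rather than testing against all of train_feats (which the abstract
        reports fails under sample-bias), compare the batch with the k most
        similar training samples per batch element.
        """
        # Pairwise Euclidean distances, shape (batch, train).
        dists = np.linalg.norm(
            batch_feats[:, None, :] - train_feats[None, :, :], axis=-1
        )
        # Union of the k nearest training indices over all batch samples.
        nn_idx = np.unique(np.argsort(dists, axis=1)[:, :k])
        reference = train_feats[nn_idx]

        # Two-sample KS test per feature dimension, Bonferroni-corrected.
        d = batch_feats.shape[1]
        p_values = [ks_2samp(batch_feats[:, j], reference[:, j]).pvalue
                    for j in range(d)]
        return min(p_values) < alpha / d

Testing the batch only against its nearest training samples means a biased but in-distribution batch is compared with similarly concentrated reference data, which is why this kind of selection can suppress the false positives that whole-training-set tests produce under sample-bias.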

