IEEE Access (Jan 2025)

On Risk Assessment for Out-of-Distribution Detection

  • Anton Vasiliuk

DOI
https://doi.org/10.1109/ACCESS.2025.3533201
Journal volume & issue
Vol. 13
pp. 18546 – 18568

Abstract


This paper challenges the conventional approach of treating out-of-distribution (OOD) risk as uniform and minimizing it on average. We argue that managing OOD risk on average fails to account for the potential impact of rare, high-consequence events, which can undermine trust in a model even after a single OOD incident. First, we show that OOD performance depends on both the rate of outliers and the number of samples processed by a machine learning (ML) model. Second, we introduce a novel perspective that assesses OOD risk through the expected maximum risk within a limited sample size. Our theoretical findings clearly distinguish when OOD detection is essential and when it becomes redundant, allowing efforts to be directed towards improving in-distribution (ID) performance once adequate OOD robustness is achieved. Finally, an analysis of popular computer vision benchmarks reveals that ID errors often dominate overall risk, highlighting the importance of strong ID performance as a foundation for effective OOD detection. Our framework offers both theoretical insights and practical guidelines for deploying ML models in high-stakes applications, where trust and reliability are paramount.
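The abstract's first claim, that OOD exposure depends jointly on the outlier rate and the number of samples processed, can be illustrated with a simple probability calculation. The sketch below is our own illustration, not code from the paper: assuming i.i.d. samples with a fixed per-sample outlier rate, it computes the probability that a deployment stream contains at least one OOD sample.

```python
def prob_at_least_one_ood(p_ood: float, n_samples: int) -> float:
    """Probability that a stream of n_samples i.i.d. inputs contains
    at least one OOD sample, given per-sample outlier rate p_ood.

    Illustrative assumption: independence of samples; the paper's
    expected-maximum-risk framework is more general than this.
    """
    return 1.0 - (1.0 - p_ood) ** n_samples

# Even a tiny outlier rate implies near-certain OOD exposure at scale:
for n in (100, 10_000, 1_000_000):
    print(f"p=1e-4, n={n}: P(at least one OOD) = "
          f"{prob_at_least_one_ood(1e-4, n):.4f}")
```

With an outlier rate of 10^-4, a hundred predictions almost never meet an outlier, while a million predictions virtually guarantee one, which is why average-case risk management can understate the chance of a trust-breaking incident.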

Keywords