IEEE Access (Jan 2025)

A Probabilistic Adversarial Autoencoder for Novelty Detection: Leveraging Lightweight Design and Reconstruction Loss

  • Muhammad Asad,
  • Ihsan Ullah,
  • Muhammad Adeel Hafeez,
  • Ganesh Sistu,
  • Michael G. Madden

DOI: https://doi.org/10.1109/ACCESS.2025.3577080
Journal volume & issue: Vol. 13, pp. 98530–98541

Abstract

A novelty detection task involves identifying whether a data point is an outlier, given a training dataset that primarily captures the distribution of inliers. The novel class is usually absent, poorly sampled, or not well defined in the training data. A common current technique for anomaly detection is to use the generator of an adversarial network to produce an anomaly score for each input from its reconstruction loss. However, because this technique relies on a competitive training process, it can be unreliable: its performance fluctuates across adversarial training steps as the network’s ability to detect anomalies changes. In this paper, we propose a revised framework for generative probabilistic novelty detection. We use a similar adversarial autoencoder-based framework, but with a lightweight deep network, a novel training paradigm, and a probabilistic score based on the reconstruction loss. Our method calculates the probability that a sample comes from the inlier distribution. The proposed approach can be applied to anomaly and outlier detection in images and videos. We present results on multiple benchmark datasets, including the challenging UCSD Ped2 dataset for video anomaly detection. Our results illustrate that the proposed method learns the inlier classes and differentiates them from the outlier classes effectively, leading to better results than the baseline and state-of-the-art methods on several benchmark datasets.
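
To make the core idea of reconstruction-loss-based novelty detection concrete, the following is a minimal sketch of a generic autoencoder that is trained on inliers and scores test samples by their reconstruction error. It is an illustrative assumption, not the authors' adversarial architecture or probabilistic score; the names TinyAutoencoder and anomaly_scores, the layer sizes, and the 28x28 grayscale inputs are hypothetical.

    # Illustrative sketch: generic reconstruction-error anomaly scoring.
    # Not the paper's adversarial autoencoder or probabilistic score.
    import torch
    import torch.nn as nn

    class TinyAutoencoder(nn.Module):
        """A small convolutional autoencoder (hypothetical architecture)."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_scores(model, x):
        """Per-sample mean squared reconstruction error; higher suggests an outlier."""
        model.eval()
        with torch.no_grad():
            recon = model(x)
            return ((x - recon) ** 2).flatten(1).mean(dim=1)

    # Usage: train the autoencoder on inlier images only, then threshold these
    # scores on test data to flag outliers.
    model = TinyAutoencoder()
    x = torch.rand(8, 1, 28, 28)  # stand-in batch of 28x28 grayscale images
    print(anomaly_scores(model, x))
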

Keywords