IEEE Access (Jan 2023)

Deep Neural Network Ensembles Using Class-vs-Class Weighting

  • Rene Fabricius,
  • Ondrej Such,
  • Peter Tarabek

DOI: https://doi.org/10.1109/ACCESS.2023.3298057
Journal volume & issue: Vol. 11, pp. 77703–77715

Abstract


Ensembling is a popular and powerful technique for combining predictions from several different machine learning models. The fundamental precondition of a well-performing ensemble is a diverse set of constituent models. Rapid development in the deep learning field provides an ever-increasing palette of diverse model architectures, and this rich variety of models is an ideal basis for improving classification accuracy through ensembling. In this regard, we propose a novel weighted ensembling classification approach with unique weights for each combined classifier and each pair of classes. The novel weighting scheme allows us to account for the different abilities of individual classifiers to distinguish between pairs of classes. First, we analyze a theoretical scenario in which our approach yields optimal classification. Second, we test its practical applicability on computer vision benchmark datasets. We evaluate the effectiveness of our proposed method and an averaging ensemble baseline on an image classification task using the CIFAR-100 and ImageNet1k benchmarks. We use deep convolutional neural networks, vision transformers, and an MLP-Mixer as ensemble constituents. Statistical tests show that our proposed method provides higher accuracy gains than a popular baseline ensemble on both datasets. On the CIFAR-100 dataset, the proposed method attains accuracy improvements ranging from 2% to 5% compared to the best ensemble constituent. On the ImageNet dataset, these improvements range from 1% to 3% in most cases. Additionally, we show that when constituent classifiers are well-calibrated and have similar performance, the simple averaging ensemble yields good results.
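To make the contrast between the two schemes concrete, here is a minimal sketch of an averaging ensemble next to a class-vs-class weighted one. It is illustrative only: the weight values, the pairwise-preference formula, and the aggregation rule are assumptions for demonstration, not the paper's exact method.

```python
def averaging_ensemble(probs):
    """Baseline: average the class-probability vectors of all constituents."""
    n = len(probs)
    num_classes = len(probs[0])
    return [sum(p[c] for p in probs) / n for c in range(num_classes)]


def pairwise_weighted_ensemble(probs, w):
    """Class-vs-class weighting sketch (assumed aggregation, not the paper's).

    probs -- list of per-classifier probability vectors, probs[k][c]
    w     -- w[k][i][j]: weight of classifier k for the class pair (i, j),
             reflecting how well classifier k separates i from j

    For each ordered pair (i, j), the classifiers' pairwise preferences for
    class i over class j are combined with the per-classifier, per-pair
    weights; each class then accumulates its weighted pairwise wins.
    """
    num_classes = len(probs[0])
    scores = [0.0] * num_classes
    for i in range(num_classes):
        for j in range(num_classes):
            if i == j:
                continue
            total_w = sum(w[k][i][j] for k in range(len(probs)))
            if total_w == 0:
                continue
            # weighted evidence that class i beats class j
            pref = sum(
                w[k][i][j] * probs[k][i] / (probs[k][i] + probs[k][j])
                for k in range(len(probs))
            )
            scores[i] += pref / total_w
    return scores


# Two hypothetical constituents over three classes; with uniform weights the
# pairwise scheme reduces to an unweighted pairwise vote.
probs = [[0.6, 0.3, 0.1], [0.5, 0.2, 0.3]]
uniform_w = [[[1.0] * 3 for _ in range(3)] for _ in range(2)]
print(averaging_ensemble(probs))
print(pairwise_weighted_ensemble(probs, uniform_w))
```

Non-uniform weights are where the scheme diverges from averaging: raising w[k][i][j] lets a classifier that is especially good at separating classes i and j dominate that particular pairwise decision without affecting the others.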

Keywords