IEEE Access (Jan 2019)

Improving Failure Prediction by Ensembling the Decisions of Machine Learning Models: A Case Study

  • João R. Campos
  • Ernesto Costa
  • Marco Vieira

DOI
https://doi.org/10.1109/ACCESS.2019.2958480
Journal volume & issue
Vol. 7
pp. 177661–177674

Abstract


The complexity of software has grown considerably in recent years, making it nearly impossible to detect all faults before pushing to production. Such faults can ultimately lead to failures at runtime. Recent work has shown that Machine Learning (ML) algorithms can be used to build models that accurately predict such failures. At the same time, methods that combine several independent learners (i.e., ensembles) have been shown to outperform individual models on various problems. While some well-known ensemble algorithms (e.g., Bagging) use the same base learner (i.e., they are homogeneous), combining different algorithms (i.e., heterogeneous ensembles) may exploit the different biases of each algorithm. However, this is not a trivial task, as it requires finding and choosing the most adequate base learners and the methods to combine their outputs. This paper presents a case study on using several ML techniques to create heterogeneous ensembles for Online Failure Prediction (OFP). More precisely, it assesses the viability of combining different learners to improve performance and examines how different combination techniques influence the results. The paper also explores whether the interactions between learners can be studied and leveraged. The results suggest that combining certain learners and techniques, not necessarily the best individually, can improve the overall ability to predict failures. Additionally, studying the synergies in the best ensembles provides interesting insights into why some perform better.
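The abstract contrasts homogeneous ensembles (one base algorithm) with heterogeneous ones (different algorithms whose outputs are combined, e.g., by voting or by a stacked meta-learner). As a rough illustration only, and not the authors' actual pipeline, learners, or failure data, the sketch below builds a heterogeneous ensemble with scikit-learn on a synthetic, imbalanced binary-classification problem; all learner choices and parameters are assumptions for the example.

```python
# Illustrative sketch of a heterogeneous ensemble (assumed setup, not the
# paper's method): different base algorithms combined by soft voting and by
# stacking, evaluated on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced data standing in for failure-prediction features.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Heterogeneous base learners: each algorithm contributes a different bias.
base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=42)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
    ("svm", SVC(probability=True, random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
]

# Combination technique 1: soft voting (average of predicted probabilities).
voting = VotingClassifier(estimators=base_learners, voting="soft")

# Combination technique 2: stacking (a meta-learner trained on the base
# learners' out-of-fold predictions).
stacking = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(),
                              cv=5)

for name, model in [("voting", voting), ("stacking", stacking)]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```

In this kind of setup, the interesting question raised by the paper is which subset of base learners and which combination technique work best together, since the strongest individual model is not necessarily part of the strongest ensemble.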

Keywords