IEEE Access (Jan 2024)

$AC.Rank_{A}$: Rule Ranking Method via Aggregation of Objective Measures for Associative Classifiers

  • Maicon Dall'Agnol,
  • Veronica Oliveira de Carvalho

DOI
https://doi.org/10.1109/ACCESS.2024.3419130
Journal volume & issue
Vol. 12
pp. 88862–88882

Abstract


Associative classifiers are among the inherently interpretable learning algorithms and are induced in a sequence of steps. The ranking step sorts the rules using objective measures. Generally, the CSC method is used, based on the two standard association rule measures (support and confidence). However, many measures are available in the literature, which raises a secondary problem, since no single measure is suitable for all explorations. In this context, new proposals have emerged, one of which aggregates a set of measures so that they can be used simultaneously. The idea is to reduce the need to choose a single measure while also considering different aspects (semantics) when ranking the rules. Works along these lines have been proposed, but they present problems regarding the performance and/or interpretability of the generated models: an inverse relationship between the two can be observed, i.e., when model performance is high, interpretability is low (and vice versa). Therefore, this work presents a rule ranking method via aggregation of objective measures, named $AC.Rank_{A}$, to be incorporated into associative classifier induction flows, aiming at models with a better balance between performance and interpretability. The method was evaluated by comparing several induction flows in which ranking is performed via CSC (baseline) and via $AC.Rank_{A}$. The results demonstrate that $AC.Rank_{A}$ maintains the performance of the models while improving their interpretability.
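To make the contrast concrete, the sketch below illustrates the difference between a CSC-style ordering and an aggregation-based ordering of rules. This is a minimal illustration, not the paper's method: the Rule fields, the chosen measure set (support, confidence, lift), the min-max normalization, the plain averaging, and the exact CSC tie-breaking are all assumptions.

```python
# Minimal sketch: ranking association rules by a single measure pair (CSC-style)
# versus by aggregating several normalized objective measures.
# All concrete choices here are assumptions; AC.Rank_A's actual scheme may differ.
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: frozenset
    consequent: str
    support: float      # fraction of transactions covered by antecedent + consequent
    confidence: float   # support(antecedent + consequent) / support(antecedent)
    lift: float         # confidence / support(consequent)

def csc_rank(rules):
    """CSC-style baseline: sort by confidence, with support breaking ties
    (assumed tie-breaking order)."""
    return sorted(rules, key=lambda r: (r.confidence, r.support), reverse=True)

def aggregated_rank(rules, measures=("support", "confidence", "lift")):
    """Rank rules by aggregating several objective measures simultaneously.

    Each measure is min-max normalized to [0, 1] so that measures on different
    scales contribute comparably, then the normalized values are averaged into
    one aggregated score used to sort the rules.
    """
    scores = {id(r): 0.0 for r in rules}
    for m in measures:
        values = [getattr(r, m) for r in rules]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero when all values are equal
        for r in rules:
            scores[id(r)] += (getattr(r, m) - lo) / span
    return sorted(rules, key=lambda r: scores[id(r)] / len(measures), reverse=True)

rules = [
    Rule(frozenset({"a"}), "x", support=0.30, confidence=0.90, lift=1.2),
    Rule(frozenset({"b"}), "x", support=0.55, confidence=0.70, lift=2.1),
    Rule(frozenset({"c"}), "y", support=0.10, confidence=0.95, lift=1.0),
]
print([sorted(r.antecedent) for r in csc_rank(rules)])         # confidence-driven order
print([sorted(r.antecedent) for r in aggregated_rank(rules)])  # aggregation-driven order
```

Note how the two orderings can disagree: a rule with moderate confidence but strong support and lift may outrank a high-confidence rule once several semantics are considered at once, which is the motivation for aggregation-based ranking described in the abstract.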

Keywords