Complex & Intelligent Systems (Apr 2024)
Fairness optimisation with multi-objective swarms for explainable classifiers on data streams
Abstract
Recently, advanced AI systems equipped with sophisticated learning algorithms have emerged, enabling the processing of extensive streaming data for online decision-making in diverse domains. However, the widespread deployment of these systems has raised ethical concerns, particularly the risk of discrimination that can adversely impact certain community groups. This issue has proven challenging to address in the context of streaming data, where the data distribution can change over time, including the level of discrimination within the data. In addition, transparent models such as decision trees are favoured in these applications because they illustrate the decision-making process; however, the models must be kept compact, as the explainability of large models diminishes. Existing methods usually mitigate discrimination at the cost of accuracy, so accuracy and discrimination can be considered conflicting objectives, and current methods offer only limited control over the trade-off between them. This paper proposes a method that incrementally learns classification models from streaming data and automatically adjusts the learnt models to balance multiple objectives simultaneously. The novelty of this research is a multi-objective algorithm, based on swarm intelligence, that simultaneously maximises accuracy while minimising both discrimination and model size. Experimental results on six real-world datasets show that the proposed algorithm evolves fairer and simpler classifiers while maintaining competitive accuracy compared with existing state-of-the-art methods tailored for streaming data.
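To make the three-way trade-off concrete, the following is a minimal sketch (not the paper's algorithm) of Pareto dominance over the three objectives named above: accuracy is maximised, while discrimination and model size are minimised. All names, thresholds, and example values are hypothetical.

```python
# Illustrative sketch only: Pareto dominance for three objectives.
# A solution is a tuple (accuracy, discrimination, size);
# accuracy is maximised, the other two are minimised.

def dominates(a, b):
    """Return True if solution `a` Pareto-dominates `b`:
    no worse on every objective and strictly better on at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly_better

def pareto_front(solutions):
    """Keep only the non-dominated solutions (the Pareto front)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical swarm of candidate classifiers.
swarm = [
    (0.92, 0.10, 40),  # accurate but relatively unfair
    (0.88, 0.03, 25),  # fairer and smaller, slightly less accurate
    (0.85, 0.08, 60),  # dominated by the second solution
]
front = pareto_front(swarm)  # first two solutions survive
```

In a swarm-based optimiser of this kind, the non-dominated set would guide the particles, so the trade-off between fairness, accuracy, and compactness is explored rather than fixed by a single weighted score.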
Keywords