OpenFL-XAI: Federated learning of explainable artificial intelligence models in Python
Mattia Daole, Alessio Schiavo, José Luis Corcuera Bárcena, Pietro Ducange, Francesco Marcelloni, Alessandro Renda
Affiliations
Mattia Daole (corresponding author), José Luis Corcuera Bárcena, Pietro Ducange, Francesco Marcelloni, Alessandro Renda: Department of Information Engineering, University of Pisa, Largo Lucio Lazzarino 1, 56122 Pisa, Italy
Alessio Schiavo: Department of Information Engineering, University of Pisa, Largo Lucio Lazzarino 1, 56122 Pisa, Italy; LogObject AG, Ambassador House, Thurgauerstrasse 101 A, CH-8152 Opfikon, Switzerland
Abstract
Artificial Intelligence (AI) systems play a significant role in a multitude of decision-making processes in our daily lives, making the trustworthiness of AI increasingly crucial for its widespread acceptance. Among other properties, privacy and explainability are considered key requirements for enabling trust in AI. Building on these needs, we propose OpenFL-XAI, a software package for Federated Learning (FL) of Rule-Based Systems (RBSs): on the one hand, FL preserves user data privacy during collaborative model training; on the other hand, RBSs are widely regarded as interpretable-by-design models and ensure high transparency in the decision-making process. The proposed software, developed as an extension of the Intel® OpenFL open-source framework, offers a viable solution for developing AI applications that balance accuracy, privacy, and interpretability.