MATEC Web of Conferences (Jan 2021)
A Validation Model for Ethical Decisions in Artificial Intelligence Systems using Personal Data
Abstract
Decision making, a fundamental human process, has been increasingly supported by computer systems since the second half of the last century. In the 21st century, intelligent decision support systems use Artificial Intelligence (AI) techniques to enhance support for decision makers. Decisions suggested by an AI system are often based on personal data, as in credit scoring at financial institutions or purchase-behavior analysis in online shops. Beyond the protection of personal data by the General Data Protection Regulation (GDPR), developers and operators of decisional AI systems need to ensure that ethical standards are met. With respect to individuals, arguably the most relevant ethical aspect is the fairness principle, which requires that individuals be treated fairly. In this paper we present an evaluation model for the decision ethicality of AI systems with respect to the fairness principle. The presented model treats any AI system as a “black box”. It separates sensitive from general attributes in the input matrix and measures the distance between predicted values when the inputs for the sensitive attributes are altered. The variance of the outputs is interpreted as a measure of individual fairness, that is, of treating similar individuals similarly. In addition, the model reports on group fairness. The validation model helps to determine to what extent an AI system decides fairly for individuals and groups, and it can thus be used as a test tool in the development and operation of AI systems that use personal data.
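The following is a minimal sketch of the black-box probe the abstract describes, not the paper's actual implementation: it assumes a scikit-learn-style `predict` callable, a NumPy feature matrix, and a hypothetical sensitive-attribute column index; all names are illustrative.

```python
import numpy as np

def individual_fairness_score(predict, X, sensitive_col, sensitive_values):
    """Black-box individual fairness probe (sketch).

    For each record, hold the general attributes fixed, swap the
    sensitive attribute through all of its possible values, and
    measure how much the model's prediction varies.
    """
    variances = []
    for row in X:
        preds = []
        for value in sensitive_values:
            perturbed = row.copy()
            perturbed[sensitive_col] = value  # alter only the sensitive attribute
            preds.append(predict(perturbed.reshape(1, -1))[0])
        # Output variance across sensitive-attribute values for one individual:
        # 0 means the model treats this individual identically in every group.
        variances.append(np.var(preds))
    return float(np.mean(variances))
```

As a usage example, with a trained model and a test matrix where column 3 hypothetically encodes a binary sensitive attribute, `individual_fairness_score(model.predict, X_test, sensitive_col=3, sensitive_values=[0, 1])` returns 0 for a model whose decisions are independent of that attribute; group fairness would additionally compare aggregate outcomes across the groups defined by the sensitive values.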