Applied AI Letters (Dec 2021)
XAITK: The Explainable AI Toolkit
Abstract
Recent advances in artificial intelligence (AI), driven mainly by deep neural networks, have yielded remarkable progress in fields such as computer vision, natural language processing, and reinforcement learning. Despite these successes, the inability to predict how AI systems will behave “in the wild” impacts almost all stages of planning and deployment, including research and development, verification and validation, and user trust and acceptance. The field of explainable artificial intelligence (XAI) seeks to develop techniques enabling AI algorithms to generate explanations of their results; these are generally human-interpretable representations or visualizations intended to “explain” how the system produced its outputs. We introduce the Explainable AI Toolkit (XAITK), a DARPA-sponsored effort that builds on results from the 4-year DARPA XAI program. The XAITK has two goals: (a) to consolidate research results from DARPA XAI into a single publicly accessible repository; and (b) to identify operationally relevant capabilities developed on DARPA XAI and assist in their transition to interested partners. We first describe the XAITK website and associated capabilities. These place the research results from DARPA XAI in the wider context of XAI research, and include performer contributions of code, data, publications, and reports. We then describe the XAITK analytics and autonomy software frameworks. These are Python-based frameworks focused on particular XAI domains and designed to provide a single integration endpoint for multiple algorithm implementations from across DARPA XAI. Each framework generalizes APIs for system-level data and control while providing a plugin interface for existing and future algorithm implementations. The XAITK project can be followed at https://xaitk.org.
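As a rough illustration of the plugin-oriented design described above, the sketch below shows one common way such a framework can expose a generalized API while letting algorithm implementations register as interchangeable plugins. This is a minimal sketch under assumed, hypothetical names (SaliencyGenerator, RandomSaliency, generate, registry); it is not the actual XAITK API.

```python
"""Minimal sketch of an abstract framework API with a plugin registry.

Hypothetical names; illustrative only, not the actual XAITK interfaces.
"""
import abc
from typing import Dict, Type

import numpy as np


class SaliencyGenerator(abc.ABC):
    """Framework-facing interface that each XAI algorithm plugin implements."""

    # Registry of available plugin implementations, keyed by plugin name.
    registry: Dict[str, Type["SaliencyGenerator"]] = {}

    def __init_subclass__(cls, plugin_name: str, **kwargs):
        # Automatically register every concrete subclass under its plugin name.
        super().__init_subclass__(**kwargs)
        SaliencyGenerator.registry[plugin_name] = cls

    @abc.abstractmethod
    def generate(self, image: np.ndarray) -> np.ndarray:
        """Return a per-pixel saliency map for the given image."""


class RandomSaliency(SaliencyGenerator, plugin_name="random"):
    """Toy plugin: uniform-random saliency, standing in for a real algorithm."""

    def generate(self, image: np.ndarray) -> np.ndarray:
        return np.random.default_rng(0).random(image.shape[:2])


# Framework-level code depends only on the interface, not on any one algorithm.
saliency_cls = SaliencyGenerator.registry["random"]
saliency_map = saliency_cls().generate(np.zeros((32, 32, 3), dtype=np.uint8))
print(saliency_map.shape)  # (32, 32)
```

In this pattern, system integrators write against the abstract interface once, and new algorithm implementations become available simply by registering under the plugin interface, which is the integration property the abstract attributes to the analytics and autonomy frameworks.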
Keywords