EXplainable Data Science (EXoDuS). NWA/TNO.
The growing role of artificial intelligence (AI) in autonomous driving, robot-assisted surgery, and home automation has led to an increased reliance on AI systems. In mission-critical applications, the inherent vulnerability of such systems to adversarial attacks poses a serious challenge. We propose to immerse humans in the process of robustifying AI systems against problems such as adversarial learning or data poisoning. The key element is to enable humans to understand AI-made decisions in an adversarial environment. Decision-making is sufficiently captured by so-called strategies; a deep neural network, for example, represents a learned strategy. For such strategies, data scientists and system engineers lack tools to answer transparency-related questions such as "why is it doing that?" or "was that a good decision?"