Artificial Intelligence (AI) is entering our everyday lives, with applications in fields such as healthcare, transportation, finance, and robotics. Many of these fields impose strong safety requirements on the AI systems they employ. One particular AI, or machine learning, technique is reinforcement learning, which generally learns to behave optimally via trial and error. Consequently, and despite its huge success in recent years, reinforcement learning generally lacks mechanisms to ensure safe behavior at all times. Formal verification, on the other hand, is a research area that aims to provide formal guarantees on a system’s correctness and safety, based on rigorous methods and precise specifications. So far, however, fundamental challenges have obstructed the effective application of verification to reinforcement learning.
The main objective of the DEUCE project is to develop novel, data-driven verification methods that tightly integrate with reinforcement learning. In particular, the project will develop techniques that address real-world challenges to the safety of AI systems in general: scalability, expressiveness, and robustness against the uncertainty that arises when operating in the real world. The overall goal is to advance the real-world deployment of reinforcement learning.