Associate Professor, Institute for Computing and Information Sciences, Radboud University Nijmegen.
This page is new; my old webpage is available here.
My group conducts broad foundational and application-driven research in artificial intelligence (AI), in particular neurosymbolic AI. We bring together the areas of machine learning and formal methods, especially formal verification. We tackle problems inspired by autonomous systems, industrial projects, and notably planning problems in robotics.
The following goals are central to our efforts:
- Increase the dependability of AI in safety-critical environments.
- Render AI models robust against uncertain knowledge about the environment they operate in.
- Enhance the capabilities of verification to handle real-world problems using learning techniques.
We are interested in various aspects of dependability and safety in AI, intelligent decision-making under uncertainty, and safe reinforcement learning. A key aspect of our research is a thorough understanding of the (epistemic or aleatoric) uncertainty that may occur when AI systems operate in the real world.
- I gave a keynote talk at FM 2023, the 25th International Symposium on Formal Methods. I talked about our approaches to Neuro-Symbolic AI, Intelligent and Dependable Decision-Making Under Uncertainty, and the effective combination of Formal Methods, Artificial Intelligence, and Machine Learning. It was a lot of fun; thanks to the organizers for inviting me.
- I became the vice head of the Department of Software Science at Radboud University.
- Two papers accepted at ICAPS 2023! (1) Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring. Congratulations to our Master student Merlijn Krale, co-supervised with Thiago D. Simão. (2) Model Checking for Adversarial Multi-Agent Reinforcement Learning with Reactive Defense Methods. Congratulations to Dennis Groß and Christoph Schmidl, co-supervised with Guillermo A. Pérez.
- I will co-organize two Dagstuhl seminars! (1) Artificial Intelligence and Formal Methods Join Forces for Reliable Autonomy with Mykel Kochenderfer, Jan Křetínský, and Jana Tumova, and (2) Model Learning for Improved Trustworthiness in Autonomous Systems with Ellen Enkel, Mohammadreza Mousavi, and Kristin Y. Rozier.
- Our paper Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation was accepted to ICLR 2023. We propose Safe SLAC, an algorithm that uses a stochastic latent variable model combined with a safety critic to address the problem of safe reinforcement learning in realistic, high-dimensional settings. Big congratulations to Yannick Hogewind, who did this work as part of his ELLIS fellowship within our group, supervised by Thiago!
- Our paper Robust Almost-Sure Reachability in Multi-Environment MDPs was accepted to TACAS 2023, co-authored with Marck van der Vegt and Sebastian Junges.
- I received a Starting Grant from the European Research Council (ERC) for my project DEUCE: Data-Driven Verification and Learning Under Uncertainty. I will work on real-world challenges to the safety of artificial intelligence and, in particular, safe reinforcement learning. The overall goal of my project is to advance the real-world deployment of reinforcement learning. I will soon be opening multiple PhD and Postdoc positions.
- Three papers accepted at AAAI 2023! (1) Safe RL via Shielding under Partial Observability, (2) Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty, and (3) Safe Policy Improvement for POMDPs via Finite-State Controllers. More details on these results will come soon. Congrats to Thiago, Thom, and Marnix!
- Our paper Robust Control for Dynamical Systems with Non-Gaussian Noise via Formal Abstractions has been accepted for the Journal of Artificial Intelligence Research (JAIR). The publication will be part of a JAIR special issue dedicated to award-winning AI papers and is a thorough extension of the distinguished AAAI paper. Congrats, Thom!
- Our paper Robust Anytime Learning of Markov Decision Processes has been accepted at NeurIPS 2022. The work is a collaboration with David Parker from the University of Oxford. Congratulations, Marnix and Thiago!