Nils Jansen.

Associate Professor, Institute for Computing and Information Sciences, Radboud University Nijmegen.


This page is under construction; my old webpage is available here.

My group conducts foundational and application-driven research in artificial intelligence (AI). We take a broad stance on AI that brings together machine learning and formal methods, in particular formal verification. We tackle problems inspired by autonomous systems, industrial projects, and especially planning problems in robotics.

The following goals are central to our efforts:

  • Increase the dependability of AI in safety-critical environments.
  • Render AI models robust against uncertain knowledge about the environment they operate in.
  • Enhance the capabilities of verification to handle real-world problems using learning techniques.

We are interested in various aspects of dependability and safety in AI, intelligent decision-making under uncertainty, and safe reinforcement learning. A key aspect of our research is a thorough understanding of the (epistemic or aleatoric) uncertainty that may occur when AI systems operate in the real world.

  • Our paper Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation was accepted to ICLR 2023. We propose Safe SLAC, an algorithm that uses a stochastic latent variable model combined with a safety critic to address the problem of safe reinforcement learning in realistic, high-dimensional settings. Big congratulations to Yannick Hogewind, who did this work as part of his ELLIS fellowship within our group, supervised by Thiago!
  • Our paper Robust Almost-Sure Reachability in Multi-Environment MDPs was accepted to TACAS 2023, co-authored with Marck van der Vegt and Sebastian Junges.
  • I received a Starting Grant from the European Research Council (ERC) for my project DEUCE: Data-Driven Verification and Learning Under Uncertainty. I will work on real-world challenges to the safety of artificial intelligence and, in particular, safe reinforcement learning. The overall goal of my project is to advance the real-world deployment of reinforcement learning. I will soon be opening multiple PhD and Postdoc positions.
  • Three papers accepted at AAAI 2023: Safe RL via Shielding under Partial Observability; Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty; and Safe Policy Improvement for POMDPs via Finite-State Controllers. More details on these results will follow soon. Congrats to Thiago, Thom, and Marnix!
  • Our paper Robust Control for Dynamical Systems with Non-Gaussian Noise via Formal Abstractions has been accepted to the Journal of Artificial Intelligence Research (JAIR). The publication will appear in a JAIR special issue dedicated to award-winning AI papers and is a thorough extension of the distinguished AAAI paper. Congrats, Thom!