The Chair of Artificial Intelligence and Formal Methods has a mission:
Increase the trustworthiness of Artificial Intelligence (AI).
We conduct broad foundational and application-driven research. Our vision of neurosymbolic AI brings together the areas of machine learning and formal methods, in particular, formal verification. We tackle problems that are inspired by autonomous systems, industrial projects, and planning problems in robotics.
The following goals are central to our efforts:
- Increase the dependability of AI in safety-critical environments.
- Render AI models robust against uncertain knowledge about their environment.
- Enhance the capabilities of formal verification to handle real-world problems using learning techniques.
We are interested in various aspects of dependability and safety in AI, intelligent decision-making under uncertainty, and safe reinforcement learning. A key aspect of our research is a thorough understanding of the (epistemic or aleatoric) uncertainty that may occur when AI systems operate in the real world.
Read more at the webpage of my group at Radboud University.
- I started a new position as Full Professor at the Ruhr-University Bochum in Germany, where I will lead the Chair of Artificial Intelligence and Formal Methods. I am extremely excited about this new role and will soon open multiple PhD and Postdoc positions. More news coming!
- Our paper Reinforcement Learning by Guided Safe Exploration has been accepted at ECAI 2023. We consider the following fundamental problem: A reinforcement learning agent is usually trained (to act safely) in a controlled lab environment. How, then, can you ensure safety when deploying the trained agent in a different, potentially much richer, environment?
- I’m delighted to serve as an area chair for Learning and Adaptation at AAMAS 2024 in Auckland, New Zealand.
- I co-wrote a chapter Shared Control with Human Trust and Workload Models for the book Cyber-Physical-Human Systems: Fundamentals and Applications.
- Paper accepted at UAI 2023! Risk-aware Curriculum Generation for Heavy-tailed Task Distributions. Yet another great work by Thiago D. Simão together with our friends Cevahir Koprulu and Ufuk Topcu from The University of Texas at Austin. Congratulations!
- Paper accepted at CAV 2023! Efficient Sensitivity Analysis for Parametric Robust Markov Chains, with Thom Badings, in great collaboration with Sebastian Junges, Ahmadreza Marandi, and Ufuk Topcu. Congratulations Thom!
- Two papers from my group accepted at IJCAI 2023! (1) More for Less: Safe Policy Improvement with Stronger Performance Guarantees, with Marnix Suilen and Thiago D. Simão. Great collaboration with Patrick Wienhöft, Clemens Dubslaff, and Christel Baier (TU Dresden). (2) Recursive Small-Step Multi-Agent A* for Dec-POMDPs, with our Master student and ELLIS excellence fellow Wietze Koops, co-supervised with Sebastian Junges and Thiago D. Simão. Congratulations to everyone!
- I gave a keynote talk at FM 2023, the 25th International Symposium on Formal Methods. I talked about our approaches to Neuro-Symbolic AI, Intelligent and Dependable Decision-Making Under Uncertainty, and the effective combination of Formal Methods, Artificial Intelligence, and Machine Learning. It was a lot of fun, thanks to the organizers for inviting me.
- I became the vice head of the Department of Software Science at Radboud University.
- Two papers accepted at ICAPS 2023! (1) Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring. Congratulations to our Master student Merlijn Krale, co-supervised with Thiago D. Simão. (2) Model Checking for Adversarial Multi-Agent Reinforcement Learning with Reactive Defense Methods. Congratulations to Dennis Groß and Christoph Schmidl, co-supervised with Guillermo A. Pérez.
- I will co-organize two Dagstuhl seminars! (1) Artificial Intelligence and Formal Methods Join Forces for Reliable Autonomy with Mykel Kochenderfer, Jan Křetínský, and Jana Tumova, and (2) Model Learning for Improved Trustworthiness in Autonomous Systems with Ellen Enkel, Mohammadreza Mousavi, and Kristin Y. Rozier.
- Our paper Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation was accepted to ICLR 2023. We propose Safe SLAC, an algorithm that uses a stochastic latent variable model combined with a safety critic to address the problem of safe reinforcement learning in realistic, high-dimensional settings. Big congratulations to Yannick Hogewind, who did this work as part of his ELLIS fellowship within our group, supervised by Thiago!
- Our paper Robust Almost-Sure Reachability in Multi-Environment MDPs was accepted to TACAS 2023, co-authored with Marck van der Vegt and Sebastian Junges.
- I received a Starting Grant from the European Research Council (ERC) for my project DEUCE: Data-Driven Verification and Learning Under Uncertainty. I will work on real-world challenges to the safety of artificial intelligence and, in particular, safe reinforcement learning. The overall goal of my project is to advance the real-world deployment of reinforcement learning. I will soon be opening multiple PhD and Postdoc positions.
- 3 papers accepted at AAAI 2023! 1. Safe RL via Shielding under Partial Observability, 2. Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty, 3. Safe Policy Improvement for POMDPs via Finite-State Controllers. More details on these results will come soon. Congrats to Thiago, Thom, and Marnix!
- Our paper Robust Control for Dynamical Systems with Non-Gaussian Noise via Formal Abstractions has been accepted for the Journal of Artificial Intelligence Research (JAIR). The publication will be part of a JAIR special issue dedicated to award-winning AI papers and is a thorough extension of our AAAI distinguished paper. Congrats Thom!
- Our paper Robust Anytime Learning of Markov Decision Processes has been accepted at NeurIPS 2022. The work is a collaboration with David Parker from the University of Oxford. Congratulations, Marnix and Thiago!