ESE PhD Thesis Defense – “Learning-based Safe and Robust Control for Multi-Agent Systems”
September 12, 2025 at 12:00 PM - 1:30 PM
AI-enabled systems have become ubiquitous and integral to safety-critical domains such as autonomous vehicles and aerial robotics. Despite promising empirical results, decision-making in critical systems that incorporate AI components requires careful consideration, as failures may have catastrophic consequences. One key challenge is that uncertainties inevitably arise from system limitations, black-box models, or environmental factors, and inaccurate estimation of these intrinsic uncertainties, or failure to account for other agents in the environment, can lead to hazardous behaviors.
In this dissertation, we study how to develop safe and robust learning-based control policies under various uncertainties. In particular, we explore how tools from statistics, game theory, and formal methods can enable uncertainty quantification, adaptation to other agents, and robust policy synthesis. The first part focuses on safe learning and control for multi-agent systems, where we show how to develop safe, robust, and adaptive control strategies for safety-critical systems when encountering other agents. The second part studies how to synthesize safe perception-based control policies for robotic systems under uncertainty.

