ESE Guest Seminar – “Safe Offline RL for Constrained Markov Decision Process: Theory and Practice”
October 16, 2024, 11:00 AM – 12:00 PM
Many constrained sequential decision-making problems, such as safe autonomous-vehicle (AV) navigation, wireless network control, caching, and cloud computing, can be cast as Constrained Markov Decision Processes (CMDPs). Reinforcement Learning (RL) algorithms have been used to learn optimal policies for unknown unconstrained MDPs. Extending these RL algorithms to unknown CMDPs brings the additional challenge of not only maximizing the reward but also satisfying the constraints. Further, in most practical applications one has to rely on an offline dataset, as online interaction might be costly or infeasible.
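For concreteness, an infinite-horizon discounted CMDP is commonly written as the following optimization (the notation here is a standard assumption, not taken from the talk: r is the reward, c the constraint cost, b the safety budget, and gamma the discount factor):

\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{subject to} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\right] \le b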
While the unconstrained offline RL setting is relatively well understood, the offline CMDP, or safe offline RL, setting is not. For example, given a dataset collected by a safe behavioral policy, it remained an open problem how to develop an algorithm that maximizes the reward while providing safety with provable guarantees. In particular, existing works on safe offline RL rely on the assumption that the dataset contains state-action pairs from all policies, which is impractical in safety-critical setups, where the dataset might not contain unsafe state-action pairs. Our recent research closes this gap. We developed a weighted safe actor-critic (WSAC) algorithm that produces a policy outperforming any behavioral policy while maintaining the same level of safety, which is critical to designing a safe offline RL algorithm. Additionally, we compared WSAC with existing state-of-the-art safe offline RL algorithms in several continuous control environments; WSAC outperforms all baselines across a range of tasks, supporting the theoretical results.
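The abstract does not spell out the WSAC update, but the core idea of weighting a reward critic against a cost critic in the actor's objective can be sketched roughly as below. This is a minimal illustrative sketch, assuming discrete actions, a fixed multiplier lam, and plain (untrained, non-pessimistic) critics; all names and hyperparameters are assumptions for illustration, not the authors' algorithm.

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 4, 3

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

policy = mlp(STATE_DIM, N_ACTIONS)    # actor: logits over discrete actions
reward_q = mlp(STATE_DIM, N_ACTIONS)  # critic: Q-values for the reward signal
cost_q = mlp(STATE_DIM, N_ACTIONS)    # critic: Q-values for the constraint cost
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

lam = 1.0  # multiplier weighting cost against reward (fixed here for illustration)

def actor_step(states):
    """One policy-improvement step on a batch of offline states: maximize the
    expected reward Q-value minus lam times the expected cost Q-value."""
    probs = F.softmax(policy(states), dim=-1)
    with torch.no_grad():
        q_r = reward_q(states)  # a pessimistic reward estimate would go here
        q_c = cost_q(states)    # an optimistic cost estimate would go here
    # Expected weighted value under the current policy; negated for descent.
    loss = -(probs * (q_r - lam * q_c)).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage on a synthetic batch of offline states.
batch = torch.randn(32, STATE_DIM)
print(actor_step(batch))

In this sketch the weighting plays the role of a constraint penalty: larger lam pushes the actor toward lower-cost actions, which is one common way to trade reward against safety in constrained actor-critic methods.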

