IDEAS/STAT Optimization Seminar: “ML for an Interactive World: From Learning to Unlearning”
February 27, 2025, 12:00–1:15 PM
The remarkable recent success of Machine Learning (ML) is driven by our ability to develop and deploy interactive models that can solve complicated tasks by understanding and adapting to the ever-changing state of the world. However, the development of such models demands significant data and computing resources. Moreover, as these models increasingly interact with humans, new post-deployment challenges emerge, including privacy concerns, data integrity, and the potential for model misuse. Addressing these issues necessitates innovative algorithmic solutions.
Reinforcement Learning (RL) is the preferred method for training interactive models. In the first part of my talk, I will discuss my work on Hybrid RL, which has yielded the first general-purpose, computationally efficient, and theoretically rigorous algorithms for RL. Our method learns effective policies by integrating the trial-and-error process of RL with pre-collected logs of interaction data, demonstrating strong performance in practical applications.
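The speaker's actual algorithms are not given in this announcement. As a minimal illustrative sketch of the hybrid idea only — mixing a pre-collected interaction log with fresh online experience during learning — consider tabular Q-learning on a hypothetical toy chain MDP, where each update batch combines one online transition with samples from an offline log:

```python
import random

# Toy chain MDP (an assumption for illustration): states 0..4,
# actions 0 (left) / 1 (right); reaching state 4 gives reward 1 and ends the episode.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == GOAL), s2 == GOAL

# Hypothetical pre-collected log of (s, a, r, s2, done) transitions from a random policy.
random.seed(0)
offline_log = []
s = 0
for _ in range(500):
    a = random.randint(0, 1)
    s2, r, done = step(s, a)
    offline_log.append((s, a, r, s2, done))
    s = 0 if done else s2

# Hybrid Q-learning: each update batch mixes one fresh online transition
# with several transitions sampled from the offline log.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.2, 0.95, 0.1
s = 0
for t in range(2000):
    a = random.randint(0, 1) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
    s2, r, done = step(s, a)
    batch = [(s, a, r, s2, done)] + random.sample(offline_log, 4)  # hybrid batch
    for (bs, ba, br, bs2, bdone) in batch:
        target = br if bdone else br + gamma * max(Q[bs2])
        Q[bs][ba] += alpha * (target - Q[bs][ba])
    s = 0 if done else s2

# Greedy policy per non-terminal state: moving right (action 1) is optimal.
greedy = [max((0, 1), key=lambda a: Q[si][a]) for si in range(N_STATES)]
print(greedy[:4])
```

The offline samples let value information propagate even before the online agent has explored the full chain, which is the intuition behind combining logged data with trial and error.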
In the second half of my talk, I will discuss my work on the foundations of machine unlearning, a newly emerging field with significant practical applications. Machine unlearning involves updating a trained ML model to remove the influence of specific data samples upon a deletion request, without retraining from scratch. I will delve into how machine unlearning offers a more viable alternative to traditional approaches to data deletion, such as differential privacy, providing a practical route to data privacy post-deployment.
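The talk's unlearning results are not detailed here. As a minimal sketch of the general goal — deleting one sample's contribution exactly without retraining — here is a standard textbook construction (not the speaker's method) for ridge regression, using a Sherman–Morrison downdate of the cached inverse Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)
lam = 1e-3  # small ridge term keeps the inverse well-defined

# Train once, caching A = (X^T X + lam I)^{-1} and b = X^T y.
A = np.linalg.inv(X.T @ X + lam * np.eye(3))
b = X.T @ y
w = A @ b

def unlearn(A, b, x_i, y_i):
    """Remove one sample's contribution exactly via a Sherman-Morrison downdate:
    (M - x x^T)^{-1} = M^{-1} + M^{-1} x x^T M^{-1} / (1 - x^T M^{-1} x)."""
    Ax = A @ x_i
    A_new = A + np.outer(Ax, Ax) / (1.0 - x_i @ Ax)
    b_new = b - y_i * x_i
    return A_new, b_new, A_new @ b_new

# Delete sample 7 in O(d^2) time, then compare against retraining from scratch.
A2, b2, w_unlearned = unlearn(A, b, X[7], y[7])
X_rest, y_rest = np.delete(X, 7, axis=0), np.delete(y, 7)
w_retrained = np.linalg.inv(X_rest.T @ X_rest + lam * np.eye(3)) @ (X_rest.T @ y_rest)
print(np.allclose(w_unlearned, w_retrained))  # True
```

For this simple model the unlearned weights match full retraining exactly; the research challenge the talk addresses is achieving comparable guarantees for models where no such closed-form update exists.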
Zoom link: https://upenn.zoom.us/j/94999851890 Meeting ID: 949 9985 1890

