ESE PhD Thesis Defense: “Algorithms for Adversarially Robust Deep Learning”

June 4, 2024 at 1:00 PM - 3:00 PM
Details
Organizer: Electrical and Systems Engineering
Phone: 215-898-6823
Venue: Wu and Chen Auditorium (Room 101), Levine Hall, 3330 Walnut Street, Philadelphia, PA 19104

Given the widespread use of deep learning models in safety-critical applications, ensuring that the decisions of such models are robust against adversarial exploitation is of fundamental importance. In this thesis, we discuss recent progress toward designing algorithms that exhibit desirable robustness properties. First, we discuss the problem of adversarial examples in computer vision, for which we introduce new technical results, training paradigms, and certification algorithms. Next, we consider the problem of domain generalization, wherein the task is to train neural networks to generalize from a family of training distributions to unseen test distributions. We present new algorithms that achieve state-of-the-art generalization in medical imaging, molecular identification, and image classification. Finally, we study the setting of jailbreaking large language models (LLMs), wherein an adversarial user attempts to design prompts that elicit objectionable content from an LLM. We propose new attacks and defenses, which represent the frontier of progress toward designing robust language-based agents.
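To make the first topic concrete, the sketch below shows a one-step gradient-sign (FGSM-style) adversarial perturbation on a toy linear logistic classifier: a small, bounded change to the input that measurably increases the model's loss. The model, weights, inputs, and function names here are illustrative assumptions, not material from the thesis itself.

```python
import numpy as np

# Toy illustration of an adversarial example (FGSM-style), not the
# thesis's method: all weights and inputs below are made up.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(x, y, w):
    """Negative log-likelihood of label y under a linear logistic model."""
    p = sigmoid(w @ x)
    return -np.log(p) if y == 1 else -np.log(1.0 - p)

def fgsm_perturb(x, y, w, eps):
    """One gradient-sign step that increases the loss within an L-inf ball."""
    grad_x = (sigmoid(w @ x) - y) * w  # d(loss)/dx for the logistic model
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # fixed toy weights
x = np.array([0.3, -0.4, 0.2])   # clean input with true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, eps=0.5)
clean_loss = logistic_loss(x, y, w)
adv_loss = logistic_loss(x_adv, y, w)
```

The perturbation stays inside an L-infinity ball of radius `eps` around the clean input, yet `adv_loss` exceeds `clean_loss`; robust training and certification algorithms of the kind discussed in the thesis aim to bound or eliminate exactly this gap.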