BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240227T110000
DTEND;TZID=America/New_York:20240227T120000
DTSTAMP:20260403T174717Z
CREATED:20240201T135754Z
LAST-MODIFIED:20240201T135754Z
UID:10620-1709031600-1709035200@seasevents.nmsdev7.com
SUMMARY:ESE & CIS Spring Seminar - "Towards Transparent Representation Learning"
DESCRIPTION:Machine learning models trained on vast amounts of data have achieved remarkable success across various applications. However\, they also pose new challenges and risks for deployment in real-world high-stakes domains. Decisions made by deep learning models are often difficult to interpret\, and the underlying mechanisms remain poorly understood. Given that deep learning models operate as black boxes\, it is challenging to understand\, much less resolve\, various types of failures in current machine learning systems. \nIn this talk\, I will describe our work towards building transparent machine learning systems through the lens of representation learning. First\, I will present a white-box approach to understanding transformer models. I will show how to derive a family of mathematically interpretable transformer-like deep network architectures by maximizing the information gain of the learned representations. Furthermore\, I will demonstrate that the proposed interpretable transformer achieves competitive empirical performance on large-scale real-world datasets\, while learning more interpretable and structured representations than black-box transformers. Next\, I will present our work on training the first set of vision and vision-language foundation models with rigorous differential privacy guarantees\, and demonstrate the promise of high-utility differentially private representation learning. To conclude\, I will discuss future directions towards transparent and safe AI systems we can understand and trust.
URL:https://seasevents.nmsdev7.com/event/ese-spring-seminar-tbd-4/
LOCATION:Raisler Lounge (Room 225)\, Towne Building\, 220 South 33rd Street\, Philadelphia\, PA\, 19104\, United States
CATEGORIES:Colloquium
ORGANIZER;CN="Electrical and Systems Engineering":MAILTO:eseevents@seas.upenn.edu
END:VEVENT
END:VCALENDAR