BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering Events
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20220222T110000
DTEND;TZID=America/New_York:20220222T120000
DTSTAMP:20260406T052217Z
CREATED:20220216T145905Z
LAST-MODIFIED:20220216T145905Z
UID:6373-1645527600-1645531200@seasevents.nmsdev7.com
SUMMARY:ESE Spring Colloquium - "Towards Fair and Efficient Machine Learning with Large Models"
DESCRIPTION:Deep networks often achieve better accuracy as we employ larger models. However\, modern machine learning applications involve multiple considerations alongside accuracy\, such as resource efficiency\, robustness\, and fairness. Deploying ML in the real world requires sound solutions that address these considerations. \nIn this talk\, I will first discuss optimizing fairness objectives for imbalanced data. We observe that a large model can easily achieve “perfect fairness” on training data but fail dramatically at test time due to overfitting. To address this\, we propose two strategies: (1) a new family of fairness-seeking loss functions\, and (2) algorithms that optimize the validation (rather than training) objective\; we combine them to achieve state-of-the-art performance. We also introduce new optimization methods that extend these to decentralized settings. \nI will then discuss training efficient sparse models. While conventional wisdom strongly advocates the use of regularization\, we observe that perfectly fitting a large model to data and then pruning it achieves stellar accuracy. We demystify this surprising feature-selection ability through a flexible theory that can answer “How good is the pruned model?” \nIn summary\, our results provide several insights on learning with large models: (1) our theory based on linear and random-feature models provides useful intuitions for understanding modern deep learning\, (2) large models can benefit from unconventional training strategies such as new loss functions\, and (3) the validation phase is particularly helpful for large models\, which are susceptible to overfitting.
URL:https://seasevents.nmsdev7.com/event/ese-spring-colloquium-towards-fair-and-efficient-machine-learning-with-large-models/
LOCATION:Raisler Lounge (Room 225)\, Towne Building\, 220 South 33rd Street\, Philadelphia\, PA\, 19104\, United States
CATEGORIES:Seminar,Colloquium
ORGANIZER;CN="Electrical and Systems Engineering":MAILTO:eseevents@seas.upenn.edu
END:VEVENT
END:VCALENDAR