BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Penn Engineering Events - ECPv6.15.18//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Penn Engineering Events
X-ORIGINAL-URL:https://seasevents.nmsdev7.com
X-WR-CALDESC:Events for Penn Engineering
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20190310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20191103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20201101T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20201215T120000
DTEND;TZID=America/New_York:20201215T130000
DTSTAMP:20260407T072926Z
CREATED:20201208T163504Z
LAST-MODIFIED:20201208T163504Z
UID:3757-1608033600-1608037200@seasevents.nmsdev7.com
SUMMARY:ESE Seminar: "Learning is Pruning"
DESCRIPTION:The strong lottery ticket hypothesis (LTH) postulates that any neural network can be approximated by simply pruning a sufficiently large network of random weights. Recent work establishes that the strong LTH is true if the random network to be pruned is a polynomial factor wider than the target one. This polynomial over-parameterization is at odds with experimental research that achieves good approximation by pruning networks that are only a small factor wider than the target one. In this talk\, I will show how we close this gap and offer an exponential improvement to the over-parameterization requirement. I will give a sketch of the proof that any target network can be approximated by pruning a random one that is only a logarithmic factor wider. This is possible by establishing a connection between pruning random ReLU networks and random instances of the weakly NP-hard SubsetSum problem. Our work indicates the existence of a striking universal phenomenon: neural network training is equivalent to pruning slightly overparameterized networks of random weights. I will conclude by sharing hints of a general framework that indicates the existence of good pruned networks for a variety of activation functions and architectures\, even when both the initialization weights and the activations are binary.
URL:https://seasevents.nmsdev7.com/event/ese-seminar-learning-is-pruning/
LOCATION:Zoom – Email ESE for link: jbatter@seas.upenn.edu
CATEGORIES:Seminar,Faculty,Colloquium,Graduate,Undergraduate
ORGANIZER;CN="Electrical and Systems Engineering":MAILTO:eseevents@seas.upenn.edu
END:VEVENT
END:VCALENDAR