ESE Seminar: “Learning is Pruning”

December 15, 2020 at 12:00 PM - 1:00 PM
Details
  • Organizer: Electrical and Systems Engineering
    Phone: 215-898-6823
  • Venue: Zoom – email ESE for the link: jbatter@seas.upenn.edu

The strong lottery ticket hypothesis (LTH) postulates that any neural network can be approximated simply by pruning a sufficiently larger network of random weights. Recent work establishes that the strong LTH is true if the random network to be pruned is a polynomial factor wider than the target one. This polynomial over-parameterization is at odds with experimental research, which achieves good approximation by pruning networks that are only a small factor wider than the target. In this talk, I will describe how we close this gap and offer an exponential improvement to the over-parameterization requirement. I will sketch a proof that any target network can be approximated by pruning a random one that is only a logarithmic factor wider. This is possible by establishing a connection between pruning random ReLU networks and random instances of the weakly NP-hard SubsetSum problem. Our work points to a striking universal phenomenon: neural network training is equivalent to pruning slightly over-parameterized networks of random weights. I will conclude by sharing hints of a general framework indicating the existence of good pruned networks for a variety of activation functions and architectures, even in the case where both the initial weights and the activations are binary.
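The SubsetSum connection mentioned in the abstract can be illustrated with a toy sketch (not the speaker's actual construction): to approximate a single target weight, one searches for a subset of random weights whose sum is close to the target. The function name and parameters below are hypothetical, chosen only to make the idea concrete; the brute-force search stands in for the probabilistic argument in the talk.

```python
import random

def prune_to_approximate(target, random_weights, keep_best=True):
    """Illustrative brute-force SubsetSum: find the subset of random_weights
    whose sum best approximates the target weight. The indices kept play the
    role of the un-pruned connections in a random network."""
    n = len(random_weights)
    best_subset, best_err = [], abs(target)  # empty subset sums to 0
    for mask in range(1 << n):  # enumerate all 2^n subsets
        s = sum(w for i, w in enumerate(random_weights) if (mask >> i) & 1)
        err = abs(target - s)
        if err < best_err:
            best_err = err
            best_subset = [i for i in range(n) if (mask >> i) & 1]
    return best_subset, best_err

# With only a handful of random weights per target weight, the achievable
# error is typically tiny -- echoing the logarithmic-width claim.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(16)]
subset, err = prune_to_approximate(0.37, weights)
```

With 16 random weights there are 2^16 candidate subset sums, so with high probability one of them lands very close to the target; this density of subset sums is, loosely, why only logarithmically many random weights per target weight are needed.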