ESE Fall Seminar – “Acceleration by Stepsize Hedging”
Glandt Forum, Singh Center for Nanotechnology
3205 Walnut Street, Philadelphia, PA, United States
Can we accelerate the convergence of gradient descent without changing the algorithm, just by optimizing the stepsizes? Surprisingly, we show that the answer is yes. Our proposed Silver Stepsize Schedule optimizes strongly convex functions in $\kappa^{\log_\rho 2} \approx \kappa^{0.7864}$ iterations, where $\rho = 1+\sqrt{2}$ is the silver ratio and $\kappa$ is the condition number. This is intermediate between […]
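For a concrete feel, here is a minimal Python sketch of gradient descent driven by a silver-ratio stepsize schedule. It uses the closed form $h_t = 1 + \rho^{\nu(t)-1}$, where $\nu(t)$ is the 2-adic valuation of $t$, from the convex-case schedule in the companion paper; applying it to the quadratic test problem and horizon below is an illustrative assumption, not the strongly convex schedule from the talk (which also depends on $\kappa$).

```python
import numpy as np

RHO = 1 + np.sqrt(2)  # the silver ratio

def silver_stepsize(t: int) -> float:
    """Stepsize h_t = 1 + rho^(nu(t) - 1), where nu(t) is the 2-adic
    valuation of t (convex-case closed form; the strongly convex
    schedule in the talk is kappa-dependent and more involved)."""
    nu = (t & -t).bit_length() - 1  # largest k with 2^k dividing t
    return 1 + RHO ** (nu - 1)

def gradient_descent(grad, x0, L, T):
    """Vanilla GD, x_{t+1} = x_t - (h_t / L) * grad(x_t), with the
    silver stepsizes h_t in place of a constant stepsize h = 1."""
    x = x0
    for t in range(1, T + 1):
        x = x - (silver_stepsize(t) / L) * grad(x)
    return x

# Illustrative ill-conditioned quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 100.0])   # smoothness L = 100, condition number kappa = 100
grad = lambda x: A @ x
# Horizon T = 2^6 - 1 = 63, matching the schedule's doubling structure.
x_final = gradient_descent(grad, np.array([1.0, 1.0]), L=100.0, T=63)
print(x_final)
```

The schedule alternates many short steps ($h_t = \sqrt{2}$ at every odd $t$) with occasional very long ones ($h_t \approx 35$ at $t = 32$ here); this bet-spreading across step lengths is the "hedging" of the title.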

