IDEAS/STAT Optimization Seminar: “Data-Driven Algorithm Design and Verification for Parametric Convex Optimization”

March 6, 2025 at 12:00 PM - 1:15 PM
Details
Date: March 6, 2025
Time: 12:00 PM - 1:15 PM
Event Category: Seminar, Colloquium
Organizer
IDEAS Center

Venue
Amy Gutmann Hall, Room 414
3333 Chestnut Street
Philadelphia, PA 19104

Zoom link

    https://upenn.zoom.us/j/98220304722


    Abstract
We present computational tools for analyzing and designing first-order methods in parametric convex optimization. These methods are popular for their low per-iteration cost and warm-starting capabilities. However, precisely quantifying the number of iterations required to compute high-quality solutions remains a key challenge, especially in real-time applications.

First, we introduce a numerical framework for verifying the worst-case performance of first-order methods in parametric quadratic optimization. We formulate this as a mixed-integer linear program that maximizes the infinity norm of the fixed-point residual after a given number of iterations. Our approach captures a broad class of gradient, projection, and proximal iterations through affine or piecewise-affine constraints, with strong polyhedral formulations. To improve scalability, we incorporate bound-tightening techniques that exploit operator-theoretic bounds. Numerical results show that our method closely matches true worst-case performance, achieving significant reductions in worst-case fixed-point residuals compared to standard convergence analyses.

Second, we present a data-driven approach for analyzing the performance of first-order methods using statistical learning theory. We establish generalization guarantees for classical optimizers using sample convergence bounds and for learned optimizers using the Probably Approximately Correct (PAC)-Bayes framework. We then apply this framework to learn accelerated first-order methods by directly minimizing the PAC-Bayes bound over key algorithmic parameters (e.g., gradient steps and warm-starts). Numerical experiments demonstrate that our approach provides strong generalization guarantees for both classical and learned optimizers, with statistical bounds that closely match true out-of-sample performance.
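To make the first part concrete, the verification problem can be sketched schematically as follows; the iteration map $T$, the parameter set $\mathcal{X}$, and the initial-iterate set $\mathcal{Z}^0$ are generic placeholders, not the speaker's exact formulation:

\[
\begin{aligned}
\max_{x,\; z^0, \dots, z^K} \quad & \bigl\| z^K - z^{K-1} \bigr\|_\infty \\
\text{s.t.} \quad & z^{k+1} = T(z^k, x), \quad k = 0, \dots, K-1, \\
& x \in \mathcal{X}, \quad z^0 \in \mathcal{Z}^0.
\end{aligned}
\]

Because $T$ is affine or piecewise affine for the gradient, projection, and proximal steps considered, each constraint $z^{k+1} = T(z^k, x)$ can be encoded with linear constraints plus binary variables, which is what turns the worst-case analysis into a mixed-integer linear program.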
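For the second part, the sketch below illustrates a sample-based guarantee for a classical optimizer, assuming a simple projected-gradient method on random parametric QPs and a Hoeffding-style bound. The problem data, step size, and residual range are illustrative assumptions, not the methods or results discussed in the talk.

```python
import numpy as np

def fixed_point_residual(P, q, z0, K, step):
    """Run K projected-gradient steps for min 0.5 z'Pz + q'z s.t. z >= 0
    and return the infinity norm of the final fixed-point residual."""
    z = z0.copy()
    for _ in range(K):
        z_prev = z
        z = np.maximum(z - step * (P @ z + q), 0.0)  # gradient step + projection
    return np.linalg.norm(z - z_prev, ord=np.inf)

def hoeffding_bound(residuals, R, delta):
    """One-sided Hoeffding bound: with probability >= 1 - delta over the
    sample, the expected residual is at most this value, assuming each
    residual lies in [0, R]."""
    n = len(residuals)
    return float(np.mean(residuals) + R * np.sqrt(np.log(1.0 / delta) / (2.0 * n)))

rng = np.random.default_rng(0)
d, K, n_samples = 10, 50, 200
M = rng.standard_normal((d, d))
P = M @ M.T + np.eye(d)            # fixed positive-definite quadratic term
step = 1.0 / np.linalg.norm(P, 2)  # step size from the spectral norm of P
# Each sample is one problem instance: the linear term q plays the parameter.
res = [fixed_point_residual(P, rng.standard_normal(d), np.zeros(d), K, step)
       for _ in range(n_samples)]
R = 1.0  # assumed a priori residual range; in practice it must be justified
print("empirical mean residual:", np.mean(res))
print("high-probability bound (delta = 0.05):", hoeffding_bound(res, R, 0.05))
```

The learned-optimizer case described in the abstract replaces this fixed-algorithm bound with a PAC-Bayes bound that is minimized directly over algorithmic parameters such as gradient steps and warm-starts.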