102 Benedum Hall

Abstract: We consider a convex-concave primal-dual optimization
framework in which the coupling between primal and dual variables is
bilinear. This framework admits linearly constrained optimization
together with a variety of interesting problems in machine learning,
including (linear) empirical risk minimization with various
regularization terms. It also includes a formulation that we term
"generalized linear programming" (GLP), in which regularization terms
and constraints are added to the traditional linear programming
formulation, provided they admit efficient prox operations. Problems
from distributionally robust optimization (DRO), using either
f-divergence metrics or Wasserstein metrics, can be formulated as
GLPs.
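
A minimal sketch of the bilinear saddle-point template described above, under standard conventions (the specific symbols here are illustrative, not taken from the talk):

```latex
\min_{x \in \mathbb{R}^n} \; \max_{y \in \mathbb{R}^m} \;
  f(x) + \langle A x, y \rangle - g(y),
```

where $f$ and $g$ are convex functions admitting efficient prox operations and the matrix $A$ provides the bilinear coupling between the primal variable $x$ and the dual variable $y$. For example, the linear constraint $Ax = b$ is recovered by taking $g(y) = \langle b, y \rangle$, so that maximizing over $y$ enforces feasibility.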

We describe algorithms for our framework that take prox-gradient steps
alternately in the primal and dual variables, but incorporate such
additional features as coordinate descent, variance reduction, dual
averaging, importance sampling, and iterate averaging.  Our methods
can also exploit sparsity in the matrix that couples primal and dual
variables.  Our methods match or improve on the best known worst-case
complexity bounds in various settings. Computational experiments
indicate that our methods also have good practical performance.
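To make the alternating prox-gradient structure concrete, here is a minimal sketch of a generic primal-dual method of this type (a Chambolle-Pock-style iteration) applied to a toy standard-form LP, min c^T x subject to Ax = b, x >= 0, viewed as the saddle point min_x max_y c^T x + y^T (Ax - b). This is a hypothetical illustration of the basic primal-dual template, not the speakers' algorithms, which add coordinate descent, variance reduction, and related features on top.

```python
import numpy as np

# Toy LP instance: b is feasible by construction, and c > 0 keeps the
# objective bounded below on the nonnegative orthant.
rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
x_true = np.abs(rng.standard_normal(n))
b = A @ x_true
c = 1.0 + np.abs(rng.standard_normal(n))

# Step sizes satisfying the standard condition tau * sigma * ||A||^2 < 1.
L = np.linalg.norm(A, 2)
tau = sigma = 0.9 / L

x = np.zeros(n)
y = np.zeros(m)
x_bar = x.copy()
for _ in range(5000):
    # Dual prox-gradient step: ascent in y on the bilinear term minus b^T y.
    y = y + sigma * (A @ x_bar - b)
    # Primal prox-gradient step: descent in x, then projection onto x >= 0
    # (the prox of the linear-plus-indicator primal term).
    x_new = np.maximum(x - tau * (c + A.T @ y), 0.0)
    # Extrapolation of the primal iterate, as in primal-dual hybrid gradient.
    x_bar = 2.0 * x_new - x
    x = x_new

print(float(np.linalg.norm(A @ x - b)))  # feasibility residual, should be small
```

The extrapolated point `x_bar` is what allows the method to take full alternating steps while retaining convergence guarantees; the enhancements mentioned above (importance sampling, dual averaging, iterate averaging) modify which coordinates are updated and how iterates are combined, but keep this same primal-dual skeleton.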

The talk represents joint work with Ahmet Alacaoglu, Jelena
Diakonikolas, Chaobing Song, Eric Lin, and Volkan Cevher.


Event Details

Please let us know if you require an accommodation in order to participate in this event. Accommodations may include live captioning, ASL interpreters, and/or captioned media and accessible documents from recorded events. Requesting accommodations at least 5 days in advance is recommended.

University of Pittsburgh