Department of Automatic Control, Lund University
Thursday, February 08, 2018, 11:00 - 12:00
Room 01-012, Georges-Köhler-Allee 102, 79110 Freiburg, Germany
A plethora of algorithms has been invented or proposed over the last decade for solving large-scale convex optimization problems. Convergence proofs are often obtained on a one-by-one basis, which does not reveal connections between methods. In this talk, we will present a simple and coherent view of many algorithms for large-scale convex optimization. We will show that they can be seen as deterministic, coordinate-wise, or stochastic methods for finding a fixed point of one of two nonexpansive operators, namely the forward-backward operator or the Douglas-Rachford operator. These operators encode in their fixed-point sets the solution to the underlying optimization problem. We will present a simple necessary and sufficient condition for convergence of a sequence to these fixed-point sets. Further, we will present a high-level sufficient condition that reflects how convergence proofs are obtained in practice.
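As a concrete illustration of this fixed-point view (a minimal sketch, not taken from the talk: the problem instance, variable names, and step-size choice below are all assumptions), the classical proximal gradient method for the lasso problem min ½‖Ax − b‖² + λ‖x‖₁ is exactly the deterministic iteration x ← T(x), where T is the forward-backward operator T(x) = prox_{γg}(x − γ∇f(x)). A point is a solution of the optimization problem precisely when it is a fixed point of T:

```python
import numpy as np

# Sketch: forward-backward operator for min f(x) + g(x),
# with f(x) = 0.5*||Ax - b||^2 (smooth) and g(x) = lam*||x||_1.
# T(x) = prox_{gamma*g}(x - gamma*grad_f(x)); fixed points of T solve the problem.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))      # assumed small random instance
b = rng.standard_normal(20)
lam = 0.1

def grad_f(x):
    # Gradient of the smooth part f(x) = 0.5*||Ax - b||^2.
    return A.T @ (A @ x - b)

def prox_g(x, gamma):
    # Proximal operator of gamma*lam*||.||_1: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

# Step size gamma = 1/L (L = Lipschitz constant of grad_f) makes the
# forward-backward operator averaged, hence the iteration converges.
gamma = 1.0 / np.linalg.norm(A.T @ A, 2)

x = np.zeros(5)
for _ in range(2000):                 # deterministic fixed-point iteration x <- T(x)
    x = prox_g(x - gamma * grad_f(x), gamma)

# At convergence x is (numerically) a fixed point: x == T(x).
fixed_point_residual = np.linalg.norm(x - prox_g(x - gamma * grad_f(x), gamma))
print(fixed_point_residual)
```

Coordinate-wise or stochastic variants mentioned in the abstract update only some coordinates of x, or use a stochastic estimate of grad_f, per iteration, while targeting the same fixed-point set of T.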