Quoc Tran-Dinh
Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill
Tuesday, May 19, 2026, 11:00 - 11:59
SR 01-012
In recent years, there has been a renewed surge of interest in theory and algorithms that go beyond standard optimization, including minimax optimization, variational inequality problems (VIPs), and fixed-point problems. This growing attention is largely driven by emerging applications in distributionally robust optimization, adversarial training and learning, and reinforcement learning.
In this talk, we will discuss some recent developments in algorithms for approximating fixed points of nonexpansive operators, a framework that covers various approaches to solving minimax optimization problems and VIPs. Unlike classical methods such as the Banach and Krasnosel'skiĭ-Mann (KM) iterations, our approach incorporates acceleration, delayed inexactness, and stochastic approximation techniques with variance reduction. These enhancements lead to improved theoretical convergence rates and stronger convergence guarantees, while remaining applicable to large-scale and stochastic settings. In particular, whereas classical KM-type methods, including their asynchronous variants, typically achieve an O(1/k) convergence rate, we establish both an O(1/k²) non-asymptotic and an o(1/k²) asymptotic convergence rate in expectation for the squared norm of the residual. We also prove an o(1/k²) almost sure convergence rate, together with the almost sure convergence of the iterates to a solution. Finally, we will present several concrete algorithms and numerical examples to illustrate the proposed methods and demonstrate their practical performance.
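For readers unfamiliar with the iterations named above, the following minimal Python sketch contrasts the classical KM iteration with a Halpern-type (anchored) iteration, a standard acceleration known to achieve the O(1/k²) squared-residual rate. It is not the speaker's algorithm; the toy rotation operator, the step size, and the anchoring schedule beta_k = 1/(k+2) are illustrative assumptions.

```python
import numpy as np

def km_iteration(T, x0, lam=0.5, iters=1000):
    """Classical Krasnosel'skii-Mann iteration for a nonexpansive operator T:
    x_{k+1} = (1 - lam) * x_k + lam * T(x_k).
    The squared residual ||x_k - T(x_k)||^2 decays at the classical O(1/k) rate."""
    x, res = x0, []
    for _ in range(iters):
        Tx = T(x)
        res.append(np.linalg.norm(x - Tx) ** 2)
        x = (1 - lam) * x + lam * Tx
    return x, res

def halpern_iteration(T, x0, iters=1000):
    """Halpern-type (anchored) iteration, a standard accelerated scheme:
    x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k) with beta_k = 1/(k+2),
    for which ||x_k - T(x_k)||^2 = O(1/k^2) is known."""
    x, res = x0, []
    for k in range(iters):
        Tx = T(x)
        res.append(np.linalg.norm(x - Tx) ** 2)
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1 - beta) * Tx
    return x, res

# Toy nonexpansive operator (illustrative assumption): a plane rotation,
# which is 1-Lipschitz and has the origin as its unique fixed point.
theta = 0.5
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: A @ x
x0 = np.array([1.0, 0.0])

_, res_km = km_iteration(T, x0)
_, res_hp = halpern_iteration(T, x0)
print(res_km[-1], res_hp[-1])  # anchored residual decays markedly faster
```

Running the sketch shows the anchored iteration driving the residual down far faster than plain KM on this toy operator, mirroring the O(1/k) versus O(1/k²) gap discussed in the abstract.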