Tuesday, November 30, 2021, 11:00 - 11:59
Building 102 - SR 01-012
We introduce the eXtended Gauss-Newton (XGN) method, which extends the Generalized Gauss-Newton (GGN) method to a class of nonconvex loss functions with a well-behaved global minimum. We show linear local convergence of the XGN method to a stationary point, as well as global convergence in the case of a linear residual function and linear constraints.
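As background (not part of the talk itself), the classical Gauss-Newton iteration that GGN and XGN generalise can be sketched as follows. This is a minimal illustration for a nonlinear least-squares problem min_x 0.5*||r(x)||^2; the residual function and data below are made up for the example.

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Minimise 0.5*||r(x)||^2 via classical Gauss-Newton steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        # Solve the linearised least-squares subproblem J dx ~= -r.
        dx, *_ = np.linalg.lstsq(Jx, -rx, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy example: fit y = exp(a*t) to noiseless data, so the residual
# vanishes at the true parameter and the iteration converges quickly.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
r = lambda a: np.exp(a[0] * t) - y
J = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
a_hat = gauss_newton(r, J, np.array([0.0]))
```

The GGN and XGN methods replace the squared-norm loss with more general (and, for XGN, nonconvex) outer loss functions, while still exploiting the structure of the residual.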
As one possible application, we illustrate how this class of nonconvex loss functions can be used to approximate time-optimal behaviour within a Model Predictive Control (MPC) framework. Further applications include estimation problems with zero-mean mixture models and softmax classification.
Remote participation is possible via Zoom:
Meeting ID: 627 9173 7415