Turnpikes in trajectory optimization: From optimization-based state estimation to recurrent neural network training

Julian Schiller

Leibniz University Hannover - Institute of Automatic Control

Tuesday, December 16, 2025, 11:00

SR 01-012

Trajectory optimization underlies many modern control methods, including model predictive control and moving horizon estimation. The latter, for example, repeatedly solves an optimization problem over a limited window of past measurements to estimate the current, unknown internal state of the system. This raises a fundamental question: How much do we actually lose by looking only at a truncated part of the measurement data? We address this by considering an omniscient infinite-horizon benchmark estimator that has access to all past and future data. We show that this benchmark acts as a turnpike for finite-horizon estimation problems, which ultimately allows us to quantify the effect of truncation on the estimation performance.
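
To make the setup concrete, a generic finite-horizon moving horizon estimation problem can be sketched as follows; the notation (dynamics f, output map h, weights Q and R, prior weighting Γ) follows a standard textbook formulation and is not taken from the talk:

\[
\min_{\hat{x}_{t-N},\dots,\hat{x}_t} \;\; \Gamma(\hat{x}_{t-N}) \;+\; \sum_{k=t-N}^{t-1} \big\| \hat{x}_{k+1} - f(\hat{x}_k, u_k) \big\|_Q^2 \;+\; \sum_{k=t-N}^{t} \big\| y_k - h(\hat{x}_k) \big\|_R^2 ,
\]

where N is the window length. The omniscient benchmark then corresponds to replacing the window t-N, ..., t by the full (infinite) data record, and the turnpike property relates the finite-horizon minimizer to this benchmark trajectory.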

In the second part of this talk, we extend these ideas to the domain of machine learning, focusing in particular on the training of recurrent neural networks (RNNs). RNNs are an active area of research, with applications in natural language processing, text generation, system modeling, and time series forecasting. They are commonly trained using the truncated backpropagation through time algorithm and mini-batch stochastic gradient descent, leveraging both modern programming tools and efficient computations on GPUs. However, truncation can introduce undesirable artifacts into training. By exploiting the similarities with trajectory optimization, we develop practical sufficient conditions for turnpike behavior and derive theoretical bounds on the performance loss caused by truncation. In particular, our analysis shows that the length of the burn-in phase is an important hyper-parameter in RNN training, with a significant impact on the resulting performance. Experiments on standard benchmark data sets from the fields of system identification and time series forecasting confirm our theoretical findings.
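
As a rough illustration of the training scheme described above, the following sketch implements truncated backpropagation through time with a burn-in phase in PyTorch; the GRU architecture, window and burn-in lengths, and synthetic data are purely illustrative assumptions, not the setup used in the talk:

```python
# Minimal sketch of truncated backpropagation through time (TBPTT) with
# mini-batch SGD and a burn-in phase. All hyper-parameters are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

batch, seq_len, n_in, n_hidden = 32, 200, 1, 16
window, burn_in = 50, 10  # truncation window and burn-in length (illustrative)

model = nn.GRU(n_in, n_hidden, batch_first=True)
readout = nn.Linear(n_hidden, 1)
opt = torch.optim.SGD(list(model.parameters()) + list(readout.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

# Synthetic data standing in for a system-identification sequence.
u = torch.randn(batch, seq_len, n_in)   # inputs
y = 0.1 * torch.cumsum(u, dim=1)        # targets from toy dynamics

h = torch.zeros(1, batch, n_hidden)     # hidden state carried across windows
for start in range(0, seq_len - window + 1, window):
    u_win = u[:, start:start + window]
    y_win = y[:, start:start + window]
    h = h.detach()                      # truncate the gradient at the window boundary
    out, h = model(u_win, h)
    pred = readout(out)
    # Only time steps after the burn-in phase contribute to the loss, which
    # mitigates the transient induced by the truncated hidden state.
    loss = loss_fn(pred[:, burn_in:], y_win[:, burn_in:])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Detaching the hidden state at each window boundary is precisely what introduces the truncation artifacts discussed above; masking the first burn_in steps of each window out of the loss is one common way to mitigate them, which is why the burn-in length matters as a hyper-parameter.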