Thursday, June 24, 2021, 10:00 - 11:59
In this talk, I will provide an overview of my thesis, titled "Moving Horizon Estimation for Inertial Motion Tracking - Algorithms and Industrial Applications."
After a short introduction to relevant topics such as state estimation, inertial motion tracking, sensor fusion, and moving horizon estimation, I will present a novel approach that uses moving horizon estimation to estimate and compensate for measurement time delays. Sensor measurements are typically captured at discrete time instants with a fixed sampling time, so many sensor fusion approaches are formulated in discrete time. In loosely coupled sensor fusion setups in particular, unknown processing delays on sensor nodes can degrade the overall estimation performance. The presented approach uses direct collocation to find a continuous-time approximation of the relevant state and control trajectories within the estimation horizon, which allows us to identify the optimal time delay of the sensor measurements. The approach is validated in simulations and experiments, demonstrating that it recovers the desired estimation accuracy.
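The core idea of the delay-compensation step can be illustrated with a toy example. This is a minimal sketch, not the thesis's implementation: a cubic spline stands in for the collocation polynomial, the trajectory, delay value, and sample times are invented, and the measurements are noise-free.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Hypothetical 1-D state trajectory sampled on the estimation horizon.
t = np.linspace(0.0, 1.0, 21)          # horizon grid, 50 ms sampling
x = np.sin(2 * np.pi * t)              # "true" state trajectory

# Continuous-time approximation of the state over the horizon
# (a stand-in for the direct-collocation polynomial).
x_cont = CubicSpline(t, x)

# A sensor reports samples that were actually taken `true_delay` seconds earlier.
true_delay = 0.07
t_meas = np.linspace(0.2, 0.8, 7)
y_meas = np.sin(2 * np.pi * (t_meas - true_delay))

# Find the delay that best explains the measurements, given the
# continuous-time state approximation.
def residual(delay):
    return np.sum((x_cont(t_meas - delay) - y_meas) ** 2)

res = minimize_scalar(residual, bounds=(0.0, 0.2), method="bounded")
print(f"estimated delay: {res.x:.4f} s")  # close to 0.07
```

In the actual estimator, the delay would be an additional decision variable of the moving horizon optimization rather than a separate one-dimensional search, but the principle is the same: a continuous-time trajectory makes the delayed measurement times differentiable quantities that can be optimized over.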
The talk will close by highlighting the results and contributions of the thesis. For more details, the original abstract is included below.
Inertial motion tracking is the task of estimating the relevant navigation states, e.g., position, velocity, and orientation, of a moving system from inertial measurement data. The measurements are provided by an inertial measurement unit (IMU), a collection of sensors that measure linear acceleration and angular velocity along all relevant axes. Thanks to technological advances, such sensors are nowadays widely available for a broad range of applications in robotics, automation, animation, ergonomics, and biomechanics.
Like every sensor, an inertial measurement unit suffers from measurement inaccuracies, which cause the estimates to drift when the measurements are integrated over time to obtain the desired navigation states. To compensate for this accumulating error and obtain highly accurate estimates, information from other sensors is incorporated in a sensor fusion approach. The Kalman filter, including its extensions for nonlinear systems, is the workhorse of state-of-the-art sensor fusion.
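The drift phenomenon and its correction by fusion can be sketched in one dimension. Everything here is hypothetical (a stationary system, an invented accelerometer bias, and a made-up aiding sensor delivering position fixes): dead reckoning drifts quadratically with a constant bias, while a simple linear Kalman filter fusing occasional position fixes keeps the error bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 1000                      # 10 s at 100 Hz
bias, accel_noise = 0.05, 0.02          # assumed accelerometer bias / noise (m/s^2)

# True motion: at rest. Measured acceleration = 0 + bias + noise.
a_meas = bias + accel_noise * rng.standard_normal(n)

# Dead reckoning: double-integrate acceleration -> drifting position.
v = np.cumsum(a_meas) * dt
p_dr = np.cumsum(v) * dt

# Linear Kalman filter fusing the same IMU data with noisy position
# fixes every 50 samples (hypothetical aiding sensor, sigma = 1 cm).
x = np.zeros(2)                         # state: [position, velocity]
P = np.eye(2)
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
B = np.array([0.5 * dt**2, dt])         # acceleration input matrix
Q = np.diag([1e-6, 1e-4])               # process noise (tuning assumption)
H = np.array([[1.0, 0.0]])              # we measure position only
R = np.array([[1e-4]])
p_kf = np.empty(n)
for k in range(n):
    x = F @ x + B * a_meas[k]           # predict with the IMU input
    P = F @ P @ F.T + Q
    if k % 50 == 0:                     # position fix available
        z = 0.01 * rng.standard_normal()
        K = P @ H.T / (H @ P @ H.T + R)
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    p_kf[k] = x[0]

print(f"dead-reckoning drift after 10 s: {p_dr[-1]:.3f} m")
print(f"Kalman-filtered position error:  {abs(p_kf[-1]):.4f} m")
```

With the numbers above, pure integration of the biased accelerometer drifts by roughly 0.5·b·t² ≈ 2.5 m over 10 s, while the aided filter stays within centimeters of the true position.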
In this thesis, we use moving horizon estimation for the sensor fusion problems arising in inertial motion tracking. Moving horizon estimation is an approach to state estimation that applies nonlinear numerical optimization over a window of recent measurements to increase the robustness and accuracy of the estimates. Estimating motion in three-dimensional space is a highly nonlinear problem and therefore well suited for moving horizon estimation. We present and evaluate methods to handle the necessary over-parameterizations of the orientation state in moving horizon estimation. We propose several estimator implementations for inertial motion tracking that use position, velocity, and magnetometer measurements to counteract drifting estimates. Furthermore, we propose new algorithms beyond the current state-of-the-art motion tracking approaches by embedding problems such as sensor calibration and delay compensation into our moving horizon estimation algorithms. The proposed approaches are evaluated and validated by adapting state-of-the-art simulation approaches to our problem and through real-world experiments using high-grade sensing technology.
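The optimization-over-a-window idea can be made concrete with a deliberately simplified example. This sketch is not the thesis's estimator: it uses a scalar random-walk system with invented noise levels and omits the arrival cost that a full moving horizon estimator carries forward from data outside the window.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
N = 10                                  # horizon length (samples)
q, r = 0.05, 0.3                        # process / measurement noise std (assumed)

# Simulate a scalar random-walk state and noisy measurements of it.
T = 60
x_true = np.cumsum(q * rng.standard_normal(T))
y = x_true + r * rng.standard_normal(T)

def mhe_residuals(x_win, y_win):
    # Stack measurement residuals and process (model) residuals, each
    # weighted by its noise level -- the least-squares core of MHE.
    meas = (x_win - y_win) / r
    proc = np.diff(x_win) / q
    return np.concatenate([meas, proc])

# Slide the horizon over the data; keep the newest state of each window.
x_hat = np.zeros(T)
for k in range(T):
    lo = max(0, k - N + 1)
    y_win = y[lo:k + 1]
    sol = least_squares(mhe_residuals, x0=y_win.copy(), args=(y_win,))
    x_hat[k] = sol.x[-1]

rmse_raw = np.sqrt(np.mean((y - x_true) ** 2))
rmse_mhe = np.sqrt(np.mean((x_hat - x_true) ** 2))
print(f"raw measurement RMSE: {rmse_raw:.3f}")
print(f"MHE estimate RMSE:    {rmse_mhe:.3f}")
```

Because every window is solved as a nonlinear least-squares problem, constraints and nonlinear models (such as the unit-norm quaternion parameterization of orientation discussed in the thesis) can be handled directly, which is what distinguishes this framework from recursive Kalman filtering.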
The results of the proposed algorithms demonstrate that moving horizon estimation is not only a valid alternative to traditional Kalman filtering but also a possible way to reduce the dependence on costly calibration and design processes.
Link to the meeting: