Hamilton-Jacobi-Bellman equation

An efficient DP algorithm on a tree-structure for finite horizon optimal control problems

The classical dynamic programming (DP) approach to optimal control problems is based on the characterization of the value function as the unique viscosity solution of a Hamilton-Jacobi-Bellman equation. The DP scheme for the numerical approximation of viscosity solutions of Bellman equations is typically based on a time discretization which is projected onto a fixed state-space grid. The time discretization can be done by a one-step scheme for the dynamics, and the projection onto the grid typically uses local interpolation.
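The scheme described above can be sketched as follows for a one-dimensional finite-horizon problem: march backward in time, and at each grid node minimize, over a discrete control set, the running cost plus the interpolated value at the point reached by a one-step (explicit Euler) discretization of the dynamics. The dynamics `f`, running cost `ell`, terminal cost `g`, and all grid/horizon parameters below are illustrative assumptions, not the specific problem treated in the papers.

```python
import numpy as np

def semi_lagrangian(f, ell, g, x_grid, controls, T, n_steps):
    """Hedged sketch of the classical semi-Lagrangian DP scheme in 1D."""
    dt = T / n_steps
    V = g(x_grid)                      # terminal condition V(T, x) = g(x)
    for _ in range(n_steps):           # march backward in time
        V_new = np.empty_like(V)
        for i, x in enumerate(x_grid):
            # one-step scheme for the dynamics, projected back onto the
            # fixed grid via local (linear) interpolation
            costs = [dt * ell(x, a) +
                     np.interp(x + dt * f(x, a), x_grid, V)
                     for a in controls]
            V_new[i] = min(costs)
        V = V_new
    return V                           # approximation of V(0, .)

# toy example (assumed): dynamics x' = a with |a| <= 1, quadratic costs
x_grid = np.linspace(-2.0, 2.0, 81)
controls = np.linspace(-1.0, 1.0, 11)
V0 = semi_lagrangian(lambda x, a: a,
                     lambda x, a: x**2,
                     lambda x: x**2,
                     x_grid, controls, T=1.0, n_steps=20)
```

Note the two ingredients the abstract names: the one-step scheme (`x + dt * f(x, a)`) and the grid projection (`np.interp`); it is this interpolation step that ties the method to a fixed grid.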

High-order approximation of the finite horizon control problem via a tree structure algorithm

Solving optimal control problems via Dynamic Programming is a difficult task that suffers from the "curse of dimensionality". This limitation has reduced its practical impact in real-world applications, since the construction of numerical methods for nonlinear PDEs in very high dimension is infeasible in practice. Recently, we proposed a new numerical method to compute the value function that avoids the construction of a space grid and the need for interpolation techniques.
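A grid-free scheme of this kind can be sketched as follows: starting from a single initial state, the discrete dynamics branch over a finite control set, producing a tree of reachable nodes; the value function is then computed by a backward sweep on the tree's own nodes, so no state-space grid or interpolation is needed. The dynamics `f`, costs `ell` and `g`, and the node-merging tolerance `tol` (a simple pruning device to limit tree growth) are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def tree_value(f, ell, g, x0, controls, T, n_steps, tol=1e-3):
    """Hedged sketch of a tree-structure DP scheme for a finite-horizon problem."""
    dt = T / n_steps
    levels = [[np.atleast_1d(np.asarray(x0, dtype=float))]]
    children = []                       # children[n][i] -> list of (a, j)
    for _ in range(n_steps):
        nodes, links = [], []
        for x in levels[-1]:
            kid_ids = []
            for a in controls:
                y = x + dt * f(x, a)    # one-step scheme along control a
                # pruning (assumed): merge nodes closer than tol
                j = next((k for k, z in enumerate(nodes)
                          if np.linalg.norm(z - y) < tol), None)
                if j is None:
                    nodes.append(y)
                    j = len(nodes) - 1
                kid_ids.append((a, j))
            links.append(kid_ids)
        levels.append(nodes)
        children.append(links)
    # backward DP sweep on the tree: values live on tree nodes,
    # so no interpolation step is required
    V = [g(x) for x in levels[-1]]
    for n in range(n_steps - 1, -1, -1):
        V = [min(dt * ell(x, a) + V[j] for a, j in children[n][i])
             for i, x in enumerate(levels[n])]
    return V[0]                         # value at (0, x0)

# toy example (assumed): x' = a, a in {-1, 0, 1}, quadratic costs
val = tree_value(lambda x, a: a,
                 lambda x, a: float(x**2),
                 lambda x: float(x**2),
                 x0=1.0, controls=[-1.0, 0.0, 1.0], T=1.0, n_steps=10)
```

Compared with the grid-based scheme, the key design difference is that the backward sweep evaluates the value only at states the dynamics can actually reach, which is what makes the approach viable when a grid in high dimension would be infeasible.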

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma