An efficient DP algorithm on a tree-structure for finite horizon optimal control problems
The classical dynamic programming (DP) approach to optimal control problems is based on the characterization of the value function as the unique viscosity solution of a Hamilton-Jacobi-Bellman equation. The DP scheme for the numerical approximation of viscosity solutions of Bellman equations is typically based on a time discretization which is projected onto a fixed state-space grid. The time discretization is usually obtained by a one-step scheme for the dynamics, while the projection onto the grid relies on a local interpolation operator.
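As an illustration of such a scheme, and with assumed generic notation not fixed in the text above (dynamics $f$, running cost $L$, terminal cost $g$, control set $U$, time step $\Delta t$, grid nodes $x_i$), a standard semi-Lagrangian discretization of the finite horizon problem can be sketched as
\begin{align*}
  V^{N}(x_i) &= g(x_i), \\
  V^{n}(x_i) &= \min_{u \in U}\Big\{ \Delta t\, L(x_i,u,t_n)
               + I\big[V^{n+1}\big]\big(x_i + \Delta t\, f(x_i,u,t_n)\big) \Big\},
               \qquad n = N-1,\dots,0,
\end{align*}
where $I[V^{n+1}]$ denotes a local (e.g. multilinear) interpolation of the grid values, needed because the foot point $x_i + \Delta t\, f(x_i,u,t_n)$ is in general not a grid node.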