In this article, we consider an iterative adaptive dynamic programming (ADP) algorithm within the Hamiltonian-driven framework to solve the Hamilton-Jacobi-Bellman (HJB) equation for the continuous-time infinite-horizon optimal control problem for nonlinear systems. First, a novel function, the "min-Hamiltonian," is defined to capture the fundamental properties of the classical Hamiltonian. It is shown that both the HJB equation and the policy iteration (PI) algorithm can be formulated in terms of the min-Hamiltonian within the Hamiltonian-driven framework. Moreover, we develop an iterative ADP algorithm that accounts for the approximation errors incurred during the policy evaluation step. We then derive a sufficient condition on the iterative value gradient that guarantees closed-loop stability of the equilibrium point as well as convergence to the optimal value. A model-free extension based on an off-policy reinforcement learning (RL) technique is also provided. Finally, numerical results illustrate the efficacy of the proposed framework.
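For context, the following is a minimal sketch of the quantities the abstract refers to, assuming the standard continuous-time affine-in-control setting with dynamics \(\dot{x} = f(x) + g(x)u\) and running cost \(r(x,u)\); this notation is an assumption, not taken from the paper, and the "min-Hamiltonian" is read here as the Hamiltonian minimized over the control:

```latex
% Assumed standard setting: \dot{x} = f(x) + g(x)u, running cost r(x,u).
% Classical Hamiltonian along the dynamics:
H(x, u, \nabla V) = r(x, u) + (\nabla V(x))^{\top} \bigl( f(x) + g(x)\, u \bigr)

% "min-Hamiltonian": the Hamiltonian minimized over the control input
\mathcal{H}(x, \nabla V) = \min_{u} H(x, u, \nabla V)

% HJB equation: the min-Hamiltonian vanishes at the optimal value V^{*}
\mathcal{H}\bigl(x, \nabla V^{*}(x)\bigr) = 0, \qquad V^{*}(0) = 0
```

In this reading, PI alternates policy evaluation, which solves \(H(x, u_i, \nabla V_i) = 0\) for \(V_i\), with policy improvement, \(u_{i+1} = \arg\min_{u} H(x, u, \nabla V_i)\); the sufficient condition mentioned in the abstract would then bound how much error in \(\nabla V_i\) each evaluation step can tolerate while preserving stability and convergence.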
- Approximation Algorithms
- Approximation Error
- Costs
- Dynamic Programming
- Hamilton-Jacobi-Bellman (HJB) Equation
- Hamiltonian-Driven Framework
- Inexact Adaptive Dynamic Programming (ADP)
- Iterative Algorithms
- Mathematical Model
- Optimal Control
- Stability Analysis
Available at: http://works.bepress.com/donald-wunsch/447/