This paper introduces an online reinforcement learning scheme with exploration for distributed approximate optimal control of uncertain nonlinear interconnected systems. The subsystem dynamics, interconnection dynamics, and input gain matrices are approximated using neural network (NN) identifiers with event-based state feedback. A second NN at each subsystem constructs the mapping from states to future reward predictions via reinforcement signals, from which a sequence of approximately optimal distributed control actions is generated. Since the identifiers and controllers at each subsystem require both the local and the neighboring subsystem state vectors when the interconnections are non-zero, a decentralized event-triggering mechanism based on Lyapunov theory is developed to dynamically determine the feedback instants and thereby reduce the communication overhead. Further, a novel strategy is proposed to incorporate exploration into the online control framework using the identifiers, minimizing the overall cost during the learning phase. The effects of network delay are discussed, and finally, simulation results are presented to verify the effectiveness of the proposed controller.
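To illustrate the flavor of the decentralized event-triggering mechanism described above, the sketch below implements a standard state-dependent trigger of the form ||x - x_hat|| > sigma * ||x||, where x_hat is the last transmitted state. This is a minimal illustrative example, not the paper's actual trigger condition; the scalar subsystem, the feedback law u = -x_hat, and the gain `sigma` are all assumptions chosen for clarity.

```python
import numpy as np

def event_trigger(x, x_hat, sigma=0.5):
    """Illustrative event-trigger check: transmit the local state only when
    the measurement error exceeds a state-dependent threshold.
    sigma is a hypothetical design gain, not taken from the paper."""
    return np.linalg.norm(x - x_hat) > sigma * np.linalg.norm(x)

def simulate(steps=200, dt=0.05, sigma=0.5):
    """Simulate a stable scalar subsystem x_dot = -x + u with event-based
    state feedback u = -x_hat; count how many transmissions occur."""
    x = np.array([1.0])
    x_hat = x.copy()          # controller's copy: last event-sampled state
    events = 0
    for _ in range(steps):
        if event_trigger(x, x_hat, sigma):
            x_hat = x.copy()  # event: transmit current state to controller
            events += 1
        u = -x_hat            # feedback uses only the event-sampled state
        x = x + dt * (-x + u) # forward-Euler step of the subsystem
    return events, float(abs(x[0]))

events, final_x = simulate()
print(events, final_x)
```

Because the trigger fires only when the measurement error grows large relative to the current state, the number of transmissions is far smaller than the number of simulation steps while the state still converges toward the origin, which is the communication-saving effect the abstract refers to.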
- Controllers,
- Distributed parameter control systems,
- Optimal control systems,
- Reinforcement learning,
- State feedback,
- Communication overhead,
- Decentralized event-triggering,
- Distributed control,
- Lyapunov theory,
- Neural networks (NN),
- Nonlinear interconnected systems,
- Online control,
- Reinforcement signals,
- Feedback,
- Event-sampled control,
- Exploration
Available at: http://works.bepress.com/jagannathan-sarangapani/146/