The problem of a constrained Markov decision process (CMDP) is considered. An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs (the number of constraints is relatively small). A new dual approach is proposed that integrates two ingredients: an entropy-regularized policy optimizer and Vaidya's dual optimizer, both of which are critical to achieving faster convergence. A finite-time error bound for the proposed approach is provided. Despite the challenge of a nonconcave objective subject to nonconcave constraints, the proposed approach is shown to converge at a linear rate to the global optimum. The resulting complexity, expressed in terms of the optimality gap and the constraint violation, significantly improves upon existing primal-dual approaches.
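For concreteness, the dual approach described above can be stated in the standard CMDP form sketched below. This is an illustrative formulation only, not the paper's own notation: the symbols (reward $r$, costs $c_i$, thresholds $b_i$, discount factor $\gamma$, multipliers $\lambda$, entropy weight $\tau$) are assumptions chosen here for exposition.

```latex
% Standard CMDP formulation (illustrative; notation assumed, not taken from the paper).
\begin{align}
  \max_{\pi} \;\; & V_r(\pi) = \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\Big]
    && \text{(discounted reward)} \\
  \text{s.t.} \;\; & V_{c_i}(\pi) = \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t} c_i(s_t, a_t)\Big] \le b_i,
    && i = 1, \dots, m .
\end{align}
% Lagrangian dual, with entropy regularization of the inner (policy) problem:
\begin{equation}
  \min_{\lambda \ge 0} \; \max_{\pi} \;
  V_r(\pi) - \sum_{i=1}^{m} \lambda_i \big( V_{c_i}(\pi) - b_i \big)
  + \tau \, \mathcal{H}(\pi) .
\end{equation}
% The inner maximization is handled by an entropy-regularized policy optimizer,
% while the outer minimization over the low-dimensional multipliers \lambda
% is performed with Vaidya's cutting-plane method.
```

In this reading, the small number of constraints matters because Vaidya's method operates on the low-dimensional dual variable $\lambda \in \mathbb{R}^m$, which is where the fast (linear-rate) convergence of the outer loop comes from.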
- Machine learning,
- Constrained Markov decision process,
- Discounted reward,
- Dual approach,
- Error bound,
- Fast convergence,
- Finite-time,
- Linear convergence,
- Linear rate,
- Multiple constraints,
- Optimizers,
- Markov processes,
- Machine Learning (cs.LG),
- Optimization and Control (math.OC)
IR deposit conditions: not described
Preprint available on arXiv