Decision Processes with Total-Cost Criteria
The Annals of Probability
  • Theodore P. Hill, Georgia Institute of Technology - Main Campus
  • Steven Demko, Georgia Institute of Technology - Main Campus
Publication Date
1-1-1981
Abstract

By a decision process is meant a pair (X,Γ), where X is an arbitrary set (the state space), and Γ associates to each point x in X an arbitrary nonempty collection of discrete probability measures (actions) on X. In a decision process with nonnegative costs depending on the current state, the action taken, and the following state, there is always available a Markov strategy which uniformly (nearly) minimizes the expected total cost. If the costs are strictly positive and depend only on the current state, there is even a stationary strategy with the same property. In a decision process with a fixed goal g in X, there is always a stationary strategy which uniformly (nearly) minimizes the expected time to the goal, and, if X is countable, such a stationary strategy exists which also (nearly) maximizes the probability of reaching the goal.
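As an illustration only (not from the paper), the countable-state case with the expected-time-to-goal criterion can be sketched in code: a toy state space X, a map Γ assigning each state a finite set of actions (discrete probability measures on X), and value iteration to recover a stationary strategy that is (nearly) optimal. All states, actions, and transition probabilities below are hypothetical examples chosen for the sketch.

```python
# Toy decision process (X, Gamma): states 0..4, goal g = 0.
# Gamma maps each state to a list of actions; each action is a discrete
# probability measure on X, written as {next_state: probability}.
# These particular states and measures are illustrative assumptions.
GOAL = 0
GAMMA = {
    0: [{0: 1.0}],                      # the goal is absorbing
    1: [{0: 0.5, 2: 0.5}, {0: 0.9, 4: 0.1}],
    2: [{1: 1.0}, {0: 0.3, 3: 0.7}],
    3: [{2: 1.0}],
    4: [{3: 0.5, 1: 0.5}],
}

def nearly_optimal_stationary_strategy(gamma, goal, iters=200):
    """Value-iterate V(x) toward the minimal expected time to the goal
    (cost 1 per step), then read off a stationary strategy: one fixed
    action per state achieving the minimum in the fixed-point equation."""
    # Start from V = 0 and iterate upward; for nonnegative costs this
    # converges to the minimal solution of the optimality equation.
    V = {x: 0.0 for x in gamma}
    for _ in range(iters):
        V = {
            x: 0.0 if x == goal else
            min(1.0 + sum(p * V[y] for y, p in a.items()) for a in gamma[x])
            for x in gamma
        }
    # Stationary strategy: in each state, always play a one-step-greedy
    # action with respect to the (approximate) value function V.
    strategy = {
        x: min(gamma[x],
               key=lambda a: sum(p * V[y] for y, p in a.items()))
        for x in gamma
    }
    return V, strategy

V, strategy = nearly_optimal_stationary_strategy(GAMMA, GOAL)
```

In this toy example the strategy is stationary in the paper's sense: the action played depends only on the current state, not on the history or the stage. Since value iteration is truncated at `iters` steps, the resulting strategy is only (nearly) optimal, matching the hedged guarantee in the abstract.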

Citation Information
Theodore P. Hill and Steven Demko. "Decision Processes with Total-Cost Criteria." The Annals of Probability, Vol. 9, Iss. 2 (1981), pp. 293-301.
Available at: http://works.bepress.com/tphill/15/