We study a class of stochastic programs in which some of the elements of the objective function are random and their probability distribution has unknown parameters. The goal is to find a good estimate for the optimal solution of the stochastic program using data sampled from the distribution of the random elements. We investigate two common optimization criteria for evaluating the quality of a solution estimator: one based on the difference in objective values, and the other based on the Euclidean distance between solutions. We define the risk of an estimator as the expected value of such a criterion over the sample space. Under a Bayesian framework, where a prior distribution is assumed for the unknown parameters, two natural estimation-optimization strategies arise. A separate scheme first finds an estimator for the unknown parameters and then uses this estimator in the optimization problem. A joint scheme combines the estimation and optimization steps by directly adjusting the distribution in the stochastic program. We analyze the risk difference between the solutions obtained from these two schemes for several classes of stochastic programs, while providing insight into the computational effort required to solve these problems.
Available at: http://works.bepress.com/danial-davarnia/3/
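The two schemes described above can be illustrated on a toy instance. The following sketch is not from the paper; it assumes a simple quadratic-loss stochastic program min_x E[(x - ξ)²] with ξ ~ N(μ, σ²), an unknown mean μ with a conjugate normal prior, and the distance-based criterion ‖x̂ − x*(μ)‖². The separate scheme plugs the maximum-likelihood estimate (the sample mean) into the optimization; the joint scheme optimizes against the posterior predictive distribution, whose mean is the posterior mean of μ. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, tau2, mu0 = 1.0, 1.0, 0.0   # likelihood variance, prior variance/mean (assumed)
n, trials = 5, 20000                # sample size, Monte Carlo replications

risk_sep = risk_joint = 0.0
for _ in range(trials):
    mu = rng.normal(mu0, np.sqrt(tau2))             # draw true parameter from the prior
    data = rng.normal(mu, np.sqrt(sigma2), size=n)  # observed sample
    xbar = data.mean()

    # Separate scheme: estimate mu first (MLE = sample mean), then optimize;
    # for quadratic loss the optimal solution is x*(mu) = mu, so x_sep = xbar.
    x_sep = xbar

    # Joint scheme: optimize against the posterior predictive distribution;
    # its mean is the conjugate-normal posterior mean of mu.
    x_joint = (n * xbar / sigma2 + mu0 / tau2) / (n / sigma2 + 1.0 / tau2)

    # Distance-based criterion: squared error to the true optimal solution mu.
    risk_sep += (x_sep - mu) ** 2
    risk_joint += (x_joint - mu) ** 2

risk_sep /= trials
risk_joint /= trials
print(f"separate-scheme risk: {risk_sep:.4f}")
print(f"joint-scheme risk:    {risk_joint:.4f}")
```

In this conjugate setting the Bayes risk of the separate scheme is σ²/n = 0.2 and that of the joint scheme is 1/(n/σ² + 1/τ²) ≈ 0.167, so the Monte Carlo estimate shows the joint scheme achieving strictly lower risk, consistent with the abstract's comparison of the two strategies.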