We study the classical multi-stage stochastic decision problem in which the exact distribution of the uncertainty is unknown. To capture distributional uncertainty over the entire planning horizon, we adopt an ambiguity set defined by a probability distance metric and model the problem within the conventional framework of distributionally robust optimization. Using the Kullback-Leibler (KL) divergence as the underlying probability distance metric, we demonstrate that solving such a problem is computationally demanding. We then introduce the robust satisficing framework, which employs a target-driven approach to accommodate all possible probability distributions. Under this framework, we show that the multi-stage stochastic decision problem can be reformulated as a series of dynamic programming problems, and that the optimal policy can be determined through Bellman-type backward induction. We further extend the analysis from the KL divergence to general utility-based probability distances and obtain analogous theoretical results. We then apply our approach to two operations management problems: the joint inventory-pricing problem and the capacity expansion problem. In both applications, we establish the optimality of a base-stock policy under the robust satisficing framework. Finally, our numerical results demonstrate the promising performance of our approach in effectively handling uncertain scenarios and attaining predefined targets.
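The KL divergence underlying the ambiguity set can be illustrated with a minimal sketch (not the paper's model, just the standard definition for discrete distributions): given a nominal distribution Q and a candidate distribution P on the same finite support, D(P || Q) = Σᵢ pᵢ log(pᵢ / qᵢ), which is nonnegative and zero only when P = Q.

```python
import math

def kl_divergence(p, q):
    """KL divergence D(P || Q) for discrete distributions on a shared support.

    Terms with p_i = 0 contribute 0 by the convention 0 * log(0) = 0;
    assumes q_i > 0 wherever p_i > 0 (P absolutely continuous w.r.t. Q).
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A candidate distribution close to the nominal one incurs a small divergence;
# an ambiguity set of radius r would contain all P with kl_divergence(p, q) <= r.
q_nominal = [0.9, 0.1]
p_candidate = [0.5, 0.5]
d = kl_divergence(p_candidate, q_nominal)  # positive, since P differs from Q
```

Note that the KL divergence is asymmetric: D(P || Q) generally differs from D(Q || P), which is one reason the choice of nominal distribution matters when defining the ambiguity set.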