Robust decomposable Markov decision processes motivated by allocating school budgets
Authors
Dimitrov, Nedialko B.
Dimitrov, Stanko
Chukova, Stefanka
Subjects
Markov processes
Dynamic programming-optimal control
School funding
Date of Issue
2014
Publisher
Monterey, California. Naval Postgraduate School
Abstract
Motivated by an application to school funding, we introduce the notion of a robust decomposable Markov decision process (MDP). A robust decomposable MDP model applies to situations where several MDPs, with the transition probabilities in each known only through an uncertainty set, are coupled together by joint resource constraints. Robust decomposable MDPs differ from both decomposable MDPs and robust MDPs, and cannot be solved by a direct application of the solution methods from either of those areas. In fact, to the best of our knowledge, there is no known method to tractably compute optimal policies in robust decomposable MDPs. We show how to tractably compute good policies for this model, and apply the derived method to a stylized school funding example.
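The paper's method is not reproduced on this page. As background for the "robust" half of the model, the sketch below shows plain robust value iteration for a single small MDP whose transition probabilities are known only through per-entry interval bounds (an assumed uncertainty-set shape); the joint resource constraints that couple several such MDPs in the paper are not modeled here, and all function names and numbers are hypothetical illustrations.

```python
# Illustrative sketch only: robust value iteration for ONE small MDP with an
# interval uncertainty set on transition probabilities. The paper couples
# several such MDPs through joint resource constraints; that coupling is
# deliberately omitted here.

def worst_case_expectation(V, lo, hi):
    """Minimize sum_j p_j * V_j over {p : lo <= p <= hi, sum(p) = 1}.

    Greedy solution of this small LP: start every entry at its lower bound,
    then give the remaining probability mass to next states in increasing
    order of value (the adversary loads mass onto low-value states).
    """
    p = list(lo)
    slack = 1.0 - sum(lo)
    for j in sorted(range(len(V)), key=lambda j: V[j]):
        add = min(hi[j] - lo[j], slack)
        p[j] += add
        slack -= add
        if slack <= 0:
            break
    return sum(pj * vj for pj, vj in zip(p, V))


def robust_value_iteration(n_states, actions, rewards, p_lo, p_hi,
                           gamma=0.9, tol=1e-8, max_iter=10_000):
    """Max over actions, worst case over transitions in the interval set.

    rewards[a][s]           : immediate reward for action a in state s
    p_lo[a][s], p_hi[a][s]  : per-next-state probability bounds (lists)
    """
    V = [0.0] * n_states
    for _ in range(max_iter):
        V_new = []
        for s in range(n_states):
            best = float("-inf")
            for a in actions:
                worst = worst_case_expectation(V, p_lo[a][s], p_hi[a][s])
                best = max(best, rewards[a][s] + gamma * worst)
            V_new.append(best)
        if max(abs(u - v) for u, v in zip(V_new, V)) < tol:
            return V_new
        V = V_new
    return V
```

With degenerate intervals (lower bound equal to upper bound) the inner minimization is trivial and the routine reduces to ordinary value iteration, which is a useful sanity check when experimenting with the uncertainty set.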
Type
Article
Department
Operations Research
Organization
Naval Postgraduate School (U.S.)
Sponsors
Stanko Dimitrov would like to acknowledge the funding he received from the Natural Sciences and Engineering Research Council of Canada (NSERC) that partially supported his work on this manuscript.
Format
30 p.
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.