Symbolic Dynamic Programming for Discrete and Continuous State MDPs

Metadata

Date

February 14, 2012

Identifier

10670/1.eaxp4a

Source

arXiv

Collection

arXiv

Organization

Cornell University


Keywords

Computer Science - Artificial Intelligence

Cite this document

Scott Sanner et al., « Symbolic Dynamic Programming for Discrete and Continuous State MDPs », arXiv, ID : 10670/1.eaxp4a



Abstract

Many real-world decision-theoretic planning problems can be naturally modeled with discrete and continuous state Markov decision processes (DC-MDPs). While previous work has addressed automated decision-theoretic planning for DC-MDPs, optimal solutions have so far been defined only for limited settings, e.g., DC-MDPs with hyper-rectangular piecewise linear value functions. In this work, we extend symbolic dynamic programming (SDP) techniques to provide optimal solutions for a vastly expanded class of DC-MDPs. To address the inherent combinatorial aspects of SDP, we introduce the XADD, a continuous-variable extension of the algebraic decision diagram (ADD), which maintains compact representations of the exact value function. Empirically, we demonstrate an implementation of SDP with XADDs on various DC-MDPs, showing the first optimal automated solutions to DC-MDPs with linear and nonlinear piecewise partitioned value functions and demonstrating the advantages of constraint-based pruning for XADDs.
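
To make the XADD idea concrete, the sketch below (Python, illustrative only, not the authors' implementation) shows a minimal decision-diagram-style representation of a piecewise value function: internal nodes test inequalities over continuous state variables, and leaves hold value expressions. All names and the evaluation routine are assumptions made for exposition; a real XADD additionally supports symbolic operations such as sum, product, and maximization, canonical node ordering and sharing, and the constraint-based pruning mentioned in the abstract, none of which are shown here.

# Minimal illustrative sketch of an XADD-like piecewise value function
# (assumed structure for exposition; not the paper's implementation).
from dataclasses import dataclass
from typing import Callable, Dict, Union

State = Dict[str, float]

@dataclass
class Leaf:
    # A leaf holds a value expression evaluated at a concrete state.
    expr: Callable[[State], float]

@dataclass
class Decision:
    # An internal node tests an inequality over continuous state variables.
    test: Callable[[State], bool]
    high: "Node"  # branch taken when the test holds
    low: "Node"   # branch taken otherwise

Node = Union[Leaf, Decision]

def evaluate(node: Node, state: State) -> float:
    # Follow decisions down to a leaf, then evaluate its expression.
    while isinstance(node, Decision):
        node = node.high if node.test(state) else node.low
    return node.expr(state)

# Piecewise linear value function over one continuous variable:
#   V(x) = 2x  if x <= 5,  else 10.
value = Decision(
    test=lambda s: s["x"] <= 5.0,
    high=Leaf(expr=lambda s: 2.0 * s["x"]),
    low=Leaf(expr=lambda s: 10.0),
)

print(evaluate(value, {"x": 3.0}))  # 6.0
print(evaluate(value, {"x": 7.0}))  # 10.0

In this toy form the diagram is just a binary tree over a single test; the compactness the paper targets comes from sharing subdiagrams and pruning branches whose decision constraints are jointly unsatisfiable.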
