Stochastic Systems Group
Dr. Terran Lane
MIT Artificial Intelligence Lab
Stochastic planning based on the Markov Decision Process (MDP) formalism is attractive because it offers principled, optimal responses to certain classes of noisy control environments. Unfortunately, the strongest planning methods we have for MDPs do not scale well to the exponential state spaces that are often of interest to AI and robotics researchers. Deterministic planning methods, on the other hand, often scale very well but may be fragile in the face of controller noise. In this talk, we present a class of hybrid techniques that employ both the MDP formalism, to model low-level controller noise, and deterministic graph planning algorithms, to reason about long-range interactions. We demonstrate our method on a set of package delivery and location monitoring problems that arise in mobile robotics. The MDP formulations of these problems are exponentially large in the number of delivery locations, but the "hard" part of the space can be reduced to a polynomially large graph and treated as a relative of the traveling salesdroid problem (TSP). While the TSP is itself NP-hard, an enormous amount of effort has been dedicated to heuristics and approximation techniques for it, and we find that cheap methods perform well in our examples. In empirical investigations, we find that the TSP solution is close to the optimal MDP solution for tiny (i.e., directly solvable) MDP instances and gives reasonable results quickly for instances that are utterly intractable for direct MDP solution. Finally, we give conditions under which it is justified to perform the reduction from MDP to deterministic graph, and show that these conditions can be verified with tractable, closed-form methods.
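To make the hybrid idea concrete, the following is a minimal, illustrative sketch (not the speaker's implementation): collapse the exponential MDP into a polynomial-size graph whose edge weights stand in for expected travel costs between delivery locations, then order the visits with cheap TSP heuristics (here, nearest-neighbor construction followed by 2-opt improvement). The cost matrix and helper names are hypothetical.

```python
def tour_cost(tour, cost):
    """Total cost of visiting `tour` in order, returning to the start."""
    return sum(cost[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbor_tour(cost, start=0):
    """Greedy construction: always move to the cheapest unvisited location."""
    n = len(cost)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: cost[tour[-1]][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, cost):
    """Local improvement: reverse a segment whenever that lowers the tour cost."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_cost(cand, cost) < tour_cost(tour, cost):
                    tour, improved = cand, True
    return tour

# Hypothetical expected travel costs between four delivery locations,
# e.g. estimated from low-level MDP policies for point-to-point navigation.
cost = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
tour = two_opt(nearest_neighbor_tour(cost), cost)
```

Because the graph has only polynomially many nodes (one per delivery location), these heuristics run in time polynomial in the number of locations, sidestepping the exponential MDP state space entirely.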