Stochastic Systems Group

SSG Seminar Abstract


Hierarchical Bayesian Methods for Reinforcement Learning

David Wingate
Computational Cognitive Science, MIT


Designing autonomous agents capable of coping with the complexity of the real world is a tremendous engineering challenge. Such agents must often deal with rich observations (such as images), unknown dynamics, and complex structure, perhaps consisting of objects, their properties and types, and their dynamical interactions. An ability to learn from experience and to generalize radically to new situations is essential; at the same time, the agent may bring substantial prior knowledge to bear on the environment it finds itself in.

In this talk, I will present recent work on the combination of reinforcement learning and nonparametric Bayesian modeling. Hierarchical Bayes provides a principled framework for incorporating prior knowledge and dealing explicitly with uncertainty, while reinforcement learning provides a framework for making sequential decisions under uncertainty. I will discuss how nonparametric Bayesian models can help answer two questions: 1) how can an agent learn a representation of the state space in a structured domain, and 2) how can an agent learn to search for good control laws in spaces that are hard to search?
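On the state-representation question, the key nonparametric ingredient is a prior that lets the number of latent states grow with the data rather than being fixed in advance. The sketch below draws a partition from a Chinese Restaurant Process, the standard such prior; it is a generic illustration of the idea, not one of the specific models in the talk, and the function name and parameters are invented for this example.

    import random

    def crp_partition(num_points, alpha, seed=0):
        """Draw a random partition of num_points items from a Chinese
        Restaurant Process with concentration alpha. Larger alpha
        tends to open more clusters (candidate latent states)."""
        rng = random.Random(seed)
        assignments = []       # cluster index assigned to each item
        cluster_sizes = []     # current number of items in each cluster
        for n in range(num_points):
            # Existing cluster k is chosen with prob. size_k / (n + alpha);
            # a brand-new cluster is opened with prob. alpha / (n + alpha).
            weights = cluster_sizes + [alpha]
            r = rng.uniform(0, sum(weights))
            cum = 0.0
            for k, w in enumerate(weights):
                cum += w
                if r <= cum:
                    break
            if k == len(cluster_sizes):   # new cluster opened
                cluster_sizes.append(1)
            else:
                cluster_sizes[k] += 1
            assignments.append(k)
        return assignments

    print(crp_partition(20, alpha=1.0))

Because the number of clusters is unbounded a priori, the same machinery can be used to infer how many latent states a structured domain actually needs, instead of committing to that number up front.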
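On the policy-search question, a standard textbook illustration of how an explicit posterior can guide search is Thompson sampling on a Bernoulli bandit: maintain a Beta posterior over each action's payoff rate and act greedily with respect to a posterior sample, so that uncertainty itself drives exploration. Again, this is a hedged sketch of the general idea rather than the talk's method, and all names below are illustrative.

    import random

    def thompson_bandit(true_rates, num_pulls, seed=0):
        """Thompson sampling on a Bernoulli bandit: sample a payoff
        rate from each arm's Beta posterior and pull the arm whose
        sample is highest, then update that arm's posterior."""
        rng = random.Random(seed)
        # Beta(1, 1) prior per arm, stored as (alpha, beta) counts.
        alpha = [1] * len(true_rates)
        beta = [1] * len(true_rates)
        total_reward = 0
        for _ in range(num_pulls):
            samples = [rng.betavariate(alpha[i], beta[i])
                       for i in range(len(true_rates))]
            arm = max(range(len(true_rates)), key=lambda i: samples[i])
            reward = 1 if rng.random() < true_rates[arm] else 0
            alpha[arm] += reward
            beta[arm] += 1 - reward
            total_reward += reward
        return total_reward, alpha, beta

    print(thompson_bandit([0.2, 0.5, 0.8], num_pulls=1000))
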

I will illustrate these concepts with applications including modeling neural spike-train data, causal sound source separation, and optimal control in high-dimensional, simulated robotic environments.


