Stochastic Systems Group

SSG Seminar Abstract


Loop Series and Bethe Variational Bounds in Attractive Graphical Models

and

Hierarchical Dirichlet Process Hidden Markov Trees for Multiscale Image Analysis

Prof. Erik Sudderth
Brown University


Loop Series and Bethe Variational Bounds in Attractive Graphical Models

Erik Sudderth (joint work with Martin Wainwright and Alan Willsky)

Variational methods are frequently used to approximate or bound the partition or likelihood function of a Markov random field. Methods based on mean field theory are guaranteed to provide lower bounds, whereas certain types of convex relaxations provide upper bounds. Loopy belief propagation (BP) often provides accurate approximations, but not guaranteed bounds. We prove that for a class of attractive binary models, the so-called Bethe approximation associated with any fixed point of loopy BP always lower bounds the true likelihood. Empirically, this bound is much tighter than the naive mean field bound, and requires no more computation than running BP itself. We establish these lower bounds using a loop series expansion due to Chertkov and Chernyak, which we show can be derived as a consequence of the tree reparameterization characterization of BP fixed points. We also briefly discuss generalizations to other families of attractive graphical models.
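
As a small illustration of the quantities in this abstract, the sketch below builds a four-node attractive binary Ising cycle, runs loopy BP to a fixed point, and compares the resulting Bethe approximation to the exact log partition function obtained by enumeration. This is a minimal sketch, not the authors' code: the graph, coupling strengths, and parallel update schedule are assumptions chosen for illustration. For a single attractive cycle the loop series has one nonnegative correction term, so the Bethe value is expected to lower bound the exact value, consistent with the talk's result.

```python
import itertools
import numpy as np

# A four-node attractive binary Ising cycle (couplings theta_ij >= 0).
# States take values in {-1, +1}; all parameters are illustrative.
rng = np.random.default_rng(0)
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
theta_edge = {e: 0.5 * rng.random() for e in edges}  # nonnegative -> attractive
theta_node = rng.normal(scale=0.2, size=n)
states = np.array([-1.0, 1.0])

def psi_node(i):
    return np.exp(theta_node[i] * states)                     # shape (2,)

def psi_edge(e):
    return np.exp(theta_edge[e] * np.outer(states, states))  # shape (2, 2)

# Exact log partition function by enumerating all 2^n configurations.
logZ = -np.inf
for x in itertools.product([-1.0, 1.0], repeat=n):
    energy = sum(theta_node[i] * x[i] for i in range(n))
    energy += sum(theta_edge[(i, j)] * x[i] * x[j] for (i, j) in edges)
    logZ = np.logaddexp(logZ, energy)

# Loopy BP with parallel message updates (converges easily on this small cycle).
nbrs = {i: [] for i in range(n)}
for (i, j) in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)
directed = edges + [(j, i) for (i, j) in edges]
msgs = {d: np.ones(2) / 2 for d in directed}
for _ in range(500):
    new = {}
    for (i, j) in directed:
        pre = psi_node(i).copy()
        for k in nbrs[i]:
            if k != j:
                pre *= msgs[(k, i)]
        pe = psi_edge((i, j)) if (i, j) in theta_edge else psi_edge((j, i)).T
        m = pe.T @ pre                    # marginalize over x_i
        new[(i, j)] = m / m.sum()
    msgs = new

def node_belief(i):
    b = psi_node(i).copy()
    for k in nbrs[i]:
        b *= msgs[(k, i)]
    return b / b.sum()

def edge_belief(i, j):
    bi = psi_node(i).copy()
    for k in nbrs[i]:
        if k != j:
            bi *= msgs[(k, i)]
    bj = psi_node(j).copy()
    for k in nbrs[j]:
        if k != i:
            bj *= msgs[(k, j)]
    b = psi_edge((i, j)) * np.outer(bi, bj)
    return b / b.sum()

# Bethe approximation: log Z_Bethe = H_Bethe - U_Bethe, where
# H_Bethe = sum_ij H(tau_ij) - sum_i (d_i - 1) H(tau_i).
avg_energy, entropy = 0.0, 0.0
for (i, j) in edges:
    t = edge_belief(i, j)
    avg_energy -= np.sum(t * np.log(psi_edge((i, j))))
    entropy -= np.sum(t * np.log(t))
for i in range(n):
    t = node_belief(i)
    avg_energy -= np.sum(t * np.log(psi_node(i)))
    entropy += (len(nbrs[i]) - 1) * np.sum(t * np.log(t))
logZ_bethe = entropy - avg_energy

print(f"exact log Z: {logZ:.4f}")
print(f"Bethe log Z: {logZ_bethe:.4f}  (lower bound for this attractive cycle)")
```

In contrast to the naive mean field bound, this Bethe value comes for free from the BP messages themselves, which is the practical appeal emphasized in the abstract.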



Hierarchical Dirichlet Process Hidden Markov Trees for Multiscale Image Analysis

Erik Sudderth (joint work with Jyri Kivinen and Michael Jordan)

We describe hierarchical, nonparametric Bayesian models of multiscale image representations. Individual wavelet coefficients or features are marginally distributed as Dirichlet process (DP) mixtures, yielding the heavy-tailed marginals characteristic of natural images. The hidden assignments of features to clusters are then globally linked via a tree-structured graphical model. The resulting multiscale stochastic process automatically adapts to the varying complexity of different datasets, and captures global, highly non-Gaussian statistical properties of natural images. This hierarchical Dirichlet process hidden Markov tree (HDP-HMT) framework extends prior work on hidden Markov trees, local Gaussian scale mixtures, and HDP hidden Markov models. By truncating the potentially infinite set of hidden states, we develop Monte Carlo methods which exploit belief propagation for efficient learning from large datasets. Our results show that the HDP-HMT captures interesting structure in natural scenes, and leads to effective algorithms for image categorization and denoising.
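
To illustrate the claim that Dirichlet process mixture marginals are heavy tailed, the following sketch draws wavelet-coefficient-like samples from a truncated stick-breaking DP mixture of zero-mean Gaussians with cluster-specific scales (a Gaussian scale mixture), and compares its excess kurtosis to that of a single Gaussian. The truncation level, concentration parameter, and lognormal base measure over scales are assumptions chosen for illustration, not the talk's settings.

```python
import numpy as np

# Truncated stick-breaking DP mixture of zero-mean Gaussians.
# Marginally a Gaussian scale mixture: heavy-tailed, like natural-image
# wavelet coefficients. K, alpha, and the base measure are illustrative.
rng = np.random.default_rng(1)
K, alpha, n = 20, 1.0, 200_000

# Stick-breaking weights: beta_k ~ Beta(1, alpha),
# pi_k = beta_k * prod_{j<k} (1 - beta_j), renormalized after truncation.
betas = rng.beta(1.0, alpha, size=K)
sticks = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
pi = betas * sticks
pi /= pi.sum()

# Each cluster is a zero-mean Gaussian with its own scale from the base measure.
scales = rng.lognormal(mean=0.0, sigma=1.0, size=K)

# Sample coefficients: choose a cluster, then draw from its Gaussian.
z = rng.choice(K, size=n, p=pi)
x = rng.normal(0.0, scales[z])

# Excess kurtosis: 0 for a Gaussian, > 0 for the heavy-tailed mixture.
def excess_kurtosis(v):
    v = v - v.mean()
    return np.mean(v**4) / np.mean(v**2) ** 2 - 3.0

print(f"DP-mixture excess kurtosis: {excess_kurtosis(x):.2f}")
print(f"Gaussian   excess kurtosis: {excess_kurtosis(rng.normal(size=n)):.2f}")
```

The HDP-HMT goes beyond this marginal picture by linking the cluster assignments across scales with a tree-structured Markov model, so that dependencies between a coefficient and its parent are also captured.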


