Approximate inference in Markov Random Fields

a) Walk-sum analysis of Gaussian MRF
We use the walk-sum interpretation of inference in Gaussian models, developed by Johnson, to analyze Gaussian loopy belief propagation (LBP). This framework provides an appealing graph-theoretic interpretation of Gaussian LBP as summing the weights of certain walks over the graph. Using this analysis we have derived the tightest known conditions for the convergence of Gaussian LBP.
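
A concrete way to see the condition involved (a minimal numerical sketch of my own, not code from this work): normalizing the information matrix as J = I - R, where R collects the partial correlations on the edges, the model is walk-summable when the spectral radius of the elementwise absolute matrix |R| is below one, and walk-summability guarantees convergence of Gaussian LBP.

    # Minimal sketch: check the walk-summability condition rho(|R|) < 1,
    # which is sufficient for Gaussian LBP to converge.
    import numpy as np

    def is_walk_summable(J):
        """J: symmetric positive-definite information matrix of a GMRF."""
        d = np.sqrt(np.diag(J))
        R = np.eye(J.shape[0]) - J / np.outer(d, d)          # normalized J = I - R
        rho = np.max(np.abs(np.linalg.eigvalsh(np.abs(R))))  # spectral radius of |R|
        return rho < 1.0

    # Toy 3-cycle: rho(|R|) = 0.8 < 1, so LBP converges on this model.
    J = np.array([[1.0, 0.4, 0.4],
                  [0.4, 1.0, 0.4],
                  [0.4, 0.4, 1.0]])
    print(is_walk_summable(J))  # True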

Collaborators: Jason Johnson and Alan Willsky

b) Low-rank variance approximation in large-scale GMRF

We develop a method for efficiently computing approximate variances in large-scale GMRF models based on low-rank covariance approximations. We first devise the method for models with short-range correlations, and then extend it, using wavelet-based techniques, to models with long-range correlations and to multi-scale models. An attractive feature of the method is that it provides guarantees on the accuracy of the variance approximations.
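
The computational pattern is roughly as follows (a simplified sketch under assumptions of my own: random +/-1 probe vectors stand in for the structured, wavelet-based low-rank construction of the actual method): the variances diag(J^{-1}) are approximated from a small number of linear solves with the sparse information matrix J, combined entrywise with the probe vectors.

    # Simplified sketch: estimate diag(J^{-1}) from a few solves J x = b.
    import numpy as np

    def approx_variances(J, B):
        X = np.linalg.solve(J, B)                  # one solve per probe column
        return np.sum(X * B, axis=1) / B.shape[1]  # entrywise average of (J^{-1} b) * b

    rng = np.random.default_rng(0)
    n, n_probes = 200, 64
    # Toy chain-structured information matrix (short-range correlations).
    J = 2.0 * np.eye(n) - 0.9 * (np.eye(n, k=1) + np.eye(n, k=-1))
    B = rng.choice([-1.0, 1.0], size=(n, n_probes))  # placeholder probe matrix
    err = np.abs(approx_variances(J, B) - np.diag(np.linalg.inv(J)))
    print(err.max())  # on average, the error shrinks as the number of probes grows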

Collaborators: Jason Johnson, Myung Jin Choi, and Alan Willsky


Sparse signal representation

Consider the underdetermined linear system y = Ax. To have a unique solution x, one needs additional information about x. This research uses a 'sparsity' prior on x, meaning that x (or some linear transformation of x) has few non-zero components. We describe several ways of posing and solving such problems efficiently in the presence of noise, and apply them to source localization in sensor arrays. Related fields include subset selection in regression, feature selection in machine learning, sparse PCA, and compressed sensing.
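
One common way to pose such a problem (a minimal sketch with illustrative parameters of my own; the source-localization work uses a related l1-penalized formulation) is l1-penalized least squares, min_x 0.5*||y - Ax||^2 + lam*||x||_1, solved below by iterative soft-thresholding (ISTA).

    # Minimal sketch: sparse recovery from an underdetermined system via ISTA.
    import numpy as np

    def ista(A, y, lam, n_iter=2000):
        x = np.zeros(A.shape[1])
        t = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from the spectral norm
        for _ in range(n_iter):
            z = x + t * A.T @ (y - A @ x)               # gradient step on 0.5*||y - Ax||^2
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft-threshold (l1 prox)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))                  # 50 equations, 200 unknowns
    x_true = np.zeros(200)
    x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = ista(A, y, lam=0.1)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))          # typically recovers 3, 77, 150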

Collaborators: Mujdat Cetin and Alan Willsky

Miscellaneous:

a) Lagrangian relaxation for hard graphical models

Finding MAP estimates in discrete graphical models is in general a hard problem, with complexity exponential in the tree-width of the graph. We describe an approach that splits the graph into tractable components and enforces consistency between the corresponding variables in different components. By relaxing these strict equality constraints we obtain a Lagrangian formulation of the problem.
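
In outline (the notation here is my own sketch of the standard decomposition, not the exact formulation of this work), giving each tractable component k its own copy x^k of the shared variables and pricing the relaxed equality constraints with multipliers lambda yields the dual function

    \[
      L(\lambda) \;=\; \sum_{k} \max_{x^k} \Big[\, \theta^k(x^k) \;+\; \sum_{i} \lambda^k_i(x^k_i) \,\Big],
      \qquad \text{with } \sum_{k} \lambda^k_i(\cdot) = 0 \ \text{ for every shared variable } i .
    \]

Each inner maximization is a tractable MAP problem on a single component, L(lambda) upper-bounds the true MAP value for any feasible lambda, and minimizing over lambda (for instance by subgradient steps) tightens the bound; when the components agree on the shared variables the relaxation is tight.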

Collaborators: Jason Johnson and Alan Willsky

b) Analysis of belief propagation for matching problems, and applications in sensor network self-organization

Loopy belief propagation has been employed in a wide variety of applications with great empirical success, but it comes with few theoretical guarantees. In this work we investigate the use of the max-product form of belief propagation for weighted matching problems on general graphs. We show that max-product converges to the correct answer if the linear programming (LP) relaxation of the weighted matching problem is tight, and does not converge if the LP relaxation is loose. This gives an exact characterization of max-product performance and reveals connections to the widely used optimization technique of LP relaxation. In addition, we demonstrate that max-product can solve practical weighted matching problems in a distributed fashion by applying it to the problem of self-organization in sensor networks.
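
For concreteness (a toy illustration of my own, not taken from the paper), the LP relaxation in question assigns each edge e a value x_e in [0, 1] and maximizes sum_e w_e x_e subject to the values on the edges incident to each node summing to at most 1. The snippet below solves this relaxation for a triangle with unit weights; the optimum is half-integral, i.e. the relaxation is loose, which is exactly the regime in which max-product fails to converge.

    # Toy illustration: LP relaxation of maximum-weight matching on a triangle.
    import numpy as np
    from scipy.optimize import linprog

    edges = [(0, 1), (1, 2), (2, 0)]
    w = np.array([1.0, 1.0, 1.0])
    A = np.zeros((3, len(edges)))                 # node-edge incidence constraints
    for e, (i, j) in enumerate(edges):
        A[i, e] = A[j, e] = 1.0
    res = linprog(-w, A_ub=A, b_ub=np.ones(3), bounds=[(0, 1)] * len(edges))
    print(res.x, -res.fun)  # x = [0.5, 0.5, 0.5], value 1.5 > 1 (best integral matching)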

Collaborators: Sujay Sanghavi and Alan Willsky

c) Distributed Video Coding using SCA codes

During a summer internship I helped design a distributed video coding system that shifts complexity from the encoder to the decoder using Slepian-Wolf ideas (coding with side information).

Collaborators: Johnny Chen, Ashish Khisti, Jonathan S. Yedidia, Anthony Vetro, Emin Martinian, and Joao Ascenso