|Stochastic Systems Group|
Tightening Convex Relaxations for Probabilistic Inference
Linear Programming (LP) relaxations have become powerful tools for finding the most probable (MAP) assignment in graphical models. These relaxations can be solved efficiently using message-passing algorithms such as belief propagation, and, when the relaxation is tight, they provably find the MAP assignment. However, the standard LP relaxation is not tight in many real-world problems, raising the question of how to tighten it. We propose two algorithms that tighten the relaxation iteratively in a problem-specific way, at each stage using the current solution to guide which constraints are added. We first give a cutting-plane algorithm for the primal LP, and show how to efficiently find valid constraints that are violated by the current solution. Our second algorithm solves the constraint selection problem monotonically in the dual LP, selecting constraints that decrease an upper bound on the value of the MAP assignment. Our algorithms find the MAP assignment in protein side-chain placement, protein design, and stereo problems, in cases where the standard LP relaxation fails.
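The primal cutting-plane idea above can be illustrated with a minimal sketch: solve a relaxed LP, check a pool of valid constraints for ones violated by the current fractional solution, add them, and re-solve until no violations remain. This is only a generic illustration of the cutting-plane pattern, not the talk's actual separation procedure for marginal polytopes; the function name, the fixed constraint pool, and the use of `scipy.optimize.linprog` are all assumptions for the example.

```python
# Generic cutting-plane loop (illustrative sketch only, NOT the authors'
# algorithm): maximize c @ x over the box [0,1]^n, iteratively adding any
# pool constraint (a, b) -- meaning a @ x <= b -- that the current
# relaxed solution violates, then re-solving the tightened LP.
import numpy as np
from scipy.optimize import linprog

def cutting_plane(c, pool, n, tol=1e-7):
    added = [False] * len(pool)      # which pool constraints are active
    A_ub, b_ub = [], []              # active constraint rows
    while True:
        # linprog minimizes, so negate c to maximize
        res = linprog(-np.asarray(c, dtype=float),
                      A_ub=np.asarray(A_ub) if A_ub else None,
                      b_ub=np.asarray(b_ub) if b_ub else None,
                      bounds=[(0.0, 1.0)] * n)
        x = res.x
        # "separation": find valid constraints the current solution violates
        found_cut = False
        for i, (a, b) in enumerate(pool):
            if not added[i] and np.dot(a, x) > b + tol:
                A_ub.append(a)
                b_ub.append(b)
                added[i] = True
                found_cut = True
        if not found_cut:
            return x, -res.fun       # solution and objective value

# Toy example: maximize x0 + x1; the pool holds one valid cut x0 + x1 <= 1.
# The first solve returns (1, 1), which violates the cut; adding it and
# re-solving gives an optimum with x0 + x1 = 1.
x, val = cutting_plane([1.0, 1.0], [([1.0, 1.0], 1.0)], n=2)
```

In the actual setting described above, the pool is replaced by an efficient search over valid constraints on the marginal polytope, which is the substance of the talk.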
Joint work with Talya Meltzer, Amir Globerson, Yair Weiss, and Tommi Jaakkola.