Stochastic Systems Group
Inference and Learning in Large-scale Relational Conditional Random Fields
University of Massachusetts Amherst
Advances in machine learning have enabled the research community to build fairly accurate models for individual components of a natural language processing system, such as noun phrase segmentation, named entity recognition, and entity resolution. However, there has been significantly less success in stitching such components together into a useful, high-accuracy end-to-end system. This is because errors cascade and compound in a pipeline: for example, six components, each 90% accurate, may yield only about 53% accuracy end-to-end (0.9^6 ≈ 0.53).
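The compounding effect can be made concrete with a back-of-the-envelope calculation. Assuming (as a simplification not stated in the abstract) that each stage's errors are independent, end-to-end accuracy decays multiplicatively with pipeline depth:

```python
def pipeline_accuracy(stage_accuracy: float, num_stages: int) -> float:
    """End-to-end accuracy of a pipeline, assuming each stage is
    correct independently with probability `stage_accuracy`."""
    return stage_accuracy ** num_stages

# Six stages at 90% each: roughly 53% end-to-end.
print(round(pipeline_accuracy(0.9, 6), 2))  # 0.53
```

In practice stage errors are correlated, so this is only a rough model, but it captures why joint inference across components can be worth the added modeling cost.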
In this talk I will describe work on large-scale, relational conditional random fields that perform joint inference across multiple components of an information processing pipeline in order to avoid this brittle accumulation of errors. In a single factor graph we seamlessly integrate multiple task components, using our new probabilistic programming language to compactly express complex, mutable variable-factor structure both in first-order logic and in more expressive Turing-complete imperative procedures. We avoid unrolling this relational graphical model by using Markov chain Monte Carlo for inference, and we make inference more efficient with learned proposal distributions. Parameter estimation is performed by a method we call SampleRank, which avoids running complete inference as a subroutine by learning simply to rank successive states of the Markov chain correctly.
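The SampleRank idea described above can be sketched in a few lines. The following toy example is my own illustration, not code from the talk: the state, features, proposal, and objective are all invented stand-ins. The key pattern is that learning compares *pairs of successive MCMC states* and applies a perceptron-style weight update only when the model's ranking of the pair disagrees with the ground-truth objective, so no full inference pass is ever needed inside the training loop.

```python
import random

random.seed(0)

def features(state):
    """Toy feature vector: a bias plus one indicator per binary variable."""
    return [1.0] + [float(x) for x in state]

def model_score(weights, state):
    return sum(w * f for w, f in zip(weights, features(state)))

def truth_score(state, gold):
    """Objective function: number of variables matching the gold labels."""
    return sum(int(a == b) for a, b in zip(state, gold))

def propose(state):
    """Single-variable-flip MCMC proposal."""
    i = random.randrange(len(state))
    nxt = list(state)
    nxt[i] = 1 - nxt[i]
    return nxt

def samplerank(gold, steps=300, lr=0.1):
    weights = [0.0] * (len(gold) + 1)
    state = [0] * len(gold)
    for _ in range(steps):
        nxt = propose(state)
        ts_nxt, ts_cur = truth_score(nxt, gold), truth_score(state, gold)
        if ts_nxt != ts_cur:
            better, worse = (nxt, state) if ts_nxt > ts_cur else (state, nxt)
            # Update only when the model mis-ranks the pair.
            if model_score(weights, better) <= model_score(weights, worse):
                fb, fw = features(better), features(worse)
                weights = [w + lr * (b - c) for w, b, c in zip(weights, fb, fw)]
        # Greedy hill-climbing acceptance (a simplification of the
        # Metropolis-Hastings acceptance used in the real system).
        if model_score(weights, nxt) >= model_score(weights, state):
            state = nxt
    return weights, state

weights, state = samplerank(gold=[1, 0, 1, 1])
```

The acceptance rule and the binary toy task are deliberate simplifications; the point is the pairwise ranking update, which touches only the two states just visited by the chain.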
Joint work with colleagues at UMass: Charles Sutton, Aron Culotta, Khashayar Rohanimanesh, Chris Pal, Greg Druck, Karl Schultz, Sameer Singh, Pallika Kanani, Kedar Bellare, Michael Wick, and Rob Hall.