Stochastic Systems Group

SSG Seminar Abstract


Discriminative linear models for natural language processing

Michael Collins
CSAIL, MIT


Recent work in machine learning approaches to natural language problems has considered discriminative methods such as log-linear (or maximum-entropy) models, the perceptron algorithm, and algorithms based on support vector machines.
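As a concrete, purely illustrative example of one of these methods, the Python sketch below shows a generic structured perceptron update for a linear model with score(x, y) = w . phi(x, y). The feature map phi, the decoder argmax, and the training data are assumed to be supplied by the caller; none of these names come from the talk itself.

import numpy as np

def structured_perceptron(train, phi, argmax, dim, epochs=5):
    # train: list of (x, y_gold) pairs; structures y are assumed comparable (e.g., tuples of tags).
    # phi(x, y): feature vector for the pair (x, y), returned as a NumPy array of length dim.
    # argmax(w, x): highest-scoring structure for input x under weight vector w.
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y_gold in train:
            y_pred = argmax(w, x)                     # decode with the current weights
            if y_pred != y_gold:                      # on a mistake, move toward the gold structure
                w += phi(x, y_gold) - phi(x, y_pred)
    return w

The perceptron's update is additive, which contrasts with the multiplicative (exponentiated gradient) updates discussed below.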

In this talk I will describe some recent results in this area. In particular, I'll describe a method that generalizes support vector machine methods to supervised training of Markov random fields, hidden Markov models, probabilistic context-free grammars, and other structured models. I will present a new algorithm for solving the "large-margin" optimization problem defined in (Taskar, Guestrin and Koller, 2003). The optimization method makes use of algorithms such as forward-backward or inside-outside, and relies on applying exponentiated gradient updates (Kivinen and Warmuth, 1997) to quadratic programs.


