Stochastic Systems Group
Doctoral Student, EECS, MIT
We study the common problem of approximating a target matrix with a matrix of low rank. Low-rank approximation with respect to the Frobenius norm (i.e., when one seeks to minimize the sum of squared differences from the target) is widely studied and used, and can be solved easily using a singular value decomposition. For many applications, however, a different loss function is appropriate.
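The unweighted case mentioned above can be sketched in a few lines: by the Eckart-Young theorem, truncating the SVD to the top k singular values gives the best rank-k approximation in Frobenius norm. The function name below is illustrative, not from the talk.

```python
import numpy as np

def best_rank_k(X, k):
    """Best rank-k Frobenius-norm approximation of X (Eckart-Young),
    obtained by truncating the singular value decomposition."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the k largest singular values and their vectors.
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
Xk = best_rank_k(X, 2)
```

The squared approximation error equals the sum of the squared discarded singular values, which is what makes the SVD solution exact for this loss.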
We first consider generalizing the loss function to a weighted Frobenius norm, with a distinct weight for each entry of the target matrix. While this is conceptually straightforward and arises naturally in several contexts, the standard SVD-based approach for the unweighted case does not carry over to the weighted case. We show how the fundamental structure of the problem and its solution breaks down when weights are introduced, present efficient optimization methods for finding a weighted low-rank approximation, and demonstrate the utility of introducing weights in reconstructing an underlying low-rank representation.
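To make the weighted problem concrete: one simple (if not the talk's own) optimization strategy is alternating weighted least squares over the two factors of a rank-k factorization. This is an illustrative sketch under that assumption; all names are hypothetical, and zero weights model missing entries.

```python
import numpy as np

def weighted_lra(X, W, k, iters=100, rng=None):
    """Minimize sum_ij W[i,j] * (X[i,j] - (U V^T)[i,j])^2 over rank-k
    factors U, V by alternating weighted least squares.
    An illustrative sketch, not the algorithm presented in the talk."""
    rng = rng or np.random.default_rng(0)
    n, m = X.shape
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((m, k))
    for _ in range(iters):
        # With V fixed, each row of U solves a small weighted
        # least-squares problem (and symmetrically for V).
        for i in range(n):
            WV = V * W[i][:, None]
            U[i] = np.linalg.solve(WV.T @ V + 1e-9 * np.eye(k), WV.T @ X[i])
        for j in range(m):
            WU = U * W[:, j][:, None]
            V[j] = np.linalg.solve(WU.T @ U + 1e-9 * np.eye(k), WU.T @ X[:, j])
    return U, V

# Recover a rank-2 matrix; one entry is unobserved (weight zero).
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 5))
W = np.ones_like(X)
W[0, 0] = 0.0
U, V = weighted_lra(X, W, 2)
```

Unlike the SVD solution, each weight pattern changes the normal equations per row and column, which is one way to see why no single spectral decomposition solves all weightings at once.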
We further generalize the setting and motivate other loss functions. These include non-Gaussian noise models, unknown parametric noise models, and logistic models appropriate for matrices of binary labels (rather than real values), as arise in classification and collaborative filtering. We show how weighted low-rank approximation can be applied as a subroutine for solving these more general low-rank problems.
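As one hedged illustration of the subroutine idea (not necessarily the talk's construction): a Newton-style step for a logistic low-rank model expands the log loss around the current estimate into a per-entry quadratic, producing exactly a weight matrix and a target matrix to hand to a weighted low-rank solver. The function name is hypothetical.

```python
import numpy as np

def logistic_to_weighted(Y, Z):
    """Expand the logistic loss for binary Y in {0,1} around the current
    low-rank estimate Z into a per-entry quadratic, yielding weights W and
    targets T for a weighted low-rank approximation subroutine.
    Illustrative sketch assuming the standard logistic model."""
    p = 1.0 / (1.0 + np.exp(-Z))  # predicted probabilities
    W = p * (1.0 - p)             # curvature (second derivative) per entry
    T = Z + (Y - p) / W           # Newton target per entry
    return W, T

rng = np.random.default_rng(2)
Z = rng.standard_normal((4, 3))
Y = (rng.random((4, 3)) < 1.0 / (1.0 + np.exp(-Z))).astype(float)
W, T = logistic_to_weighted(Y, Z)
```

Iterating this expansion and re-solving the weighted problem is the familiar iteratively-reweighted-least-squares pattern, lifted to the low-rank matrix setting.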
Joint work with Tommi Jaakkola.