**Tel-Aviv University**

**Sunday, Oct 30, 2005, 11:15-12:15**

**Room 309, Schreiber Building**

--------------------------------------------------------------------------------

**Nati Srebro**

**University of Toronto**

Title:

Maximum Margin Matrix Factorization

Abstract:

Factor, or linear component (PCA), models are often natural in the
analysis of many kinds of tabulated data (e.g. collections of
documents or images, gene expression measurements, and user
preferences). The premise of such models is that important aspects of
the data can be captured by a small number of dimensions
("components", "factors" or "topics").

I will present a novel approach that allows an unbounded (infinite)
number of factors. This is achieved by limiting the norm of the
factorization instead of its dimensionality. The approach is inspired
by, and has strong connections to, large-margin linear discrimination.
Unlike learning low-dimensional factorizations, which leads to
non-convex optimization problems, I will show how learning such a
max-margin matrix factorization is a convex semi-definite program.
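A minimal sketch of the norm-limited idea, under assumptions not taken from the talk: instead of the hinge-loss semi-definite program described above, this uses squared loss with a trace-norm (nuclear-norm) penalty, a closely related convex formulation, solved by proximal gradient with singular value thresholding:

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_* ."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(Y, mask, lam=0.1, iters=200):
    """Proximal gradient (step size 1) on
       min_X 0.5 * ||mask * (X - Y)||_F^2 + lam * ||X||_*  .
    mask is a 0/1 array marking observed entries of Y."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        # Gradient step on observed entries, then trace-norm prox.
        X = svt(X - mask * (X - Y), lam)
    return X
```

The trace-norm penalty never fixes a dimensionality: the number of active factors is simply the number of singular values surviving the threshold, and the objective stays convex in X.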

I will also present generalization error bounds for learning with
low-dimensional and max-margin factorizations, and discuss the
relationship between the two, and the connection between learning a
factorization and the question of which classes can be efficiently
embedded as linear classification.

Joint work with Tommi Jaakkola, Noga Alon and Adi Shraibman.