Tel-Aviv University - Computer Science Colloquium

Sunday, Nov 20, 2005, 11:15-12:15

Room 309
Schreiber Building


John Langford

TTI Chicago


Learning Reductions Formalize Learning Intuition



Learning reductions take a learning problem of type A, transform it into a
learning problem of type B, and then analyze (and algorithmically optimize)
how performance ε on the created problem implies performance on problem A.
The central claim here is that this process formalizes the intuitions of
several empirically successful learning algorithms, including "error
correcting output codes" (Dietterich and Bakiri), "end-to-end learning"
(Yann LeCun), and constraint-based structured prediction (Dan Roth).  This
formalization makes it easy to describe the "why" of these algorithms, to
propose improved algorithms, and to manufacture new learning algorithms
with a reasonable expectation of empirical success.  I will discuss the
mathematical foundations of learning reductions and give several examples
of how they work.
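
As an illustration (not taken from the talk itself), a classic learning reduction is "one-versus-all", which transforms a k-class classification problem (type A) into k binary classification problems (type B): binary problem i asks "is the label i?". The base binary learner below, `train_binary`, is a hypothetical stand-in — here a trivial nearest-centroid scorer — so that the sketch is self-contained; any binary learner could be substituted.

```python
def train_binary(examples):
    """Train a binary classifier from (x, y) pairs with y in {0, 1}.
    Returns a scoring function; larger scores favor label 1.
    (A placeholder centroid-based learner, just for illustration.)"""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]

    def centroid(pts):
        return [sum(c) / len(pts) for c in zip(*pts)] if pts else None

    cp, cn = centroid(pos), centroid(neg)

    def score(x):
        def sqdist(c):
            if c is None:
                return float("inf")
            return sum((a - b) ** 2 for a, b in zip(x, c))
        # Closer to the positive centroid than the negative one => larger score.
        return sqdist(cn) - sqdist(cp)

    return score

def one_vs_all_train(examples, k):
    """The reduction: build k binary problems ('is the label i?')
    and train one binary classifier per problem."""
    return [train_binary([(x, 1 if y == i else 0) for x, y in examples])
            for i in range(k)]

def one_vs_all_predict(scorers, x):
    # Decode: predict the class whose binary scorer is most confident.
    return max(range(len(scorers)), key=lambda i: scorers[i](x))

# Tiny synthetic 3-class dataset.
data = [([0.0, 0.0], 0), ([0.1, 0.2], 0),
        ([1.0, 1.0], 1), ([0.9, 1.1], 1),
        ([0.0, 1.0], 2), ([0.2, 0.9], 2)]
scorers = one_vs_all_train(data, 3)
print([one_vs_all_predict(scorers, x) for x, _ in data])  # → [0, 0, 1, 1, 2, 2]
```

The reduction-analysis question the abstract refers to is then: if each of the k binary classifiers errs with probability at most ε, what multiclass error rate does the combined predictor guarantee?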