Generalizing Binary Classifiers to the Multiclass Case

Machine Learning Final Project

Eliyahu Dain, Rotem Zach

Abstract

The course described several ways to solve binary classification problems. This paper evaluates several ways of extending binary classifiers to the multiclass case. The focus is on empirical experiments rather than on a rigorous analysis of the different algorithms.
For the empirical experiments, AdaBoost [6] was used as the binary classifier and the MNIST database of handwritten digits [3] as the data set. Several different multiclass algorithms were tested.
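As an illustration of one such reduction, the sketch below trains a one-vs-rest ensemble of binary AdaBoost classifiers on a digits data set. This is a minimal sketch assuming scikit-learn, with the small built-in digits set standing in for full MNIST; the estimator and data loader are illustrative assumptions, not the implementation evaluated in this paper.

    # One-vs-rest reduction of a multiclass problem to binary AdaBoost classifiers.
    # Assumes scikit-learn; load_digits is a small stand-in for the full MNIST set.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train one binary AdaBoost classifier per digit class ("this digit" vs. "rest"),
    # then predict the class whose binary classifier is most confident.
    clf = OneVsRestClassifier(AdaBoostClassifier(n_estimators=100))
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))

One-vs-rest is only the simplest such reduction; the error-correcting output code approach of [5], for example, trains the binary classifiers against a code matrix so that some binary mistakes can be corrected at prediction time.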

References

[1] Alina Beygelzimer, John Langford, Pradeep D. Ravikumar. Error-Correcting Tournaments. CoRR, abs/0902.3176, 2009.

[2] Erin L. Allwein, Robert E. Schapire, Yoram Singer. Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers. Journal of Machine Learning Research, 1:113–141, 2000.

[3] Yann LeCun, Corinna Cortes. The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist/

[4] Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.

[5] Thomas G. Dietterich, Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995.

[6] Yoav Freund, Robert E. Schapire. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. In Proceedings of the Second European Conference on Computational Learning Theory (EuroCOLT '95), 1995.