KDD Papers

Robust Top-k Multi-class SVM for Visual Category Recognition

Xiaojun Chang (Carnegie Mellon University); Yao-Liang Yu (University of Waterloo); Yi Yang (University of Technology Sydney)


Classification problems with a large number of classes inevitably involve overlapping or similar classes. In such cases it seems reasonable to allow the learning algorithm to make mistakes on similar classes, as long as the true class is still among the top-$k$ predictions. Likewise, in applications such as search engines or ad display, we are allowed to present $k$ predictions at a time, and the customer is satisfied as long as the prediction of interest is included. Inspired by the recent work of \cite{LapinSH15}, we propose a generic, robust multiclass SVM formulation that directly minimizes a weighted and truncated combination of the ordered prediction scores. Our method includes many previous works as special cases. Computationally, using the Jordan decomposition lemma we show how to rewrite our objective as the difference of two convex functions, based on which we develop an efficient algorithm that allows incorporating many popular regularizers (such as the $\ell_2$ and $\ell_1$ norms). We conduct extensive experiments on four real large-scale visual category recognition datasets and obtain very promising performance.
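As a minimal illustration of the top-$k$ idea the abstract builds on, the sketch below implements one common variant of the top-$k$ hinge loss from Lapin et al. (the cited \cite{LapinSH15}): average the $k$ largest margin violations of the form $\max(0,\, 1 + s_j - s_y)$ over wrong classes. This is an assumption-laden sketch of the baseline loss, not the paper's full robust, weighted-and-truncated objective.

```python
import numpy as np

def topk_hinge_loss(scores, y, k=3):
    """One variant of the top-k hinge loss (after Lapin et al., 2015):
    the mean of the k largest hinged margin violations 1 + s_j - s_y
    over classes j != y. Illustrative sketch only; the paper's robust
    objective additionally weights and truncates the ordered scores.
    """
    margins = 1.0 + scores - scores[y]   # 1 + s_j - s_y for every class j
    margins[y] = 0.0                     # the true class contributes no violation
    topk = np.sort(margins)[-k:]         # the k largest entries
    return np.maximum(topk, 0.0).mean()  # hinge, then average over the top k
```

Note that if the true class outscores every other class by a margin of at least 1, the loss is zero; a standard multiclass hinge loss is recovered at $k = 1$.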