Support Vector Machines, 2008
Information Science and Statistics Series

Authors: Ingo Steinwart, Andreas Christmann

Language: English

163.51 €

In Print (Delivery period: 15 days).

Support Vector Machines
Publication date:
603 p. · 15.5x23.5 cm · Paperback

232.09 €

Subject to availability at the publisher.

Support Vector Machines (Information Science and Statistics)
Publication date:
603 p. · 15.5x23.5 cm · Hardback
"Every mathematical discipline goes through three periods of development: the naive, the formal, and the critical." (David Hilbert)

The goal of this book is to explain the principles that made support vector machines (SVMs) a successful modeling and prediction tool for a variety of applications. We try to achieve this by presenting the basic ideas of SVMs together with the latest developments and current research questions in a unified style. In a nutshell, we identify at least three reasons for the success of SVMs: their ability to learn well with only a very small number of free parameters, their robustness against several types of model violations and outliers, and last but not least their computational efficiency compared with several other methods.

Although there are several roots and precursors of SVMs, these methods gained particular momentum during the last 15 years since Vapnik (1995, 1998) published his well-known textbooks on statistical learning theory with a special emphasis on support vector machines. Since then, the field of machine learning has witnessed intense activity in the study of SVMs, which has spread more and more to other disciplines such as statistics and mathematics. Thus it seems fair to say that several communities are currently working on support vector machines and on related kernel-based methods. Although there are many interactions between these communities, we think that there is still room for additional fruitful interaction and would be glad if this textbook were found helpful in stimulating further research.

Many of the results presented in this book have previously been scattered in the journal literature or are still under review. As a consequence, these results have been accessible only to a relatively small number of specialists, sometimes probably only to people from one community but not the others.
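As a concrete illustration of the "very small number of free parameters" point above, the following is a minimal sketch, not taken from the book, of fitting a kernel SVM classifier with scikit-learn; the library, the synthetic dataset, and the particular values of the regularization constant C and the RBF kernel width gamma are all assumptions chosen purely for illustration.

    # Minimal illustration (not from the book): a kernel SVM classifier whose
    # behaviour is governed by just two free parameters, the regularization
    # constant C and the RBF kernel width gamma.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic binary classification data; dataset and parameter values are
    # illustrative assumptions, not choices made by the book's authors.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = SVC(kernel="rbf", C=1.0, gamma=0.1)  # the two free parameters
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))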
Contents:
Preface
Introduction
Loss functions and their risks
Surrogate loss functions
Kernels and reproducing kernel Hilbert spaces
Infinite-sample versions of support vector machines
Basic statistical analysis of SVMs
Advanced statistical analysis of SVMs
Support vector machines for classification
Support vector machines for regression
Robustness
Computational aspects
Data mining
Appendix
Notation and symbols
Abbreviations
Author index
Subject index
References

Ingo Steinwart is a researcher in the machine learning group at the Los Alamos National Laboratory. He works on support vector machines and related methods.

Andreas Christmann is Professor of Stochastics in the Department of Mathematics at the University of Bayreuth. He works in particular on support vector machines and robust statistics.

Explains the principles that make support vector machines a successful modeling and prediction tool for a variety of applications

Rigorous treatment of state-of-the-art results on support vector machines

Suitable for both graduate students and researchers in statistical machine learning

Includes supplementary material: sn.pub/extras