http://www.lavoisier.eu/books/feeds/new-books-mathematics.atom
Lavoisier Bookseller: New books in mathematics
2017-09-01T12:00:00+01:00
Lavoisier Bookseller
service.client@lavoisier.fr
https://images.lavoisier.net/logo/lavoisier2014-print.jpg
Books in mathematics just arrived on Lavoisier Bookseller
© 2017 Lavoisier Bookseller
http://www.lavoisier.eu/books/other/an-introduction-to-statistical-learning/description_2831897
An Introduction to Statistical Learning
2017-09-01T12:00:00+01:00
<img src="https://images.lavoisier.net/vignettes/1316307853.jpg" alt="Book's cover: An Introduction to Statistical Learning" /><br /><p>
<b>An Introduction to Statistical Learning</b> provides an accessible
overview of the field of statistical learning, an essential toolset for
making sense of the vast and complex data sets that have emerged in
fields ranging from biology to finance to marketing to astrophysics in
the past twenty years. This book presents some of the most important
modeling and prediction techniques, along with relevant applications.
Topics include linear regression, classification, resampling methods,
shrinkage approaches, tree-based methods, support vector machines,
clustering, and more. Color graphics and real-world examples are used to
illustrate the methods presented. Since the goal of this textbook is to
facilitate the use of these statistical learning techniques by
practitioners in science, industry, and other fields, each chapter
contains a tutorial on implementing the analyses and methods presented
in R, an extremely popular open source statistical software platform.
</p>
<p>
Two of the authors co-wrote The Elements of Statistical Learning
(Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference
book for statistics and machine learning researchers. <b>An Introduction
to Statistical Learning</b> covers many of the same topics, but at a
level accessible to a much broader audience. This book is targeted at
statisticians and non-statisticians alike who wish to use cutting-edge
statistical learning techniques to analyze their data. The text assumes
only a previous course in linear regression and no knowledge of matrix
algebra.
</p>
http://www.lavoisier.eu/books/mathematics/statistical-implications-of-turing-s-formula/zhang/description_3615740
Statistical Implications of Turing's Formula
2017-05-01T12:00:00+01:00
<img src="https://images.lavoisier.net/vignettes/1317011965.jpg" alt="Book's cover: Statistical Implications of Turing's Formula" /><br />Features a broad introduction to recent research on Turing's formula and
presents modern applications in statistics, probability, information
theory, and other areas of modern data science.<br><br>Turing's formula is,
perhaps, the only known method for estimating the underlying
distributional characteristics beyond the range of observed data without
making any parametric or semiparametric assumptions. This book presents a
clear introduction to Turing's formula and its connections to statistics.
Topics relevant to a variety of fields are included, such as information
theory, statistics, probability, computer science (including artificial
intelligence and machine learning), big data, biology, ecology, and
genetics.<br><br>The author provides
examinations of many core statistical issues within modern data science
from Turing's perspective. A systematic approach to long-standing problems
such as entropy and mutual information estimation, diversity index
estimation, domains of attraction on general alphabets, and tail
probability estimation is presented in light of the most up-to-date
understanding of Turing's formula.<br><br>Featuring numerous exercises and
examples throughout, the author provides a summary of the known properties
of Turing's formula and explains how and when it works well; discusses the
approach derived from Turing's formula in order to estimate a variety of
quantities, all of which mainly come from information theory, but are also
important for machine learning and for ecological applications; and uses
Turing's formula to estimate certain heavy-tailed distributions.
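The claim above, that Turing's formula estimates probability mass lying beyond the observed data, can be made concrete with its simplest form, the Good-Turing missing-mass estimate T = N1/n, where N1 is the number of outcomes observed exactly once and n is the sample size. A minimal sketch in Python (an illustration of the general idea, not code from the book, whose examples would be the author's own):

```python
from collections import Counter

def turing_missing_mass(sample):
    """Estimate the total probability of outcomes never seen in the
    sample via Turing's formula T = N1 / n, where N1 is the number of
    outcomes observed exactly once and n is the sample size.
    No parametric assumptions about the distribution are made."""
    counts = Counter(sample)
    n = len(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n

# Example: 10 draws; "d" and "e" each appear exactly once,
# so the estimated unseen probability mass is 2/10.
sample = ["a", "a", "a", "b", "b", "b", "c", "c", "d", "e"]
print(turing_missing_mass(sample))  # → 0.2
```

Note that the estimate depends only on the singleton count, which is what lets it reach "beyond the range of observed data" without distributional assumptions.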
http://www.lavoisier.eu/books/mathematics/the-elements-of-statistical-learning-data-mining-inference-and-prediction-springer-series-in-statistics/hastie/description_1274454
The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd Ed.)
2017-04-01T12:00:00+01:00
<img src="https://images.lavoisier.net/vignettes/1275229.jpg" alt="Book's cover: The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd Ed.)" /><br />During the past decade there has been an explosion in computation and
information technology. With it have come vast amounts of data in a
variety of fields such as medicine, biology, finance, and marketing. The
challenge of understanding these data has led to the development of new
tools in the field of statistics, and spawned new areas such as data
mining, machine learning, and bioinformatics. Many of these tools have
common underpinnings but are often expressed with different
terminology.<br><br>This
book describes the important ideas in these areas in a common conceptual
framework. While the approach is statistical, the emphasis is on concepts
rather than mathematics. Many examples are given, with a liberal use of
color graphics. It should be a valuable resource for statisticians and
anyone interested in data mining in science or industry. The book's
coverage is broad, from supervised learning (prediction) to unsupervised
learning. The many topics include neural networks, support vector
machines, classification trees, and boosting (the first comprehensive
treatment of boosting in any book).<br><br>This major new edition
features many topics not covered in the original, including graphical
models, random forests, ensemble methods, least angle regression and path
algorithms for the lasso, non-negative matrix factorization, and spectral
clustering. There is also a chapter on methods for "wide" data (p bigger
than n), including multiple testing and false discovery rates. Trevor
Hastie, Robert Tibshirani, and Jerome Friedman are professors of
statistics at Stanford University. They are prominent researchers in this
area: Hastie and Tibshirani developed generalized additive models and
wrote a popular book of that title. Hastie co-developed much of the
statistical modeling software and environment in R/S-PLUS and invented
principal curves and surfaces. Tibshirani proposed the lasso and is
co-author of the very successful An Introduction to the Bootstrap.
Friedman is the co-inventor of many data-mining tools including CART,
MARS, projection pursuit and gradient boosting.
http://www.lavoisier.eu/books/mathematics/applied-biclustering-methods-for-big-and-high-dimensional-data-using-r/kasim/description_3541360
Applied Biclustering Methods for Big and High-Dimensional Data Using R
2016-10-01T12:00:00+01:00
<img src="https://images.lavoisier.net/vignettes/1316972018.jpg" alt="Book's cover: Applied Biclustering Methods for Big and High-Dimensional Data Using R" /><br />As big data has become standard in many application areas, challenges have
arisen related to methodology and software development, including how to
discover meaningful patterns in the vast amounts of data. Addressing these
problems, <i>Applied Biclustering Methods for Big and High-Dimensional
Data Using R</i> shows how to apply biclustering methods to find local
patterns in a big data matrix.<br><br>The book presents an overview of
data analysis using biclustering methods from a practical point of view.
Real case studies in drug discovery, genetics, marketing research,
biology, toxicity, and sports illustrate the use of several biclustering
methods. References to technical details of the methods are provided for
readers who wish to investigate the full theoretical background. All the
methods are accompanied by R examples that show how to conduct the
analyses. The examples, software, and other materials are available on a
supplementary website.
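The "local patterns in a big data matrix" that biclustering targets can be illustrated with one widely used bicluster quality score, the Cheng-Church mean squared residue, which is zero for a perfectly additive submatrix. This is a generic sketch in Python rather than the book's own R material; the toy matrix and index sets are invented for the example:

```python
def mean_squared_residue(matrix, rows, cols):
    """Cheng-Church mean squared residue of the bicluster given by the
    row and column index sets: the average squared deviation of each
    entry from its row mean, column mean, and the bicluster mean.
    A perfectly additive local pattern scores exactly 0."""
    sub = [[matrix[i][j] for j in cols] for i in rows]
    nr, nc = len(rows), len(cols)
    row_means = [sum(row) / nc for row in sub]
    col_means = [sum(sub[i][j] for i in range(nr)) / nr for j in range(nc)]
    overall = sum(row_means) / nr
    return sum(
        (sub[i][j] - row_means[i] - col_means[j] + overall) ** 2
        for i in range(nr)
        for j in range(nc)
    ) / (nr * nc)

# Toy matrix: the top-left 2x2 block is an additive pattern (row 1 is
# row 0 shifted by a constant), so its residue is 0, even though the
# full matrix shows no such structure.
M = [[1, 2, 9],
     [3, 4, 0],
     [7, 1, 5]]
print(mean_squared_residue(M, [0, 1], [0, 1]))  # → 0.0
```

Biclustering algorithms such as those surveyed in the book search for row/column subsets that make scores like this one small, rather than clustering whole rows or whole columns at once.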