Bayesian Reasoning and Machine Learning

Author: David Barber

A practical introduction perfect for final-year undergraduate and graduate students without a solid background in linear algebra and calculus.

Language: English

79.74 €

In Print (Delivery period: 14 days).

Publication date:
735 p. · 19.3 x 25.1 cm · Hardback
Machine learning methods extract value from vast data sets quickly and with modest resources. They are established tools in a wide range of industrial applications, including search engines, DNA sequencing, stock market analysis, and robot locomotion, and their use is spreading rapidly. People who know the methods have their choice of rewarding jobs. This hands-on text opens these opportunities to computer science students with modest mathematical backgrounds. It is designed for final-year undergraduates and master's students with limited background in linear algebra and calculus. Comprehensive and coherent, it develops everything from basic reasoning to advanced techniques within the framework of graphical models. Students learn more than a menu of techniques; they develop analytical and problem-solving skills that equip them for the real world. Numerous examples and exercises, both computer-based and theoretical, are included in every chapter. Resources for students and instructors, including a MATLAB toolbox, are available online.

Contents:
Preface
Part I. Inference in Probabilistic Models: 1. Probabilistic reasoning; 2. Basic graph concepts; 3. Belief networks; 4. Graphical models; 5. Efficient inference in trees; 6. The junction tree algorithm; 7. Making decisions
Part II. Learning in Probabilistic Models: 8. Statistics for machine learning; 9. Learning as inference; 10. Naive Bayes; 11. Learning with hidden variables; 12. Bayesian model selection
Part III. Machine Learning: 13. Machine learning concepts; 14. Nearest neighbour classification; 15. Unsupervised linear dimension reduction; 16. Supervised linear dimension reduction; 17. Linear models; 18. Bayesian linear models; 19. Gaussian processes; 20. Mixture models; 21. Latent linear models; 22. Latent ability models
Part IV. Dynamical Models: 23. Discrete-state Markov models; 24. Continuous-state Markov models; 25. Switching linear dynamical systems; 26. Distributed computation
Part V. Approximate Inference: 27. Sampling; 28. Deterministic approximate inference
Appendix. Background mathematics; Bibliography; Index
David Barber is Reader in Information Processing in the Department of Computer Science, University College London.