Natural Language Processing
A Machine Learning Perspective

Authors: Yue Zhang, Zhiyang Teng

This undergraduate textbook introduces essential machine learning concepts in NLP within a unified and gentle mathematical framework.

Language: English

Approximate price: 71.34 €

In Print (Delivery period: 14 days).

Publication date:
484 p. · 19.3 × 25.2 cm · Hardback
With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuitions, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for upper-undergraduate and graduate students.
Part I. Basics:
1. Introduction
2. Counting relative frequencies
3. Feature vectors
4. Discriminative linear classifiers
5. A perspective from information theory
6. Hidden variables

Part II. Structures:
7. Generative sequence labelling
8. Discriminative sequence labelling
9. Sequence segmentation
10. Predicting tree structures
11. Transition-based methods for structured prediction
12. Bayesian models

Part III. Deep Learning:
13. Neural network
14. Representation learning
15. Neural structured prediction
16. Working with two texts
17. Pre-training and transfer learning
18. Deep latent variable models

Index
Yue Zhang is an associate professor at Westlake University. Before joining Westlake, he worked as a research associate at the University of Cambridge and then as a faculty member at the Singapore University of Technology and Design. His research interests lie in fundamental algorithms for NLP, syntax, semantics, information extraction, text generation, and machine translation. He serves as an action editor for TACL and as an area chair for ACL, EMNLP, COLING, and NAACL. He has given several tutorials at ACL, EMNLP, and NAACL, and won a best paper award at COLING 2018.
Zhiyang Teng is currently a postdoctoral research fellow in the natural language processing group at Westlake University, China. He obtained his Ph.D. from the Singapore University of Technology and Design (SUTD) in 2018 and his Master's from the University of Chinese Academy of Sciences in 2014. He won the best paper award at CCL/NLP-NABD 2014 and has published papers in venues including ACL, TACL, EMNLP, COLING, NAACL, and TKDE. His research interests include syntactic parsing, sentiment analysis, deep learning, and variational inference.