Introduction to Deep Learning, 1st ed. 2018
From Logical Calculus to Artificial Intelligence

Undergraduate Topics in Computer Science Series

Author: Sandro Skansi

Language: English

Approximate price: 52.74 €

In Print (Delivery period: 15 days).

Publication date: 2018
Support: Print on demand

This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models that represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. Coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website.

Topics and features: introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; discusses feed-forward neural networks, and explores modifications to them that can be applied to any neural network; examines convolutional neural networks, and the addition of recurrent connections to a feed-forward neural network; describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; presents a brief history of artificial intelligence and neural networks, and reviews interesting open research problems in deep learning and connectionism.

This clearly written and lively primer on deep learning is essential reading for graduate and advanced undergraduate students of computer science, cognitive science and mathematics, as well as students of fields such as linguistics, logic, philosophy, and psychology.

From Logic to Cognitive Science

Mathematical and Computational Prerequisites

Machine Learning Basics

Feed-forward Neural Networks

Modifications and Extensions to a Feed-forward Neural Network

Convolutional Neural Networks

Recurrent Neural Networks

Autoencoders

Neural Language Models

An Overview of Different Neural Network Architectures

Conclusion

Dr. Sandro Skansi is an Assistant Professor of Logic at the University of Zagreb and Lecturer in Data Science at University College Algebra, Zagreb, Croatia.

Offers a welcome clarity of expression, maintaining mathematical rigor yet presenting the ideas in an intuitive and colourful manner

Includes references to open problems studied in other disciplines, enabling the reader to pursue these topics on their own, armed with the tools learned from the book

Presents an accessible style and interdisciplinary approach, with a vivid and lively exposition supported by numerous examples, connected ideas, and historical remarks