Deep Learning for Beginners

What this book covers

Chapter 1, Introduction to Machine Learning, gives an overview of machine learning. It presents the motivation behind machine learning and the terminology commonly used in the field, and it introduces deep learning and how it fits within the realm of artificial intelligence.

Chapter 2, Setup and Introduction to Deep Learning Frameworks, walks you through setting up TensorFlow and Keras and explains their usefulness and purpose in deep learning. The chapter also briefly introduces other deep learning libraries so that you gain some familiarity with them.
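
As a quick taste of where the setup leads, a minimal sanity check (assuming TensorFlow 2.x, where Keras is bundled as tf.keras) confirms that both are importable:

```python
import tensorflow as tf

# Print the installed versions; in TensorFlow 2.x, Keras ships
# bundled with TensorFlow as tf.keras.
print(tf.__version__)
print(tf.keras.__version__)
```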

Chapter 3, Preparing Data, introduces you to the main concepts behind processing data to make it useful in deep learning. It covers essential concepts for formatting categorical and real-valued inputs and outputs, and it explores techniques for augmenting data and reducing its dimensionality.
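
To make the idea of formatting categorical outputs concrete, here is a small illustrative snippet (the labels are hypothetical) that one-hot encodes class labels with Keras:

```python
import numpy as np
import tensorflow as tf

# One-hot encoding of categorical labels, a typical step when
# formatting outputs for a classifier; the labels are hypothetical.
labels = np.array([0, 2, 1, 2])
one_hot = tf.keras.utils.to_categorical(labels, num_classes=3)
print(one_hot)   # shape (4, 3), one 1.0 per row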

Chapter 4, Learning from Data, introduces the most elementary concepts of deep learning theory, including how to measure performance on regression and classification tasks and how to identify overfitting. It also offers some warnings about optimizing hyperparameters.

Chapter 5, Training a Single Neuron, introduces the concept of a neuron and connects it to the perceptron model, which learns from data in a simple manner. The perceptron model is key to understanding basic neural models that learn from data. It also exposes the problem of non-linearly separable data.
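
As a preview of the kind of model this chapter builds, the following sketch implements the classic perceptron update rule on a toy, linearly separable problem (the logical AND function); the data and parameter choices are illustrative only:

```python
import numpy as np

# A toy perceptron trained on the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND targets

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(10):
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(w, xi) + b > 0 else 0
        error = target - prediction
        # Perceptron update rule: nudge weights toward the target
        w += lr * error * xi
        b += lr * error

print(w, b)  # after a few epochs, a hyperplane separating the AND classes
```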

Chapter 6, Training Multiple Layers of Neurons, brings you face to face with the first challenges of deep learning through the multi-layer perceptron algorithm, such as using gradient descent techniques for error minimization and optimizing hyperparameters to achieve generalization.
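
As a rough preview, a sketch of such a network in Keras, trained by gradient descent on the (non-linearly separable) XOR problem, might look like this; the layer sizes and learning rate are assumptions, not the book's exact choices:

```python
import numpy as np
import tensorflow as tf

# A multi-layer perceptron trained by gradient descent on XOR,
# a problem a single neuron cannot solve; sizes and the learning
# rate here are illustrative assumptions.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation='tanh'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),
              loss='binary_crossentropy')
model.fit(X, y, epochs=1000, verbose=0)
print(model.predict(X).round().ravel())  # ideally [0., 1., 1., 0.]
```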

Chapter 7, Autoencoders, describes the AE model by explaining the necessity of both encoding and decoding layers. It explores the loss functions associated with autoencoders and applies the model to dimensionality reduction and data visualization.
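
To illustrate the encoder-decoder structure, here is a minimal shallow autoencoder sketch in Keras; the layer sizes are assumptions for illustration, not the book's exact architecture:

```python
import tensorflow as tf

# A shallow autoencoder that compresses 784-dimensional inputs
# (e.g., flattened 28x28 images) into a 2-dimensional code that
# can be plotted for visualization; sizes are assumptions.
inputs = tf.keras.Input(shape=(784,))
code = tf.keras.layers.Dense(2, activation='relu')(inputs)        # encoder
outputs = tf.keras.layers.Dense(784, activation='sigmoid')(code)  # decoder

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.summary()
```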

Chapter 8, Deep Autoencoders, introduces the idea of deep belief networks and the significance of this type of deep unsupervised learning. It explains such concepts by introducing deep AEs and contrasting them with shallow AEs. 

Chapter 9, Variational Autoencoders, introduces the philosophy behind generative models in the unsupervised deep learning field and their importance in producing models that are robust to noise. It presents the VAE as a better alternative to a deep AE when working with perturbed data.

Chapter 10, Restricted Boltzmann Machines, complements the book's coverage of deep belief models by presenting RBMs. The backward-forward nature of RBMs is introduced and contrasted with the forward-only nature of AEs. The chapter compares RBMs and AEs on the problem of data dimensionality reduction using visual representations of the reduced data.
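
As a glimpse of this comparison, the following sketch uses scikit-learn's BernoulliRBM to reduce toy binary data to two dimensions; the data and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Fit an RBM to synthetic binary data and use the hidden-unit
# activations as a 2-dimensional reduced representation.
X = np.random.randint(0, 2, size=(100, 16)).astype(float)
rbm = BernoulliRBM(n_components=2, learning_rate=0.05, n_iter=20)
reduced = rbm.fit_transform(X)   # hidden-unit probabilities, shape (100, 2)
print(reduced[:3])
```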

Chapter 11, Deep and Wide Neural Networks, explains the differences in performance and complexity between deep and wide neural networks. It introduces the concepts of dense and sparse networks in terms of the connections between neurons.

Chapter 12, Convolutional Neural Networks, introduces CNNs, starting with the convolution operation and moving on to stacked layers of convolutional operations that aim to learn filters operating over data. The chapter concludes by showing how to visualize the learned filters.
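
As a preview, a minimal convolutional stack in Keras might look like the following; the filter counts and sizes are assumptions, not the book's exact model:

```python
import tensorflow as tf

# A minimal stack of convolutional layers for 28x28 grayscale
# images; filter counts and kernel sizes are assumptions.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.summary()
# The kernels in model.layers[0].get_weights()[0] are the filters
# one would visualize after training.
```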

Chapter 13, Recurrent Neural Networks, presents the most fundamental concepts of recurrent networks, exposing their shortcomings to justify the existence and success of long short-term memory models. Sequential models are explored with applications for image processing and natural language processing.
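
As a small preview, the following sketch wires an LSTM layer into a sequence classifier; the sequence length (20) and feature count (8) are illustrative assumptions:

```python
import tensorflow as tf

# A tiny sequence classifier built around an LSTM layer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20, 8)),       # 20 time steps, 8 features
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```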

Chapter 14, Generative Adversarial Networks, introduces the semi-supervised learning approach of GANs, which belong to the family of adversarial learning. The chapter explains the concepts of the generator and the discriminator, and it discusses why good approximations of the training data distribution can lead to a successful model that, for example, generates data from random noise.
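
To make the generator-discriminator pairing concrete, here is a minimal sketch of the two networks; all layer sizes are illustrative assumptions:

```python
import tensorflow as tf

# Minimal generator and discriminator for a GAN over
# 784-dimensional data; all sizes are assumptions.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),                      # random noise
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(784, activation='sigmoid'),  # fake sample
])
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),    # real vs. fake
])

noise = tf.random.normal((1, 100))
print(discriminator(generator(noise)))  # untrained score for a fake sample
```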

Chapter 15, Final Remarks on the Future of Deep Learning, briefly exposes you to exciting new topics and opportunities in deep learning. Should you want to continue learning, this chapter points you to other resources from Packt Publishing that you can use to move forward in this field.