Cover Page
Title Page
Packt Upsell
Why subscribe?
Packt.com
Contributors
About the author
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Conventions used
Get in touch
Reviews
The History of AI
The beginnings of AI – 1950-1974
Rebirth – 1980-1987
The modern era takes hold – 1997-2005
Deep learning and the future – 2012-Present
Summary
Machine Learning Basics
Technical requirements
Applied math basics
The building blocks – scalars, vectors, matrices, and tensors
Scalars
Vectors
Matrices
Tensors
Matrix math
Scalar operations
Element-wise operations
Basic statistics and probability theory
The probability space and general theory
Probability distributions
Probability mass functions
Probability density functions
Conditional and joint probability
Chain rule for joint probability
Bayes' rule for conditional probability
Constructing basic machine learning algorithms
Supervised learning algorithms
Random forests
Unsupervised learning algorithms
Basic tuning
Overfitting and underfitting
K-fold cross-validation
Hyperparameter optimization
Platforms and Other Essentials
TensorFlow, PyTorch, and Keras
TensorFlow
Basic building blocks
The TensorFlow graph
PyTorch
The PyTorch graph
Keras
Wrapping up
Cloud computing essentials
AWS basics
EC2 and virtual machines
S3 Storage
AWS SageMaker
Google Cloud Platform basics
GCP Cloud Storage
GCP Cloud ML Engine
CPUs, GPUs, and other compute frameworks
Installing GPU libraries and drivers
With Linux (Ubuntu)
With Windows
Basic GPU operations
The future – TPUs and more
Your First Artificial Neural Networks
Network building blocks
Network layers
Naming and sizing neural networks
Setting up network parameters in our MNIST example
Activation functions
Historically popular activation functions
Modern approaches to activation functions
Weights and bias factors
Utilizing weights and biases in our MNIST example
Loss functions
Using a loss function for simple regression
Using cross-entropy for binary classification problems
Defining a loss function in our MNIST example
Stochastic gradient descent
Learning rates
Utilizing the Adam optimizer in our MNIST example
Regularization
The training process
Putting it all together
Forward propagation