"Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased emphasis on proving confidence intervals around these functions; we therefore present the two central approaches to statistics: frequentist estimators and Bayesian inference.From "Deep Learning", Chapter 4, page 98.
"Most machine learning algorithms can be divided into the categories of supervised learning and unsupervised learning; we describe these categories and give some examples of simple learning algorithms from each category.
"Most deep learning algorithms are based on an optimization algorithm called stochastic gradient descent."
I've now skim-read the PDF version. It's plainly an engineering text, with plenty of detail for practitioners; as a consequence, it's hard to see the wood for the trees. Current machine learning systems are highly sophisticated classifiers trained under carefully chosen optimisation criteria (least squares estimation is one of the simplest).
However, a great many problems can be cast in this form.
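As an aside on why least squares is "one of the simplest": for a single input variable it even has a closed-form solution, no iterative optimisation needed. A toy sketch (my own example, with made-up data):

```python
# Ordinary least squares in closed form (illustrative toy example).
# Fits y ≈ a*x + b by minimising the sum of squared residuals; for one
# feature the optimal slope is cov(x, y) / var(x).

def least_squares(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

a, b = least_squares([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # data generated by y = 2x + 1, so this prints 2.0 1.0
```

Deep networks give up this closed form: their loss surfaces are non-convex, which is exactly why the book leans so heavily on stochastic gradient descent instead.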
I'm still thinking that the only way we're going to understand higher brain functions is by building models in an experimental way. The philosophers in their armchairs have conspicuously failed to deliver on 'the hard problem'.
The key to this is the ability to build and connect enough artificial neuron-type components (Intel has a new chip). A new slogan is called for: 'In the neural capacity lies the consciousness'.
That, and some clever architecture and design ideas - we seem to have no shortage of smart people flocking into this new discipline.
Schrödinger's advice to his physics doctoral candidates: 'Learn more maths!'
More: "The major advancements in Deep Learning in 2016" via here.