Daniel Dennett has been a guru of AI research since... well, certainly since I started doing it back in the 1980s. His "Intentional Stance" continues to be greatly clarifying.
Here's an extract from an early part of his latest book (above) contrasting 'Good Old-Fashioned AI' with how the human brain does it.
"... a benevolent scheduler doles out machine cycles to whatever process has highest priority, and although there may be a bidding mechanism of one sort or another that determines which processes get priority, this is an orderly queue, not a struggle for life.I think this is a good insight. Elsewhere he talks about the top-down, frozen and brittle paradigm of traditional AI (GOFAI). Sure, there are parameters and learning algorithms, but the basic framework is architecturally fixed within those limits the designer has anticipated.
"(It is a dim appreciation of this fact that perhaps underlies the common folk intuition that a computer could never "care" about anything. Not because it is made out of the wrong materials - why should silicon be any less suitable a substrate for caring than organic molecules? - but because its internal economy has no built-in risks or opportunities so its parts don't have to care.)
"The top-down hierarchical architecture of computer software, supported by an operating system replete with schedulers and other traffic cops, nicely implements Marx's dictum: "To each according to his needs, from each according to his talents." No adder circuit or flip-flop needs to "worry" about where it's going to get the electric power it needs to execute its duty, and there is no room for "advancement."
"A neuron, in contrast, is always hungry for work; it reaches out exploratory dendritic branches, seeking to network with its neighbors in ways that will be beneficial to it. Neurons are thus capable of self-organizing into teams that can take over important information-handling work, ready and willing to be given new tasks which they master with a modicum of trial-and-error rehearsal."
Biological intelligence is far more flexible than that, thanks to the proactive, adaptationist and self-organising properties of neural nets. Because neural-net learning is sub-symbolic (it adjusts numeric connection 'weights'), the architecture the network converges on is emergent rather than preordained.
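To make "sub-symbolic" concrete, here is a minimal sketch of my own (not from Dennett's book): a single artificial neuron learns the logical OR function purely by nudging numeric weights in response to errors. No rule for OR is ever written down; the behaviour emerges from the weights the training process converges on.

```python
def train_or_neuron(epochs=50, lr=0.1):
    # Input pairs and target outputs for logical OR.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]   # connection weights: the neuron's only 'knowledge'
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge each weight in the direction
            # that reduces the error on this example.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict` reproduces OR on all four input pairs, yet nowhere in the final state is there a symbol meaning "or" - just three numbers. That is the contrast with GOFAI, where the rule itself would be hand-coded.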
Hence, too, the power of the new artificial neural-net paradigm in AI.
I'll have more to say about Dennett's new theory when I've finished the book. At times Dennett comes across as an exceedingly widely-read magpie, taking an inordinate amount of time to get to the point. He seems to feel he should go at the speed of his slowest SJW reader.
I sometimes feel that a Reader's Digest condensed edition (c. 10%) should ship with the volume; I suspect there would be little compression loss.
Memes, the units of cultural evolution, are his big idea, and I'm reminded of Joseph Henrich's "The Secret of Our Success", although that author hasn't been cited yet.
Update: March 21st 2017.
I have now finished this book and it's a let-down: no big reveal, no advance over his previous arguments that consciousness is the (still mysterious) result of sub-personal processes. The bubbly, jolly, agreeable-uncle writing style can't hide the sloppy over-use of metaphor ("memes as apps"), the minimal payback from his embrace of 'meme theory', and his inability to explain why you and I can suffer while a brick can't.
If you've read Dennett before, this book adds little that's new.