Showing posts with label Daniel Dennett. Show all posts

Thursday, September 05, 2024

Compatibilist Molinism: a new approach to Theodicy

From ChatGPT

Introduction

How can you combine the concept of an omniscient and omnipotent deity with human free will, and therefore the possibility of doing evil? For many centuries, Molinism was probably the best theoretical framework. But in a twenty-first-century world where all observable phenomena have been found consistent with the laws of physics, and where we now understand agency through the paradigm of the ‘Intentional Stance’, we can do much, much better. ChatGPT has summarised our lengthy discussion of these topics in the following conceptual essay.

---

Note: Theodicy specifically refers to the attempt to justify the goodness and omnipotence of God in the face of the existence of evil and suffering in the world. It deals with questions about why a good and all-powerful God would permit evil and suffering to exist.


Reframing Molinism within a Compatibilist Framework: An Essay on Divine Knowledge and Human Freedom

Molinism, a theological system developed by the 16th-century Jesuit theologian Luis de Molina, attempts to reconcile divine omniscience and omnipotence with human free will. Central to Molinism is the concept of "middle knowledge" — the idea that God possesses knowledge of what any free creature would do under any possible set of circumstances. This knowledge allows God to actualize a world that aligns with His divine purposes while preserving human freedom. However, this framework traditionally assumes a libertarian view of free will, where agents possess the genuine ability to choose between alternatives in a way that is not determined by prior causes.

Critics from a Compatibilist perspective argue that this libertarian assumption is problematic. Compatibilism posits that free will is compatible with determinism: the view that every event, including human actions, is determined by preceding causes in accordance with the laws of nature. In a determined world, the notion of free choices that could have been otherwise in a metaphysically indeterminate sense seems illusory. If every choice is the result of prior states and causal processes, the idea of "middle knowledge" as traditionally understood loses coherence. However, this essay argues that Molinism can be reframed within a Compatibilist framework, preserving its core theological insights while aligning with a determinist (modulo Schrödinger's equation with the MWI) understanding of the world. This reframing involves interpreting God’s middle knowledge, not as knowledge of undetermined free choices, but as knowledge of determined actions understood through the lens of the intentional stance, a concept introduced by philosopher Daniel Dennett.

Compatibilism, the Intentional Stance, and Free Will

To understand how Molinism can be reconciled with Compatibilism, it is essential first to outline the key concepts at play. Compatibilism is the philosophical position that determinism and free will are not mutually exclusive. According to this view, human beings can be free and morally responsible for their actions even if those actions are determined by prior causes. The essence of free will, under Compatibilism, is not the ability to have chosen otherwise in an absolute sense, but rather the ability to act according to one’s own desires, intentions, and reasoning processes without external coercion. This perspective maintains that freedom is found in voluntary action, guided by an agent’s internal states, even if those states themselves are the product of natural causal chains.

The ‘intentional stance’, as developed by Dennett, offers a useful paradigm for discussing free will within a deterministic framework. The intentional stance involves explaining and predicting behaviour by attributing beliefs, desires, and rationality to agents, regardless of the underlying physical or mechanical processes driving those behaviours. This approach operates at a higher, more abstract level of explanation, allowing us to discuss decisions, intentions, and moral responsibility in meaningful terms, even if these are underpinned by (usually unknown in detail) deterministic processes.

Reinterpreting Molinist Concepts within a Compatibilist Framework

By adopting the intentional stance, Molinist concepts such as natural knowledge, middle knowledge, and free knowledge can be reinterpreted in a way that is consistent with Compatibilism. This reinterpretation does not merely preserve Molinism but enriches it by providing a framework where divine omniscience and human freedom coexist harmoniously in a determined world.

‘Natural Knowledge’, in traditional Molinism, refers to God’s knowledge of all possible truths and scenarios independent of His will. This includes the knowledge of all possible worlds and the different ways creatures might act under various circumstances. Within a Compatibilist framework, natural knowledge remains fundamentally the same. God’s knowledge encompasses all logical and metaphysical possibilities, including every conceivable deterministic framework that could govern the universe. This means that God knows all potential configurations of the universe, the laws of nature that would apply, and the outcomes they would produce.

‘Middle Knowledge’ is where the most significant reinterpretation occurs. Traditionally, middle knowledge is God’s knowledge of what free creatures would do in any given circumstance, assuming libertarian free will. In a Compatibilist framework, however, middle knowledge can be understood as God’s knowledge of what agents would do in any given circumstance based on their determined desires, beliefs, and intentions. This knowledge is not about metaphysical possibilities in an undetermined sense but about knowing exactly how these internal states — shaped by prior causes — interact with external circumstances to produce specific actions.

In this reinterpretation, God’s middle knowledge still involves knowledge of counterfactuals, but these counterfactuals are understood within the bounds of determinism. God knows, for example, that if a person with a certain set of desires and beliefs were placed in a particular situation, they would act in a specific way. This knowledge is analogous to how a perfect psychologist might predict behaviour based on a deep understanding of a person’s psychological makeup and the external factors influencing them. The key difference is that God’s knowledge is infinitely more precise, encompassing every possible detail of every potential scenario.

‘Free Knowledge’, the third component of Molinist knowledge, remains largely the same in a Compatibilist framework. Free knowledge refers to God’s knowledge of the actual world, the one He chose to actualize from among the many possibilities He considered.

After considering all possible deterministic scenarios (through middle knowledge), God actualizes the world that best fulfils His purposes. In this world, human beings make choices consistent with their desires and intentions—choices that are determined yet voluntary, and thus meaningful in the context of Compatibilist free will.

Implications for Divine Knowledge, Sovereignty, and Moral Responsibility

Reinterpreting Molinism within Compatibilism has significant implications for our understanding of divine knowledge, sovereignty, and human moral responsibility. 

‘Divine omniscience’ is preserved and even deepened in this framework. God possesses exhaustive knowledge of all possible and actual events, including every detail of every possible deterministic world. His knowledge encompasses all causal relationships and the resulting behaviours of agents, which He can predict with perfect accuracy. This knowledge does not require indeterminacy to be meaningful; instead, it reflects God’s complete understanding of how every part of His creation functions within the laws He established.

‘Divine sovereignty’ is also maintained in this model. God’s omnipotence is expressed through His ability to actualize any possible world, choosing the initial conditions and laws that lead to outcomes aligning with His divine purposes.

Sovereignty here does not imply micromanaging every event but rather setting the parameters and initial conditions in a way that ensures His will is ultimately fulfilled. In this sense, God’s choice of which world to actualize reflects His wisdom and goodness, as He selects from among all possible deterministic scenarios the one that best accomplishes His purposes.

Human ‘moral responsibility’ remains intact in a Compatibilist framework. Even though human actions are determined by prior causes, they are still considered free in the sense that they are voluntary and aligned with the agent’s desires and intentions. The intentional stance supports the attribution of praise and blame, as it allows us to talk meaningfully about choices, decisions, and responsibilities, even in a deterministic universe. Therefore, God’s judgement and moral order remain coherent, as they are based on agents acting according to their desires and intentions, making them responsible for their actions.

Addressing Challenges: The Problem of Evil and Divine Freedom

While this Compatibilist reframing of Molinism offers a coherent and philosophically robust model, it is not without its challenges. Two significant issues are the ‘problem of evil’ and concerns about ‘divine freedom’.

The ‘problem of evil’ is often cited as a major challenge to theistic belief. If God actualizes a determined world, how can He not be responsible for the evil that occurs within it? In response, Compatibilist Molinism can argue that God permits the existence of evil as part of a broader plan that ultimately brings about greater goods. For instance, challenges, suffering, and moral dilemmas can contribute to moral development, compassion, and the deepening of relationships. While agents perform evil acts according to their desires and intentions, making them morally responsible, God’s foreknowledge and sovereignty ensure that even these acts are integrated into a purposeful narrative that contributes optimally to the fulfilment of His ultimate plan.

‘Divine freedom’ is another area of concern. If determinism limits possible outcomes, is God’s choice constrained? The Compatibilist Molinist response would argue that God’s freedom is expressed in His ability to select from an immense array of possible deterministic worlds. Each possible world presents different initial conditions and outcomes, and God’s choice reflects His perfect wisdom and goodness. While determinism dictates the course of events within any given world, the range of possible worlds remains vast and varied, allowing God to exercise true freedom in choosing which world to actualize.

Conclusion: The Coherence and Advantages of Compatibilist Molinism

To summarise the essential core of Compatibilist Molinism:

1. God's Control Over Possible Worlds: God has the power to determine the initial conditions, natural laws, and other factors that shape the possible evolution of the world. This results in a "space of possible worlds," each representing a different potential reality.

2. Compatibilist Free Will within Worlds: In each possible world, agents possess Compatibilist free will. This means that while their choices are determined by prior causes (such as their desires, beliefs, and intentions), they are still considered free in the sense that they act according to their own internal motivations without external coercion, perceiving themselves not to be solely determined by underlying laws of nature.

3. God's Evaluation and Actualization of the Optimal World: God can evaluate the outcomes across the space of possible worlds, considering how the determined actions of free agents will play out within each scenario. Based on this evaluation, God chooses to actualize the world that best aligns with His desired outcomes, whatever those may be (e.g., maximising good, fulfilling a divine plan, etc.).

This framework attempts to reconcile divine sovereignty and omniscience with human freedom, aligning Molinism with a Compatibilist understanding of free will.
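The three-step core above can be caricatured in code as an argmax over a space of deterministic worlds. Everything below is an illustrative toy with invented functions and values, not a theological claim:

```python
# Toy model of the Compatibilist Molinist scheme sketched above.
# All names and numbers are inventions for illustration only.

def run_world(initial_condition, laws, steps=3):
    """Deterministically unfold a world from its initial condition (step 1)."""
    state = initial_condition
    for _ in range(steps):
        state = laws(state)   # agents' determined-yet-voluntary choices
    return state

def actualize(possible_worlds, laws, value):
    """Evaluate every possible world and actualize the best one (step 3)."""
    return max(possible_worlds, key=lambda w: value(run_world(w, laws)))

laws = lambda s: s * 2                       # fixed 'laws of nature'
value = lambda outcome: -abs(outcome - 8)    # preference for outcomes near 8
best = actualize([0, 1, 2, 3], laws, value)  # initial condition 1 unfolds to 8
```

Middle knowledge corresponds to being able to evaluate `run_world` for every candidate world before choosing; free knowledge is knowledge of `best` once it is actualized.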


I found this level of analysis by the latest incarnation of ChatGPT extremely impressive. But what would John Dominic Crossan make of all this abstract discussion? I asked ChatGPT: 

"Crossan might very well view the debate over Compatibilism and Molinism as somewhat disconnected from what he considers the core of Christian faith — namely, the life, teachings, and example of Jesus. 

"He might argue that such mediaeval-style scholastic theological discussions, while intellectually stimulating, risk becoming overly abstract and removed from the lived experience and practical implications of following Jesus. 

"In his view, the essence of Christianity lies in imitating Jesus's actions, embodying his message of love, justice, and compassion, and addressing the pressing social issues of the day."

Saturday, June 06, 2020

An engineering look at p-zombies

"A p-zombie would be indistinguishable from a normal human being but lack conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain. The thought experiment sometimes takes the form of imagining a zombie world, indistinguishable from our world, but lacking first person experiences in any of the beings of that world."
This is the kind of thing philosophers get up to, but why would anyone think this critique of materialism (or physicalism) could make sense? Here is the modal argument, due to David Chalmers:
"According to Chalmers one can coherently conceive of an entire zombie world, a world physically indistinguishable from this world but entirely lacking conscious experience. The counterpart of every conscious being in our world would be a p-zombie. Since such a world is conceivable, Chalmers claims, it is metaphysically possible, which is all the argument requires. Chalmers states: "Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature." The outline structure of Chalmers' version of the zombie argument is as follows:

1. According to physicalism, all that exists in our world (including consciousness) is physical.

2. Thus, if physicalism is true, a metaphysically possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.

3. In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is metaphysically possible.

4. Therefore, physicalism is false. (The conclusion follows from 2. and 3. by modus tollens.)"
This sounds way too much like the Ontological Argument.

Here's an argument. Consciousness is an evolved faculty, like language. It requires brain resources. It must be doing a job mediating what it is that humans do. If you subtract consciousness while leaving a human active in the world, you will surely have impaired behaviour - as when consciousness is removed by sleep, anaesthesia or injury.

We could model a p-zombie as a Rodney Brooks subsumption architecture: a low-level runtime system (like the hind brain) controlling basic bodily functions - heart rate, breathing, reflexes, digestion - plus a top-level planner or theorem-prover managing the agent's progress through the world to meet its survival and social goals. This is a very conventional architecture in AI which no-one has plausibly claimed to be conscious.

This p-zombie would certainly do the basics. It could answer questions, claim that it 'felt' pain if it were damaged and in general execute any social behaviour which we could classify (in the deep learning sense) or theorise (in the sense of GOFAI).
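As a sketch, the two-layer p-zombie controller just described might look like this - all behaviours and names are invented for illustration, and nothing here is claimed to be Brooks's actual design:

```python
class ReflexLayer:
    """Low-level runtime: reflexes only, no deliberation (the 'hind brain')."""
    def act(self, stimulus):
        if stimulus == "sharp_poke":
            return "withdraw_and_say_ouch"   # pain behaviour, no felt pain
        return "idle"

class PlannerLayer:
    """Top-level planner: pursues survival and social goals symbolically."""
    def act(self, stimulus):
        if stimulus == "question":
            return "answer_question"
        return None   # nothing to plan; defer to the layer below

class Subsumption:
    """Higher layers subsume (pre-empt) lower ones when they have an output."""
    def __init__(self, layers):
        self.layers = layers   # ordered: most deliberative first
    def act(self, stimulus):
        for layer in self.layers:
            out = layer.act(stimulus)
            if out is not None:
                return out
        return "idle"

zombie = Subsumption([PlannerLayer(), ReflexLayer()])
```

Poked, it withdraws and says 'ouch'; questioned, it answers - with no inner experience anywhere in the stack.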

So what's missing? What's necessarily missing?

Maybe that's the core mystery, the hard problem in a nutshell.

---

Here's a final thought. In a Darwinian world of predators and prey, humans and other animals are notoriously prone to see agency everywhere. What Dennett calls Second-Order Intentional Systems Theory:
"A first-order intentional system is one whose behavior is predictable by attributing (simple) beliefs and desires to it. A second-order intentional system is predictable only if it is [itself] attributed beliefs about beliefs, or beliefs about desires, or desires about beliefs, and so forth." -- Daniel Dennett, "Intentional Systems", 1971.
Aside from some experimental systems utilising clunky modal logic (a blind alley in my view), we don't build AI systems today which understand that their environments are populated by entities which have agency.

In fact I don't think we know how to do that.

And without such a requirement, perhaps it's no surprise that our AI systems lack the capability to model themselves as having agency. And so to a lack of self-consciousness - and p-zombiehood.

If autistic people lack a theory of mind (which I don't believe they do, at least in that straightforwardly simplistic form) then we could say that the p-zombie would present itself as autistic, in the guise of a smoothly-performant Asperger's syndrome.

Tuesday, October 30, 2018

Machine Common Sense (MCS) - DARPA

The DARPA challenge

A past DARPA challenge kickstarted the self-driving car phenomenon. Will this new attempt to equip robots with common sense reasoning and interpersonal skills be as successful?

For some value of 'successful' of course.

DARPA's proposal starts with a short review of the disappointing record on 'common sense'.
"Since the early days of AI, researchers have pursued a variety of efforts to develop logic-based approaches to common sense knowledge and reasoning, as well as means of extracting and collecting commonsense knowledge from the Web.

While these efforts have produced useful results, their brittleness and lack of semantic understanding have prevented the creation of a widely applicable common sense capability."
DARPA breaks its new challenge into two substreams. The first bases itself on human infant cognitive development, as theorised by developmental psychology.
"The first approach will create computational models that learn from experience and mimic the core domains of cognition as defined by developmental psychology. This includes the domains of objects (intuitive physics), places (spatial navigation), and agents (intentional actors). Researchers will seek to develop systems that think and learn as humans do in the very early stages of development, leveraging advances in the field of cognitive development to provide empirical and theoretical guidance.

“During the first few years of life, humans acquire the fundamental building blocks of intelligence and common sense,” said Gunning. “Developmental psychologists have found ways to map these cognitive capabilities across the developmental stages of a human’s early life, providing researchers with a set of targets and a strategy to mimic for developing a new foundation for machine common sense.”

To assess the progress and success of the first strategy’s computational models, researchers will explore developmental psychology research studies and literature to create evaluation criteria. DARPA will use the resulting set of cognitive development milestones to determine how well the models are able to learn against three levels of performance – prediction/expectation, experience learning, and problem solving."
The second stream is more bookish, mining the web.
"The second MCS approach will construct a common sense knowledge repository capable of answering natural language and image-based queries about common sense phenomena by reading from the Web.

DARPA expects that researchers will use a combination of manual construction, information extraction, machine learning, crowdsourcing techniques, and other computational approaches to develop the repository.

The resulting capability will be measured against the Allen Institute for Artificial Intelligence (AI2) Common Sense benchmark tests, which are constructed through an extensive crowdsourcing process to represent and measure the broad commonsense knowledge of an average adult."
I am impressed by neither approach.

It's too tempting to theorise a situated agent in terms of ungrounded abstractions, such as the belief–desire–intention software model. In this way we think of the frog, sat in its puddle, as busily creating in its brain a set of beliefs about the state of its environment, a set of desires such as 'not being hungry' and an intention such as 'catching that fly with a flick of its tongue'.

While we may describe the frog in such unecological folk-psychological terms - as is the wont of developmental psychologists - Maturana et al. pointed out that this is not what the frog does.
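For what it's worth, the folk-psychological picture being criticised can be written down in a few lines - a toy belief–desire–intention cycle, with every predicate invented for the sketch:

```python
# A toy BDI (belief-desire-intention) cycle for the frog, in exactly the
# ungrounded folk-psychological style the text is sceptical of.
def bdi_step(beliefs, desires):
    """One deliberation cycle: derive an intention from beliefs and desires."""
    if "not_being_hungry" in desires and beliefs.get("fly_in_range"):
        return "flick_tongue_at_fly"
    return "sit_in_puddle"

intention = bdi_step({"fly_in_range": True}, {"not_being_hungry"})
```

The ease of writing this down is precisely the trap: it describes our theory of the frog, not the frog's own sensorimotor coupling.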

---

At this point it is tempting to bring in Daniel Dennett's ideas about first- and second-order intentionality, but I distrust even this. Treating another animal as an agent (rather than as an instrumental object in the environment), which is the hallmark of second-order intentionality, seems extraordinarily rare in the animal kingdom. The Wikipedia article "Theory of mind in animals" suggests there is partial - but not compelling - evidence for it only in the case of some social animals. But the concept remains ill-defined.

We use a reified intentional language (note: language) with modal operators such as believes and desires to describe those objects we classify as agents. By virtue of 'possessing' their own beliefs, desires and intentions (= plans) they are taken to exhibit autonomy. We don't normally use such language to describe bricks and cauliflowers. We do use such language to describe spiders, mice and roombas - first-order intentional systems.

Some systems (such as people) seem able themselves to characterise objects in their environment (possibly including themselves) in intentional terms. They have the capability, for example, to look at us and see us as agents. We call these systems second order intentional systems. I would include cats and dogs and children here, noting how they manipulate us (spiders, mice and roombas don't seem to notice us as agents).

So how do they do that? That's the interesting architecture question DARPA is asking, and nobody knows.

I expect that a biological second order intentional system possesses neural circuitry which encodes a representation of intentional agents in its environment as persistent objects, together with links to situations (also neurally-encoded) representing beliefs, desires and intentions relativised to that agent. Think of the intuitions underlying Situation Semantics, implemented as computationally-friendly semantic nets: I wrote about it here.
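A minimal data-structure sketch of that idea - persistent agent records whose belief slots can themselves be about other agents' beliefs. Everything here is illustrative, not a proposal for actual neural encoding:

```python
# Agent records with belief slots; a tuple key like ("human", "believes")
# marks a belief *about another agent's beliefs* - i.e. second-order.
agents = {
    "cat": {
        "beliefs": {
            "human_is_agent": True,
            ("human", "believes"): {"cat_was_fed": False},  # nested belief
        },
    },
    "roomba": {
        "beliefs": {"wall_ahead": True},  # first-order only
    },
}

def is_second_order(name):
    """True if some belief is itself about another agent's beliefs."""
    return any(isinstance(key, tuple) and key[1] == "believes"
               for key in agents[name]["beliefs"])
```

The cat, on this toy model, qualifies as a second-order intentional system; the roomba does not - which matches the manipulation asymmetry noted above.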

I used to think that the only way forward was to design the best higher-order intentionality architecture possible, embody it in a robot child and expose it to the same experience-history as that involved in human infant socialisation. But I notice that DeepMind and similar have made huge leaps forward in simulated environments which decouple cognitive challenges from the messy (and slow) domains of robotic engineering.

So I imagine that's where the smart money will be.

Sunday, November 26, 2017

"Surfing Uncertainty" on Autism (and Schizophrenia)

Scott Alexander got pretty excited back in September about Andy Clark's "Surfing Uncertainty" (in this post) - but that's because he's a psychiatrist and Clark's model has some insightful things to say about both Autism (Asperger's Syndrome) and Schizophrenia.


I read the book and found Clark's approach (that biological agents, aka 'animals', cognitively function through a combination of top-down model-based prediction and bottom-up sensor-based verification) highly plausible, though not that new. Still he pushes the model quite a way - the details are instructive.

My main problem with the text is that the proposed model is really an architectural/engineering one, yet Clark is a philosopher. He writes in that over-abstract, bloated and padded style which people like Daniel Dennett have made so famous.

Somewhere in there, good ideas are trying to extricate themselves from the gloop.

---

Anyway, here's how Scott Alexander, channelling Andy Clark, talks about Autism.
"Various research in the PP [Predictive Processing] tradition has coalesced around the idea of autism as an unusually high reliance on bottom-up rather than top-down information, leading to “weak central coherence” and constant surprisal as the sensory data fails to fall within pathologically narrow confidence intervals.

Autistic people classically can’t stand tags on clothing – they find them too scratchy and annoying. Remember the example from Part III about how you successfully predicted away the feeling of the shirt on your back, and so manage never to think about it when you’re trying to concentrate on more important things? Autistic people can’t do that as well.

Even though they have a layer in their brain predicting “will continue to feel shirt”, the prediction is too precise; it predicts that next second, the shirt will produce exactly the same pattern of sensations it does now. But realistically as you move around or catch passing breezes the shirt will change ever so slightly – at which point autistic people’s brains will send alarms all the way up to consciousness, and they’ll perceive it as “my shirt is annoying”.

Or consider the classic autistic demand for routine, and misery as soon as the routine is disrupted. Because their brains can only make very precise predictions, the slightest disruption to routine registers as strong surprisal, strong prediction failure, and “oh no, all of my models have failed, nothing is true, anything is possible!”

Compare to a neurotypical person in the same situation, who would just relax their confidence intervals a little bit and say “Okay, this is basically 99% like a normal day, whatever”. It would take something genuinely unpredictable – like being thrown on an unexplored continent or something – to give these people the same feeling of surprise and unpredictability."
As an AQ high-scorer, I relate to this. In many a social situation I'm walking on eggshells, never quite knowing how people will respond. I'll say something which seems amusing within my own private model of the subject of discourse, only to be met with incomprehension - or worse, consternation - as my poor unconscious predictive model of other people's likely response fails again.
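The 'pathologically narrow confidence intervals' story comes down to one line of arithmetic: surprisal scales with the precision attached to a prediction, so the same tiny sensory deviation can be negligible or overwhelming. A toy illustration, with invented numbers:

```python
def surprisal(predicted, observed, precision):
    """Precision-weighted squared prediction error (a toy stand-in for PP)."""
    return precision * (observed - predicted) ** 2

shirt_before, shirt_after = 1.00, 1.02  # the shirt shifts ever so slightly

typical = surprisal(shirt_before, shirt_after, precision=1.0)
narrow = surprisal(shirt_before, shirt_after, precision=1000.0)
# With the narrow (high-precision) prior, the same deviation produces an
# error signal a thousand times larger - enough to propagate up to
# consciousness as "my shirt is annoying".
```

Relaxing the confidence interval is just lowering `precision` - which, on this account, is what the neurotypical brain does automatically.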

---

The very next section (11) summarises the story on Schizophrenia:
"Schizophrenia. Converging lines of research suggest this also involves weak priors, apparently at a different level to autism and with different results after various compensatory mechanisms have had their chance to kick in.

One especially interesting study asked neurotypicals and schizophrenics to follow a moving light, much like the airplane video in Part III above. When the light moved in a predictable pattern, the neurotypicals were much better at tracking it; when it was a deliberately perverse video specifically designed to frustrate expectations, the schizophrenics actually did better.

This suggests that neurotypicals were guided by correct top-down priors about where the light would be going; schizophrenics had very weak priors and so weren’t really guided very well, but also didn’t screw up when the light did something unpredictable. ...

The exact route from this sort of thing to schizophrenia is really complicated, and anyone interested should check out Section 2.12 and the whole of Chapter 7 from the book. But the basic story is that it creates waves of anomalous prediction error and surprisal, leading to the so-called “delusions of significance” where schizophrenics believe that eg the fact that someone is wearing a hat is some sort of incredibly important cosmic message.

Schizophrenics’ brains try to produce hypotheses that explain all of these prediction errors and reduce surprise – which is impossible, because the prediction errors are random. This results in incredibly weird hypotheses, and eventually in schizophrenic brains being willing to ignore the bottom-up stream entirely – hence hallucinations.

All this is treated with antipsychotics, which antagonize dopamine, which – remember – represents confidence level. So basically the medication is telling the brain “YOU CAN IGNORE ALL THIS PREDICTION ERROR, EVERYTHING YOU’RE PERCEIVING IS TOTALLY GARBAGE SPURIOUS DATA” – which turns out to be exactly the message it needs to hear.

An interesting corollary of all this – because all of schizophrenics’ predictive models are so screwy, they lose the ability to use the “adjust away the consequences of your own actions” hack discussed in Part 5 of this section.

That means their own actions don’t get predicted out, and seem like the actions of a foreign agent. This is why they get so-called “delusions of agency”, like “the government beamed that thought into my brain” or “aliens caused my arm to move just now”. And in case you were wondering – yes, schizophrenics can tickle themselves."
My overall take-home message from this book was that tabula rasa, blank slate paradigms of so much contemporary AI may suffice for crafting smart and powerful classificatory tools, but they won't hack it when we try to build socially-competent agents. In facial recognition and playing Go we're already superhuman; chatbots not so much.

Friday, November 24, 2017

Super-high-level programming languages

Not many posts recently as Alex is visiting. We were discussing programming languages, which divide between those focused on performance (C++, Go) and those focused on the problem domain (Java, Scala).

I floated Prolog past him, but with his engineering head on he wasn't that interested. The tutorial programs were 'hard to understand' and in any case 'could be coded much more efficiently in a procedural language such as Java'.

OK, I gave up on that but it did make me think: what would be a language at a much higher level of abstraction even than Prolog?

Richard Montague

There's a way of thinking about this as a logician, where you focus on different kinds of semantic models: those of higher-order logics, modal logics, type systems... but I don't really want to go there. Richard Montague's 'throw the kitchen sink at it' logic for natural language is a kind of reductio ad absurdum for that kind of approach. You rapidly lose any computational capability.

Our intuitive idea of the inadequacy of current programming language expressivity derives from a comparison with natural language. What an advance it would be (we think) if we could engage with a computer system the way we today talk to the (human) analyst.

English as a super-high-level programming language?

For me the extra dimensions of natural language include the management of agency (hence speech acts) and context - the presumption of a detailed and extensive shared culture to make sense of implicit referents.

In the end it depends on what we think we're programming. If it's the behaviour of a non-intentional black box (every business system to-date) then a more-or-less souped-up predicate calculus specification language (which is adequately executable) will be optimal: a Prolog-variant.

If our target system is an intentional system, indeed a second-order intentional system - one which treats other systems such as ourselves as intentional systems - then the 'programming language' to engineer such systems will incorporate those additional capabilities we find in natural language.
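The 'souped-up predicate calculus' for the non-intentional case is easy to caricature: a handful of Horn clauses plus a forward-chaining loop. A minimal sketch in Python rather than Prolog, with a made-up business domain:

```python
# Horn clauses as (body, head) pairs: if every fact in `body` holds,
# conclude `head`. Domain and predicates are invented for illustration.
rules = [
    ({"order_placed", "in_stock"}, "ship_order"),
    ({"ship_order"}, "notify_customer"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:   # body is a subset of facts
                facts.add(head)
                changed = True
    return facts
```

Nothing in this machinery models agency or context - which is exactly the gap between this kind of specification language and the second-order intentional case.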

Today's AI engineering community will hiss at this point: we don't program any more - our systems learn!

Don't worry, that pendulum will be swinging back soon enough. I expect legislation in due course that new AI systems will have to attend school.

Monday, March 13, 2017

From Bacteria to Bach and Back: The Evolution of Minds


Daniel Dennett has been a guru of AI research since... well, certainly since I started doing it back in the 1980s. His "Intentional Stance" continues to be greatly clarifying.

Here's an extract from an early part of his latest book (above) contrasting 'Good Old-Fashioned AI' with how the human brain does it.
"... a benevolent scheduler doles out machine cycles to whatever process has highest priority, and although there may be a bidding mechanism of one sort or another that determines which processes get priority, this is an orderly queue, not a struggle for life.

"(It is a dim appreciation of this fact that perhaps underlies the common folk intuition that a computer could never "care" about anything. Not because it is made out of the wrong materials - why should silicon be any less suitable a substrate for caring than organic molecules? - but because its internal economy has no built-in risks or opportunities so its parts don't have to care.)

"The top-down hierarchical architecture of computer software, supported by an operating system replete with schedulers and other traffic cops, nicely implements Marx's dictum: "To each according to his needs, from each according to his talents." No adder circuit or flip-flop needs to "worry" about where it's going to get the electric power it needs to execute its duty, and there is no room for "advancement."

"A neuron, in contrast, is always hungry for work; it reaches out exploratory dendritic branches, seeking to network with its neighbors in ways that will be beneficial to it. Neurons are thus capable of self-organizing into teams that can take over important information-handling work, ready and willing to be given new tasks which they master with a modicum of trial-and-error rehearsal."
I think this is a good insight. Elsewhere he talks about the top-down, frozen and brittle paradigm of traditional AI (GOFAI). Sure, there are parameters and learning algorithms, but the basic framework is architecturally fixed within those limits the designer has anticipated.

Biological intelligence is far more flexible than that due to the proactive, adaptationist and self-organising properties of neural nets. Because neural-net learning is sub-symbolic ('weights'), the structure of its converged fixed-point architecture is emergent rather than preordained.

Hence too the power of the new artificial neural net paradigm of AI.

---

I'll have more to say about Dennett's new theory when I've finished the book. At times Dennett comes across as an exceedingly widely-read magpie, taking an inordinate amount of time to get to the point. He seems to feel he should go at the speed of his slowest SJW reader.

I sometimes feel that a Reader's Digest condensed edition (c. 10%) should ship with the volume; I suspect there would be little compression loss.

Memes, the units of cultural evolution, are his big idea, and I'm reminded of Joseph Henrich's "The Secret of Our Success", although that author hasn't been cited yet.

Update: March 21st 2017.

I have now finished this book and it's a let down: no big reveal, no advance over previous arguments that consciousness is the (still mysterious) result of sub-personal processes. A bubbly, jolly, agreeable-uncle writing style can't hide sloppy over-use of metaphor ("memes as apps"), the minimal payback from his embrace of 'meme theory', and his inability to explain why you and I can suffer while a brick can't.

If you've read Dennett before, this book really doesn't add any additional value.