
Saturday, June 06, 2020

An engineering look at p-zombies

"A p-zombie would be indistinguishable from a normal human being but lack conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain. The thought experiment sometimes takes the form of imagining a zombie world, indistinguishable from our world, but lacking first person experiences in any of the beings of that world."
This is the kind of thing philosophers get up to, but why would anyone think this critique of materialism (or physicalism) could make sense? Here is the modal argument, due to David Chalmers:
"According to Chalmers one can coherently conceive of an entire zombie world, a world physically indistinguishable from this world but entirely lacking conscious experience. The counterpart of every conscious being in our world would be a p-zombie. Since such a world is conceivable, Chalmers claims, it is metaphysically possible, which is all the argument requires. Chalmers states: "Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature." The outline structure of Chalmers' version of the zombie argument is as follows;

1. According to physicalism, all that exists in our world (including consciousness) is physical.

2. Thus, if physicalism is true, a metaphysically possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.

3. In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is metaphysically possible.

4. Therefore, physicalism is false. (The conclusion follows from 2. and 3. by modus tollens.)"
This sounds way too much like the Ontological Argument.

Here's an argument. Consciousness is an evolved faculty, like language. It requires brain resources. It must be doing a job mediating what it is that humans do. If you subtract consciousness while leaving a human active in the world, you will surely get impaired behaviour - as when consciousness is removed by sleep, anaesthesia or injury.

We could model a p-zombie as a Rodney Brooks subsumption architecture: a low-level runtime system (like the hindbrain) controlling basic bodily functions - heart rate, breathing, reflexes, digestion - plus a top-level planner or theorem-prover managing the agent's progress through the world to meet its survival and social goals. This is a very conventional architecture in AI which no-one has plausibly claimed to be conscious.
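Just to make the idea concrete, here is a toy sketch of such a two-layer agent. The percepts, rules and responses are all invented for illustration - this is nobody's actual architecture, least of all Brooks's.

```python
# Toy two-layer "p-zombie": a reflex layer subsumes a deliberative planner.
# All percept fields, rules and responses are made up for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Percept:
    damage: bool = False            # sharp object detected
    asked_about_pain: bool = False
    hungry: bool = False

def reflex_layer(p: Percept) -> Optional[str]:
    """Low-level runtime (hindbrain analogue): fast, unconditional responses."""
    if p.damage:
        return "withdraw limb and emit 'ouch!'"     # a pain *report*, no felt pain
    return None

def planner_layer(p: Percept) -> str:
    """Top-level planner: pursues survival and social goals."""
    if p.asked_about_pain:
        return "reply: 'yes, that really hurt'"      # socially appropriate claim
    if p.hungry:
        return "plan: locate food, then eat"
    return "carry on with current plan"

def act(p: Percept) -> str:
    # The lower layer pre-empts (subsumes) the planner whenever it fires.
    return reflex_layer(p) or planner_layer(p)

print(act(Percept(damage=True)))            # withdraw limb and emit 'ouch!'
print(act(Percept(asked_about_pain=True)))  # reply: 'yes, that really hurt'
```

Note that the pain 'report' is just another rule: nothing in the architecture requires anything to be felt.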

This p-zombie would certainly do the basics. It could answer questions, claim that it 'felt' pain if it were damaged and in general execute any social behaviour which we could classify (in the deep learning sense) or theorise (in the sense of GOFAI).

So what's missing? What's necessarily missing?

Maybe that's the core mystery, the hard problem in a nutshell.

---

Here's a final thought. In a Darwinian world of predators and prey, humans and other animals are notoriously prone to see agency everywhere - they behave as what Dennett calls second-order intentional systems:
"A first-order intentional system is one whose behavior is predictable by attributing (simple) beliefs and desires to it. A second-order intentional system is predictable only if it is [itself] attributed beliefs about beliefs, or beliefs about desires, or desires about beliefs, and so forth." -- Daniel Dennett, Intentional Systems Theory. 1971.
Aside from some experimental systems utilising clunky modal logic (a blind alley in my view), we don't build AI systems today which understand that their environments are populated by entities which have agency.

In fact I don't think we know how to do that.

And without such a requirement, perhaps it's no surprise that our AI systems lack the capability to model themselves as having agency. And so to a lack of self-consciousness - and p-zombiehood.

If autistic people lack a theory of mind (which I don't believe they do, at least in that straightforwardly simplistic form), then we could say that the p-zombie would present itself as autistic, in the guise of a smoothly-performant Asperger's syndrome.

Tuesday, October 30, 2018

Machine Common Sense (MCS) - DARPA

The DARPA challenge

A past DARPA challenge kickstarted the self-driving car phenomenon. Will this new attempt to equip robots with common sense reasoning and interpersonal skills be as successful?

For some value of 'successful' of course.

DARPA's proposal starts with a short review of the disappointing record on 'common sense'.
"Since the early days of AI, researchers have pursued a variety of efforts to develop logic-based approaches to common sense knowledge and reasoning, as well as means of extracting and collecting commonsense knowledge from the Web.

While these efforts have produced useful results, their brittleness and lack of semantic understanding have prevented the creation of a widely applicable common sense capability."
DARPA breaks its new challenge into two substreams. The first bases itself on human infant cognitive development, as theorised by developmental psychology.
"The first approach will create computational models that learn from experience and mimic the core domains of cognition as defined by developmental psychology. This includes the domains of objects (intuitive physics), places (spatial navigation), and agents (intentional actors). Researchers will seek to develop systems that think and learn as humans do in the very early stages of development, leveraging advances in the field of cognitive development to provide empirical and theoretical guidance.

“During the first few years of life, humans acquire the fundamental building blocks of intelligence and common sense,” said Gunning. “Developmental psychologists have found ways to map these cognitive capabilities across the developmental stages of a human’s early life, providing researchers with a set of targets and a strategy to mimic for developing a new foundation for machine common sense.”

To assess the progress and success of the first strategy’s computational models, researchers will explore developmental psychology research studies and literature to create evaluation criteria. DARPA will use the resulting set of cognitive development milestones to determine how well the models are able to learn against three levels of performance – prediction/expectation, experience learning, and problem solving."
The second stream is more bookish, mining the web.
"The second MCS approach will construct a common sense knowledge repository capable of answering natural language and image-based queries about common sense phenomena by reading from the Web.

DARPA expects that researchers will use a combination of manual construction, information extraction, machine learning, crowdsourcing techniques, and other computational approaches to develop the repository.

The resulting capability will be measured against the Allen Institute for Artificial Intelligence (AI2) Common Sense benchmark tests, which are constructed through an extensive crowdsourcing process to represent and measure the broad commonsense knowledge of an average adult."
I am impressed by neither approach.

It's too tempting to theorise a situated agent in terms of ungrounded abstractions, such as the belief–desire–intention software model. In this way we think of the frog, sat in its puddle, as busily creating in its brain a set of beliefs about the state of its environment, a set of desires such as 'not being hungry' and an intention such as 'catching that fly with a flick of its tongue'.
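Written down, that folk-psychological description is just the standard BDI pattern - something like the toy sketch below, in which every predicate and plan is invented purely for illustration.

```python
# The frog as a textbook BDI agent - exactly the kind of ungrounded
# abstraction being criticised here. All predicates are illustrative.

frog = {
    "beliefs":    {"fly_at": (0.3, 1.2), "in_puddle": True},
    "desires":    ["not_hungry", "stay_wet"],
    "intentions": [],
}

def deliberate(agent):
    """Adopt an intention (a plan) that serves a current desire, given beliefs."""
    if "not_hungry" in agent["desires"] and "fly_at" in agent["beliefs"]:
        agent["intentions"].append(("flick_tongue_at", agent["beliefs"]["fly_at"]))

deliberate(frog)
print(frog["intentions"])   # [('flick_tongue_at', (0.3, 1.2))]
```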

While we may describe the frog in such unecological, folk-psychological terms - as is the wont of developmental psychologists - Maturana et al. pointed out that this is not what the frog does.

---

At this point it is tempting to bring in Daniel Dennett's ideas about first and second-order intentionality, but I distrust even this. Treating another animal as an agent (rather than as an instrumental object in the environment), which is the hallmark of second-order intentionality, seems extraordinarily rare in the animal kingdom. The Wikipedia article "Theory of mind in animals" suggests there is partial - but not compelling - evidence for it only in the case of some social animals. But the concept remains ill-defined.

We use a reified intentional language (note: language) with modal operators such as believes and desires to describe those objects we classify as agents. By virtue of 'possessing' their own beliefs, desires and intentions (= plans), they are taken to exhibit autonomy. We don't normally use such language to describe bricks and cauliflowers. We do use such language to describe spiders, mice and roombas - first-order intentional systems.

Some systems (such as people) seem able to characterise objects in their environment (possibly including themselves) in intentional terms. They have the capability, for example, to look at us and see us as agents. We call these systems second-order intentional systems. I would include cats and dogs and children here, noting how they manipulate us (spiders, mice and roombas don't seem to notice us as agents).

So how do they do that? That's the interesting architecture question DARPA is asking, and nobody knows.

I expect that a biological second-order intentional system possesses neural circuitry which encodes a representation of intentional agents in its environment as persistent objects, together with links to situations (also neurally encoded) representing beliefs, desires and intentions relativised to that agent. Think of the intuitions underlying Situation Semantics, implemented as computationally-friendly semantic nets: I wrote about it here.
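Very crudely, and purely as an illustration of the data structure rather than any claim about real neural encoding, the idea might be sketched like this - agents as persistent nodes, each linked to attitude-content situations relativised to that agent:

```python
# Toy "semantic net" of agents and relativised intentional situations.
# The classes, labels and contents are illustrative only.

class Agent:
    """A persistent node standing for an intentional agent."""
    def __init__(self, name):
        self.name = name
        self.situations = []     # (attitude, content) pairs relativised to this agent

    def attribute(self, attitude, content):
        """Link this agent to a belief/desire/intention situation."""
        self.situations.append((attitude, content))

me  = Agent("me")
you = Agent("you")

# First-order: I model the world.
me.attribute("believes", "the fly is within tongue range")

# Second-order: I model *you* as an agent with attitudes of your own.
me.attribute("believes", (you.name, "desires", "my sandwich"))
me.attribute("intends", "guard the sandwich")

for attitude, content in me.situations:
    print(attitude, "->", content)
```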

I used to think that the only way forward was to design the best higher-order intentionality architecture possible, embody it in a robot child and expose it to the same experience-history as that involved in human infant socialisation. But I notice that DeepMind and similar have made huge leaps forward in simulated environments which decouple cognitive challenges from the messy (and slow) domains of robotic engineering.

So I imagine that's where the smart money will be.

Friday, November 24, 2017

Super-high-level programming languages

Not many posts recently as Alex is visiting. We were discussing programming languages, which divide between those focused on performance (C++, Go) and those focused on the problem domain (Java, Scala).

I floated Prolog past him, but with his engineering head on he wasn't that interested. The tutorial programs were 'hard to understand' and in any case 'could be coded much more efficiently in a procedural language such as Java'.

OK, I gave up on that but it did make me think: what would be a language at a much higher level of abstraction even than Prolog?

Richard Montague

There's a way of thinking about this as a logician, where you focus on different kinds of semantic models: those of higher-order logics, modal logics, type systems ... but I don't really want to go there. Richard Montague's 'throw the kitchen sink at it' logic for natural language is a kind of reductio ad absurdum for that kind of approach. You rapidly lose any computational capability.

Our intuitive idea of the inadequacy of current programming language expressivity derives from a comparison with natural language. What an advance it would be (we think) if we could engage with a computer system the way we talk today to the (human) analyst.

English as a super-high-level programming language?

For me the extra dimensions of natural language include the management of agency (hence speech acts) and context - the presumption of a detailed and extensive shared culture to make sense of implicit referents.

In the end it depends on what we think we're programming. If it's the behaviour of a non-intentional black box (every business system to date), then a more-or-less souped-up predicate calculus specification language (which is adequately executable) will be optimal: a Prolog-variant.

If our target system is an intentional system, indeed a second-order intentional system - one which treats other systems such as ourselves as intentional systems - then the 'programming language' to engineer such systems will incorporate those additional capabilities we find in natural language.
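As a toy illustration of the difference, here is what an 'instruction' to such a system might look like - a speech act carrying presupposed context rather than a procedure call. All the field names and content are pure invention on my part.

```python
# A toy instruction to a second-order intentional system, expressed as a
# speech act rather than a procedure call. Field names are invented.

from dataclasses import dataclass, field

@dataclass
class SpeechAct:
    speaker: str
    addressee: str
    force: str                       # e.g. "request", "inform", "promise"
    content: str
    presupposed_context: dict = field(default_factory=dict)

# Rather than calling book_meeting(...), we issue a request and rely on the
# system's model of us, and of the shared context, to resolve the referents.
act = SpeechAct(
    speaker="me",
    addressee="assistant",
    force="request",
    content="set up the usual Monday review, but later this week",
    presupposed_context={
        "the usual Monday review": "weekly project meeting, same attendees",
        "later this week": "interpret relative to the speaker's calendar",
    },
)

print(f"{act.force} from {act.speaker}: {act.content}")
```

The point is that nothing here executes by itself: the addressee has to model the speaker as an agent with goals, and draw on shared culture, to turn the request into behaviour.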

Today's AI engineering community will hiss at this point: we don't program any more - our systems learn!

Don't worry, that pendulum will be swinging back soon enough. I expect legislation in due course that new AI systems will have to attend school.