Tuesday, October 30, 2018

Machine Common Sense (MCS) - DARPA

The DARPA challenge

A past DARPA challenge kickstarted the self-driving car phenomenon. Will this new attempt to equip robots with common sense reasoning and interpersonal skills be as successful?

For some value of 'successful' of course.

DARPA's proposal starts with a short review of the disappointing record on 'common sense'.
"Since the early days of AI, researchers have pursued a variety of efforts to develop logic-based approaches to common sense knowledge and reasoning, as well as means of extracting and collecting commonsense knowledge from the Web.

While these efforts have produced useful results, their brittleness and lack of semantic understanding have prevented the creation of a widely applicable common sense capability."
DARPA breaks its new challenge into two substreams. The first bases itself on human infant cognitive development, as theorised by developmental psychology.
"The first approach will create computational models that learn from experience and mimic the core domains of cognition as defined by developmental psychology. This includes the domains of objects (intuitive physics), places (spatial navigation), and agents (intentional actors). Researchers will seek to develop systems that think and learn as humans do in the very early stages of development, leveraging advances in the field of cognitive development to provide empirical and theoretical guidance.

“During the first few years of life, humans acquire the fundamental building blocks of intelligence and common sense,” said Gunning. “Developmental psychologists have found ways to map these cognitive capabilities across the developmental stages of a human’s early life, providing researchers with a set of targets and a strategy to mimic for developing a new foundation for machine common sense.”

To assess the progress and success of the first strategy’s computational models, researchers will explore developmental psychology research studies and literature to create evaluation criteria. DARPA will use the resulting set of cognitive development milestones to determine how well the models are able to learn against three levels of performance – prediction/expectation, experience learning, and problem solving."
The second stream is more bookish, mining the web.
"The second MCS approach will construct a common sense knowledge repository capable of answering natural language and image-based queries about common sense phenomena by reading from the Web.

DARPA expects that researchers will use a combination of manual construction, information extraction, machine learning, crowdsourcing techniques, and other computational approaches to develop the repository.

The resulting capability will be measured against the Allen Institute for Artificial Intelligence (AI2) Common Sense benchmark tests, which are constructed through an extensive crowdsourcing process to represent and measure the broad commonsense knowledge of an average adult."
I am impressed by neither approach.

It's too tempting to theorise a situated agent in terms of ungrounded abstractions, such as the belief–desire–intention software model. In this way we think of the frog, sat in its puddle, as busily creating in its brain a set of beliefs about the state of its environment, a set of desires such as 'not being hungry' and an intention such as 'catching that fly with a flick of its tongue'.
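
For concreteness, here is a minimal sketch of the sort of loop that software model implies, using the frog example. Everything in it is my own illustration - the class and method names are invented for this post, not drawn from any actual BDI framework.

```python
# A toy belief-desire-intention (BDI) loop for the frog example.
# Purely illustrative: names and structure are invented for this sketch.

class FrogAgent:
    def __init__(self):
        self.beliefs = {}              # e.g. {"fly_position": (x, y)}
        self.desires = {"not_hungry"}
        self.intention = None          # the currently adopted plan

    def perceive(self, percepts):
        # Belief revision: fold new percepts into the belief store.
        self.beliefs.update(percepts)

    def deliberate(self):
        # Commit to a plan the agent believes will satisfy a desire.
        if "not_hungry" in self.desires and "fly_position" in self.beliefs:
            self.intention = ("flick_tongue_at", self.beliefs["fly_position"])

    def act(self):
        # Execute the adopted intention, if any.
        if self.intention:
            action, target = self.intention
            print(f"{action} -> {target}")
            self.intention = None


frog = FrogAgent()
frog.perceive({"fly_position": (0.3, 1.2)})
frog.deliberate()
frog.act()   # flick_tongue_at -> (0.3, 1.2)
```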

While we may describe the frog in such unecological folk-psychological terms - as is the wont of developmental psychologists - Maturana et al. pointed out that this is not what the frog does.

---

At this point it is tempting to bring in Daniel Dennett's ideas about first- and second-order intentionality, but I distrust even this. Treating another animal as an agent (rather than as an instrumental object in the environment), which is the hallmark of second-order intentionality, seems extraordinarily rare in the animal kingdom. The Wikipedia article, "Theory of mind in animals", suggests there is partial - but not compelling - evidence for it only in the case of some social animals. But the concept remains ill-defined.

We use a reified intentional language (note: language) with modal operators such as believes and desires to describe those objects we classify as agents. By virtue of 'possessing' their own beliefs, desires and intentions (= plans), they are taken to exhibit autonomy. We don't normally use such language to describe bricks and cauliflowers. We do use such language to describe spiders, mice and roombas - first-order intentional systems.

Some systems (such as people) seem able themselves to characterise objects in their environment (possibly including themselves) in intentional terms. They have the capability, for example, to look at us and see us as agents. We call these systems second-order intentional systems. I would include cats and dogs and children here, noting how they manipulate us (spiders, mice and roombas don't seem to notice us as agents).
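
In the usual doxastic notation (my gloss - Dennett states the hierarchy informally), writing B for 'believes that' and D for 'desires that', the distinction is roughly:

```latex
% First-order intentionality: an attitude about the world.
B_{\mathrm{spider}}(\mathrm{PreyInWeb})

% Second-order intentionality: an attitude about another agent's attitude.
B_{\mathrm{dog}}\bigl(D_{\mathrm{owner}}(\mathrm{DogStaysOffSofa})\bigr)
```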

So how do they do that? That's the interesting architecture question DARPA is asking, and nobody knows.

I expect that a biological second-order intentional system possesses neural circuitry which encodes a representation of intentional agents in its environment as persistent objects, together with links to situations (also neurally-encoded) representing beliefs, desires and intentions relativised to that agent. Think of the intuitions underlying Situation Semantics, implemented as computationally-friendly semantic nets: I wrote about it here.
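
Here is a crude sketch of that data structure, purely to fix ideas - the names are mine and nothing about it claims neural plausibility. Agents are persistent nodes, and each attitude is a labelled link from an agent node to a situation, relativised to the agent holding it:

```python
# Illustrative toy semantic net: intentional agents are persistent nodes;
# beliefs, desires and intentions are labelled edges to situations,
# relativised to the agent that holds them.
from collections import defaultdict

class Situation:
    """A small, partial description of how (part of) the world might be."""
    def __init__(self, facts):
        self.facts = frozenset(facts)          # e.g. {("fly", "on", "lily-pad")}

class AgentNode:
    def __init__(self, name):
        self.name = name
        self.attitudes = defaultdict(set)      # "believes" / "desires" / "intends"

    def attribute(self, attitude, situation):
        self.attitudes[attitude].add(situation)

# The same situation, held differently by two agents.
fly_there = Situation({("fly", "on", "lily-pad")})
frog = AgentNode("frog")
observer = AgentNode("observer")
frog.attribute("desires", fly_there)           # the frog wants the fly there
observer.attribute("believes", fly_there)      # the observer merely believes it

# Second-order attribution: the observer models the frog as itself
# holding an attitude.
frogs_desire = Situation({("frog", "desires", "fly-on-lily-pad")})
observer.attribute("believes", frogs_desire)

print({att: len(sits) for att, sits in observer.attitudes.items()})  # {'believes': 2}
```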

I used to think that the only way forward was to design the best higher-order intentionality architecture possible, embody it in a robot child and expose it to the same experience-history as that involved in human infant socialisation. But I notice that DeepMind and similar have made huge leaps forward in simulated environments which decouple cognitive challenges from the messy (and slow) domains of robotic engineering.
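
A sketch of what that decoupling buys, under my own assumptions about the interfaces (nothing here is DeepMind's actual API): the cognitive part is written against an abstract observe/act interface, so the same agent code can be driven by a cheap simulator now and, in principle, by a robot body later.

```python
# Sketch of decoupling cognition from embodiment. The agent only ever sees
# the abstract Environment interface; SimulatedWorld is a stand-in, and a
# robot driver could implement the same two methods. Illustrative names only.
from abc import ABC, abstractmethod
import random

class Environment(ABC):
    @abstractmethod
    def observe(self) -> dict: ...
    @abstractmethod
    def act(self, action: str) -> None: ...

class SimulatedWorld(Environment):
    def observe(self):
        return {"fly_visible": random.random() < 0.5}
    def act(self, action):
        print(f"sim executes: {action}")

def run_agent(env: Environment, steps: int = 3):
    # The 'cognition' lives here, independent of what env really is.
    for _ in range(steps):
        percept = env.observe()
        env.act("strike" if percept["fly_visible"] else "wait")

run_agent(SimulatedWorld())
```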

So I imagine that's where the smart money will be.

11 comments:

  1. (I am reading a book on Machine Learning which I will need to digest before reviewing, but it proposes a new idea which I had not considered before. So more to come later on that aspect …)

    The emphasis in this blog on second-order intentionality could well be important too, of course. However (and I need to study the mathematical logic associated with second-order intentionality and situation semantics more closely) I mentioned in the referenced 2017 blog the point that we still need to be sure that "consciousness" and "higher-order learning/reasoning" are actually modelled by a Turing-related computation - i.e. that this problem is "just" computer science. Physics does offer more "resources" than bit shuffling.

    The failure of hard AI could be down to its presumed basis in computer science. Referencing DeepMind, for example: the point was made in an article I read recently that AlphaGo used around 5,000 watts while Lee Sedol used around 20 watts (with similar ratios for Deep Blue vs Kasparov, Watson vs its Jeopardy! opponents, etc.). The human brain is just not acting as a high-powered bit shuffler, despite being near-equivalent on these tasks and far superior more broadly.

    So although we might be able to formulate what is wanted in mathematical-logic terms, we might also need to recognise that the (implementational) basis is not bit shuffling à la DeepMind, etc. Part of the theory, then, is to work out what else is involved in the architecture.

    Where is the funding for that?

    Replies
    1. It's a marker of our lack of progress that so much of this debate is speculative. There's an outline theory of intentional description using traditional modal logic, but it's never struck me as sufficiently ecologically grounded, detailed or practically useful. Situation Semantics was conceptually an improvement but got bogged down in foundational issues and, in any event, wasn't computational.

      An adequate theory of interacting second-order intentional agents, factoring in issues such as belief-desire-intention learning, conversing and reasoning, would surely have computational models. But my asserting that would not convince a (Penrose-style) incorrigible doubter!

    2. I have only just seen the WP page on Situation Semantics, which is rather thin - it reads as rather a sad ending. Apparently they tried to use Aczel's set theory for a while but that seems to have petered out too. The claim that it is non-computational is itself interesting though …

      On my list is to study an updated paper on Distributed Knowledge in AI to see where it goes, and how "second order" the ideas might be.

      However I have studied a logic object which I suspect is closely related to all the above theories, but which has no computational model. Yet every model: (a) contains a submodel (a bit like the Aczel theory - hence Situation Semantics); and (b) contains a *subset* which is computational. I don't think that it is seriously possible to develop a higher-order theory without bumping into this entity - maybe that is what happened to Situation Semantics, and may be the case with "second-order intentional agents" too?

      Only time and more (unfunded) work will tell...

    3. The whole Situation Semantics gig (a project of a logician and a philosopher) was driven by worries about the Kripke semantics of the traditional epistemic, doxastic and conative operators. As with an analogous issue for Everettian possible worlds, the conventional modal semantics was felt to be too generous in the propositions known, believed or desired (all of mathematics, to start with).

      Barwise and Perry wanted to restrict the 'possible worlds' to small sets of propositions (situations) which were 'ecologically-realistic' as regards what any agent could realistically perceive, know, believe or want. Their application was natural language semantics and their super-villain was the orthodoxy of Montague semantics.

      They then got lost in a morass of technicalities because they never considered the real constraints on a situated agent. Perhaps they should have worried more about physics and evolutionary biology than non-well-founded set theory...
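
      Roughly, and in my notation rather than Barwise and Perry's: on the Kripke picture belief is truth at every accessible world, so beliefs come out closed under logical consequence - exactly the over-generosity complained about above. The situation-theoretic move replaces total worlds with small partial situations that 'support' facts:

      ```latex
      % Kripke-style clause: belief as truth at all doxastically accessible
      % worlds; closure under logical consequence follows, so a believer of
      % the axioms "believes" all of mathematics.
      w \models B_a\varphi \;\iff\; \forall w'\, \bigl( w R_a w' \Rightarrow w' \models \varphi \bigr)

      % Situation-style clause (schematic, my rendering of the intuition):
      % a partial situation s supports an infon sigma, and belief is a
      % relation to such partial situations, so no closure is forced.
      B_a\sigma \;\iff\; \exists s\, \bigl( \mathrm{Bel}_a(s) \wedge s \Vdash \sigma \bigr)
      ```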

    4. I am still interested in the proof that it was non-computational. Is there a reference to this result somewhere - or an outline explanation as to how it came to be?

    5. I think we're at cross purposes here. The AI community called Situation Semantics 'non-computational' not in the sense that it couldn't be implemented by any Turing complete system, but in the sense that it was not *easy* to see how a computer system could implement the semantics. Computational implementation was not a concern of Barwise and Perry at all.

      In fact I dimly recall that there were attempts to implement situation semantics in Natural Language systems ... but it all got blown away by the statistical techniques which now dominate.

    6. I reckon that the Aczel theory is non-computational. It uses the Axiom of Choice, but even ignoring large sets I see that some of its theorems rely on non-computable constructs.

      Aczel's theory was originally meant for the semantics of CCS and was thus intended to be computationally relevant, but that has petered out too. The type of non-computability involved, though, is a "weak" non-computability, not immediately distinguishable from a computational theory. If this goes unrecognised it can cause confusion because (e.g.) the axiom of choice guarantees that all the required sets, mappings, functions and properties exist. But any attempt to define things using proof rules will hit an exception; later there will be another exception in an obscure area; when that is corrected we get a contradiction, requiring subtle and obscure redefinitions; and off we go again, forever....

      This is the very definition of "petering out into obscurity" - and several modelling projects have suffered this fate!

      From what I can currently tell, Situation Semantics used non-well-founded sets to model its ontology, in which "every situation contains more detailed situations". Barwise found that this atomless, "turtles all the way down" (q.v. in WP) ontology needed non-well-foundedness. Therefore it too will be non-computable in the above weak sense.

      To make matters worse still for Situation Semantics, the eventual conclusion from the Liar Paradox was that the "world" (of the Agent) could not be a "situation".

      I guess this was the point that Agent Theorists packed up and left...

    7. As an agent theorist at the time, I assure you my bags had been packed and I'd departed well before that point!

      There's a reason I use the phrase, 'the *intuitions* underlying situation semantics'.

    8. Now the lesson I take from this is that "intuitions" in AI often seem to refer to constructs which, when formalised, eventually turn out *not* to be computable, but weakly non-computable. This whole "Situations" saga is an (extended) example of that.

      Even the word "Semantics" is not generally computable. To express this differently: Computable structures only exist naturally in computer science textbooks - most formalisations of things end up being "weakly non-computable". Computer scientists then restrict the formalisation to make it computable (usually without recognising what they are doing): this is the "tragedy of Software Engineering".

      Another example term is "Neural Net". Clearly the finite-state models are fully computable. However, when more naturally realistic "timings" and "frequencies" are introduced (both of which are real numbers, as are weights), the possibility opens up (still being studied) that a "Timed Neural Net" is weakly non-computable for some subtle reason.

      In short: the brain might be *classically* non-computable, via this route (or similar). So we need to be mathematically wary of all the terms (and intuitions) used in AI discussions: architecture, emotion, vision, learning, perception, movement, etc.

      Go tell that to DARPA!

    9. The map and the territory. Didn't Feynman or someone point out that there are no real numbers in nature? I believe our best theory of biological neurons sees them as thresholded digital devices in operation.

      If pushed, my last line of defence is the Nyquist Theorem...
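
      For the record, the theorem in question - the Nyquist-Shannon sampling theorem - says that a signal with no frequency content above B hertz is completely determined by samples taken 1/(2B) seconds apart, so a band-limited physical signal carries only a discrete amount of information per unit time:

      ```latex
      % Whittaker-Shannon interpolation: a band-limited x(t) is recoverable
      % from its samples x(n/2B), where sinc(u) = sin(pi u)/(pi u).
      x(t) \;=\; \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{2B}\right) \mathrm{sinc}\bigl(2Bt - n\bigr)
      ```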

    10. One of the subtleties here is that "other number systems exist" - not just the familiar ones. Of course, so do computable reals.

      Noise (more accurately "true randomness") is formalised now as "Weak Weak non-computability". This means that a complete theory of "weak noncomputability" needs to include randomness as an (irritating) subtheory; computation is another (less irritating) subtheory. In model theory terms this says that every model has a random set submodel (as well as a computable subset). I have drawn a diagram for this only to find it in a book on randomness by Nies!

      I am aware that Shannon Information Theory needs re-examination in the light of all this too, since it relies on randomness and deliberate choice of sender signals, with uncertain reception. This looks like weak non-computability too!

      The work is piling up!

