Tuesday, March 28, 2017

Roger Atkins: Mind Design notebook

Roger Atkins's career path from contracted neural network designer to chief designer at Mind Design was not a smooth one. His work was marked by dead ends, false starts and much groping around for insights. Here are extracts from his early notebooks.

---

" ... How much progress have we really made since the dawn of our discipline?

Back in 1959, Lettvin, Maturana, McCulloch and Pitts wrote their famous paper: "What the Frog's Eye Tells the Frog's Brain".
'The frog does not seem to see or, at any rate, is not concerned with the detail of stationary parts of the world around him. He will starve to death surrounded by food if it is not moving. His choice of food is determined only by size and movement. He will leap to capture any object the size of an insect or worm, providing it moves like one. He can be fooled easily not only by a bit of dangled meat but by any moving small object.

'His sex life is conducted by sound and touch. His choice of paths in escaping enemies does not seem to be governed by anything more devious than leaping to where it is darker. Since he is equally at home in water and on land, why should it matter where he lights after jumping or what particular direction he takes? He does remember a moving thing providing it stays within his field of vision and he is not distracted.'
Being anthropomorphic, we assume the frog sees what we see. Instead, the frog 'sees' only what evolution has designed its visual apparatus to process. The rest of their paper describes the neural net which implements the frog's visual processing.

In 1982 David Marr's famous book "Vision" was posthumously published. Marr explained in mathematical terms the formal theory of visual scene recognition, starting from raw image data and exploiting regularities in the world. Laplacian-of-Gaussian convolution was followed by edge detection and finally by 3D scene acquisition. The theory could be implemented by computer code ... or by neural nets.


Marr's levels of abstraction and of visual processing (NN is neural net)
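Just as a minimal toy sketch of that first, low-level stage (using NumPy/SciPy; the image and the sigma are arbitrary placeholders, not Marr's own parameters):

```python
import numpy as np
from scipy import ndimage

# A toy 'raw image': a bright square against a dark background.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

# Laplacian-of-Gaussian convolution, the first stage of Marr's pipeline.
log_response = ndimage.gaussian_laplace(image, sigma=2.0)

# Marr-Hildreth edges lie at the zero-crossings of the LoG response
# (checked along horizontal neighbours only, for brevity).
zero_crossings = np.sign(log_response[:, :-1]) != np.sign(log_response[:, 1:])
print("candidate edge pixels:", int(zero_crossings.sum()))
```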

Neural networks are, in the most general sense, engineering rather than science. Take the common task of scene recognition: we start from an image bitmap, process it at a low level using convolutional methods to extract mid-level features, and then group those features to reconstruct a high-level scene description. Although the neural net does all this by using and/or adjusting the weights between its 'neurons', we can capture the overall data structuring and processing in higher-level formalisms.

If the original bitmap is really a matrix of numbers, the set of mid-level features can be more clearly expressed as a conjunction of mid-level predicates {edge(...), vertex(...)}, while the high-level scene description could use predicate logic to explicitly represent discrete objects, attributes and relationships.

The more formal and mathematical descriptions/specifications are nevertheless implemented by weightings and connectivity in the neural net.
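Purely as an illustration of the three levels of description (the predicate names and the scene structure below are placeholders, not any particular system's vocabulary):

```python
# Low level: the bitmap really is just a matrix of numbers.
bitmap = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 0, 0],
]

# Mid level: a conjunction of feature predicates, e.g. edge(x, y, orientation).
mid_level_features = [
    ('edge', 0, 2, 'vertical'),
    ('edge', 2, 0, 'horizontal'),
    ('vertex', 2, 2),
]

# High level: predicate logic over discrete objects, attributes and relationships.
scene_description = {
    'objects':   {'block1': {'brightness': 'high'},
                  'ground': {'brightness': 'low'}},
    'relations': [('above', 'block1', 'ground')],
}
```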

Neural nets do inference by linkage activation. If A → B, then activation in areas of the neural net associated with A causes activation in areas associated with B with probability 1. Less decisive, more ambiguous weightings yield fuzzier inferences.

Similarly, modal concepts such as 'Believes(A, φ)' - as in an agent A believing the proposition φ - are represented by the neural net as activation in the area representing the agent A being associated with another neural area representing the situation which φ describes. The activation link between those two areas captures the notion of believing, but it's a little mysterious how that believes-type link ever got learned ... perhaps it's innate?
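A flattened-out toy sketch of linkage activation, believes-type link included (the nodes, weights and link structure are placeholders invented for the example):

```python
# Areas of the net are nodes; learned associations are weighted links.
# A weight of 1.0 behaves like 'if A then B'; lower weights give fuzzier inference.
links = {
    'smoke':      [('fire', 1.0)],       # A -> B 'with probability 1'
    'dark-cloud': [('rain', 0.6)],       # a less decisive association
    # A modal, believes-type link: the area for the agent Jill associated
    # with the area representing the situation 'it is raining'.
    'Jill':       [('it-is-raining', 0.9)],
}

def propagate(activation, steps=2):
    """Spread activation along the weighted links for a few steps."""
    for _ in range(steps):
        updated = dict(activation)
        for node, level in activation.items():
            for target, weight in links.get(node, []):
                updated[target] = max(updated.get(target, 0.0), level * weight)
        activation = updated
    return activation

print(propagate({'smoke': 1.0, 'Jill': 1.0}))
# {'smoke': 1.0, 'Jill': 1.0, 'fire': 1.0, 'it-is-raining': 0.9}
```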

Proceeding in this way we can imagine a neural net which creates effective representations of its environment (like the frog), which forms associations between conditions and actions, and which can signal those actions and thus control an effective agent in the world.

So far absolutely none of this is conscious.

---

Thinkers as far back as Karl Marx have believed that consciousness is a condition, and a by-product, of social communication. To be strictly accurate, Marx was not talking of the introspective consciousness of the psychologist, but of consciousness as a kind of revealed preference: that which is revealed through the actions of the masses.

---

I imagine human psychology to be implemented as a collection of semantic networks.

In the framework of neural networks, we're simply talking about a set of modularised, 'trained' neural-net areas which link and communicate with each other through appropriate weights. But we can capture more of the 'aboutness' of these mini-networks by modelling them as semantic networks: semantic-net nodes are mini-theories, little collections of facts and rules, while links create associations between nodal mini-theories, representing relationships such as actions, or believing, knowing or wanting.
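One way to mock this up as a data structure (the node contents and link types below are placeholders, nothing more than a sketch):

```python
# Each node is a nodal mini-theory: a small bundle of facts and rules.
# Each link is a typed, weighted association between two mini-theories.
semantic_net = {
    'nodes': {
        'Jill':          {'facts': ['girlfriend(Roger, Jill)'], 'rules': []},
        'it-is-raining': {'facts': ['rain(now)'],               'rules': []},
    },
    'links': {
        # (source, target, relation-type) -> association strength
        ('Jill', 'it-is-raining', 'believes'): 0.8,
    },
}
```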

I imagine one's concept of oneself being implemented as a large set of semantic networks capturing one's life-history memories, one's self-model of typical behaviours, and one's future plans.


Roger Atkins's brain-model of himself and girlfriend Jill

When you think about someone else, that person is also modelled as a collection of semantic networks representing much the same things. I understand cognitive processes as metalanguage activities: operations over semantic networks which strengthen or weaken link-associations, add, modify or delete nodes, that kind of thing.
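Continuing the same toy representation, the metalanguage operations might be sketched like this (again, placeholders only):

```python
def strengthen(net, link, delta=0.1):
    """Metalanguage operation: reinforce a link-association."""
    net['links'][link] = min(1.0, net['links'].get(link, 0.0) + delta)

def weaken(net, link, delta=0.1):
    """Metalanguage operation: weaken a link-association, dropping it at zero."""
    weight = net['links'].get(link, 0.0) - delta
    if weight <= 0.0:
        net['links'].pop(link, None)
    else:
        net['links'][link] = weight

def add_node(net, name, facts=(), rules=()):
    """Metalanguage operation: introduce a new nodal mini-theory."""
    net['nodes'][name] = {'facts': list(facts), 'rules': list(rules)}

def delete_node(net, name):
    """Metalanguage operation: remove a node and every link that touches it."""
    net['nodes'].pop(name, None)
    net['links'] = {k: v for k, v in net['links'].items() if name not in k[:2]}

strengthen(semantic_net, ('Jill', 'it-is-raining', 'believes'))
```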

This is all very conventional but it does take us to the outer limit of design and theorising.
  • Where in this architecture is the sense of personal consciousness? 
  • Where is the sense of active awareness of one's environment? 
  • Where is pain and what would it even mean for such an architecture to be in pain?

There is an engineering approach to 'the hard problem'. We imagine a system which we think would (for example) be in pain and ask how it works.

First the pain sensors fire; then, as a consequence, the pain nodes higher up the chain in the 'semantic net' activate. In turn, they invoke avoidant routines. In a lower animal this directly generates behaviour designed to run from, or otherwise escape, the pain stimulus.

However, in social creatures like ourselves, amenable to social coordination, this immediate reaction may need to be suspended because it could conflict with other plans generated, for example, by 'duty'.

From an engineering point of view this suggests a multilevel system: a higher-level neural network supervising lower-level systems.
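A crude sketch of that two-level arrangement (the routines, thresholds and the 'duty' are invented simply to make the structure visible):

```python
def low_level_reflex(pain_level):
    """Low-level system: pain sensors fire, an avoidant routine is proposed."""
    if pain_level > 0.3:
        return 'withdraw'              # run / get away from the stimulus
    return None

def high_level_supervisor(proposed_action, current_duty):
    """Higher 'social' level: may suspend the reflex if it conflicts with plans."""
    if proposed_action == 'withdraw' and current_duty == 'carry the stretcher':
        return 'suppress-withdrawal'   # duty wins, for now
    return proposed_action

action = high_level_supervisor(low_level_reflex(pain_level=0.8),
                               current_duty='carry the stretcher')
print(action)   # -> suppress-withdrawal
```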



This is hardly very original, however, and worse, it's all cognitive.


The higher 'social' level control system is semantically rich - but it's all cognitive and affect-free

We never get insight into how emotions or experiences emerge from this kind of architecture. We always know that there's something missing.

We say to ourselves: in the end it's all neurons. Consciousness seems to be something which is not architecturally that far from the other things the brain is doing. It is easily diverted by day-dreaming or inattention, or turned off altogether by anaesthesia.

From an evolutionary/phenotype point of view the conscious brain doesn't seem to be some tremendously new thing, or a new kind of thing; and yet somewhere in this apparently small cortical delta, this small change in brain architecture, a whole new phenomenon somehow enters the game.

And nobody at all can figure out how that could be the case."

---

As we know, Roger Atkins went on to design Jill/AXIS - and yet artificial self-awareness/consciousness was still not intellectually cracked. The designers nervously waited upon it as an emergent phenomenon.

1 comment:

  1. Perhaps in discussing these issues one should post a view on the listing of "AI Positions" discussed by Penrose and Searle, originating from Johnson-Laird 1987 (I believe).

    A. Strong AI - AI can equal conscious intelligent behaviour
    B. Weak AI - AI can correctly simulate (but no more) the above
    C. Non-computational physics is required to (i) simulate or (ii) equal the above
    D. Science cannot define/describe consciousness.

    Turing (1936) introduced the model to formalise paper-and-pencil proof - but it has turned out to be a lot more general than that. Nevertheless, Turing did not want an infinite number of states, as these would "get confused". Ironically, in his first computer, the ACE, the memory technology could get states confused.

    So do we necessarily benefit from this kind of classical model? I have just found a paper saying QM is necessary for consciousness modelling (not Penrose, but Stapp et al).

