Sunday, May 16, 2010

What do you make of Artificial Intelligence?

I was active in AI research during the 1980s. I was quite well-known in the UK/European AI community but in retrospect I think I made a terrible colleague. I was quite intolerant of most of the work that was being done and made no secret of it. The picture below shows me on the occasion of my first published paper - at IJCAI in 1983.

Someone at the time compared most AI to the contribution that making paper aeroplanes makes towards a theory of aerodynamics. The point is that being able to write a program which replicates some micro-behaviour doesn't necessarily mean you have any real understanding of it.

Marvin Minsky is alleged to have said recently that AI has made no real progress since the 1970s. It's scary how little we know: we don't even know what the fundamental problem is, let alone how to solve it. The fundamental problem is not likely to be "intelligence" per se. So what has worked?

Designing and building systems which can execute a perceive-process-act cycle in fulfilment of arbitrary "missions" (including homeostasis) has been a major sub-project of AI. This research "thread" has had to tour such areas of unorthodoxy as neural nets and reactive systems, but there have been major successes, often in areas of interest to NASA and the military. "AI" has paradoxically been quite successful at solving computer science problems.
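To make the perceive-process-act idea concrete, here is a minimal sketch in Python. It is entirely illustrative - the Thermostat agent and its toy environment are my own invention, not drawn from any system mentioned here - but it shows the cycle being executed in fulfilment of a simple homeostatic "mission":

    # A minimal perceive-process-act loop; every name here is illustrative.

    class Thermostat:
        """A trivial homeostatic mission: keep temperature near a set-point."""

        def __init__(self, set_point=20.0):
            self.set_point = set_point

        def perceive(self, environment):
            # Sense: read the current temperature from the environment.
            return environment["temperature"]

        def process(self, temperature):
            # Decide: choose an action in service of the mission.
            if temperature < self.set_point - 0.5:
                return "heat"
            if temperature > self.set_point + 0.5:
                return "cool"
            return "idle"

        def act(self, action, environment):
            # Act: effect the chosen action back on the environment.
            delta = {"heat": 0.3, "cool": -0.3, "idle": 0.0}[action]
            environment["temperature"] += delta

    def run(agent, environment, steps=50):
        for _ in range(steps):
            percept = agent.perceive(environment)
            action = agent.process(percept)
            agent.act(action, environment)

    environment = {"temperature": 17.0}
    run(Thermostat(), environment)
    print(environment["temperature"])  # settles within the set-point's dead-band

The mission here is trivial, but the loop structure - sense, decide, effect - is exactly the cycle referred to above, and the "mission" can be anything from homeostasis to a flight plan.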

However, no-one has any idea how a physical system can experience pain, for example (affective, not cognitive). A dogma of AI is that if we had a mathematical theory of how an agent could experience pain, then a program instantiating such a theory would, when it ran as a computer process, actually experience pain. By hypothesis, such a computer process could be tortured (this is a plot device in the SF novel "The War in 2020" - Ralph Peters, 1991). I guess most of us just shake our heads at our lack of any intuition as to how that could work. Ditto for whatever we think occurs when we consider ourselves to be conscious entities. My childhood illusion that the human race has a basic handle on all the obvious problems continues to generate genuine visceral surprise that we still just don't know.

My methodology for my own AI research was always something like this: "what set of ecological problems 100,000 years ago in Africa created the conditions which selected for the cognitive-affective solution which human beings exemplify?" - or, if you like, "what was the problem to which we are a solution?" This poses the problem in a "requirements-implementation" paradigm: more formally, the paradigm of setting up an axiomatic theory and looking for models which satisfy it - models architectural/automata-theoretic in character, making explicit the architectural principles underlying human psychology. Evolutionary psychology is a lot more popular now than it was back in the 1980s.
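As a toy illustration of that requirements-implementation reading - my own sketch, on the assumption that an ecological "requirement" can be stated as a predicate over behaviour traces; this is not a reconstruction of any actual formalism - one can ask whether a candidate agent model satisfies an axiom:

    # Illustrative only: a "requirement" as a predicate over behaviour traces,
    # and a check that a candidate agent model satisfies it.

    def requirement_flees_predators(trace):
        # Axiom: whenever a predator is perceived, the action taken is to flee.
        return all(action == "flee"
                   for percept, action in trace
                   if percept == "predator")

    def reactive_agent(percept):
        # One candidate "model": a purely reactive stimulus-response mapping.
        return {"predator": "flee", "food": "eat"}.get(percept, "wander")

    def behaviour_trace(agent, percepts):
        # The model's behaviour over a sequence of environmental situations.
        return [(p, agent(p)) for p in percepts]

    trace = behaviour_trace(reactive_agent, ["food", "predator", "nothing"])
    print(requirement_flees_predators(trace))  # True: this model satisfies the axiom

The interesting cases, of course, are the ones where no simple reactive model can satisfy the axioms, forcing a richer - perhaps layered - architecture.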

My Ph.D. research applied this methodology to simpler autonomous agents, but I failed to get a handle on how to model a deeper level of complexity - something along the lines of problem-solving communicating agents. Nothing I tried seemed very compelling, and I had to give it up when the organisation I worked for (STL) was acquired by Nortel. A lot of people think that human psychology has a layered character, with the pre-human "unconscious" overlaid with more symbolic neo-cortical functions - the "triune brain".

I basically buy this, but I have yet to see formal models which convincingly "reproduce the phenomena", such as the awfulness of pain, or which are grounded in an ecological framework. I think that the Jungian theory of human psychological type has some interesting insights (see "Evolutionary Psychiatry" - Stevens & Price, 2001). However, any formalisation will be extremely challenging.