Wednesday, June 26, 2013

Impure thoughts on theorem provers

We're so totally blasé about smart systems today. The first time I thought seriously about how to program a computer to play noughts and crosses (tic-tac-toe) I was puzzled. How to make the machine play intelligently? 

The answer is search: intelligence looks very much like pruning down the space of possible options or plans to find the ones which allow your goal to be achieved. In the presence of an adversary, the opponent's possible reactions further enlarge the search space. In AI jargon: generate and test.
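Generate and test can be shown in a few lines. Here is a minimal sketch using a toy problem of my own choosing (4-queens, not from the post): generate every candidate arrangement, keep only those that pass the goal test.

```python
from itertools import permutations

# Generate-and-test: enumerate candidate solutions, keep those that
# pass the goal test. Toy example: place 4 queens on a 4x4 board so
# that no two attack each other.
def solve_4_queens():
    for cols in permutations(range(4)):        # generate: one queen per row
        if all(abs(cols[i] - cols[j]) != j - i  # test: no shared diagonal
               for i in range(4) for j in range(i + 1, 4)):
            yield cols

print(list(solve_4_queens()))  # the two mirror-image solutions
```

Using permutations already prunes row and column clashes for free; only the diagonal test remains. That is the flavour of the whole enterprise: the cleverness lies in how much of the space you never generate at all.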

The search tree for noughts and crosses is small and easily traversed to find an optimal move. Games like chess have unimaginably large trees, but fast and clever algorithms (alpha-beta pruning among them) prune away the dross and look deeply ahead down promising lines of attack.
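To make the 'small and easily traversed' claim concrete, here is a from-scratch minimax sketch for noughts and crosses (the function names are mine, not from the post). The full game tree is searched to the end, so the move returned is provably optimal:

```python
# Exhaustive minimax for noughts and crosses. A board is a string of
# nine cells ('X', 'O', or ' '), indexed 0..8 row by row.

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = (-2, None)
    other = 'O' if player == 'X' else 'X'
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score = -minimax(child, other)[0]  # opponent plays its best reply
        if score > best[0]:
            best = (score, m)
    return best

# From the empty board, perfect play by both sides is a draw:
print(minimax(' ' * 9, 'X')[0])  # 0
```

The whole tree is a few hundred thousand positions, so this runs in moments; for chess the same idea is hopeless without aggressive pruning, which is exactly the point.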

There is a philosophy in the automated deduction 'community' that you should take a very general logical formalism (e.g. first-order predicate calculus with equality) and super-optimise the inferential machinery - often enhancements of the basic Resolution principle. This is similar to chess program design, or even to the preoccupations of guys who want to make the fastest possible computing hardware platform, ignoring issues of algorithmic efficiency.
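The core inference is simple to state. Below is a minimal sketch of a single Resolution step on propositional clauses (my own toy encoding, not any real prover's API): a clause is a set of literals, and from two clauses containing a complementary pair you derive their union minus that pair.

```python
# One step of the Resolution principle on propositional clauses.
# A clause is a frozenset of literals; a literal is a string, with a
# leading '~' marking negation.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All clauses derivable from c1 and c2 by resolving on one literal."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            # drop the complementary pair, merge what remains
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

# Modus ponens as resolution: from P and (~P or Q), derive Q.
p, p_implies_q = frozenset({'P'}), frozenset({'~P', 'Q'})
print(resolvents(p, p_implies_q))  # [frozenset({'Q'})]
```

Deriving the empty clause signals a contradiction, which is how a Resolution prover establishes a theorem by refutation. The hard part - the part the community super-optimises - is not this step but choosing which of the exploding set of clause pairs to resolve next.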

I find this approach admirable in its place but unduly syntactic as a basis for intelligent system design. The ecological 'computation' of animals and people is situated and time-constrained. It leverages existing knowledge encoded in efficient representations, and its processing can look more like data processing than formal theorem proving (although the two are formally equivalent).

So my memo to self on intelligent system design: don't start from the paradigm that we're talking about an application of ATP here. Automated theorem proving is an important tool, but probably only a small part of the overall architecture.