Thursday, March 16, 2017

My theorem-prover is (basically) working

My resolution theorem prover is working: find the documentation and code here.

My initial, rather simple test data concerns Sally Fowler's situation aboard the battleship MacArthur ('The Mote in God's Eye').



Who does Sally like? ←  (LIKES SALLY ?WHO)

Well, here are the Prolog-style facts.
 (LIKES ROD HORVATH) ←
 (LIKES SALLY RENNER) ←
 (LIKES SALLY ROD) ←
 (LIKES HARDY RENNER) ←
 (LIKES HARDY ROD) ←
 (AMUSING HARDY) ←
 (LIKES HORVATH MOTIES) ←
 (LIKES SALLY ?X) ← (LIKES ?X MOTIES)
 (LIKES ROD ?X) ← (LIKES ?X RENNER) (LIKES ?X ROD)
 (LIKES ?X ?Y) ← (AMUSING ?Y) (LIKES ROD ?Y)
 (LIKES ?X ?X) ←
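For readers who want something concrete to play with, here is one way the clause base above could be encoded. This is purely illustrative Python (my actual code is Lisp, and its representation differs): a clause is a (head, body) pair, facts have an empty body, and variables are strings beginning with '?'.

```python
# Illustrative encoding of the Horn clauses above - not the author's
# actual representation. A clause is (head, body); facts have body [].
CLAUSES = [
    (("LIKES", "ROD", "HORVATH"), []),
    (("LIKES", "SALLY", "RENNER"), []),
    (("LIKES", "SALLY", "ROD"), []),
    (("LIKES", "HARDY", "RENNER"), []),
    (("LIKES", "HARDY", "ROD"), []),
    (("AMUSING", "HARDY"), []),
    (("LIKES", "HORVATH", "MOTIES"), []),
    (("LIKES", "SALLY", "?X"), [("LIKES", "?X", "MOTIES")]),
    (("LIKES", "ROD", "?X"), [("LIKES", "?X", "RENNER"),
                              ("LIKES", "?X", "ROD")]),
    (("LIKES", "?X", "?Y"), [("AMUSING", "?Y"), ("LIKES", "ROD", "?Y")]),
    (("LIKES", "?X", "?X"), []),
]

# The query becomes the initial goal clause <- (LIKES SALLY ?WHO).
QUERY = ("LIKES", "SALLY", "?WHO")
```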
Based on the above, the theorem-prover offers the following five responses. I doubt you are overwhelmed ... but then, test data is designed to be easy to understand.
1. Because everyone likes themselves, Sally likes Sally.

2. Because it's stated that Sally likes Rod, she does.

3. Because it's stated that Sally likes Renner, she does.

4. Sally likes those who like Moties. Moties like Moties because everyone likes themselves. So Sally likes Moties.

5. Sally likes people who like Moties. Horvath likes Moties. So Sally likes Horvath.
More precisely, after a little editing (inserted line-breaks), this is what was printed out.
((((<- ((LIKES SALLY SALLY))))
  (((13 NIL NIL) ((LIKES ?X ?X) <-))
   ((14 NIL NIL) (<- ((LIKES SALLY ?WHO))))
   ((19 14 13) +EMPTY-CLAUSE+)))

 (((<- ((LIKES SALLY ROD))))
  (((5 NIL NIL) ((LIKES SALLY ROD) <-))
   ((14 NIL NIL) (<- ((LIKES SALLY ?WHO))))
   ((16 14 5) +EMPTY-CLAUSE+)))

 (((<- ((LIKES SALLY RENNER))))
  (((4 NIL NIL) ((LIKES SALLY RENNER) <-))
   ((14 NIL NIL) (<- ((LIKES SALLY ?WHO))))
   ((15 14 4) +EMPTY-CLAUSE+)))

 (((<- ((LIKES SALLY MOTIES))))
  (((13 NIL NIL) ((LIKES ?X ?X) <-))
   ((10 NIL NIL) ((LIKES SALLY ?X) <- ((LIKES ?X MOTIES))))
   ((14 NIL NIL) (<- ((LIKES SALLY ?WHO))))
   ((17 14 10) (<- ((LIKES ?WHO MOTIES))))
   ((24 17 13) +EMPTY-CLAUSE+)))

 (((<- ((LIKES SALLY HORVATH))))
  (((9 NIL NIL) ((LIKES HORVATH MOTIES) <-))
   ((10 NIL NIL) ((LIKES SALLY ?X) <- ((LIKES ?X MOTIES))))
   ((14 NIL NIL) (<- ((LIKES SALLY ?WHO))))
   ((17 14 10) (<- ((LIKES ?WHO MOTIES))))
   ((20 17 9) +EMPTY-CLAUSE+)))

Figure 1
---

Q&A.

1. Isn't this just Prolog?

Prolog is encountered as a black box. You provide a program and a query (as above) and you get back bindings, like this:
?- likes(sally, W).

W = sally;
W = rod;
... and so on

If you want proofs, to add the 'occurs check', or to change the order in which resolution steps are tried (the search strategy) - well, tough: all those things are both fixed and opaque in Prolog. To make them explicit and modifiable, you have to write an explicit theorem-prover in Prolog (which can of course be done).

2. You are using Horn clauses?

Yes. I initially thought to implement a general clausal prover, but Horn clauses make the resolution step particularly simple (just one positive literal to match) and, for this kind of application, you lose little generality or expressive power. And since everything in the code is both modular and explicit, it would be easy to extend the program.

3. The style is functional?

Yes. Resolution theorem provers found on the Internet are heavily optimised, with clever imperative algorithms and direct-access data structures such as hash tables. This makes the code efficient but obscure - very hard to understand.

I didn't want to do that. This is a tool-kit and I'm not trying to create thousands of logical inferences per second. My intended application area (simple inference over an ever-changing knowledge-base for a chatbot) never requires massive inferential chains, so clarity and easy modifiability were my objectives.

4. What was hard?

The program is architected at three levels.

(a) Unification

This is already modular and straightforward. I re-used Peter Norvig's code.
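For readers who haven't met it, the core of Norvig-style unification fits in a few lines. The Python below is my own paraphrase of the idea, not the code actually used: `unify` threads a bindings dictionary through a walk of both terms and returns `None` on failure. (Norvig's version also offers an optional occurs check, omitted here.)

```python
def is_var(x):
    """A variable is a string starting with '?', as in the clauses above."""
    return isinstance(x, str) and x.startswith("?")

def unify(x, y, bindings):
    """Return bindings extended so x and y match, or None on failure."""
    if bindings is None:
        return None
    if x == y:
        return bindings
    if is_var(x):
        return unify_var(x, y, bindings)
    if is_var(y):
        return unify_var(y, x, bindings)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            bindings = unify(xi, yi, bindings)
        return bindings
    return None

def unify_var(var, x, bindings):
    """Bind var to x, following any existing bindings first."""
    if var in bindings:
        return unify(bindings[var], x, bindings)
    if is_var(x) and x in bindings:
        return unify(var, bindings[x], bindings)
    # An occurs check would go here; like standard Prolog, we skip it.
    return {**bindings, var: x}
```

For example, unifying the query literal with the fact (LIKES SALLY ROD) yields the binding ?WHO = ROD.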

(b) Resolution

Binary resolution itself - especially with Horn clauses - is a straightforward procedure, as you will see when I publish the code. There are some subtleties with bindings and substitutions, but once you realise that resolution is fundamentally a local operation it's not too difficult.
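To illustrate what 'local' means here (again in sketch Python with a tuple representation of my invention, not the actual code): one step unifies the first goal literal with a clause head, then applies the resulting substitution to the clause body plus the remaining goals. Renaming clause variables apart before each step is deliberately omitted to keep the sketch short.

```python
def is_var(x):
    return isinstance(x, str) and x.startswith("?")

def unify(x, y, b):
    """Minimal unifier over atoms and tuples; returns dict or None."""
    if b is None:
        return None
    if x == y:
        return b
    if is_var(x):
        return unify(b[x], y, b) if x in b else {**b, x: y}
    if is_var(y):
        return unify(x, b[y], b) if y in b else {**b, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            b = unify(xi, yi, b)
        return b
    return None

def subst(b, x):
    """Apply bindings to a term, chasing variable chains."""
    while is_var(x) and x in b:
        x = b[x]
    if isinstance(x, tuple):
        return tuple(subst(b, xi) for xi in x)
    return x

def resolve(goals, clause):
    """One binary resolution step: first goal against (head, body).
    Returns the new goal list, [] for the empty clause, None if no unifier."""
    head, body = clause
    b = unify(goals[0], head, {})
    if b is None:
        return None
    return [subst(b, g) for g in list(body) + goals[1:]]
```

Resolving the query goal against the fact (LIKES SALLY ROD) yields the empty clause immediately; resolving it against the Moties rule yields the new goal clause seen at step 17 in Figure 1.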

(c) Control and proof construction

The process of creating new goals and resolving them with the axioms is somewhat complex although again, sticking with Horn clauses makes it a lot easier: just think of a naive Prolog execution model.

However, if you want to return proofs, you need to number the axioms, and to number-and-archive goals as you process them, capturing the resulting tree of inferences. At the end, for each empty clause (a successful proof), you trace that index-tree backwards to recover the proof steps. That's what produced Figure 1 above. I found it a bit intricate.
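The backwards trace can be sketched like this. The entry format (id, parent1, parent2, clause) mirrors the (n p1 p2) triples in Figure 1, but the Python and the field layout are my invention, not the actual code: walk the parent links back from the empty clause, collecting every step reached.

```python
def trace_proof(archive, empty_id):
    """Recover a proof by walking parent links back from the empty clause.
    archive: list of (id, parent1, parent2, clause); parents are None
    for input clauses (axioms and the initial goal)."""
    by_id = {entry[0]: entry for entry in archive}
    steps, stack, seen = [], [empty_id], set()
    while stack:
        i = stack.pop()
        if i is None or i in seen:
            continue
        seen.add(i)
        _, p1, p2, clause = by_id[i]
        steps.append((i, p1, p2, clause))
        stack += [p1, p2]
    return sorted(steps)  # present steps in numeric (derivation) order
```

Run on the first proof in Figure 1, this recovers exactly the three steps shown there: axiom 13, goal 14, and the empty clause at 19.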

5. What's next?

In theory you can do anything with a theorem-prover (Prolog being the existence proof), but it's not necessarily the best architecture. For a planner, where state changes are part of the problem definition, I need to adapt the tool-kit to a design centred around actions with pre- and post-conditions, in the context of a goal-state and an updating world-state. Such a dynamic model can be used both for actions in a (virtual) world and for conversation planning via speech acts.
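For concreteness, that action-centred design might look something like a STRIPS-style operator. Everything below is a hypothetical sketch of the idea, not a design commitment: preconditions gate applicability, and add/delete effects update a set-of-facts world state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A STRIPS-style operator: preconditions plus add/delete effects."""
    name: str
    pre: frozenset      # facts that must hold for the action to apply
    add: frozenset      # facts the action makes true
    delete: frozenset   # facts the action makes false

def applicable(action, state):
    """An action applies when all its preconditions hold in the state."""
    return action.pre <= state

def apply_action(action, state):
    """Post-condition: remove deleted facts, then add new ones."""
    return (state - action.delete) | action.add
```

A conversational speech act fits the same mould: a (hypothetical) GREET-USER action whose precondition is the user's presence and whose effect is that the user has been greeted.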

The theorem-prover remains valuable as a separate module, managing the crystallised knowledge base and using inference to draw conclusions - for example, to drive questions and answers.

I'm thinking of using the already-programmed Eliza front-end as a robust conversational interface. It doesn't matter if it's bluffing some of the time, provided it can use learning, inference and planning often enough.

Onwards to the GOFAI smart chatbot ...

---

Here is the documentation and code.

2 comments:

  1. Q&A continued: 6. What about scaling up to larger N?

    A. As I explained in (3) above, I will not be using a large inference base and so do not expect the combinatorial explosion to be relevant to the chatbot application. It probably scales fairly badly, but I hope that won't matter.

    1. In fact, if N is the number of facts and rules in the axioms, then I expect the chatbot's knowledge base to be an extensible list of 'microtheories', so filtering on relevance solves the problem.

      If N is inference depth (e.g. the number of relevant rules, or recursion depth), then efficiency is more of an issue, but given the lack of inferential depth in normal conversation, I'm not that worried!

