So I got 'Eliza' working today, the simplified version from 'The Art of Prolog'. Time to take stock and figure out where to go next.
First, Prolog. Its supporters always touted it as a higher-level language than Lisp - though I always found Lisp more congenial, since I like to set up data structures and manipulate them explicitly. With Prolog you define relationships between things, and the miraculous powers of unification and depth-first search with backtracking pull magical rabbits out of hats. The Eliza program in Prolog can be read in its entirety on one screen; ditto for the blocks world planning system.
This procedural power is the result of enormously complex recursive structures built at execution time by the Prolog system. It frequently defies one's powers of abstraction, short-term memory and inference to visualise what's actually going on. I know you're meant to read and understand the programs declaratively, but in reality you don't get too far without a consideration of what actually happens at run-time.
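To make that concrete, here is a minimal sketch of an Eliza-style rule in the spirit of the simplified version in 'The Art of Prolog' - the predicate names (match/2, rule/2, reply/2) and the pattern encoding are my own illustration, not the book's code. Unification does almost all the work: the same match/2 clauses that parse the input also generate the response.

```prolog
% match(Pattern, Words): w(W) matches one literal word,
% s(Seg) matches an arbitrary segment (a sublist).
match([], []).
match([w(Word)|Pat], [Word|In]) :-
    match(Pat, In).
match([s(Seg)|Pat], In) :-
    append(Seg, Rest, In),   % backtracking tries every split point
    match(Pat, Rest).

% One stimulus/response rule: "i am X" -> "how long have you been X ?"
rule([w(i), w(am), s(X)],
     [w(how), w(long), w(have), w(you), w(been), s(X), w(?)]).

% reply(+Input, -Output): match the input against a rule's pattern,
% then run match/2 "in reverse" to build the response.
reply(In, Out) :-
    rule(Pat, RespPat),
    match(Pat, In),
    match(RespPat, Out).

% ?- reply([i, am, sad], R).
% R = [how, long, have, you, been, sad, ?].
```

The point is exactly the one above: nothing here says "extract the segment" or "substitute it back in" - the bound variable X flows from the input pattern into the response pattern, and backtracking over append/3 supplies the search for free.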
Still, the power to write ridiculously powerful programs in just a few lines of code is addictive. It reminds me of the first time I fired the General-Purpose Machine Gun (GPMG). I was good with the Lee-Enfield rifle and prided myself on my accuracy; the GPMG just bounced around and hosed the target. So much power and so little control!
Eliza and the Blocks World Planner were little milestones I had set myself, like climbing Pen y Fan. Items on my bucket list if you like. So what next?
Once you know how to set up knowledge bases and inferential systems, you have the tools for developing intelligent agents. But, as I have cited before on these pages, 'in the knowledge lies the power'. If your agent lives in a closed world with a fixed, limited database and rule set, it's going to run out of new things to do pretty fast. The interest comes from its interactions with the wider world.
Yet as Doug Lenat noted in the context of his 'Cyc' project:
"Any time you look at any kind of real life piece of text or utterance that one human wrote or said to another human, it's filled with analogies, modal logic, belief, expectation, fear, nested modals, lots of variables and quantifiers," Lenat said. "Everyone else is looking for a free-lunch way to finesse that. Shallow chatbots show a veneer of intelligence or statistical learning from large amounts of data. Amazon and Netflix recommend books and movies very well without understanding in any way what they're doing or why someone might like something."

"It's the difference between someone who understands what they're doing and someone going through the motions of performing something."

Cyc has been in development since 1984 and its knowledge base currently contains over one million human-defined assertions, rules or common-sense ideas. Yet it has still barely found practical use. I'm certainly not planning on reproducing that level of effort.
Cycorp's product, Cyc, isn't "programmed" in the conventional sense. It's much more accurate to say it's being "taught." Lenat told us that most people think of computer programs as "procedural, [like] a flowchart," but building Cyc is "much more like educating a child."
"We're using a consistent language to build a model of the world," he said.
This means Cyc can see "the white space rather than the black space in what everyone reads and writes to each other." An author might explicitly choose certain words and sentences as he's writing, but in between the sentences are all sorts of things you expect the reader to infer; Cyc aims to make these inferences.
Consider the sentence, "John Smith robbed First National Bank and was sentenced to 30 years in prison." It leaves out the details surrounding his being caught, arrested, put on trial, and found guilty. A human would never actually go through all that detail because it's alternately boring, confusing, or insulting. You can safely assume other people know what you're talking about. It's like pronoun use - he, she, it - one assumes people can figure out the referent. This stuff is very hard for computers to understand and get right, but Cyc does both.
"If computers were human," Lenat told us, "they'd present themselves as autistic, schizophrenic, or otherwise brittle. It would be unwise or dangerous for that person to take care of children and cook meals, but it's on the horizon for home robots. That's like saying, 'We have an important job to do, but we're going to hire dogs and cats to do it.'"
I think we're back to virtualizing the cat ...