Friday, March 31, 2017

Naive generate-and-test won't hack it

When I was young I toyed with the following idea.

Pretty much any concept can be adequately expressed in a mini-essay of a thousand words.

Simply generate all possible articles of a thousand words and somewhere you will find the answer to all problems.

Want the design of a stardrive engine? Immortality? The theory of perfect governance?

It's all in there somewhere.


How many essays though? Apparently the average educated speaker of English knows about 40,000 words. So for our first estimate, we could simply raise 40,000 to the power of 1,000 .. but most of those 10^4,602 essays would be wildly ungrammatical. We can do better.

I reviewed a sample text: the introductory quote in Peter Seibel's "Practical Common Lisp".

The first five sentences comprised 100 words in total, which broke down into:
  • nouns: 20%
  • verbs: 15%
  • adjectives: 10%
  • others: 55%
A certain amount of hand-wavy rounding of course. If we adopt the very restrictive constraint of exactly one syntactic structure for the entire set of essays, then the total number reduces to a product of:
(number-of-English-words-in-category) ^ (number-of-words-of-this-category-in-essay)
8,000^200 * 6,000^150 * 4,000^100 * 22,000^550 ≈ 10^4,092
That's still a big number*. Suppose only one 'essay' in a billion was semantically sensible and we could read one essay per second. That's 10^4,083 seconds .. or 3 * 10^4,066 billion years.
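As a sanity check on these magnitudes, the whole estimate fits in a few lines of Python, working in log10 space since the product itself has thousands of digits (the category sizes are the post's hand-wavy figures):

```python
from math import log10

# The post's hand-wavy figures: (vocabulary size, words of that
# category in the 1,000-word essay).
categories = {
    "nouns":      (8_000, 200),
    "verbs":      (6_000, 150),
    "adjectives": (4_000, 100),
    "others":     (22_000, 550),
}

# The product has thousands of digits, so work in log10 space.
log_total = sum(count * log10(vocab) for vocab, count in categories.values())
print(f"essay-set size ~ 10^{log_total:.0f}")
```

This lands at roughly 10^4,096; the rounded figure above is in the same ginormous ballpark, which is all the argument needs.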

The merits of a compact notation.


Exhaustive search through the space of all possible candidates isn't a very good way of proceeding. And this has important implications for DARPA's third wave - contextual AI - which I wrote about previously.

In his excellent exposition (YouTube), John Launchbury highlighted the very large number of training instances needed to force convergence for today's artificial neural networks. By comparison, children learn new concepts from very few examples.

John Launchbury's proposed solution was - correctly - to identify additional constraints which might dramatically collapse the search space. His chosen example showed the benefits of adding the dynamics of handwriting characters to the resultant bitmaps normally used for training. It turns out that if you consider how the image might have been created, it makes recognition a lot easier.

It's not hard to identify the extra constraints about the world which children use. They interact with new objects, touch them, throw them, bite them and try to break them. Thus are acquired notions of 3D structure, composition and texture to augment what their visual systems are telling them.

I really do think that a high priority should be given to embodied robotics in the next wave of AI research.


Another example John Launchbury discussed was the Microsoft Internet-chatbot "Tay".

Apparently this was the least-offensive tweet Launchbury could find. But what would an AI have to know about contemporary mores to self-reject statements like that?

For extra credit, discuss the 'situated cognition' thesis that only through active and corporeal participation in the social world can one truly understand social concepts.

Particularly emotionally-charged ones.


* Since
(i)  I don't consider all the syntactically-permissible permutations of the ways in which nouns, adjectives, verbs and others could be mixed up in the thousand words, while

(ii)  the size of the 'others' vocabulary is likely to be way smaller than 22,000 (so if, for example, the 'others' vocabulary size was 2,200, this would reduce the overall essay-set size by a factor of 10^550 - a distinction, however, without a practical difference),
this calculation counts as pretty bogus. I only wanted to demonstrate, however, that no matter how you cut it, the numbers involved are simply ginormous.

DARPA: three waves of AI

High production values for DARPA's US Military roadmap and vision for AI (February 2017).

This will be the basis of funding going forward. The images below are taken from this slide-pack, more sophisticated than anything I've seen from the likes of Accenture.

Click on any of the pictures to enlarge - or better, review the entire slide-set.

Although this 'three wave' model is not too surprising, it's still an accurate view of where research is heading.


If human beings are taken as exemplars of neural nets which can explain their own, contextual operation, it's worth noting that such explanations have a curious character.

No human can explain their own sub-conscious neural processes. If asked to explain how you know that a picture of a cat is indeed that of a cat, you are not going to elucidate details of early visual processing in your visual cortex.

Instead, you are going to traffic in high-level, symbolic descriptions of putative intermediate stages in scene interpretation. The talk will be of features such as fur, shape, the environment of said animal.

These intermediate-level symbolic descriptions are remote indeed from the actual neural processes which, it is claimed, implement them .. and indeed will have only a contingent (although highly correlated, if accurate) relationship with them.

Self-deception is never far away in the third wave!


If you have sixteen minutes, John Launchbury's presentation of DARPA's strategy is excellent.

Interestingly, John Launchbury is British.

Thursday, March 30, 2017

Open systems meet closed automation

Let me start with this rather intriguing story (via Bruce Schneier).

"Prior to World War II, Abraham Wald was a rising mathematician in Europe. Unable to obtain an academic research position in Austria due to his Jewish heritage, Wald eventually made his way to the U.S. to become one of the most important statisticians of the 20th century.

"One of Wald’s most prominent works was produced for the U.S. government’s World War II-era Statistical Resource Group. The project examined aircraft that had returned from their combat missions and the locations of armor on the planes. Placement was, of course, no trivial matter. Misplaced armor would result in a negatively balanced, heavier and less maneuverable plane, not to mention a waste of precious wartime resources.

"Tasked with the overall goal of minimizing Allied aircraft losses by placing additional armor in strategic locations on the plane, Wald challenged the natural instincts of military commanders. Conventional wisdom suggested that the planes’ survival rates might benefit from additional armor placed in the areas that suffered the highest volume of direct hits. But Wald found that was not the case.

"Leveraging data stemming from his examinations of planes returning from combat, Wald made a critical recommendation based on the observation of what was not actually visible: He claimed it was more important to place armor on the areas of the plane without combat damage (e.g., bullet holes) than to place armor on the damaged areas. Any combat damage on returning planes, Wald contended, represented areas of the plane that could withstand damage, since the plane had returned to base.

"Wald reasoned that those planes that were actually hit in the undamaged areas he observed would not have been able to return. Hence, those undamaged areas constituted key areas to protect. A plane damaged in said areas would not have survived and thus would not have even been observed in the sample. Therefore, it would be logical to place armor around the cockpit and engines, areas observed as sustaining less damage than a bullet-riddled fuselage.

"The complex statistical research involved in these and Wald’s related findings led to untold numbers of airplane crews being saved, not only in World War II, but in future conflicts as well."

As designers we always have a theory of our proposed artefact in its intended environment. Sometimes we capture the theory in a formal specification, sometimes it's implicit in the examples we feed to some artificial neural net, frequently it's some fuzzy understanding we incorporate into a plain-language requirements document plus some test data.

In any event, the final engineered artefact embodies a theory - the theory of the environment in which it works correctly. That environment is often the real world and here we hit a problem: the real world is not a precisely-specified closed system*. Inevitably the artefact will encounter an event which is out of the envelope of its design - and then it will fail.

A good example of this is driving. Here, you are the artefact. Initially you learn in structured lessons how to control the car and tactics to safely navigate the streets.

As you gain experience, you statistically encounter fewer, rarer anomalous events. If you are lucky, your consequential mistakes will not be too serious. You update your protocol and become a better driver. But you will never be perfect.

Driving is an open system. There are (porous) boundaries around the theory of driving but as all experienced drivers know, that theory incorporates a great deal of real-world social knowledge - it's more than seeing the white lines in the rain. **


When we classify a human social role as routine, we're saying that the wider system into which the role is enrolled is effectively closed and can be pre-specified. No real systems are truly closed so we always provide an escalation route to a competent (ie more informed) authority. For truly routine roles, we don't expect that escalation to occur too frequently, or to be problematic when it does.

Bruce Schneier's excellent article is about countering cyber-attacks. This is far from routine. The adversary is using intelligence, novel tools and unfixed vulnerabilities to get you. That's pretty much the definition of an open system. Schneier describes the problem like this:
"You can only automate what you're certain about, and [...] when an uncertain process is automated, the results can be dangerous."
The right answer is to use automated systems within manageably closed subsystems (like antivirus routines) within the broader oversight of a computer-augmented human response team.
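Schneier's principle - automate only what you're certain about, escalate the rest - can be caricatured as a triage rule. The signature lists and file names below are invented purely for illustration:

```python
# Toy triage in Schneier's spirit: automate the certain cases,
# escalate everything else to the human response team.
KNOWN_MALICIOUS = {"worm.exe", "dropper.bin"}
KNOWN_BENIGN = {"report.pdf", "notes.txt"}

def triage(filename):
    if filename in KNOWN_MALICIOUS:
        return "block"      # closed subsystem: certainty, so automate
    if filename in KNOWN_BENIGN:
        return "allow"
    return "escalate"       # open system: uncertainty, humans decide

for f in ("worm.exe", "notes.txt", "mystery.bin"):
    print(f, "->", triage(f))
```

The interesting design decision is the default branch: in an open system the safe default is escalation, not an automated guess.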

Perhaps one day we will have human-socialised AIs which have the intuitions, general knowledge and motivational insights which humans possess, and then we can hand things over to those said AIs, confident they will make no more mistakes than we would in those incredibly challenging not-sufficiently-closed systems.


*   Arguably it is from the point of view of modern physics - but that doesn't buy you anything.

** Here's a review about the implications for driverless cars.

Wednesday, March 29, 2017

Bob Monkhouse's top three jokes

During his lifetime comedian Bob Monkhouse was widely disdained for a public persona of cheesy smarminess. Something which, as an ENTP,* he shared with Tony Blair.

In a generation dominated by working class comedic vulgarity, his middle-class intelligence and sophistication were evident. Consequently he was not popular with his peers.

Bob Monkhouse

For me what saved him was his sense of self-deprecating irony. Here are three of his best jokes which - despite familiarity - are still pretty good.

"They laughed when I said I was going to be a comedian ... They're not laughing now."

"I can still enjoy sex at 74 - I live at 75, so it's no distance."

"I want to die like my father, peacefully in his sleep, not screaming and terrified like his passengers."



* ENTPs don't do (tertiary) Extraverted Feeling at all well: (Myers-Briggs personality theory).


If the connection between brain architecture and personality type interests you, take a look at this post. I've been reviewing recent results from the Human Connectome Project and my remarks back then seem to stand up pretty well.

Tuesday, March 28, 2017

Roger Atkins: Mind Design notebook

Roger Atkins's career path from contracted neural network designer to chief designer at Mind Design was not a smooth one. His work was marked by dead ends, false starts and much groping around for insights. Here are extracts from his early notebooks.


" ... How much progress have we really made since the dawn of our discipline?

Back in 1959, Lettvin, Maturana and McCulloch wrote their famous paper: "What the Frog's Eye Tells the Frog's Brain".
'The frog does not seem to see or, at any rate, is not concerned with the detail of stationary parts of the world around him. He will starve to death surrounded by food if it is not moving. His choice of food is determined only by size and movement. He will leap to capture any object the size of an insect or worm, providing it moves like one. He can be fooled easily not only by a bit of dangled meat but by any moving small object.

'His sex life is conducted by sound and touch. His choice of paths in escaping enemies does not seem to be governed by anything more devious than leaping to where it is darker. Since he is equally at home in water and on land, why should it matter where he lights after jumping or what particular direction he takes? He does remember a moving thing providing it stays within his field of vision and he is not distracted.'
We think the frog sees what we see, being anthropomorphic. Instead, the frog 'sees' what evolution has designed its visual apparatus to process. The rest of their paper describes the neural net which implements the frog's visual task.

In 1982 David Marr's famous book "Vision" was posthumously published. Marr explained in mathematical terms the formal theory of visual scene recognition, starting from raw image-data, and exploiting regularities in the world. Laplacian of Gaussian convolution was followed by edge-detection and finally 3D scene acquisition. The theory could be implemented by computer code .. or by neural nets.

Marr's levels of abstraction and of visual processing (NN is neural net)
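Marr's first stages - Gaussian smoothing, a Laplacian, then edges located at zero-crossings - can be sketched in one dimension on a toy step edge. This is a minimal illustration of the idea, not Marr's actual 2D machinery:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalised 1-D Gaussian weights over [-radius, +radius]."""
    k = [math.exp(-x * x / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [w / s for w in k]

def convolve(signal, kernel):
    """1-D convolution, clamping the signal at its borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def laplacian(signal):
    """Discrete second difference (1-D Laplacian)."""
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

# A step edge between indices 9 and 10
signal = [0.0] * 10 + [1.0] * 10
smoothed = convolve(signal, gaussian_kernel(sigma=1.5, radius=4))
lap = laplacian(smoothed)

# Edges show up as zero-crossings of the Laplacian of the Gaussian
crossings = [i for i in range(len(lap) - 1)
             if lap[i] > 1e-9 and lap[i + 1] < -1e-9]
print(crossings)  # a single detected edge, at the step
```

The 2-D version applies a Laplacian-of-Gaussian filter to the whole image and traces zero-crossing contours, which is where the edge map in Marr's pipeline comes from.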

Neural networks are, in the most general sense, engineering not science. If we take the common task of scene recognition we start from an image bitmap which we process at a low level using convolutional methods to extract mid-level features and then group these to reconstruct a high level scene description.  Although the neural net is doing all this by using and/or adjusting weights between its 'neurons' we can capture the overall data structuring and processing using higher level formalisms.

The original bitmap is really just a matrix of numbers, but the set of mid-level features can be more clearly expressed as a conjunction of mid-level predicates {edge(...), vertex(...)}, while the high-level scene description could use predicate logic to explicitly represent discrete objects, attributes and relationships.

The more formal and mathematical descriptions/specifications are nevertheless implemented by weightings and connectivity in the neural net.

Neural nets do inference by linkage activation. If A → B, then activation in areas of the neural net associated with A causes activation in areas associated with B with probability 1. Less decisive or unambiguous weightings yield fuzzier inferences.
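A toy rendering of linkage activation, with node names and weights purely illustrative: a weight of 1.0 behaves like the hard A → B rule, while smaller weights give the fuzzier inferences:

```python
# Illustrative spreading-activation sketch: activation flows along
# weighted links from source areas to target areas.
links = {("A", "B"): 1.0,   # A -> B with probability 1
         ("B", "C"): 0.6}   # a fuzzier association

def propagate(activations, links, steps=2):
    """Spread activation along links for a fixed number of steps."""
    for _ in range(steps):
        new = dict(activations)
        for (src, dst), weight in links.items():
            spread = activations.get(src, 0.0) * weight
            new[dst] = max(new.get(dst, 0.0), spread)
        activations = new
    return activations

result = propagate({"A": 1.0}, links)
print(result)  # activating A fully activates B, partially activates C
```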

Similarly, modal concepts such as 'Believes(A, φ)' - as in an agent A believing the proposition φ - are represented by the neural net as an activation in the area representing the agent A being associated with another neural area representing the situation which φ describes. The activation link between those two areas captures the notion of believing, but it's a little bit mysterious as to how that believes-type link ever got learned .. perhaps it's innate?

Proceeding in this way we can imagine a neural net which creates effective representations of its environment (like the frog), which can create associations between conditions and actions, which can signal actions and thus control an effective agent in the world.

So far absolutely none of this is conscious.


Thinkers as far back as Karl Marx have believed that consciousness is a condition, and by-product, of social communication, although to be strictly accurate Karl Marx was not talking of the introspective consciousness of the psychologist, but about consciousness as a kind of revealed preference, that which is revealed through the actions of the masses.


I imagine human psychology to be implemented as a collection of semantic networks.

In the framework of neural networks, we're simply talking about a set of modularised, 'trained' neural net areas which link and communicate with each other through appropriate weights. But we can capture more of the 'aboutness' of these mini-networks by modelling them as semantic networks: semantic-net nodes are mini-theories: little collections of facts and rules; links create associations between nodal mini-theories representing relationships such as actions, or believing, knowing or wanting.

I imagine one's concept of oneself as being implemented as a large set of semantic networks capturing one's life-history memories, one's self-model of typical behaviours and future plans.

Roger Atkins's brain-model of himself and girl-friend Jill

When you think about someone else that person is also modelled as a collection of semantic networks representing much the same thing. I understand cognitive processes as metalanguage activities: operations over semantic networks which strengthen or weaken link-associations; add, modify or delete nodes, that kind of thing.

This is all very conventional but it does take us to the outer limit of design and theorising.
  • Where in this architecture is the sense of personal consciousness? 
  • Where is the sense of active awareness of one's environment? 
  • Where is pain and what would it even mean for such an architecture to be in pain?

There is an engineering approach to 'the hard problem'. We imagine a system which we think would (for example) be in pain and ask how it works.

First the pain sensors fire, then as a consequence the pain nodes in the 'semantic net' higher in the chain activate.  In turn, they invoke avoidant routines. In a lower level animal that directly generates activities designed to run or get away from the pain stimulus.

However in social creatures like ourselves, amenable to social coordination, this immediate reaction should be suspended because it could be in conflict with other plans generated, for example by 'duty'.

From an engineering point of view this suggests a multilevel system: a higher level neural network supervising low level systems.
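The multilevel arrangement can be caricatured in a few lines: a low-level reflex proposes an action and a higher 'social' level may veto it. The rules and names here are invented for illustration only:

```python
# Toy two-level control: a reflex layer proposes an action; a
# supervisory 'social' layer may suspend it.
def low_level(pain_signal):
    """Reflex layer: pain triggers immediate avoidance."""
    return "withdraw" if pain_signal else "continue"

def high_level(proposed_action, duty_active):
    """Supervisory layer: social commitments can veto the reflex."""
    if proposed_action == "withdraw" and duty_active:
        return "endure"     # e.g. holding position out of 'duty'
    return proposed_action

action = high_level(low_level(pain_signal=True), duty_active=True)
print(action)  # the reflex is overridden by 'duty'
```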

This is hardly very original however, and worse, it's all cognitive.

The higher 'social' level control system is semantically rich - but it's all cognitive and affect-free

We never get insight into how emotions or experiences emerge from this kind of architecture. We always know that there's something missing.

We say to ourselves: in the end it's all neurons. Consciousness seems to be something which is not architecturally that far from the other things the brain is doing. It's easy to divert through day-dreaming or inattention, or to turn it off with anaesthesia.

From an evolutionary/phenotype point of view the conscious brain doesn't seem to be some tremendously new thing, or a new kind of thing. And yet somewhere in this apparently small cortical delta, this small change in brain architecture, a whole new phenomenon somehow enters the game.

And nobody at all can figure out how that could be the case."


As we know, Roger Atkins went on to design Jill/AXIS - and yet artificial self-awareness/consciousness was still not intellectually cracked. The designers nervously waited upon it as an emergent phenomenon.

Monday, March 27, 2017

Diary: garden maintenance

I noticed over the last week that I was completely out of energy for weight training.

I'd spin the bike for eight minutes throwing in some twenty-second high-intensity surges, do a few press-ups, stare at the dumbbells .. then go off and get showered: no energy.

Clare finally figured it out: no carbs. And I used to know this!


In the past I was very aware that if I'd had a carb-rich meal the previous evening I was good at the gym next day. But recently, experimenting with a protein and fat-centric diet, I've been avoiding cereals and carbs except for fruit (which is just water, right?).

Saturday evening I relaxed the regime; Sunday morning I did a half hour of solid iron-lifting, book-ended with bike HIT. And today I reaped the benefits.

Invited to take the saw to the dense tree at the bottom of the garden, I was all over it.

Sadly, a major trip to the municipal dump will now have to be scheduled.

While I'm writing this, Clare is out there with the loppers, stripping branches and getting the debris in shape for the car ... .

Saturday, March 25, 2017

The Bishop's Palace in early spring

Wells Cathedral with the Bishop's Palace moat foreground
Clare fronting the Bishop's Palace 

Houses abutting Wells Recreation Ground

Sunny but a cold wind from the north-east. People on the streets seemed bewildered by the sunshine: many were wearing not much more than shorts and tee-shirts. I was in full arctic dress .. 'no such thing as bad weather, only bad clothing' .. and quite comfortable.

Friday, March 24, 2017

My dream job: chief designer at Mind Design

Amazon link

Yesterday in a conversation with Clare the topic came up of my ideal job.

No problem.

Since reading Greg Bear's Queen of Angels (years ago), I've wanted to be Roger Atkins.

But with a better name - obviously.


[AXIS (Automated eXplorer of Interstellar Space) is an AI system currently orbiting planet B-2 of Alpha Centauri B. Jill is the stay-at-home duplicate. -- From pp. 128-130.]

"LitVid 21/I A Net (David Shine): "We're preparing for an interview with Roger Atkins, chief designer at Mind Design Inc. responsible for AXIS's thinker device. What questions would you like to ask of the nation's foremost designer of thinking machines? For you know of course that thinking is different from computing.

"Roger Atkins regards computers as an architect might regard bricks. He is at this moment working with his massive personal construct thinking system, which he calls Jill, after an old, that is, a former girlfriend. Part of Jill is in fact the AXIS Simulation we have been mentioning throughout this vid-week, used to model the activities of AXIS itself, which is not directly accessible.

"But there are many more parts to Jill. Jill's central mind and most of her memory and analytical peripherals are on the grounds of Mind Design Inc near Del Mar, California; Jill can access other thinkers and analytical peripherals at Mind Design Inc facilities around the world, some by satellite, most by direct optical cable connections. While we speak with Mr. Atkins, we hope also to ask a few questions of Jill.

"And we begin right now. Mr. Atkins, in the past twenty five years you have moved from the status of a contracted neural network computer designer to perhaps the most important figure in artificial intelligence research. You seem to be in an ideal position to tell us why complete, self-aware artificial intelligence has proven to be such a difficult problem."

Atkins: "First of all, my apologies, but Jill is asleep right now. Jill has been working very hard recently and deserves a rest.

"Why is artificial intelligence so difficult? I think we always knew it would be difficult. When we say artificial intelligence, of course what we mean is something that can truly imitate the human brain. We've long since had thinking systems that could far outstrip any of us in basic computation, memorizing, and for the past few decades, even in basic investigative and creative thinking, but until the design of AXIS and Jill, they were not versatile. In one way or another, these systems could not behave like human beings.

"And one important consideration was that none of these systems was truly self-aware. We believe that in time Jill, and perhaps even AXIS itself, will be capable of self-awareness. Self-awareness is the most obvious indicator of whether we have in fact created full artificial intelligence."

David Shine: "There's a joke about self-awareness ... Could you tell it to us?"

Atkins: "It's not much of a joke. No human would laugh at it. But all modern workers in artificial intelligence have installed a routine that will, so to speak, 'laugh' or perceive humor in this joke should self-awareness occur in a system."

David Shine: "And what is the joke?"

Atkins: "It's embarrassingly bad. Someday perhaps I'll change it.

   'Why did the self-aware individual look at his image in the mirror?'

David Shine: "I don't know. Why did he?"

Atkins: "'To get to the other side.'"

David Shine: "Ha."

Atkins: "See, not very funny."

David Shine: "LitVid 21 viewer Elaine Crosby, first question to Mr. Atkins please."

LVV E Crosby Chicago Crystal Brick: "Mr. Atkins, I've read your lit, and I've long admired your work, but I've always been curious. If you do awaken Jill or some other machine, what will you tell them about our world? I mean, they'll be as innocent as children. How do you explain to them why society wants to punish itself, why we're so set on lifting ourselves up by our bootstraps whatever it takes, and we don't even know where we're going?"

Atkins: "Jill is hardly innocent. Just a few minutes ago, she was examining the theory of social feedback loops, that is, checks and balances in a society. She could probably tell us more about what troubles our society than any single human scholar.

"But that's just recreation for her, in a way; unless someone comes along and specifically asks us - or rather, rents Jill - she won't provide her analysis, but it'll be stored away. I doubt that even if she did solve our problems for us, we'd listen to her."

The novel was written in 1990, twenty-seven years ago. Yet the narrative on AI is completely contemporary. The 'joke' is interesting: what humour it possesses would seem to reside in its character as a weak pun. Perhaps that just shows I'm not self-aware.

At time of writing, I suspect the leading candidate for Roger Atkins's job is Andrew Ng.

Thursday, March 23, 2017

Terror: can AI help?

My go-to guy on the Labour Left, Phil Burton-Cartledge, had this to say about yesterday's terror attack in central London.
"And, in a very rare instance, I'm going to defend the intelligence services. There is a very good chance the assailant was on a terror watch list. It's quite possible he had been or was presently under surveillance.

"Inevitably, the questions will be asked why he wasn't detected and/or picked up before now and prevented from undertaking this afternoon's attack. Again, while it's right such issues should be explored, lessons drawn and, if there is a case of egregious carelessness that those responsible be held to account, what really has to be asked is what could have been done differently?

"Thankfully, we don't have indefinite detention without trial of suspects, but unless there are teams on standby covering the move of every single suspect then the answer has to be very little.

"Watching someone getting into their car and driving into central London is not immediately suggestive of suspicious activity. There is no way his intent to kill could be inferred before the car mounted the pavement and started accelerating towards passers by.

"This kind of attack is next to impossible to prevent if someone is so minded to carry it out."
These are very good points. But could AI have helped prevent the attack?

There are two issues to separate:
  • recognising that an attack is in progress
  • dealing with it.
Both are difficult.

Modern naval vessels are subject to attack from hypervelocity sea-skimming missiles. The time between detection and impact is short - too short to allow humans in the defence loop. The ship's AI monitors sensor systems such as radar, and has direct control over its terminal defence systems. An example would be the Phalanx.

If the AI gets it wrong and is too aggressive .. well, stuff happens in the military.


Deep Learning systems are typically trained on vast datasets, essential to extract the relevant classificatory features from the enormous space of possible variations.
  • I doubt we have large datasets of events as rare and varied as 'Islamic terrorist attacks'.

  • I therefore doubt we could extract a definitive feature set which would reliably partition attacks from the myriad of events which define normal life.

Still, in a more constrained situation such as an entranceway, you could create plenty of simulated data of intruders armed with knives or guns, and I suspect a recogniser could be made to work. The police or guards would call in an attack anyway, so the AI system might buy you only a few seconds (and I'd be worried about all those false positives and false negatives).

But it would really come into its own if it could be linked to a fast response system.

I rather admire those security guys who take down armed assailants. Almost all of the time on their watch, nothing bad is happening. Then comes an exceedingly rare event which they have to classify within seconds as requiring lethal force - and get it right. The natural human reaction would be to hold off for fear of making a catastrophic error. It's a hard call.

In Richard K. Morgan's SF pulp noir "Altered Carbon", the AI system at the 'Hendrix Hotel' takes out hero Takeshi Kovacs's attackers (once he has thumb-printed his contract!):
"I straightened again and snapped my hand out to the keypad beside the screen. Traces of fresh spittle smeared over the matt black receiver. A split second later a calloused palm edge cracked into the left side of my skull and I collapsed to my hands and knees on the floor. A boot lashed into my face and I went the rest of the way down.

'Thank you sir.' I heard the voice of the hotel through a roaring in my head. 'Your account is being processed.'

I tried to get up and got a second boot in the ribs for the trouble. Blood dripped from my nose onto the carpet. The barrel of the gun ground into my neck.

'That wasn't smart, Kovacs.' The voice was marginally less calm. 'If you think the cops are going to trace us where you're going, then the stack must have fucked your brain. Now get up! '

He was pulling me to my feet when the thunder cut loose.

Why someone had seen fit to equip the Hendrix's security systems with twenty-millimetre automatic cannon was beyond me, but they did the job with devastating totality. Out of the corner of one eye I glimpsed the twin-mounted auto-turret come snaking down from the ceiling just a moment before it channelled a three-second burst of fire through my primary assailant. Enough fire-power to bring down a small aircraft. The noise was deafening. "
Decisions which seem difficult when you're thinking at human speeds seem a lot more tractable when time is slowed by a factor of ten or one hundred .. or when thinking speed is cranked up by a similar amount.

Something tells me the Hendrix Hotel is the way of the future.

Wednesday, March 22, 2017

A star orbitally skimming a black hole

Centauri Dreams had a post yesterday describing the unusual system 47 Tuc X9 which is 14,800 light years from Earth. It appears to be a white dwarf star in a very close orbit (radius about a million kilometres) around a black hole, with orbital period 28 minutes (!).

The post is illustrated by an artist's impression of the system:

and a statement of the curious orbital dynamics:
"This white dwarf is so close to the black hole that material is being pulled away from the star and dumped onto a disk of matter around the black hole before falling in,” says lead author Arash Bahramian (University of Alberta and Michigan State University). “Luckily for this star, we don’t think it will follow this path into oblivion, but instead will stay in orbit. ...

"We think the star may have been losing gas to the black hole for tens of millions of years and by now has lost the majority of its mass. Over time, we think that the star's orbit will get wider and wider as even more mass is lost, eventually turning into an exotic object similar to the famous diamond planet discovered a few years ago."
It isn't obvious why the star should be spiralling away from the black hole, and several commentators got confused (hint: it's not frame-dragging).

From the star's period and orbital radius we can work out the mass of the black hole: 94 solar masses. From this we can calculate the black hole's Schwarzschild radius: an event horizon of 277 km, just under 100 times larger than that of a one-solar-mass black hole (3 km).

The black hole's gravity at the star's orbital radius is 13.4 km/sec², or just under 1,400 g. You can see why the star is whipping around so fast (c. 3,300 km/sec, about 1% of the speed of light).
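These figures are easy to sanity-check from Kepler's third law, M = 4π²r³/GT². A quick check in Python, using round inputs (r of exactly a million kilometres, which is why the results land near, rather than exactly on, the quoted 94 solar masses, 13.4 km/sec² and 3,300 km/sec):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg

r = 1.0e9            # orbital radius: "about a million kilometres", in metres
T = 28 * 60          # orbital period: 28 minutes, in seconds

# Kepler's third law for a circular orbit: M = 4 pi^2 r^3 / (G T^2)
M = 4 * math.pi**2 * r**3 / (G * T**2)
print(f"black-hole mass ~ {M / M_SUN:.0f} solar masses")   # ballpark of the quoted 94

# Schwarzschild radius of the quoted 94-solar-mass hole: R_s = 2GM/c^2
R_s = 2 * G * 94 * M_SUN / c**2
print(f"event horizon   ~ {R_s / 1e3:.0f} km")             # ~277 km, as in the text

# Gravity at the star's orbital radius, and the star's orbital speed
g = G * 94 * M_SUN / r**2
v = 2 * math.pi * r / T
print(f"gravity at star ~ {g / 1e3:.1f} km/s^2 ({g / 9.81:.0f} g)")
print(f"orbital speed   ~ {v / 1e3:.0f} km/s ({100 * v / c:.2f}% of c)")
```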

Try to imagine it. If this black hole were placed at the centre of the Earth, it would be an unimaginably tiny object (277 km!) in the middle of the core. The star, meanwhile, is two and a half times the distance of the Moon. The star's experience of the black hole comes down to some pretty crazy tidal forces.

We know about tidal forces: they try to tear the star apart and rearrange its material into an orbital ring. None of this would explain material infalling into the black hole or the star spiralling outwards. We don't see such phenomena at Saturn for example.

The secret is explained by the authors in this remark:
"Low mass X-ray binaries (LMXBs) are systems in which a compact object [neutron star (NS) or black hole (BH)] accretes matter from a low mass companion (typically a main sequence star) through Roche-lobe overflow or wind-fed accretion (from a red giant). ...

"In the most likely scenario, this particular star would have first started losing mass to the suspected black hole several tens of millions of years ago when it was much closer, in an orbit with a period of just minutes.

"Over time, as that star has lost most of its mass, the size of the orbit would have increased, and the rate at which mass has been lost to the black hole would have decreased. The rate of mass loss would once have been a billion times higher. So yes, the star would initially have been much closer to the black hole.

"How close a star can get to a black hole before starting to lose mass to the black hole depends on the kind of star it is. Big, fluffy giant stars can lose gas to a black hole when they are much further away than small, compact stellar remnants like this white dwarf, whose gravity is strong enough that they are able to hold onto their mass more tightly, so need to get much closer before mass can be torn away.

"We also think that this star will have been gradually losing mass over tens to hundreds of millions of years; in this case it is not being torn apart in a single cataclysmic event that results in it being shredded into streams of debris, as we have seen in spectacular outbursts from the centres of some external galaxies (known as tidal disruption events).

"Rather, in this case, we have a steady loss of mass to the black hole over time."
The Roche-lobe overflow effect is an interesting one (Wikipedia article). If material from the star can reach the L1 Lagrange point (between the star and the black hole) it can migrate to the black hole itself. The remaining stellar material then has higher specific angular momentum than before, so its orbital radius increases. [Note: but apparently not - see comments.]

Roche Lobe potential: from the Wikipedia article

There are few things more counter-intuitive than orbital mechanics.

How language irritates!

Yesterday the BBC News carried long minutes of programming devoted to the life of Martin McGuinness: warrior/terrorist turned statesman/peacemaker in the official narrative.

And there were repeated references to 'Londonderry'.

Yes, Derry is called 'Londonderry' whenever the BBC wants to line up behind the Unionists; it used to be a signifier for a nationalist atrocity. Conversely, whenever the Protestants did something awful to the Catholics, we'd hear about 'Derry'.

I was shouting at the TV: "It's Derry!".


There was a moment when the media got confused about the US/UK laptop/tablet hand-baggage ban on flights from the Middle-East. The BBC Newsreader put on her best pinched moue and spoke of the ban on flights from 'six mainly-muslim' countries.

This recent stock-phrase, 'mainly-muslim', is virtue-signalling for "You are a right-wing Islamophobe and we are better than you".

Then they discovered that the bans were not Trump-fascism but a sensible and graduated intelligence-led assessment based on well-founded threats from ISIS ('so-called Islamic State'). Suddenly, 'mainly-muslim' was dropped and it became: 'flights from six Middle-Eastern countries'.

And I was able to stop shouting at the TV.

Tuesday, March 21, 2017

"International Trotskyism 1929–1985"

Robert J. Alexander wrote this magisterial tome in 1991: "International Trotskyism 1929–1985: A Documented Analysis of the Movement".

At 1,141 pages and out of print, it is a labour which will surely never be repeated. However, the PDF exists on the Internet and you can download it here.

The history of the International Marxist Group (IMG) is covered in pages 492-496. The quote from my previous post, on the occasion of Martin McGuinness's death, is on page 493.

There is a detailed and somewhat critical review of the book here (1993).

"Victory for the IRA"

In 1972 I was a member of the International Marxist Group as we marched through London in support of the Irish struggle.

Martin McGuinness: 23 May 1950 – 21 March 2017

The IMG became rather notorious for its slogan: "Victory for the IRA"*. I was present at the leadership committee where this was discussed. Tariq Ali was the main proponent, arguing in his typical romantic-revolutionary way that the IRA were so vilified by the British establishment that we revolutionaries - in the belly of the beast - were morally obligated to stand against the current and show our full solidarity for the national liberation struggle.

John Ross, whom I supported, argued mildly that while we thoroughly disapproved of the IRA's campaign of voluntarist violence and bombings, we were nevertheless duty-bound to stand in solidarity with their anti-imperialist struggle - and that the correct Marxist slogan was therefore "Solidarity with the IRA".

He was undoubtedly correct, but Tariq's élan won the day.

I remember marching on the streets, braving the hostility of the watching crowds and the contempt of the stewarding police, chanting "Victory .. to .... the IRA!" and feeling proud of my party discipline even as I compounded its error. It was not clear to any of us what a 'victory' for the IRA would even look like.

Interestingly, on the occasion of Martin McGuinness's death, we've not moved on that much.


* The quote is from this book.

Improved resolution theorem prover (RTP v. 1.11)

[Update: March 26th 2017: small tweak for incremental release RTP v. 1.11. The change is to the order of new goals in 'binary-resolve', which adopts a faster-converging pure depth-first search. See the READ-ME file for details.]


You will have previously seen the announcement about the first (tutorial) version of my resolution theorem prover in Lisp.

Today I'm releasing version 1.1 which is easier to use and gives a clearer output.

Below are the "Who does Lady Sally Fowler like?" proofs (compare with the previous version).

The code is available at the AI code in Lisp Resource on the right of your (web view) screen.

Or follow these links for the program file and genealogical test data.

Retrieve program file: RTP-v1.11 and test-data:  Genealogical-etc-test-data.

---    To determine (likes sally ?who)    ---

 > (prove-it *g-mge1* *mge-axioms* 8)

Do you want to see all the proof steps y/n?  n
Conclusion = ((<- ((LIKES SALLY SALLY))))

(13)   ((LIKES ?X ?X) <-)
(14)   (<- ((LIKES SALLY ?WHO)))
(19 14 13)   +EMPTY-CLAUSE+

Conclusion = ((<- ((LIKES SALLY ROD))))

(5)   ((LIKES SALLY ROD) <-)
(14)   (<- ((LIKES SALLY ?WHO)))
(16 14 5)   +EMPTY-CLAUSE+

Conclusion = ((<- ((LIKES SALLY RENNER))))

(14)   (<- ((LIKES SALLY ?WHO)))
(15 14 4)   +EMPTY-CLAUSE+

Conclusion = ((<- ((LIKES SALLY MOTIES))))

(13)   ((LIKES ?X ?X) <-)
(10)   ((LIKES SALLY ?X) <- ((LIKES ?X MOTIES)))
(14)   (<- ((LIKES SALLY ?WHO)))
(17 14 10)   (<- ((LIKES ?WHO MOTIES)))
(24 17 13)   +EMPTY-CLAUSE+

Conclusion = ((<- ((LIKES SALLY HORVATH))))

(10)   ((LIKES SALLY ?X) <- ((LIKES ?X MOTIES)))
(14)   (<- ((LIKES SALLY ?WHO)))
(17 14 10)   (<- ((LIKES ?WHO MOTIES)))
(20 17 9)   +EMPTY-CLAUSE+

Conclusion = ((<- ((LIKES SALLY SALLY))))

(13)   ((LIKES ?X ?X) <-)
(10)   ((LIKES SALLY ?X) <- ((LIKES ?X MOTIES)))
(10)   ((LIKES SALLY ?X) <- ((LIKES ?X MOTIES)))
(14)   (<- ((LIKES SALLY ?WHO)))
(17 14 10)   (<- ((LIKES ?WHO MOTIES)))
(21 17 10)   (<- ((LIKES MOTIES MOTIES)))
(27 21 13)   +EMPTY-CLAUSE+

RTP Release 1.1 description.

Version 1.1: 21st March 2017

Use with test data file:  Genealogical-etc-test-data   ('load' statement at end of the file).

Version 1.01 remains the best first tutorial for understanding the program at code level: there are plenty of comments showing worked executions. But its procedure for invoking the program is clumsy.

This version has:
  1. Stripped out many of the comments for improved code clarity.
  2. Wrapped up execution in a new function, 'prove-it'.
  3. Improved functions to print the proofs created by the program. 
All new functions at the end of the file.

The only other change is an altered search-strategy order in 'prove1':

.       (next-igb-list     (append new-igb-list igb-list1)) ) ; search strategy

which seems to help convergence.


The main test data for the new version is the genealogical axiom set which you will find in the test data mentioned above (or for an early version scroll to the bottom here).

It does lead to some tortuous proofs: for example, this one.


> (defvar *gg6* (mk-goal-clause '((grandfather gerry james))))

> (prove-it *gg6* *family-tree-axioms* 12)
Conclusion = ((<- ((GRANDFATHER GERRY JAMES))))

(2)   ((MALE PETER) <-)
(1)   ((MALE GERRY) <-)
(23)   ((PARENT ?X ?Z) <- ((MALE ?X) (CHILD-OF ?X ? ?Z)))
(23)   ((PARENT ?X ?Z) <- ((MALE ?X) (CHILD-OF ?X ? ?Z)))
(1)   ((MALE GERRY) <-)
(32)   ((GRANDFATHER ?X ?Z) <- ((MALE ?X) (PARENT ?X ?Y) (PARENT ?Y ?Z)))
(40 39 32)   (<- ((MALE GERRY) (PARENT GERRY ?Y1425) (PARENT ?Y1425 JAMES)))
(41 40 1)   (<- ((PARENT GERRY ?Y1425) (PARENT ?Y1425 JAMES)))
(42 41 23)   (<- ((PARENT ?Y1425 JAMES) (MALE GERRY) (CHILD-OF GERRY ?1498 ?Y1425)))
(44 42 23)   (<- ((MALE GERRY) (CHILD-OF GERRY ?1498 ?Y1425) (MALE ?Y1425) (CHILD-OF ?Y1425 ?1547 JAMES)))
(46 44 1)   (<- ((CHILD-OF GERRY ?1498 ?Y1425) (MALE ?Y1425) (CHILD-OF ?Y1425 ?1547 JAMES)))
(47 46 11)   (<- ((MALE PETER) (CHILD-OF PETER ?1547 JAMES)))
(49 47 2)   (<- ((CHILD-OF PETER ?1547 JAMES)))
(50 49 13)   +EMPTY-CLAUSE+

So that means it's already a superhuman reasoner, right?

Monday, March 20, 2017

AI code in Lisp: new resource here

New resource on the sidebar to your right. This should please readers wondering why vast screeds of Lisp code randomly appear here, interrupting more erudite essays on this & that.

Except that, as I write this, 134 of you have already visited: Description: Theorem Prover in Lisp.

Here's what the READ ME at the new sidebar says.


These Common Lisp files contain AI programs which are organised around the theme of building a chatbot.

They will all run independently and were developed in LispWorks free Personal Edition.

You can use the code as you like. It's not supported and there will certainly be bugs I haven't spotted.

I think of the code as a toolkit, there to be modified.


To come:

1. Upgrades to the resolution theorem-prover improving the display of proofs + any bug fixes.

2. An AI planner, oriented both towards a toy, virtual, physical world and speech acts for conversation planning.

3. A design for 'internal emotional states' to create some 'point' for the chatbot's autonomous behaviour; we need something more interesting than a natural language interface to Wikipedia-style queries.

Plus integration of all the above.

Sunday, March 19, 2017

Readings to Clare

I assume some people liked my audio recording of "The Mote in God's Eye"?

There's a new tab on the right: 'Readings to Clare'.

Here's the directory:

The books marked +....-abandoned I didn't finish. They didn't work for one reason or another as books to be read aloud to Clare. You get the earliest chapters, but then .. abandoned.

  • Game of Thrones is good. I abandoned the sequel, as the action began to slow down in the middle of the second volume. We saw the TV series instead.

  • The Patrick Lee books are uniformly excellent, but unfortunately I did not record the first book, 'The Breach', in the 'Breach' sequence. You'll have to get that for yourself.

  • There is one early chapter omitted in 'Conclave' (accident of uploading) but I summarise each chapter at the start of the next reading so it's not a show-stopper. Conclave is, in any case, excellent.

  • I was not especially impressed by Lee Child's book, nor Philip Kerr's.

  • Ursula Le Guin's Earthsea books are excellent and I will be reading the rest of the sequence in time.

  • I am currently reading 'Darkness at Noon' - that's in progress.

Here's Clare, about to go out yesterday.

Thursday, March 16, 2017

Description: Theorem Prover in Lisp

[Update: RTP release 1.1 is now available.]

As I mentioned, I've now completed basic testing of my (very, very simple) resolution theorem prover in Common Lisp: the code is ready to load. But read the stuff below first.

1. How to run the system

Once you have the code in front of you, go to Part 2: clauses and extended clauses where you will find the test-data definitions as follows:
(defvar *c1* (mk-fact-clause '(member ?item (?item . ?rest ))))
(defvar *c2* (mk-rule-clause '(member ?item (? . ?rest)) '((member ?item ?rest))))
(defvar *c3* (mk-fact-clause '(likes rod horvath)))
(defvar *c4* (mk-fact-clause '(likes sally renner)))
(defvar *c5* (mk-fact-clause '(likes sally rod)))
(defvar *c6* (mk-fact-clause '(likes hardy renner)))
(defvar *c7* (mk-fact-clause '(likes hardy rod)))
(defvar *c8* (mk-fact-clause '(amusing hardy)))
(defvar *c9* (mk-fact-clause '(likes horvath moties)))
(defvar *c10* (mk-rule-clause '(likes sally ?x) '((likes ?x moties)) ) )
(defvar *c11* (mk-rule-clause '(likes rod ?x) '((likes ?x renner) (likes ?x rod))))
(defvar *c12* (mk-rule-clause '(likes ?x ?y) '((amusing ?y) (likes rod ?y)) ) )
(defvar *c13* (mk-fact-clause '(likes ?x ?x)))

(defvar *g1* (mk-goal-clause '((likes sally ?who))))
(defvar *g2* (mk-goal-clause '((member sally (rod renner horvath sally . moties)))))

(setf *axioms* (list *c1* *c2* *c3* *c4* *c5* *c6* *c7* *c8*
                 *c9* *c10* *c11* *c12* *c13*))

(setf *goals* (list *g1*))
Input your own facts, rules and goals and assign them to variables *axioms* and *goals* as shown above. Or just use my test data to check the system out in the first instance.

Now go to Part 4:  TEST DATA and execute the following commands as shown there.
(defvar *igb-list-ia-list* (initialise-databases *goals* *axioms*))

(defvar *igb-list* (first *igb-list-ia-list*))  ; extended goal clause(s)

(defvar *ia-list*   (second *igb-list-ia-list*))  ; extended axiom clauses

(setf archived-goals-and-proofs (prove *goals* *axioms* 6))    ; depth 6 here

(pprint (setf all-proofs (show-all-proofs archived-goals-and-proofs *ia-list*)))
The search depth (eg 6) is arbitrary - try 4 or 8 etc.

With *1step* set to true,
(defvar *1step* t)        ; If t stops proof loop each iteration and
;                                      prints archived-goals, archived proofs & next goals
the theorem-prover will print its workings on each inference cycle and wait for input before proceeding. If you just want the thing to run to completion, set *1step* to nil.


2. In-code documentation

Most of the code is heavily documented with runtime examples - but read this post first.


3. Data Structures

The data structures are as follows.

1. We start with literals like (LIKES ?X RENNER), (LIKES SALLY ?WHO) - which the function 'unify' tries to unify to create bindings like this:
((?WHO . RENNER) (#:?X1113 . SALLY)).
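For readers who prefer to see the mechanism outside Lisp, here is a minimal unification sketch in Python (my own illustrative rendering, not the code from the file; like classic naive unifiers it omits the occurs check):

```python
def is_var(t):
    """Variables are symbols beginning with '?', mirroring the Lisp ?X, ?WHO."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, s):
    """Dereference a variable through the binding list s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(x, y, s=None):
    """Return bindings extending s that unify x and y, or None on failure."""
    if s is None:
        s = {}
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

print(unify(('likes', '?x', 'renner'), ('likes', 'sally', '?who')))
# → {'?x': 'sally', '?who': 'renner'}
```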

2. At the level of basic binary resolution we have clauses, which look like this:

A fact is a clause (a list of literals) which contains only one (positive) literal followed by <- .
(defvar *c1* '((likes horvath moties) <-))                          ; a fact clause
A rule is a clause (a list) which is a (positive) literal (the clause head) followed by <- and then a list of literals - the body of the clause.

We use <- with value nil as a spacer only for human readability.
(defvar *c2* '((likes sally ?who) <- ((likes ?who ?y) (likes ?y moties))))     ;  rule
A goal is a clause (a list) of one or more negative literals. First element is <- for human readability.
(defvar *g* '(<- ((likes ?x renner) (likes ?x rod)) ) )          ;  a goal clause

3. At the level of running the whole proof procedure we have extended-clauses, sometimes written as igb-list and ia-list for extended goal clauses and extended axiom clauses. The letter 'i' stands for index, and 'b' stands for binding.

An extended clause is a triple: index,  clause, then (if a goal) binding. It is implemented as a list like this: (index clause binding). Example:
(defvar *icb* '((16 (4 (nil nil)) (2 (nil nil)))                   ; Index
                     (<- ((likes ?x renner) (amusing ?x)))       ; Clause
                              ((T . T))))                                    ; Binding
An index is a binary tree of integers, with leaves nil.
Example: (1 (nil nil))   or   (16 (12 (7 (nil nil)) (4 (nil nil)) ) (5 (nil nil)) )
Each leading integer is the current clause number; its two immediate children are the two resolution parent clauses .. and so on. Initial goals and axioms have no parents, hence the nils.

The reason for extended clauses is that we need to keep track of the inference steps in order to reconstruct the complete proofs afterwards. The bindings are kept so that we can instantiate the variables in the original query - which is exactly the answer we need!
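Reconstructing a proof from such an index is then a simple post-order walk over the tree. A hypothetical Python rendering, with nested tuples standing in for the Lisp lists and None for the nil leaves:

```python
def proof_steps(tree):
    """Post-order walk of an index tree (n, (left, right)):
    parent clauses are listed before the clause they produced."""
    n, (left, right) = tree
    steps = []
    if left is not None:
        steps += proof_steps(left)
    if right is not None:
        steps += proof_steps(right)
    steps.append(n)
    return steps

# The second example index from the text: clause 16 was resolved from
# clauses 12 and 5, clause 12 from 7 and 4, down to parentless clauses.
index = (16, ((12, ((7, (None, None)), (4, (None, None)))),
              (5, (None, None))))
print(proof_steps(index))   # → [7, 4, 12, 5, 16]
```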


4. FAQs (reprinted from here).

4.1. Isn't this just Prolog?

Prolog is encountered as a black box. You provide a program and a query (as above) and you get back bindings, like this:
?- likes(sally, W).

W = sally;
W = rod;
... and so on

If you want proofs, want to add the 'occurs check', or want to change the order in which resolution steps are tried (the search strategy) - well, tough: all those things are fixed and opaque in Prolog. To make them available for modification, you have to write an explicit theorem-prover in Prolog (which can of course be done).

4.2. You are using Horn clauses?

Yes. I initially thought to implement a general clausal prover, but Horn clauses make the resolution step particularly simple (just one positive literal to match) and you lose neither generality nor expressive power. But since everything in the code is both modular and explicit, it would be easy to extend the program.

4.3. The style is functional?

Yes. Resolution theorem provers on the Internet are heavily optimised with clever, imperative algorithms and direct access data structures such as hash tables. This makes the code efficient but obscure - very hard to understand.

I didn't want to do that. This is a tool-kit and I'm not trying to create thousands of logical inferences per second. My intended application area (simple inference over an ever-changing knowledge-base for a chatbot) never requires massive inferential chains, so clarity and easy modifiability were my objectives.

4.4. What was hard?

The program is architected at three levels.

(a) Unification

This is basically already quite modular and straightforward. I re-used Peter Norvig's code.

(b) Resolution

Binary resolution itself - especially with Horn clauses - is a straightforward procedure, as you will see in the code. There are some subtleties with bindings and substitutions, but once you realise that resolution is fundamentally a local operation it's not too difficult.
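To illustrate that locality, here is a minimal sketch of one binary resolution step in Python (illustrative only, not the Lisp code: flat, function-free literals, and the standardising-apart of variables between clauses is omitted):

```python
def walk(t, s):
    """Dereference a '?'-prefixed variable through the bindings s."""
    while t.startswith('?') and t in s:
        t = s[t]
    return t

def unify_lits(a, b, s):
    """Unify two flat literals like ('likes', 'sally', '?who'); None on failure."""
    if len(a) != len(b) or a[0] != b[0]:
        return None
    for x, y in zip(a[1:], b[1:]):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if x.startswith('?'):
            s = {**s, x: y}
        elif y.startswith('?'):
            s = {**s, y: x}
        else:
            return None
    return s

def binary_resolve(goals, rule):
    """Resolve the first goal literal against a Horn clause (head, body).
    Returns the new goal clause, [] for the empty clause, or None on failure."""
    head, body = rule
    s = unify_lits(goals[0], head, {})
    if s is None:
        return None
    return [tuple(walk(t, s) for t in lit) for lit in body + goals[1:]]

goals = [('likes', 'sally', '?who')]
rule  = (('likes', 'sally', '?x'), [('likes', '?x', 'moties')])
fact  = (('likes', 'horvath', 'moties'), [])
g2 = binary_resolve(goals, rule)    # → [('likes', '?x', 'moties')]
print(binary_resolve(g2, fact))     # → [] : the empty clause, proof found
```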

(c) Control and proof construction

The process of creating new goals and resolving them with the axioms is somewhat complex although again, sticking with Horn clauses makes it a lot easier: just think of a naive Prolog execution model.

However, if you want to return proofs, you need to number the axioms and number-and-archive goals as you process them, capturing the resulting tree of inferences. At the end, for each empty-clause = successful proof, you need to trace that index-tree backwards to recover the proof steps. I found it a bit intricate.

4.5. What's next?

In theory you can do anything with a theorem-prover (Prolog being the existence proof), but it's not necessarily the best architecture. For a planner, where state changes are part of the problem definition, I need to adapt the tool-kit to a design centred around actions with pre- and post-conditions, operating in the context of a goal-state and an updating world-state. Such a dynamic model can be used both for actions in a (virtual) world and for conversation planning via speech acts.
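That action-centred design is essentially STRIPS-flavoured. A minimal sketch under assumed representations - world-states as sets of ground facts, and a hypothetical 'pickup' action invented purely for illustration:

```python
# World-states are sets of ground facts (tuples). An action is applicable when
# its preconditions are a subset of the state; applying it deletes and adds facts.
def applicable(action, state):
    return action['pre'] <= state

def apply_action(action, state):
    return (state - action['delete']) | action['add']

pickup_a = {                                    # hypothetical action: pick up block a
    'pre':    {('handempty',), ('clear', 'a')},
    'delete': {('handempty',), ('clear', 'a')},
    'add':    {('holding', 'a')},
}

state = {('handempty',), ('clear', 'a'), ('on', 'a', 'table')}
if applicable(pickup_a, state):
    state = apply_action(pickup_a, state)
print(sorted(state))   # → [('holding', 'a'), ('on', 'a', 'table')]
```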

The theorem-prover remains optimal as a separate module, managing the crystallized knowledge base using inference to draw conclusions - for example to drive questions and answers.

I'm thinking of using the already-programmed Eliza front-end as a robust conversational interface. Doesn't matter if it's bluffing some of the time if it can use learning, inference and planning often enough.

Onwards to the GOFAI smart chatbot ...


5. Miscellaneous

As usual, code is provided for free use, unsupported and with no guarantees at all that there aren't bugs I've missed. Feel free to tell me in the comments.

Other code you may find useful:
  1. A resolution theorem prover in Lisp (as described here)
  2. Prolog in Lisp
  3. Minimax (noughts-and-crosses) in Lisp
  4. Eliza in Lisp
 An example of input and output of the prover.


Update: Saturday March 18th 2017.

1. Code here slightly updated (to v. 1.01) with change to 'show-proof' (and 'show-all-proofs') to add an additional 'top level goals' parameter, thereby getting rid of the previous embedded global variable. This makes the code completely functional now.

2. With further testing (on the genealogical axiom set below), with deeper inferences, I have noticed that naive pprint doesn't do a good job of showing the output - the 'found proofs'. Over the next few days I'll write a dedicated display function which can handle more deeply-nested proofs and let you know when it's done. I'll also provide a proper version of the genealogical test data below (testing is incomplete, so it's illustrative only at this point).

; ---  Test data: family tree : axioms and goal ---
;   with anglicised names and modified rules.

(defvar *g1* (mk-fact-clause '(male gerry)))
(defvar *g2* (mk-fact-clause '(male peter)))
(defvar *g3* (mk-fact-clause '(male john)))
(defvar *g4* (mk-fact-clause '(male mike)))
(defvar *g5* (mk-fact-clause '(male james)))
(defvar *g6* (mk-fact-clause '(male hugo)))

(defvar *g7* (mk-fact-clause '(female mary)))
(defvar *g8* (mk-fact-clause '(female jane)))
(defvar *g9* (mk-fact-clause '(female sarah)))
(defvar *g10* (mk-fact-clause '(female christine)))

(defvar *g11* (mk-fact-clause '(child-of gerry mary peter)))
(defvar *g12* (mk-fact-clause '(child-of gerry mary john)))
(defvar *g13* (mk-fact-clause '(child-of peter jane james)))
(defvar *g14* (mk-fact-clause '(child-of peter jane mike)))
(defvar *g15* (mk-fact-clause '(child-of john sarah christine)))
(defvar *g16* (mk-fact-clause '(child-of john sarah hugo)))

(defvar *g17* (mk-fact-clause '(brother peter john)))
(defvar *g18* (mk-fact-clause '(brother john peter)))
(defvar *g19* (mk-fact-clause '(brother james mike)))
(defvar *g20* (mk-fact-clause '(brother mike james)))

(defvar *g21* (mk-fact-clause '(brother hugo christine)))
(defvar *g22* (mk-fact-clause '(sister christine hugo)))

(defvar *g23* (mk-rule-clause '(parent ?X ?Y) '((child-of ?X ? ?Y))))
(defvar *g24* (mk-rule-clause '(parent ?X ?Y) '((child-of ? ?X ?Y))))

(defvar *g25* (mk-rule-clause '(sibling ?X ?Y) '((brother ?X ?Y))))
(defvar *g26* (mk-rule-clause '(sibling ?X ?Y) '((sister ?X ?Y))))

(defvar *g27* (mk-rule-clause '(father ?X ?Z) '((child-of ?X ? ?Z))))

(defvar *g28* (mk-rule-clause '(mother ?Y ?Z) '((child-of ? ?Y ?Z))))

(defvar *g29* (mk-rule-clause '(son ?X ?Y ?Z)
                              '((male ?X) (father ?Y ?X) (mother ?Z ?X))))

(defvar *g30* (mk-rule-clause '(daughter ?X ?Y ?Z)
                              '((female ?X) (father ?Y ?X) (mother ?Z ?X))))

(defvar *g31* (mk-rule-clause '(wife ?X ?Y) '((female ?X) (child-of ?Y ?X ?))))

(defvar *g32* (mk-rule-clause '(grandfather ?X ?Z)
                              '((male ?X) (parent ?X ?Y) (parent ?Y ?Z))))

(defvar *g33* (mk-rule-clause '(grandmother ?X ?Z)
                              '((female ?X) (parent ?X ?Y) (parent ?Y ?Z))))

(defvar *g34* (mk-rule-clause '(uncle ?X ?Z)
                              '((male ?X)   (sibling ?X ?Y) (parent ?Y ?Z))))

(defvar *g35* (mk-rule-clause '(aunt ?X ?Z)
                              '((female ?X) (sibling ?X ?Y) (parent ?Y ?Z))))

(defvar *g36* (mk-rule-clause '(cousin ?X ?Y)
                              '((parent ?U ?X) (sibling ?U ?W) (parent ?W ?Y))))

(defvar *g37* (mk-rule-clause '(ancestor ?X ?Y ?Z) '((child-of ?X ?Y ?Z))))
(defvar *g38* (mk-rule-clause '(ancestor ?X ?Y ?Z)
                              '((child-of ?X ?Y ?W)  (ancestor ?W ? ?Z))))

(defvar *family-tree-axioms* (list *g1* *g2* *g3* *g4* *g5* *g6* *g7*
                                 *g8* *g9* *g10* *g11* *g12* *g13*
                                 *g14* *g15* *g16* *g17* *g18* *g19*
                                 *g20* *g21* *g22* *g23* *g24* *g25*
                                 *g26* *g27* *g28* *g29* *g30* *g31*
                                 *g32* *g33* *g34* *g35* *g36* *g37* *g38*))

(defvar *gg1* (mk-goal-clause '((father ?X ?Y))))
(defvar *gg2* (mk-goal-clause '((mother ?X ?Y))))
(defvar *gg3* (mk-goal-clause '((child-of ?X ?Y ?Z))))
(defvar *gg4* (mk-goal-clause '((son james ?X ?Y))))
(defvar *gg5* (mk-goal-clause '((daughter christine ?X ?Y))))
(defvar *gg6* (mk-goal-clause '((grandfather ?X ?Y))))
(defvar *gg7* (mk-goal-clause '((aunt ?X ?Y))))
(defvar *gg8* (mk-goal-clause '((uncle peter ?X))))
(defvar *gg9* (mk-goal-clause '((cousin ?X ?Y))))
(defvar *gg10* (mk-goal-clause '((ancestor ?X ?Y james))))

(defvar *family-tree-goal* (list *gg1* ))   ; start testing with the first goal

;;; A function, 'prove-it', to save copy-and-paste of the long commands below

(defun prove-it (goal depth)
  (setf *igb-list-ia-list*
        (initialise-databases *family-tree-goal* *family-tree-axioms*))
  (setf *igb-list* (first *igb-list-ia-list*))
  (setf *ia-list*  (second *igb-list-ia-list*))
  (setf archived-goals-and-proofs (prove (list goal) *family-tree-axioms* depth))
  (pprint (setf all-proofs
                (show-all-proofs archived-goals-and-proofs *ia-list* (list goal)))))

#|   --- Commands to run the test data ---

(defvar *igb-list-ia-list*
        (initialise-databases *family-tree-goal* *family-tree-axioms*))
(defvar *igb-list* (first *igb-list-ia-list*))
(defvar *ia-list*  (second *igb-list-ia-list*))

(setf archived-goals-and-proofs (prove *family-tree-goal* *family-tree-axioms*  6))

(pprint (setf all-proofs (show-all-proofs archived-goals-and-proofs *ia-list* *family-tree-goal*)))


or type

(prove-it *gg1* 6)    ; but currently doesn't display well


**********************   Intended Output   *************************

(defvar *gg1* (mk-goal-clause '((father ?X ?Y))))

1 ?- father(X,Y).

X = gerry
Y = peter ;

X = gerry
Y = john ;

X = peter
Y = james ;

X = peter
Y = mike ;

X = john
Y = christine ;

X = john
Y = hugo ;

(defvar *gg2* (mk-goal-clause '((mother ?X ?Y))))

2 ?- mother(X,Y).

X = mary
Y = peter ;

X = mary
Y = john ;

X = jane
Y = james ;

X = jane
Y = mike ;

X = sarah
Y = christine ;

X = sarah
Y = hugo ;

(defvar *gg3* (mk-goal-clause '((child-of ?X ?Y ?Z))))

3 ?- child-of(X,Y,Z).

X = gerry
Y = mary
Z = peter ;

X = gerry
Y = mary
Z = john ;

X = peter
Y = jane
Z = james ;

X = peter
Y = jane
Z = mike ;

X = john
Y = sarah
Z = christine ;

X = john
Y = sarah
Z = hugo ;

(defvar *gg4* (mk-goal-clause '((son james ?X ?Y))))

4 ?- son(james,X,Y).

X = peter
Y = jane

(defvar *gg5* (mk-goal-clause '((daughter christine ?X ?Y))))

5 ?- daughter(christine,X,Y).

X = john
Y = sarah

(defvar *gg6* (mk-goal-clause '((grandfather ?X ?Y))))

6 ?- grandfather(X,Y).

X = gerry
Y = james ;

X = gerry
Y = mike ;

X = gerry
Y = christine ;

X = gerry
Y = hugo ;

(defvar *gg7* (mk-goal-clause '((aunt ?X ?Y))))

7 ?- aunt(X,Y).

X = jane
Y = christine;

X = jane
Y = hugo ;

X = sarah
Y = james ;

X = sarah
Y = mike ;

(defvar *gg8* (mk-goal-clause '((uncle peter ?X))))

8 ?- uncle(peter,X).

X = christine;

X = hugo;

(defvar *gg9* (mk-goal-clause '((cousin ?X ?Y))))

9 ?- cousin(X,Y).

X = james
Y = christine ;

X = james
Y = hugo ;

X = mike
Y = christine ;

X = mike
Y = hugo;

X = christine
Y = james ;

X = christine
Y = mike ;

X = hugo
Y = james ;

X = hugo
Y = mike ;

(defvar *gg10* (mk-goal-clause '((ancestor ?X ?Y james))))

10 ?- ancestor(X,Y,james).

X = peter
Y = jane ;

X = gerry
Y = mary ;