In Ralph Peters' "The War in 2020" an essential plot point involves the torture of a computer. Wow! Run that past me again. How do you go about torturing a computer?
Of course, a computer is just a chunk of stuff: you might as well torture a rock or a kettle. What is actually meant is the torture of a computational process. So far, no improvement though. It seems to make no sense at all to speak of torturing a computer program, at least those we're mostly familiar with. We need to dig deeper and talk about agents.
A Deterministic Agent (can't be tortured)
Most programs we meet in the commercial world can be modelled as rather complicated sets of "condition => action" rules. We give the program an input, which it checks against a matching condition; then it chugs away and executes an action, its output. An expert system implements "condition => action" rules explicitly; most programs do so implicitly through their code logic.
Programs like this don't learn and you don't want them to: a given input creates a deterministic output. In biology an agent like this has its behaviour governed by fixed instincts, like some kind of inflexible insect.
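To make the picture concrete, here's a minimal sketch of such an agent in Python. The rules and names are invented purely for illustration; they don't come from any particular system.

```python
# A minimal, illustrative condition => action agent. The rule set is fixed,
# so a given input always produces the same output and nothing is learned.

RULES = [
    # (condition predicate, action)
    (lambda inp: inp == "valid order",    "execute the order"),
    (lambda inp: inp == "malformed data", "report an error"),
]

def deterministic_agent(inp):
    """Check the input against each condition in turn and fire the first
    matching rule. No rule is ever added, removed or reweighted."""
    for condition, action in RULES:
        if condition(inp):
            return action
    return "do nothing"

print(deterministic_agent("valid order"))  # always 'execute the order'
```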
Can you torture an insect or a non-learning program? I think you can hurt it by a choice of malevolent inputs, but torture is more than that. The idea of torture is that you want to modify behaviour by inflicting pain (pain is indeed a 'malevolent input').
A Learning Agent (in some cases could be tortured)
Let's consider a learning agent. This still has "condition => action" rules but it also has a higher set of rules (meta-rules) which can modify the first set based on experience. Here's a simple example.
We have an enemy agent whom we want to make confess to us. The agent has loyalty to his own side, but he's also human and doesn't much care for pain. The agent's relevant rules are:
- If under interrogation => don't talk.
- If in pain => do what it takes to stop the pain.
The dilemma the agent finds himself in is that both conditions hold, and it's been made clear that "do what it takes to stop the pain" = talking. So the two rules are in conflict, but - as loyal and social creatures - most of us tend to give a moral priority to rule 1: don't talk.
When an agent finds that more than one rule-condition applies, with divergent consequences, artificial intelligence researchers explore various possibilities.
Use a weighting. This is the simplest approach - we simply assign each condition some number indicating its importance or priority in the current situation and this determines which rule executes. Unsophisticated torturers assume this model applies to humans and crank up the rule 2 weighting (the pain level) until rule 2 gets to fire and they get what they want.
Appeal to the meta-level, the meta-rules which manage the underlying "condition-action" rule-set. The agent may be able to generate a new rule, for example:
- If under interrogation and in pain => give inconsequential or misleading information.
This has been known to work.
More sophisticated torturers also like to access the meta-level, for example by engaging in conversation to weaken rule 1, suggesting perhaps that the agent has given his loyalty mistakenly and that rule 1 should be modified to:
- If under interrogation but under no duty of loyalty => talk.
This has also been known to work, often in conjunction with the previous tactic of brutality: the reader will be familiar with good-cop bad-cop.
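Here, purely as an illustration (the weights, thresholds and rule texts are invented for the example, not a claim about how real interrogations or AI systems are parameterised), is a sketch of both approaches: weighted conflict resolution at the base level, and a meta-level that generates a new compromise rule.

```python
# Illustrative two-level agent: base "condition => action" rules compete via
# weights; a meta-level can rewrite the rule set in the light of experience.

class LearningAgent:
    def __init__(self):
        # Each rule: the facts its condition needs, its action, and a weight.
        self.rules = [
            {"condition": {"under interrogation"}, "action": "don't talk", "weight": 0.9},
            {"condition": {"in pain"},             "action": "talk",       "weight": 0.3},
        ]

    def act(self, situation):
        """Approach 1 - weighting: of the rules whose conditions hold in the
        current situation, fire the one with the highest weight."""
        applicable = [r for r in self.rules if r["condition"] <= situation]
        if not applicable:
            return "do nothing"
        return max(applicable, key=lambda r: r["weight"])["action"]

    def meta_update(self, pain_level):
        """Approach 2 - the meta-level: under enough pressure, generate a new
        compromise rule instead of simply cranking a weight."""
        if pain_level > 0.8:
            self.rules.append({
                "condition": {"under interrogation", "in pain"},
                "action": "give inconsequential or misleading information",
                "weight": 1.0,
            })

agent = LearningAgent()
situation = {"under interrogation", "in pain"}
print(agent.act(situation))           # "don't talk": rule 1 outweighs rule 2
agent.meta_update(pain_level=0.95)
print(agent.act(situation))           # the new compromise rule now outranks both
```

The unsophisticated torturer's move, in these terms, is simply to raise the weight on the "in pain" rule until it wins; the meta-level move rewrites the rule set instead.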
What have we established so far? That a two-level agent, one which has a meta-level capable of modifying its own behaviour, can in principle be tortured to make its behaviour amenable to the torturer.
Let's consider two further questions, the problem of pain and the problem of consciousness; they are not unrelated.
Pain (applies to autonomous agents)
Creating pain in a human being is something we all understand, but hurting a computer process? An effective solution is not difficult: we define what, in the jargon, is called a state-variable for the program, let's call it PAIN-LEVEL (values: no-pain, some-pain, extreme-pain, unacceptable-pain). Most autonomous robots have something like this, for example, the battery power level indicator.
We build a primary objective deep into the program code to ensure that PAIN-LEVEL is to be minimised, and that the higher its value, the more priority is to be given to reducing it. We make this a fixed routine, one which can't be modified or switched off by the meta-level. Does the robot feel pain as you or I would? No, it just has a compulsion which may increasingly dominate its behaviour. Humans, such as those with OCD, may experience similar compulsions.
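A sketch of the idea follows; the enum values mirror the ones named above, while the thresholds and task names are invented for illustration.

```python
from enum import IntEnum

class PainLevel(IntEnum):
    # The state-variable the text calls PAIN-LEVEL; higher values demand
    # more of the agent's attention.
    NO_PAIN = 0
    SOME_PAIN = 1
    EXTREME_PAIN = 2
    UNACCEPTABLE_PAIN = 3

def choose_task(pain_level, normal_tasks):
    """Hard-wired priority routine: the meta-level cannot modify it or switch
    it off. The higher PAIN-LEVEL climbs, the more completely behaviour is
    dominated by reducing it - a compulsion rather than a feeling."""
    if pain_level >= PainLevel.EXTREME_PAIN:
        return "drop everything and act to reduce pain"
    if pain_level == PainLevel.SOME_PAIN:
        return "schedule pain reduction, then: " + normal_tasks[0]
    return normal_tasks[0]

print(choose_task(PainLevel.NO_PAIN, ["patrol the corridor"]))
print(choose_task(PainLevel.UNACCEPTABLE_PAIN, ["patrol the corridor"]))
```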
Consciousness (applies to social agents)
Suppose we additionally want the agent to be able to give an account of itself. This needs some extra architecture above the meta-rules level - a new level we'll call the consciousness-level.
The consciousness level has access to the basic "condition-action" rules level, to the meta-rule level and to the relevant state variables. On this basis the consciousness level constitutes an explicit, declarative theory of the agent's situation and behaviour set. Nothing less than this degree of coverage will permit the agent to answer questions like the following (a sketch in code follows the list):
- What are you intending to do?
- Why did you do that?
- How could I persuade you to do this?
- How are you feeling?
- Is there any way we could achieve that?
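Here is a toy sketch of that architecture, with invented names and canned answers; the point is only that the replies are built from the self-theory (the rule levels and state variables the layer can read), not from anything deeper.

```python
# Illustrative consciousness-level: a layer that can read - but not directly
# drive - the rule levels and state variables beneath it, and answers
# questions about the agent from that self-theory alone.

class ConsciousnessLevel:
    def __init__(self, rules, meta_rules, state):
        self.rules = rules            # base "condition-action" rules
        self.meta_rules = meta_rules  # rules that rewrite the base rules
        self.state = state            # e.g. {"PAIN-LEVEL": "some-pain"}

    def answer(self, question):
        """Replies come purely from the declarative self-theory, not from
        whatever the lower levels are privately computing at the moment."""
        if question == "What are you intending to do?":
            return "I intend to " + self.rules[0]["action"] + "."
        if question == "How are you feeling?":
            return "My pain level is " + self.state["PAIN-LEVEL"] + "."
        if question == "Why did you do that?":
            return "Because the condition '" + self.rules[0]["condition"] + "' held."
        return "I have no theory about that."

c = ConsciousnessLevel(
    rules=[{"condition": "under interrogation", "action": "not talk"}],
    meta_rules=["revise loyalties in the light of argument"],
    state={"PAIN-LEVEL": "some-pain"},
)
print(c.answer("How are you feeling?"))   # "My pain level is some-pain."
```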
As many consciousness-theorists have argued, the consciousness-level - when functioning properly - is a complete and coherent self-theory. But this is not an article on consciousness; our interest is in torture. So how does the breaking of an agent under torture impact on the consciousness-level?
The Experience of Torture
Evidently, anything the torturer/interrogator says has to be processed (at least as regards its meaning*) by the consciousness-level before being absorbed into the ceaseless churning of the meta-rules.
Under torture, the agent initially holds out against the pain, consistent with his self-theory of a competent and loyal supporter of his cause. But the pain, and maybe seductive arguments, can't be ignored. The meta-rules launch planning-action after planning-action, seeking a strategy to stop the pain while not talking.
At the consciousness-level, this appears as scenarios: little vignettes in which this course of action pops up (tell them what they want to know), then that course of action (tell them nothing, grit your teeth), and numerous others.
The consciousness-level (self-awareness) is constructed from the deeper layers by non-conscious processes. That's why it seems so self-contained, so unaware that it's running on a brain or on computer hardware. Similarly, the consciousness-level has no causal relationship to what the agent actually decides to do or actually executes,** since executive functions occur at the "condition-action" rules and meta-rule levels. Still, as a comprehensive self-theory, the consciousness-level mistakenly believes it controls its own destiny, although if you ask it how exactly, it's mystified.
So here's how the agent breaks. The "If in pain" condition has achieved primacy due to the extreme nature of the agony inflicted and, after much generating and testing of options, the meta-rule level has come up with a plan which is as consistent as possible with the other constraints (loyalties, history, consistency of social persona - all of which figure as state-variables to be managed) and which crucially provides a basis for the pain to stop.
The smart interrogator has provided some help, some arguments to get the agent 'off the hook'.
No betrayal or disloyalty is involved here; the future looks bright.
This 'break-scenario' duly enters the consciousness-level as a compelling way forward, followed by something only the consciousness-level can achieve: dialogue with the torturer.
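As a purely illustrative sketch of that generate-and-test step (the candidate plans and scores are invented; real agents, human or artificial, are obviously not this tidy):

```python
# Generate-and-test at the meta-level: candidate plans are scored against the
# agent's constraints, but only plans that actually stop the pain are admissible.

CANDIDATE_PLANS = {
    "say nothing, grit your teeth":            {"stops_pain": False, "loyalty": 1.0, "self_image": 1.0},
    "tell them everything":                    {"stops_pain": True,  "loyalty": 0.0, "self_image": 0.2},
    "talk, having been argued 'off the hook'": {"stops_pain": True,  "loyalty": 0.6, "self_image": 0.8},
}

def break_scenario(plans):
    """Keep only plans that provide a basis for the pain to stop, then pick the
    one most consistent with the remaining constraints (loyalty, self-image)."""
    admissible = {name: s for name, s in plans.items() if s["stops_pain"]}
    return max(admissible, key=lambda n: admissible[n]["loyalty"] + admissible[n]["self_image"])

print(break_scenario(CANDIDATE_PLANS))  # the 'off the hook' plan wins
```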
And no, I have no idea why pain is so awful.
___
* A blood-curdling scream will cut straight through to the primal subconscious. In general, emotion-laden speech (threats, intimidation, etc.) talks as much to the subconscious as to the conscious. Everyone knows this except some psychologists.
** Brain scans show that decisions start to be executed before conscious awareness that a decision has actually been made. If you carefully monitor how you make decisions, none of this will come as a surprise.