Wednesday, May 19, 2010

The Logical Omniscience of Reactive Systems

Author: Nigel Seel.

ABSTRACT

The problem of 'logical omniscience' continues to engage attention in AI. In this paper it is argued that the problem arises from interpreting formal properties of epistemic logics into agent performance domains, where agent cognition is taken to incorporate formula-manipulation, which then mirrors inference in the logic. The case of Reactive Systems is examined, where some such systems eschew such formula-manipulation in favour of fast-acting mechanisms. A formal account of such agents' knowledge states can still be given, but the 'problem' of logical omniscience re-emerges in a new light.

This paper is published as:- N. R. Seel (1991). The 'Logical Omniscience' of Reactive Systems. In Proceedings of the Eighth Conference of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, Leeds, U.K.

1. INTRODUCTION

The phenomenon of 'logical omniscience' is a technical one, which occurs within certain formal systems which attempt to capture notions of 'knowledge'. The notion of logical omniscience is orthogonal to distinctions between Knowledge and Belief:- the concepts of 'knowledge' and 'belief' intermix freely in this paper.

It is usual to formalise the notion that an agent 'knows p', where p is some proposition, by defining an operator such as 'K' and writing syntactically: 'Kp'. K cannot be an ordinary propositional operator such as 'not', because it is not truth functional (e.g. if p is contingently true, then nothing special can be said of the truth value of Kp).

The standard approach is to treat K as a modal operator, and to treat the analysis of epistemics as a particular interpretation of standard modal logic with possible worlds semantics. A story is then told as follows. Agents are not omniscient about the contingent facts of the world they inhabit: there are many possible ways the world could be, that an agent cannot distinguish between.

Thus in each situation (world) we associate a number of 'accessible worlds' with the agent, capturing the space of possibilities within which the agent considers its actual situation to be. The things an agent is sure about reflect commonalities between these worlds; the things an agent is agnostic about take distinct forms in different worlds. Note that the agent's lack of knowledge prevents it from identifying which of the possible worlds is to be considered actual, while if it has incorrect beliefs, then none of the accessible worlds will be the real one.

In such a semantics, the following axiom, called 'K', is valid:

K. Kp ^ K(p -> q) -> Kq

Axiom 'K' is valid because in every world in which p and (p -> q) hold, q has to hold too, by the meaning of the implies operator (Chellas, 1980, p. 7), and the notion of satisfaction relation used.

In a similar fashion, logically necessary formulae, which could not be false, must be true in every possible world, and therefore must be 'known'. So we have the inference rule called the Rule of Necessitation (RN), (Chellas, ibid, p. 14):

   p
 ------ (RN)
   Kp

To sum up, on the standard modal logic account axiom K is valid and inference rule RN is sound.
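By way of illustration, here is a minimal model-checking sketch (in Python; the toy model, the formula encoding and the function names are all invented for the example, not part of any of the formal systems discussed). Worlds are valuations of sentence letters, the accessibility relation links each world to the worlds the agent cannot distinguish from it, and Kp holds at a world just when p holds at every accessible world. Under these definitions axiom K comes out true everywhere, and so does K applied to any tautology, as RN would lead us to expect.

```python
# A toy Kripke model (all names invented for this example).  Worlds are
# valuations of sentence letters; the accessibility relation links each
# world to the worlds the agent cannot distinguish from it; Kf holds at
# a world just when f holds at every accessible world.

worlds = {
    "w1": {"p": True,  "q": True},
    "w2": {"p": True,  "q": True},
    "w3": {"p": False, "q": True},
}

access = {
    "w1": ["w1", "w2"],
    "w2": ["w1", "w2"],
    "w3": ["w3"],
}

def holds(f, w):
    """Evaluate a formula (a nested tuple) at world w."""
    op = f[0]
    if op == "atom":
        return worlds[w][f[1]]
    if op == "not":
        return not holds(f[1], w)
    if op == "and":
        return holds(f[1], w) and holds(f[2], w)
    if op == "or":
        return holds(f[1], w) or holds(f[2], w)
    if op == "implies":
        return (not holds(f[1], w)) or holds(f[2], w)
    if op == "K":        # Kf: f holds at every accessible world
        return all(holds(f[1], v) for v in access[w])
    raise ValueError(op)

p, q = ("atom", "p"), ("atom", "q")

# Axiom K:  Kp ^ K(p -> q) -> Kq  comes out true at every world.
axiom_K = ("implies",
           ("and", ("K", p), ("K", ("implies", p, q))),
           ("K", q))
print(all(holds(axiom_K, w) for w in worlds))            # True

# RN in effect: p v -p is true at every world, so K(p v -p) is too --
# the agent 'knows' the tautology whether or not it ever computes it.
tautology = ("or", p, ("not", p))
print(all(holds(("K", tautology), w) for w in worlds))   # True
```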

These facts, not necessarily controversial in the context of modal logic, become problematic when epistemic logic is interpreted into the real-world domain of agents which are said to know things. It seems to follow that such an agent knows all the consequences of the things it knows (by axiom 'K' and modus ponens), and knows all necessary truths (presumably including all of mathematics) by inference rule RN.

The trouble is, these conclusions are felt to violate our intuitions about how knowledge and belief actually work (in people). More precisely, our intuitions encompass a performance model of knowing and believing, in which people, when asked, fail to give satisfactory answers where the theory would suggest they ought (e.g. "Is 'Fermat's Last Theorem' actually a theorem of Peano Arithmetic?").

In this paper I first survey recent attempts to address the 'problem' of logical omniscience. This is followed by a discussion which focuses more on agents 'in the world' - reactive systems, and in particular what it means for a reactive system to know something. Finally, I discuss whether considering situated agent-environment interaction as the primary phenomenon might shed new light on the 'problem' of logical omniscience: for example, by establishing it as an artifactual problem.

2. THE PROBLEM OF LOGICAL OMNISCIENCE

As Hintikka (1962) observed, it is possible to 'get round' the problem of logical omniscience by assuming that the K operator does not refer to what is explicitly believed by a person, but instead refers to what is implicit in those beliefs - what would have to be the case if those beliefs were true. Another proposed interpretation is that we are dealing with 'idealised reasoners', which in some sense do not suffer from ordinary limitations.

Even if these suggestions were acceptable, there still remains the problem of dealing with agents which are presumed not to believe all the consequences of their beliefs, and not all necessary truths. What is the nature of the assumed fallibility of such agents? Four limitations have been discussed in the literature (Levesque, 1984; Fagin & Halpern, 1985, 1988; Konolige, 1986).

1. The agent lacks awareness of a concept (syntactically, the predicate naming the concept is 'missing' in some sense from the agent's lexicon, or semantically, it lacks a valuation).

2. The agent undertakes some actions equivalent to theorem-proving - by applying transformation rules to tokens, but is resource-bounded, so that tokens requiring more than some number of applications of the rules are not derived.

3. The agent is as (2), but some transformation rules which are needed for completeness are contingently omitted.

4. The agent maintains a number of contexts for reasoning, which are applied in different situations. These contexts need not be the same, or even consistent with each other.

Assuming we wish to model some or all of these epistemic fallibilities, the strategy runs as follows. First we introduce a new operator, B say, which aims to capture explicit belief - the beliefs actually attributable to the epistemically fallible agent. The kinds of beliefs which are closed under logical consequence, include all valid formulae, and so on, are then called implicit beliefs, and are given a different operator, say [].

Secondly, some axioms are then given which characterise the nature of the agent's epistemic limitations: for example we might want to make

(1) -B(p v -p)

satisfiable, so that a necessary truth is not believed when the agent has never heard of p.

Similarly,

(2) Bp ^ B(p -> q) ^ -Bq

may be satisfiable, showing a failure to draw logical consequences.

Finally, some mathematical structures are derived which provide a suitable class of models for the proposed logic, and which make the axioms come out valid. This last step may be more or less difficult.

An early example of this procedure was Levesque's use of partial and incoherent worlds, called situations after the similar constructions in Situation Theory (Barwise & Perry, 1983). Partial situations do not provide valuations for all the sentence letters of the language of the logic, which permits lack of awareness to be modelled as in (1) above; incoherent situations permit simultaneously supported contradictory valuations of sentence letters, resulting in the failure of closure of explicit belief under implication as in (2) above.

Although it appears that there is nothing wrong with Levesque's treatment from a formal point of view, there are some questions as to whether it adequately addresses the problems it sets itself. Thus as Vardi (1986) points out, Levesque's agents are perfect reasoners in the constrained framework of relevance logic: "Unfortunately, it does not seem that agents can reason perfectly in relevance logic any more than in classical logic" (p. 294). The esoteric nature of incoherent worlds also raises some problems. In interpreting the semantics into the actual world, what could they be?

In (Fagin & Halpern, 1985, 1988) a number of logics are presented which attempt to model one or other of the epistemic fallibilities mentioned above, while avoiding Levesque's difficulties. Levesque's logic is re-engineered into a 'Logic of Awareness', which restores the general setting of total possible worlds semantics (abandoning partial and incoherent situations), but associates with each world an (agent-indexed) 'awareness set' of propositional tokens which the agent is aware of at that world. Under the semantics given, all of Levesque's axioms are valid.

By extending the 'awareness set' to formulae in general (not just propositional tokens), properties of Levesque's operators such as failure of consequential closure can be reproduced, and by choosing the contents of the awareness sets with care (by computability properties), resource-bounded reasoning can be modelled.
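The flavour of the awareness approach can be conveyed by a small sketch (again Python; the encoding, and the single world-independent awareness set, are simplifications invented for the example - Fagin and Halpern's awareness sets are agent- and world-indexed). Explicit belief is implicit belief filtered through an awareness set, which is enough to make both (1) and (2) above satisfiable.

```python
# A sketch of awareness-filtered explicit belief (illustrative only).
# Implicit belief []f : f holds in every accessible world.
# Explicit belief Bf  : []f holds AND f is in the agent's awareness set.

worlds = {
    "w1": {"p": True, "q": True},
    "w2": {"p": True, "q": True},
}
access = {"w1": ["w1", "w2"], "w2": ["w1", "w2"]}

def holds(f, w):
    op = f[0]
    if op == "atom":
        return worlds[w][f[1]]
    if op == "not":
        return not holds(f[1], w)
    if op == "or":
        return holds(f[1], w) or holds(f[2], w)
    if op == "implies":
        return (not holds(f[1], w)) or holds(f[2], w)
    raise ValueError(op)

def implicit(f, w):
    return all(holds(f, v) for v in access[w])

def explicit(f, w, awareness):
    return implicit(f, w) and f in awareness

p, q = ("atom", "p"), ("atom", "q")
p_or_not_p = ("or", p, ("not", p))
p_implies_q = ("implies", p, q)

# The agent is aware of p and of p -> q, but not of q or of p v -p.
awareness = {p, p_implies_q}

# (1)  -B(p v -p) is satisfiable: the tautology is implicitly believed,
#      but the agent is simply not aware of it.
print(not explicit(p_or_not_p, "w1", awareness))           # True

# (2)  Bp ^ B(p -> q) ^ -Bq is satisfiable: q is implicitly believed,
#      but falls outside the awareness set.
print(explicit(p, "w1", awareness)
      and explicit(p_implies_q, "w1", awareness)
      and not explicit(q, "w1", awareness))                # True
```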

As Konolige (1986) observes, however, the superposition of a basic possible worlds model together with arbitrary syntactic restrictions at worlds leads to a rather clumsy hybrid system. The system is neither elegant, nor does it provide good support for intuitions about resource-bounded reasoning. In particular, the technical apparatus can be re-expressed more coherently in purely sentential terms.

Fagin and Halpern's final logic supports a notion of 'local reasoning'. Here, each agent in a given state is associated with a collection of sets of possible worlds. Each of these sets constitutes a 'context of belief'. Hence the satisfiability of (2) above can follow from the agent believing p in one context, p -> q in another, but never putting these two contexts together to believe q.
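A corresponding sketch of 'local reasoning', under the same caveats (the valuations and contexts below are invented for the example): the agent explicitly believes a formula if the formula holds throughout at least one of its contexts, so (2) is satisfiable whenever p and p -> q are believed in different contexts which are never combined.

```python
# 'Local reasoning': the agent maintains several contexts of belief,
# each a set of possible worlds, and believes f if f holds throughout
# at least one context.  (Encodings are illustrative only.)

valuation = {
    "u1": {"p": True,  "q": False},
    "u2": {"p": True,  "q": True},
    "v1": {"p": False, "q": False},
    "v2": {"p": True,  "q": True},
}

# Two contexts the agent never puts together.
contexts = [
    {"u1", "u2"},        # p holds throughout, but p -> q does not
    {"v1", "v2"},        # p -> q holds throughout, but p does not
]

def holds(f, w):
    op = f[0]
    if op == "atom":
        return valuation[w][f[1]]
    if op == "implies":
        return (not holds(f[1], w)) or holds(f[2], w)
    raise ValueError(op)

def believes(f):
    return any(all(holds(f, w) for w in c) for c in contexts)

p, q = ("atom", "p"), ("atom", "q")
p_implies_q = ("implies", p, q)

# Bp ^ B(p -> q) ^ -Bq: each of the first two conjuncts is believed in
# some context, but q is not believed in any single context.
print(believes(p), believes(p_implies_q), believes(q))   # True True False
```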

To sum up, these approaches to handling the 'logical omniscience' problem all exhibit a common theme: a movement from (1) to (3), thus.

(1) The identification of a concept of 'knowledge' or 'belief', abstracted from human affairs.

(2) A formalisation of the concept by operators, (K's or B's), which take the concept, as a conceptual category, to be unproblematic. This is then followed by the exploration of its properties through various candidate axiomatisations as we have seen above.

(3) The search for a simple, elegant mathematical structure capable of providing a semantics for the axiomatisation. In other words:

Philosophical analysis identifies a concept of 'knowledge' =>
Concept then formalised as a logical operator; properties axiomatised =>
Esoteric structures then developed to make the axioms valid.

Looking at reactive systems forces us to break with this reified view of 'knowledge' as a thing in itself, and restores the primacy of the situated epistemic subject engaged in practical relationships with its environment.

3. REACTIVE SYSTEMS

In most areas of AI, it is impossible to define the key concepts (e.g. knowledge-based system, blackboard architecture) with any precision. Things are no different with reactive systems, as can be seen from this preliminary attempt at definition, adapted from (Pnueli, 1986).

Definition: Reactive System

A reactive system is an entity which interacts in a systematic way with its environment.

This, however, fails to capture the main intuitions of workers in this area (e.g. Brooks, 1986; Kaelbling, 1986; Agre & Chapman, 1987), namely that:

- the environment is taken to be complex and hard to perceive,
- the environment is unpredictable in the large, and continually sets novel contexts to which the system has to respond,

and perhaps the most important intuition:

- the environment requires (often, always?) a very fast response from the system.

Designers of reactive systems have therefore emphasised:

- rapid response procedures, which take input (in some more or less 'raw' form) and optionally, state-information maintained by the system; and then return output (in some more or less 'raw' form) and optionally updated-state;

- a taxonomisation of required agent behaviours: most important => least important, which can be factored into concurrent task-achieving modules in the system architecture, plus some scheduling mechanism (cf Brooks, Kaelbling ibid).

Designers have de-emphasised/rejected suggestions that reactive systems should support explicit world-models + declarative reasoning about world and agent behaviour, in favour of any mechanisms which will permit the agent to respond appropriately (including fast enough) in a situated fashion. So although in principle a reactive system (according to the definition) could maintain a 'formulae database' world-model as a causal factor in its behaviour, it need not, and in what follows I assume it does not.
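A caricature of such a design, offered purely for illustration (the modules, percept fields and outputs below are invented), is a loop of prioritised stimulus-response procedures carrying a little internal state and no formula database:

```python
# A minimal sketch of a reactive control loop in the style described
# above: prioritised task-achieving modules map raw input and a little
# internal state directly to output, with no formula database or
# declarative world model.  (All names are illustrative only.)

def avoid(percept, state):
    # Highest priority: if an obstacle is sensed, turn away immediately.
    if percept.get("obstacle"):
        return "turn", state
    return None, state

def seek(percept, state):
    # Lower priority: head towards a sensed target, remembering its bearing.
    if percept.get("target_bearing") is not None:
        state = dict(state, last_bearing=percept["target_bearing"])
        return f"steer {percept['target_bearing']}", state
    return None, state

def wander(percept, state):
    # Default behaviour when nothing else fires.
    return "forward", state

MODULES = [avoid, seek, wander]     # most important => least important

def step(percept, state):
    """One reaction: the first module that produces an output wins."""
    for module in MODULES:
        output, state = module(percept, state)
        if output is not None:
            return output, state
    return "idle", state

# Example run: each stimulus gets an immediate response.
state = {}
for percept in [{"target_bearing": 30}, {"obstacle": True}, {}]:
    output, state = step(percept, state)
    print(output)        # steer 30 / turn / forward
```

The point of the sketch is architectural: nothing in it manipulates formulae, yet its behaviour is perfectly systematic and rule-describable from the outside.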

Now, if the notion of an agent's 'having knowledge' crucially depends upon the existence of such a formulae database, then one is in some epistemic trouble with reactive systems. But perhaps epistemic notions are not, after all, architectural, but are instead a fine-grained way of describing agent behaviour?

In (Seel, 1989) I described in detail a reactive agent in a Skinner box environment, and proved some properties of it using a temporal logic. In (Seel, 1990a, b) I extended the logic with an epistemic operator, and showed how it captured fine-grained agent behaviour in the environment, such as making mistakes, and learning. I will briefly restate the approach to the epistemic modelling of reactive systems outlined there, and then consider the implications for the 'logical omniscience' of reactive systems.

4. HOW CAN A REACTIVE SYSTEM KNOW SOMETHING?

I will summarise the epistemic treatment of reactive systems in nine theses.

1. Consider the agent as an automaton-object (not necessarily finite-state). Consider the other entities constituting the agent's environment as also automata-objects (not necessarily finite-state). Let a collection comprising an agent plus the other objects constituting its environment, each in a definite state, be called a scene. Agent and environment objects are deterministic. Other objects could also be considered as agents.

2. Scenes can be executed, so that as each object's next state function is applied, each object in the scene updates itself to its next state: collectively the next scene is constructed. Call the infinite sequence: [initial-scene; next scene; one after that; ... ] a history. Note that the behaviour of objects defined in an initial scene is entirely determinate - the notion of history is well-defined.

3. Each object has both private and public state. An object cannot access another object's private state (it can access its own private state + other objects' external states as part of its own next-state computation). Assume an observer who looks at scenes from 'outside'. The observer also can only 'see' the objects' external states. This means that on the information available to the observer, the evolution of scenes as a history unfolds may not be determinate, due to the invisibility of private state in the objects.

4. The behaviour of the agent and environment is not arbitrary but is constrained by rules. These rules have a conditional and temporal character:

- if such and such has happened then subsequently the environment will behave thus;
- if such and such has happened then subsequently the agent will behave thus.

These rules state conditions on the external states of objects; they are assumed to be known both to the agent/environment designer and to the observer. Because the rules are conditional, the designer cannot build their specifics in: the agent must determine at 'run time' the specifics of the rules which actually hold. The resulting learning task provides the conditions for non-trivial epistemic description of the agent.

5. The observer starts by looking at an initial scene, scene zero. There are many possible histories which would look the same to the observer at scene zero (differing in objects' private states, and next state functions). The observer collects these together and calls them the possible histories at scene zero. As the actual history unfolds, scene by scene, perfectly definite events occur, registered in the changing external states of the objects. The observer prunes from the possible histories all the ones which differ in their early scenes from what has actually happened in the history being watched.

As the actual history unfolds, and evidence accumulates, certain of the rules have their antecedents satisfied, 'kick in' and constrain the pattern of future events. All the histories which don't fit these patterns get removed from the possible histories as well.

6. Suppose we have now reached scene n of the actual history. As the observer looks at the possible histories, it may be the case that they all agree on what the agent (or environment) will do in scene (n+1) (and maybe (n+2), (n+3) etc). Since the actual history is certainly one of the possible histories, it is then predictable what the agent (or environment) will do next.

7. If the different histories in the space of possible histories tell different stories about what the agent (or environment) will do next, then the observer cannot predict exactly what will happen next, only that some choice from amongst those in the possible histories will be found to occur in the actual history.

8. Since the agent itself may have no more information than the observer (recall the designer's ignorance mentioned above), then it may also find itself in an informational state as in (6), in which case it may be said to act appropriately, or (7) in which case it may, by "guessing wrong", be said to act in error.

9. Given a certain adequacy of the rules (thought of as specifications), there may come a point where in certain types of scene, the possible histories always agree on what the agent should do next. If that was not previously the case, the observer may say that the agent has learned to behave appropriately in these scenes.

Now, all this may seem a creative use of conditional rules to compensate for lack of information about private states on the part of the observer, and so it is. But when it is formalised properly, the changing structure of possible histories over 'time' turns out to replicate the Kripke semantics underlying a logic of knowledge and time.

So if we wish, we can stop talking about possible histories, and instead introduce talk about the agent's knowledge. Since the semantics is in place (it induces a KT45 axiomatisation for the knowledge operator) we can prove theorems about perception, knowledge and action. See (Seel, 1990a, b) for details of the formalisation, proofs and discussion.
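The construction can be conveyed informally by a compressed sketch (Python; the two-signal environment, its reward rule and the candidate agent policies are invented for the example, and are far simpler than the Skinner-box setting of (Seel, 1989)). The observer enumerates the histories compatible with the public record seen so far, prunes them as scenes unfold, and attributes 'knowledge' of whatever all the surviving histories agree on:

```python
# Objects are deterministic automata with public and private state;
# the observer sees only the public record, so it keeps the set of
# histories compatible with what it has seen so far, and attributes
# "knowledge" of whatever holds in all of them.  (Illustrative only.)

from itertools import product

def run(env_rule, agent_policy, steps):
    """Generate the public record of one history.

    env_rule     -- private to the environment: which signal ('A' or 'B')
                    is followed by reward when the agent presses.
    agent_policy -- private to the agent: which signal it presses on.
    """
    record = []
    for t in range(steps):
        signal = "B" if t % 2 == 0 else "A"           # public
        press = (signal == agent_policy)              # public
        reward = press and (signal == env_rule)       # public
        record.append((signal, press, reward))
    return record

# The observer knows neither env_rule nor agent_policy, so initially
# every combination indexes a possible history.
possible = {(r, p): run(r, p, 4) for r, p in product("AB", repeat=2)}

# The actual history is generated by the (hidden) pair ("A", "A").
actual = run("A", "A", 4)

# As the actual history unfolds, prune the possible histories whose
# public record disagrees with what has been observed so far.
for n in range(1, 4):
    observed = actual[:n]
    possible = {k: h for k, h in possible.items() if h[:n] == observed}

    # The observer attributes knowledge of whatever all surviving
    # histories agree on, e.g. the next reward event.
    next_rewards = {h[n][2] for h in possible.values()}
    print(f"scene {n}: {len(possible)} compatible histories; "
          f"next reward predictable: {len(next_rewards) == 1}")
```

In this toy run the surviving histories come to agree on the next reward event after the second scene, which is exactly the point at which the observer would begin to attribute the corresponding knowledge to the agent.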

Needless to say, the agent itself need have none of this: it only needs to change some internal structures to align its response behaviour correctly to the incoming stimuli from the environment, in the best reactive tradition.

5. REVISITING LOGICAL OMNISCIENCE

Notice how the above treatment dramatically modifies the way the problem of 'logical omniscience' is posed. Unlike in the schema at the end of section 2, we are no longer trying to exemplify some abstract, platonic notion of 'pure knowledge'. Such conceptual analysis is replaced by an attempt to reason correctly about a highly situated agent-environment interaction thus:

- Situated agent-environment interaction, developing in 'time'. Behaviour conforms to public rules, but is implemented using private mechanisms.

- Observer reasons in epistemic logic, based on public rules, to compensate for lack of (private) mechanism information.

So what are the properties of the formal epistemic theory which, generated by the observer, is attributed to the agent? The most obvious feature is the restriction of the lexicon of the logic to the naming of the interaction-relevant events in the 'life' of the agent. There is no attempt to start talking about number theory or stock-exchange prices. (And why should there be? These are problems within some localised regions of human social life - another, but different concrete situation). Hence the notion of 'limited awareness' here arises as a natural consequence of the specificity of the agent-environment mathematical model.

As regards the other issues flagged as problems above, (resource-boundedness, consequential closure and the knowing of all valid formulae), it is certainly true that the observer's epistemic theory has axiom 'K' as valid, and supports the Rule of Necessitation. But this is beside the point because there are no agent-performance implications to these facts.

The agent is not concerned because it is not computing with the logic, merely being a reactive mechanism.

Resource-boundedness may be a problem for the observer reasoning in the logic, as it is for all users of formalised theories, but not for the agent, which merely acts. In some sense, the logic 'makes contact' with the agent at perception and action: in between, the observer may undertake arbitrarily long chains of reasoning, and deduce all kinds of valid formulae; the agent simply operates as a mechanism according to its design, using its private state and various observable public states to produce its next output.

To sum up, the agent (in its environment) satisfies the logic, but doesn't use it. Hence for the agent, the proof-theoretic properties of the logic which are called 'logical omniscience' (and which get dressed up with all kinds of performance implications to worry us), simply don't apply.

The reader is likely to be left uneasy by this conclusion. "Sure, you can cut the problem down to size for the dumb beasts, which is what reactive systems really are, but it's cheating - the problem is really meant to be people!"

Well, no and yes. It is an achievement to base a concept of knowledge on a thoroughly behavioural analysis of agent behaviour in an environment - it's not cheating. And yes, the real problem is people (and not logic!).

This suggests that a more compelling resolution of the classical difficulties will emerge from formalised sociological theories, modelling the situated formation of social constructs such as mathematics by human communities. Difficult as this may seem, it holds out more hope of success than those approaches which reify the problem as one of logic design.

6. CONCLUSION

The problem of 'logical omniscience' in its traditional guise was outlined, and some recent attempts to address it were surveyed. Attention was then focused on the agents themselves which traditionally are meant to suffer the 'problem'.

In the case of reactive systems, it was argued that a perfectly coherent behaviour-based notion of agent-knowledge is possible, without any assumption that the agent has to conduct logical reasoning of any kind.

In this case, the problem of logical omniscience, which at root is a performance problem, simply vanishes as a real problem. It is suggested that the task of adequately modelling human 'beliefs' and cultural accomplishments, such as mathematics and the formalised sciences, belongs more properly to yet-to-be-developed formal social theories rather than logic per se.

ACKNOWLEDGEMENTS

Although disagreeing with their conclusions, I found (Reichgelt & Shadbolt, 1990) a useful source of ideas.

REFERENCES

Agre, P. E. & Chapman, D. (1987). Pengi: an implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence, pp 268-272.

Barwise, J. & Perry, J. (1983). Situations and Attitudes. Bradford Books/MIT Press.

Brooks, R. A. (1986). A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation, Volume RA-2, Number 1, pp 14-23.

Chellas, B. (1980). Modal Logic: An Introduction. Cambridge University Press.

Fagin, R. & Halpern, J. Y. (1985). Belief, Awareness, and Limited Reasoning: Preliminary Report. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85), pp 491-501.

Fagin, R. & Halpern, J. Y. (1988). Belief, Awareness and Limited Reasoning. Artificial Intelligence, 34.

Hintikka, J. (1962). Knowledge and Belief. Cornell University Press.

Kaelbling, L. P. (1986). An Architecture for Intelligent Reactive Systems. In Georgeff M. P. & Lansky A. L. (Eds.), Reasoning about Actions and Plans. Morgan Kaufmann.

Konolige, K. (1986). What Awareness Isn't: A Sentential View of Implicit and Explicit Belief. In J. Y. Halpern (Ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference. Morgan Kaufmann.

Levesque, H. J. (1984). A Logic of Implicit and Explicit Belief. In Proceedings of the National Conference on Artificial Intelligence, (AAAI-84), pp 198-202.

Mendelson, E. (1987). Introduction to Mathematical Logic. Wadsworth & Brooks.

Pnueli, A. (1986). Specification and Development of Reactive Systems. In Information Processing 86 (IFIP). Elsevier Science.

Reichgelt, H. & Shadbolt, N. R. (1990). Logical Omniscience as a Control Problem. Paper, Department of Psychology, Nottingham University.

Seel, N. R. (1989). A Logic for Reactive System Design. Proceedings of the Seventh Conference of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, pp 201-211.

Seel, N. R. (1990a). Intentional Description of Reactive Systems. In Y. Demazeau & J-P. Muller (Eds.), Proceedings of the Second European Workshop on Modelling Autonomous Agents in a Multi-Agent World. Elsevier Science.

Seel, N. R. (1990b). Formalising First-Order Intentional Systems Theory. Ph.D. thesis available from author.

Vardi, M. (1986). On Epistemic Logic and Logical Omniscience. In J. Y. Halpern (Ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference. Morgan Kaufmann.