Wednesday, February 26, 2020

Deistic Ontology

Photo by Robynne Hu on Unsplash

Imagine the headlines:
  • Powerful alien discovered!
  • Alien has superpowers--knows what you’re up to!
  • You are being judged!
Don’t you think every government, military and R&D facility would be on the case, working up threat analyses on this powerful, moralistic entity?

Apparently not. Yet we have to take this Being seriously. Deny its existence and the Great Religions are simply men and women in frocks performing pointless rituals: an OCD epidemic.

And as for their followers …

---

If God is real, what would our research programme conclude? Omniscient, omnipotent and moralistic: it’s a tough specification, pretty much full-on agency, a fully-realised intentional system.

Agents are conceptualised within AI as four-vectors: [Perceptions, Beliefs, Goals, Actions]. The corresponding agent-algebra tells you that perceptions combine with prior beliefs and goals to generate plans consisting of actions. In the process, beliefs and goals get updated. It’s more complex but this basic model is good enough for Government work.
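
A minimal sketch of that loop in Python -- the names, update rules and planner are invented purely for illustration, not any particular AI framework:

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        """Beliefs and Goals persist; Perceptions come in, Actions go out."""
        beliefs: dict = field(default_factory=dict)
        goals: list = field(default_factory=list)

        def step(self, perceptions: dict) -> list:
            # Perceptions combine with prior beliefs ...
            self.beliefs.update(perceptions)
            # ... and with goals, to generate a plan: a list of actions.
            actions = [("pursue", goal) for goal in self.goals[:1]]
            # In the process, beliefs and goals get updated.
            self.goals = [g for g in self.goals if not self.beliefs.get(g, False)]
            return actions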

Where do an agent’s goals come from? Ultimately from homeostasis requirements. The agent is designed to survive (in biology, it is the genes that are): as Woody Allen said, “80 percent of life is showing up.”

The non-trivial agent confronts a challenging environment which knocks it away from its enduring, homeostatic ambitions--it gets hungry or thirsty; it’s threatened, damaged or needs to find a mate. In its situated context homeostasis determines goals and plans … and demands their execution to restore the agent to its set-point.
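
A toy version of that, in the same spirit (the set-points and tolerance are pure invention): goals are generated by deviations from the set-point, most urgent first.

    SET_POINTS = {"energy": 1.0, "hydration": 1.0, "safety": 1.0}   # illustrative only
    TOLERANCE = 0.2

    def homeostatic_goals(state: dict) -> list:
        """Goals arise from variables knocked too far from their set-point."""
        goals = []
        for var, target in SET_POINTS.items():
            error = target - state.get(var, target)
            if abs(error) > TOLERANCE:               # hungry, thirsty, threatened ...
                goals.append((f"restore_{var}", error))
        # The largest deviation demands execution first.
        return sorted(goals, key=lambda g: -abs(g[1]))

    # homeostatic_goals({"energy": 0.5, "hydration": 0.9})  ->  [("restore_energy", 0.5)]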

---

How do we apply this model to God?

For omniscience and omnipotence, God’s sensor and effector apparatuses must span all of spacetime. God’s beliefs should be accurate--but complete and effective sensory coverage should address that, other (processing) things being equal.

But what are God’s goals?

It seems parochial that a Cosmic God should be motivated by the needs or welfare of the puny beings of Sol Planet Three at Big-Bang + 13.8 billion years. I think we need a more universal homeostasis requirement, something more Spinozan.

God is concerned with preserving the harmony of the universe.

A requirement for homeostasis is only interesting if there is a threat to it. At the level of the universe--harmoniously regulated by the laws of physics--God becomes non-trivial, non-superfluous only if there is a threat to those very laws. Perhaps in the bifurcating projections of the universe's state-vector in Hilbert space--where our universe shimmers in possibilities--there are pathological outcomes which threaten the very fabric, the very integrity of the entire cosmos.

Catalysed false-vacuum decay.
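
For scale: in the standard semi-classical treatment (Coleman's bounce calculation), the nucleation rate of true-vacuum bubbles per unit volume goes roughly as

    \frac{\Gamma}{V} \;\sim\; A \, e^{-B/\hbar}, \qquad B = S_E[\text{bounce}]

where B is the Euclidean action of the bounce solution and A a prefactor with dimensions of (energy)^4. "Catalysed" decay just means something local -- a seed, an impurity, an engineered energy density -- lowers B and so erodes the exponential suppression.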

Just as the coldest place in the universe is on planet Earth, perhaps the greatest threat to our universe’s stability is also here?

Imagine God as a deep neural net with inputs and outputs at each point in spacetime: past, present and future. The neural net resides in an orthogonal set of spatial dimensions and has a developmental trajectory in a second time dimension. Its understanding of the present state of all reality is its collection of input-vectors in superposition. Each input-vector corresponds to a possible world weighted by its amplitude.

The feedback loop in second-time runs from possible worlds and their amplitudes, through the God-net with its evolving weight-matrices, to spacetime actions, and back to revised amplitudes. The intent is to ensure that possible worlds representing pathological excursions constitute a set of measure zero.
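
A toy of that second-time loop -- every name and number below is invented, it just shows the shape of the thing:

    # Possible worlds as amplitude-weighted inputs to the God-net (illustrative).
    worlds = [
        {"name": "benign-1",     "pathological": False, "amplitude": 0.60},
        {"name": "benign-2",     "pathological": False, "amplitude": 0.35},
        {"name": "vacuum-decay", "pathological": True,  "amplitude": 0.05},
    ]

    def god_net_step(worlds, damping=0.5):
        """One tick of second-time: act against pathological branches, then renormalise."""
        for w in worlds:
            if w["pathological"]:
                w["amplitude"] *= damping            # the 'spacetime action'
        total = sum(w["amplitude"] for w in worlds)
        for w in worlds:
            w["amplitude"] /= total                  # revised amplitudes fed back
        return worlds

    for _ in range(100):                             # the developmental trajectory
        worlds = god_net_step(worlds)
    # The pathological branch is now, numerically, a set of measure zero.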

Who or what is giving God grief? People who want supernatural actions--either to help their own cause or to bring fire and brimstone down on their opponents. Praying over the millennia hasn’t had much effect--but high-energy physics?

It is possible to frighten God: perhaps in productive ways.

If you knew there was a well-funded, competent adversary-team with access to unimaginably high energy-densities sufficient to trigger a bubble nucleation event, what would you do? If you were a state with an intelligence agency and resources, that is.

Could we go beyond threats against the Deity? Could we converse with it -- is God a second-order intentional system? What should we say? Is God even a conscious entity?

So that's the setting (as requested). Now over to Adam to devise the characters, plot and narrative.

7 comments:

  1. On a slightly side point, this week's New Scientist has Sean Carroll (et al.) puzzling over "agency" in Physics. His concern is that with the branching Quantum Multiverse nothing actually causes the branches nor does anything actually *follow* the branches.

    So he needs "Agency" (undertaken by "Agents") to take decisions and thus *follow* a particular branch (I think). These Agents may embody "free will" in making those decisions.

    Now returning to the Agent model above, we have :
    Agent = [Perceptions, Beliefs, Goals, Actions]

    So an obvious, but quite deep, mathematical-agent question here is:

    "Given the tuple (Perceptions, Beliefs, Goals) is Actions uniquely determined (in some mathematical framework)? If Actions is *not* uniquely so determined, does this provide room for "free will decisions" (choices based on the available actions-set presumably)?"

    I am not sure whether Carroll himself has gone to this AI level of Agent yet though.

    So how would God fare? Does God have free choices too, or does God omnipotently just follow predefined rules to maintain Universal homeostasis?

    You will note that this latter description of God is similar to Gaia vis a vis Earth.

    Whether Gaia could step in and prevent (Earth based) Nuclear Physicists doing anything stupid is an interesting question, perhaps worthy of another (Dan Brown style?! ie "Angels and Demons") trip to CERN?

    Replies
    1. I think that hoary old chestnut of 'free will' is irrelevant here: computers. And Sabine's musings at BackReaction. And Superdeterminism. And neural-network self-knowledge.

      We'll wait awhile for the story I think ... M. Carlton is at an early stage! I'm thinking there is an additional Satanic principle required for the dialectic to really work here. And did you read Colin Kapp's 'Patterns of Chaos'? Cheap on Kindle.

      I have now read the "Patterns of Chaos" and have also reviewed the link to "second order Intentional systems" in the original posting, which I either missed or which wasn't there before. Since there will be a wait for any story to emerge from this posting, I can respond to the above comment.

      Of course "free will" is unscientific, and any (Agent) model involving this intuition needs to explain its role and significance. Some points:

      1. To summarise the NS article. Carroll follows MWI giving the "simple" QM interpretation with many paths. This conflicts with observation so *something* must explain the following of individual paths - which is called the Agency. Carroll hopes to use existing physics (like Statistical Mechanics) to explain this, but needs to explore "choice based" mechanisms further. Sabine does not believe in MWI, so this problem does not even exist in that overt Agent centric form.

      2. The core mathematical and logic modelling question might be "can we model the choice process"? One answer is that in ZF Set theory this is what the Axiom of Choice already does. Mathematical objects which are proven to exist purely by AC cannot be constructed (such proofs are known as "highly non-constructive"). No sets of rules (let alone computational rules) can exist to construct or build such objects.

      3. Of particular interest in these Agent oriented discussions are "weak" forms of AC and even weaker structures present in second order logic which play an analogous role to AC within Logic. One consequence of this is that "second order" anything runs the risk of *tacitly* assuming choice mechanisms even if it superficially seems to be an easy generalisation of a first order rules based system. So this point might apply to "second order Intentionality" too.

      4. One might drill into the Agent Intentionality theory further by asking whether all first order Intentional Logics always have computational models. Could some only have non-computational models? This might arise when the Agent specification was "really" a specification of a rules-free choice process.

      5. I have noted before that Neural Net theory has a habit of not sticking to computational formalisms. As a result its link to choice theories is obscure.

      6. In "Patterns of Chaos" although there was a long range determinism, there was also a choice based correction possible, as Commander Bron demonstrated.

  2. Not sure the axiom of choice has much to do with free-will. Natural language, huh? On (4) you may recall 'Skolem's Paradox'.

    I haven't seen Carroll's piece so I'm not sure where his problem has come from, over and above the fact that splitting also splits observers according to the Born rule. Perhaps he'll blog about it.

  3. On the AC - actually there seems to be a "choice spectrum" of increasing / decreasing strengths. Because ZF itself is too general to model rules, linguistic structures and other topics of interest in Agent theory (and seemingly Physics as well) its main "Choice" extension (AC giving ZFC) carries no further intuition.

    One has to look at the weak Choice axioms for applications to "decision making choice". In the ZF world we have for example "the axiom of Countable Choice" - which can introduce discreteness in the choice process - our intuition prefers to analyse choices from a discrete set rather than from some continuum (or from some giant ZF set).

    So we obtain related members of the AC family including the ubiquitous "(Countable) Ultrafilter Lemma". This gets used to prove theorems in subjects such as "Social Choice theory" with infinite sets of voting options (e.g. the so called "hidden dictator").

    Most/all results in Infinite Graph theory (and therefore its many applications) also use the Ultrafilter Lemma (in the form of Konig's Lemma). Again the countable discrete case carries the most intuition involving navigating infinite graphs and trees, etc.
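
    (For reference, the usual statements -- standard textbook forms, nothing Agent-specific:)

        \mathrm{AC}_\omega:\quad A_0, A_1, A_2, \ldots \text{ all non-empty} \;\Longrightarrow\; \exists f\ \forall n\in\mathbb{N}\ f(n) \in A_n

        \text{K\"onig's Lemma:}\quad \text{every infinite, finitely branching tree has an infinite path.}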

    Rather a lot of results in mathematical economics also use these (countable) choice ZF extensions like Ultrafilters. I haven't found anybody else yet who examines these results directly in terms of "rules versus non-rule choice" or "deterministic vs non-deterministic" outcomes. A few mathematical economists think that the entire subject should be written down purely in Constructive Logic and Mathematics (this gives easy computability proofs and techniques for the results). But maybe this is going too far in the other direction - banning social choice entirely from economic theory (and the "human sciences" more generally) and making it all the application of fixed rules and procedures.

    So as I say there is a spectrum here. At one extreme, one argues that the given subject can be formalised using Constructive Logic, guaranteeing an easy link to computation, but enforcing the subject as entirely rules-based. At the other extreme, one allows (as usually happens) the use of (Countable) choice techniques (like Koenig's Lemma) to prove theorems about the models. This makes those models essentially non-constructive, with a scattering of objects which cannot be constructed by rules. Using our graph theory example, if we associate "Goals" with end-points of an infinite path in a graph/tree then we have tacitly introduced choiceness into our model. This may be one reason why your Agent Modelling stalled: the constructions were not actually constructive/computational anymore.

    You are quite correct to spot the connection with "Skolem's Paradox". This is an interesting topic also linked to a hidden "choiceness" within model theory - but not to do with AC itself. This "choiceness" comes from within Logic rather than ZF set theory. It has heavily increased the complexity of "ZF Model theory" by causing the introduction of "Interior Models" and "Exterior Models". There may be links to the "Intentional Stance" here....

    Replies
    1. "This may be one reason why your Agent Modelling stalled, because the constructions were not actually constructive/computational anymore."

      The problems I identified in my PhD work were in trying to specify the constraints which characterise human-type cognition ('What was happening on those Pleistocene plains?'). It's a bit more complicated than the problem of iterated epistemic modalities -- Knows(A,Knows(B,Knows(A,... ))) :).

      I frame these kinds of problems within situated ecology rather than formalism-variants within mathematical logic, interesting though the latter are.

    2. The discussion has now moved from the Patterns of Chaos to the Pleistocene plains, via what really stalled the Agent Modelling work!

      Iterated constructions like Knows(A,Knows(B,Knows(A,... ))) appear like infinite regressions and can be a sign that the given formalism has a "singularity". If so the formalism needs to be upgraded.

      I seem to have missed out on the Pleistocene era of Agent theory, so I don't know what was really intended here. One possibility was a hunt (by you and Bernie?) for any "cognitive breakthrough" during that period, which you could identify in some well-chosen model. One would then want to analyse that "cognitive breakthrough" from as many angles as possible, and build appropriate simulations, if possible?

      An alternative is that some Pleistocene cognitive parameter kept increasing and increasing. Again, identify and model?

      Clearly a lack of historical data could stall such a project.

      In all this we need to note that various words are used in such studies: "culture", "learning", "social learning", "ecology", etc over which a formal model would be required.

      Was this formal model assuming that everything would be computable?

