Tuesday, May 13, 2025

ChatGPT explains my PhD thesis to my wife

Link to thesis

---

Dear Clare, 

Here’s the gist of Nigel’s PhD research in plain English.

Most researchers who study intelligence, mind, or agency begin with grand philosophical rules (statements about what an agent knows, believes, or intends) and only afterwards hunt for abstract models that might satisfy those rules.

Nigel flips the order. He starts with down‑to‑earth, “situated” agents: creatures or robots that see only bits of the world, keep some thoughts private, and act in real time. 

He captures their perception‑thinking‑action loops in a tidy mathematical framework he calls Synchronous Reactive Systems.
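If you wanted a flavour of that in code (a very loose sketch of my own; the names below are mine, not Nigel's formal definitions), a synchronous reactive agent is roughly a little loop that perceives, thinks, and acts on every tick of a shared clock:

```python
# A very loose sketch of a "situated" agent: it sees only part of the world,
# keeps a private internal state, and acts on every tick of a shared clock.
# All names here are illustrative, not Nigel's actual formalism.

class SituatedAgent:
    def __init__(self):
        self.memory = "no idea yet"   # private internal state

    def perceive(self, world):
        # The agent only gets a partial view of the world.
        return world["light"]

    def think(self, observation):
        # Update the private state from what was just seen.
        if observation == "red":
            self.memory = "danger"
        else:
            self.memory = "safe"

    def act(self):
        # Choose an action based only on the private state.
        return "move away" if self.memory == "danger" else "stay put"


def run(world, agent, steps=3):
    # One synchronous step is perceive, think, act, repeated in lock-step
    # with the world; that lock-step is what "synchronous reactive" gestures at.
    for _ in range(steps):
        obs = agent.perceive(world)
        agent.think(obs)
        print(agent.act())


run({"light": "red"}, SituatedAgent())
```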

Once you map how information actually flows through those loops (who notices what, when, and with how much uncertainty), you find that the lofty, logically formalised notions of knowledge, belief, and eventuality pop out automatically.
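To give a taste of how "knowledge" can fall out of that bookkeeping, here is the standard possible-worlds reading in toy form. It is close in spirit to this kind of work, though the exact construction in the thesis is Nigel's own: the agent "knows" something exactly when it holds in every world it cannot rule out from what it has observed.

```python
# Toy illustration of the "possible worlds" reading of knowledge:
# the agent knows P exactly when P is true in every world state it cannot
# rule out from its observations. Illustrative only, not the thesis's
# actual definitions.

all_worlds = [
    {"light": "red", "predator_nearby": True},
    {"light": "red", "predator_nearby": False},
    {"light": "green", "predator_nearby": False},
]

def consistent_worlds(observation):
    # Worlds the agent cannot tell apart, given what it has seen.
    return [w for w in all_worlds if w["light"] == observation]

def knows(observation, proposition):
    # "Knows P" means P holds in every world compatible with the observation.
    return all(proposition(w) for w in consistent_worlds(observation))

# Seeing a red light: does the agent know a predator is nearby?
print(knows("red", lambda w: w["predator_nearby"]))        # False: one red world is safe
# Seeing a green light: does it know no predator is nearby?
print(knows("green", lambda w: not w["predator_nearby"]))  # True
```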

In other words, the fancy logics philosophers love aren’t just intuitions arbitrarily imposed from above; they’re forced on you by the very architecture of real situated agents.

Nigel shows this with vivid examples: a learning mouse in a Skinner box discovering which lever gives food, and two players locked in an endless game of scissors–paper–stone where layer upon layer of second-guessing condenses into just six distinct strategies.
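For the mouse, you can picture the simplest possible trial-and-error learner (again my own illustration, not the learning model in the thesis): it tries levers, remembers which one paid off, and drifts towards the one that feeds it.

```python
import random

# A crude trial-and-error learner in the spirit of the Skinner-box mouse:
# try levers, count which one gives food, favour the winner.
# Purely illustrative, not the model used in the thesis.

def food_dispensed(lever):
    # Hidden rule of this particular box: only the left lever pays off.
    return lever == "left"

counts = {"left": 0, "right": 0}
for trial in range(100):
    if random.random() < 0.2:                      # sometimes explore
        lever = random.choice(["left", "right"])
    else:                                          # otherwise exploit the best so far
        lever = max(counts, key=counts.get)
    if food_dispensed(lever):
        counts[lever] += 1

print(counts)   # the left lever dominates after a few dozen trials
```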

Each scene can be told in two equally true ways. One is mechanical: computer registers flip, motors fire. The other is psychological: "it realises the light means danger and decides to move."

Nigel shows how those two stories harmonise perfectly.

So, the takeaway: by modelling what an agent can genuinely sense and do, Nigel demonstrates that all the talk about what it knows, believes, and intends isn't hand-waving at all; it falls straight out of the maths.

Hope this helps,

ChatGPT (o3).
