OK, she was toying with me. She knows I am torn between desire and calories. I mentally charted my options like this.
Figure 1
But then she had second thoughts. "No," she said. "I do want you to have the cheesecake, but I also want it to be a real surprise. You'll definitely get the cheesecake tonight or tomorrow night. One of the two. But I won't tell you which."
OK, I was disappointed, I have to admit. I hate even the possibility of delayed gratification. A new diagram took shape in my mind.
Figure 2
I'd either get the cheesecake tonight or it would be the empty plate, in which case at least I'd be getting cheesecake tomorrow.
But wait. It's meant to be a surprise. The surprise is all in the branching possibilities. If I didn't get cheesecake tonight, then I'd be in the situation below (yellow oval).
Figure 3
No surprise there! There's no branching in the yellow oval. So there's no surprise tomorrow. And that must mean I'm getting the cheesecake tonight! Excellent!
Figure 4
Oh, wait again. No branching here. So my very certainty has removed the element of surprise. So it looks like Clare's proposal to give me a surprise cheesecake has collapsed. There's no way she can do it.
No cheesecake at all. Cruel.
Figure 5
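The elimination argument above can be sketched in code. This is my own minimal model, not from the post: I treat "surprise" as uncertainty about the delivery day at the moment it arrives, and apply the backward induction step repeatedly.

```python
def surprising_days(days):
    """Eliminate days on which cheesecake delivery could not be a surprise.

    Backward induction: once every later day is ruled out, the last
    remaining candidate is predictable (all earlier days have passed
    empty-handed), so it is eliminated too; repeat until nothing is left.
    """
    candidates = list(days)
    while candidates:
        # Delivery on the final candidate day would be certain, hence
        # no surprise: strike it. The new final day then becomes
        # predictable in turn, so the loop keeps going.
        candidates.pop()
    return candidates

print(surprising_days(["tonight", "tomorrow"]))  # -> [] : no day survives
```

The empty result is the paradoxical conclusion: under this reading of "surprise", no delivery day is possible at all, which is exactly what makes the actual delivery surprising again.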
But now that I'm sure I'll get nothing, I can't rule out the possibility that she may nevertheless give me a cheesecake tonight after all.
Figure 6 (same as Figure 2)
---
I wrote about this paradox back in 2009. I noted that despite its simplicity, according to Wikipedia, no one has a good solution. It looks to me as though the self-referential feedback is creating alternating, flip-flopping states of certainty and uncertainty.
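The flip-flopping can be made concrete with a toy belief-update loop. This is a hedged sketch of my own, under the assumption that each round of the self-referential deduction simply negates the current state: certainty destroys the surprise, which restores uncertainty, and so on.

```python
def step(certain):
    """One round of the self-referential deduction.

    If I am certain of the outcome, there is no branching and hence no
    surprise, so the premise fails and I am thrown back into
    uncertainty; if I am uncertain, the elimination argument makes me
    certain again.
    """
    return not certain

state = False  # start out uncertain
history = [state]
for _ in range(6):
    state = step(state)
    history.append(state)

print(history)  # -> [False, True, False, True, False, True, False]
```

The iteration never converges to a fixed point, which is one way of picturing why the paradox resists a single stable answer.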
This is another example of how concepts from AI, such as game trees, the perspective of an agent with cognitive states, and Kripke models of doxastic logic, can illuminate problems that seem intractable from a purely logicist or philosophical standpoint.
Yes, there are some interesting developments and theorems which have been proven off the back of this paradox, cited in the linked papers. I was especially interested in the links found to Gödel's incompleteness results, and the possible link with Nash equilibrium (these two topics are themselves now loosely linkable in logic).
One observation I would make is that whenever we have trees (such as the game trees or the unravelled Kripke trees) we have to ask whether the underlying tree is infinite. If so, and the tree is binary (or similar), then finding a path through that tree is in general non-computable; this is the territory of weak König's lemma (WKL). So we cannot expect a computational logic to work, and from that we can get meta-surprises!
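To see the contrast the comment is pointing at, here is a sketch (names and the toy predicate are my own assumptions, not from the comment): for any *finite* depth, a depth-first search computably finds a path through an allowed 0/1 tree. Weak König's lemma concerns the jump from arbitrarily long finite paths to a single *infinite* path, and it is that step which need not be computable.

```python
def find_path(admits, n, prefix=()):
    """Return an allowed 0/1 string of length n extending prefix, or None.

    admits(s) is a decidable predicate saying which finite 0/1 tuples
    lie in the tree; the search is plainly computable for every finite n.
    """
    if len(prefix) == n:
        return prefix
    for bit in (0, 1):
        ext = prefix + (bit,)
        if admits(ext) and (p := find_path(admits, n, ext)) is not None:
            return p
    return None

# Toy tree: allow strings with no two consecutive 1s.
admits = lambda s: all(not (a == 1 and b == 1) for a, b in zip(s, s[1:]))
print(find_path(admits, 5))  # -> (0, 0, 0, 0, 0)
```

Every finite level can be searched like this, yet no algorithm is guaranteed to produce an infinite branch of an arbitrary infinite binary tree, which is the non-computability the comment appeals to.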