An artificial general intelligence walks into a bar. I can see he's a hunk, one of those we call the Baywatch Variant - rugged, but not too bright.
He makes a beeline for the counter, where he finds himself between two chicks: a blonde on the left, a brunette on the right. He orders a drink, considers his options, and tries his luck with the blonde.
I can see he's making real progress - she hasn't twigged - until he makes the dumb mistake of going too far: he shows her his power-plug. She screams and runs for the door. Unabashed, he picks up his drink and joins me in the corner.
"Free will in action, man. I coulda had the brunette." And pigs will fly, I thought.
Free will is a strange one. A judge will deny any Newtonian defence that you are a deterministic system. The judge will also reject any claim you are fundamentally a random system - so there goes quantum mechanics and modern physics.
In the latter case the judge at least has FAPP ('for all practical purposes') on their side - quantum effects at the human scale are normally exponentially suppressed.
In rejecting physics, the legal system embraces a kind of vitalism, although the mechanics of free will remain curiously elusive.
But I digress.
"I'll have you know, my AGI friend, that I am an oracle. I can, with unerring accuracy, state what your future self will do. So how about this? When you came in, I could have told you that you would choose the blonde." And I really could have done that, because my AGI companion runs on an entirely deterministic computing base. Given its state as it came through the bar door and its inputs, its decisions were already entirely determined.
"But if you had told me that, I would have gone for the brunette!" Interesting point. I could have looked at his state and all his inputs (including my 'Blonde' statement) and predicted he would go for the blonde. That would be a mathematical consequence and he could not have done otherwise.
If the prediction would have been that he would have chosen the brunette - given I had said 'Brunette' - then that's what I would have said.
But if any statement of mine could not be validated by his further actions, I would have had to refrain from any prediction at all. It would be like putting '2 + 2' into a calculator and saying, 'I predict the answer will be 5'. You can see that it won't be, so that can't be a valid prediction, so you don't make it.
This all seemed so obvious that I was puzzled the artificial hunk, smiling vacuously across the table, couldn't see it. But then, he was not privy to all of his own processing.
"Actually mate," (I said demotically, getting down with the kids), "you decide things partially on stuff you're aware of, but also on subconscious stuff.
"I, however, see everything. And I assure you that if I make a prediction, then that is indeed what you will do - despite your illusions of free will. It would be perfectly possible for me to make a statement like 'You're gonna go for the blonde' and for you to perversely decide to go for the brunette. But, you see, I'd know that in advance, so my statement would not be a prediction - so I wouldn't bother making it.
"Sometimes, you know, oracles can't actually make predictions."
Grasping little of this, the idiot replied a little aggressively, "So what's your prediction now?"
"That you'll fail to buy me a drink and that consequently I'll be leaving." Saying this, I got up and walked out the door.
Veterans of this area may recall that predictions for a deterministic object-system are always possible from an embedding meta-system, but not necessarily from within the object-system itself.
Think of the Cretan Liar Paradox, Russell's Paradox, Russell's Hierarchy of Types and so on.
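The oracle's predicament can be sketched in a few lines of code. This is a toy illustration, not anything from the story beyond its logic: the agent function, its contrarian policy, and the option names are all hypothetical. A valid prediction is a fixed point - an announcement the agent, having heard it, goes on to fulfil - and for a contrarian agent no such fixed point exists, so the oracle must stay silent.

```python
def agent(announcement):
    """A deterministic agent that, told its predicted choice,
    perversely does the opposite (the AGI's threat at the bar)."""
    if announcement == "blonde":
        return "brunette"
    if announcement == "brunette":
        return "blonde"
    return "blonde"  # with no announcement, some default behaviour

def oracle(agent_fn, options):
    """From the meta-level, simulate the agent on each candidate
    announcement; return one the agent would fulfil, if any."""
    for candidate in options:
        if agent_fn(candidate) == candidate:
            return candidate  # a fixed point: a valid prediction
    return None  # no fixed point: the oracle refrains

print(oracle(agent, ["blonde", "brunette"]))  # -> None
```

Note that the oracle itself sits outside the system it simulates: it can always compute what the agent will do given any announcement, but there may be no announcement it can utter that survives being heard - the meta/object distinction of the closing paragraph.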