377 years and we're still baffled, René
Descartes was prepared to doubt the evidence of his senses (illusions, errors, dreams), but "we cannot in the same way suppose that we are not while we doubt of the truth of these things; for there is a repugnance in conceiving that what thinks does not exist at the very time when it thinks. Accordingly, the knowledge, I think, therefore I am, is the first and most certain that occurs to one who philosophizes orderly." (Wikipedia).
An argument which seems compelling .. yet curiously hard to formalise.
Never, ever trust the logicians to lead you closer to truth. We have:
|- thinks(descartes) [assumption]
|- ∃x.thinks(x) [existential introduction]
|- thinks(descartes) → ∃x.thinks(x) [standard logic]

and with a bit more effort one can conclude that the x here must, or at least could, be descartes.
All the heavy lifting here is being done by the existential quantifier ∃, simply reflecting the mundane point that in standard logic, asserting a predicate of an individual already presupposes that the individual exists. If you replace 'thinks' by 'walks' the argument works equally well (or badly).
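The triviality of the derivation can be checked mechanically. Here is a sketch in Lean 4 (the predicate names and the constant `descartes` are my own stand-ins, not anything Descartes would recognise):

```lean
-- A domain of individuals and two arbitrary predicates over it.
axiom Individual : Type
axiom descartes : Individual
axiom thinks : Individual → Prop
axiom walks : Individual → Prop

-- The whole "argument" is one application of existential introduction...
theorem cogito : thinks descartes → ∃ x, thinks x :=
  fun h => ⟨descartes, h⟩

-- ...and it goes through verbatim with 'walks' in place of 'thinks'.
theorem ambulo : walks descartes → ∃ x, walks x :=
  fun h => ⟨descartes, h⟩
```

The proof term is identical in both cases, which is exactly the complaint: nothing about thinking is doing any work.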
Had they worked (harder) to attempt:
|- thinks(descartes) → exists(descartes) (*)

(and is existence a predicate?) one would have confronted the central mystery. How do we formalise 'thinks' and 'exists'? What axioms support a theory which has (*) as a theorem?
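Part of the difficulty is that classical first-order logic gives (*) away for nothing: every constant is assumed to denote, so the only 'exists' on offer, being identical to something, is trivially true of everything. A Lean sketch of that triviality (the definition of `existsP` is my own stand-in):

```lean
axiom Individual : Type
axiom descartes : Individual
axiom thinks : Individual → Prop

-- The only 'existence' classical logic offers: being identical to something.
def existsP (a : Individual) : Prop := ∃ x, x = a

-- (*) is then a theorem, but vacuously: the antecedent does no work at all.
theorem star : thinks descartes → existsP descartes :=
  fun _ => ⟨descartes, rfl⟩
```

A non-vacuous (*) would need a logic in which a term can fail to denote, which is precisely where the standard machinery stops helping.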
As an AI person, I'd rephrase the "Cogito" as a robot problem. How would we design a robot which could convincingly assert Descartes's thesis?
The "Cogito ergo sum" is an internal reflection; by definition it doesn't relate to the outside world. In the jargon, it's metacognition. We immediately hit a problem. What is going on in a robot when it asserts "I'm thinking"?
Presumably it's thinking about something in particular, which we normally model 'without loss of generality' as a deduction within the robot's database of assertions and rules, its world-model.
So consider an inferential process occurring in the problem-solving layer of the robot and a concurrent metalevel representation of that inference. (An inference which is occurring, or maybe has just occurred?)
If P and Q are a couple of arbitrary facts while P→Q is a rule, then something like this?
thinking(P, Q, t1, t2) :-
    database([P, P→Q], t1),
    database([P, P→Q, Q], t2),
    t2 > t1.

Lacks conviction, don't you think? One is drawn to the byways of self-referential languages.*
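To make the sketch concrete, here is a toy version in Python rather than Prolog. The snapshot representation, the single modus-ponens step, and all the names are my own assumptions; it only illustrates the shape of the metalevel check, not a serious architecture.

```python
# Toy metalevel "thinking" detector. Facts are strings, rules are
# (premise, conclusion) pairs, and the robot's database is snapshotted
# at discrete times.

def apply_rules(facts, rules):
    """One round of modus ponens: add Q whenever P is known and P -> Q is a rule."""
    derived = set(facts)
    for premise, conclusion in rules:
        if premise in derived:
            derived.add(conclusion)
    return derived

def thinking(p, q, snapshots, t1, t2):
    """Metalevel claim: between t1 and t2 the robot derived q from p.
    p (but not yet q) is present at t1, both are present at t2, t2 > t1."""
    return (t2 > t1
            and p in snapshots[t1] and q not in snapshots[t1]
            and p in snapshots[t2] and q in snapshots[t2])

rules = [("P", "Q")]
db_t1 = {"P"}                      # database at t1: P, plus the rule P -> Q
db_t2 = apply_rules(db_t1, rules)  # database at t2: Q has been derived

print(thinking("P", "Q", {1: db_t1, 2: db_t2}, t1=1, t2=2))  # True
```

Note that `thinking` here is a claim *about* the database made from outside it; nothing lets the robot assert it of itself, which is the gap the self-referential languages try to close.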
I'm inclined to view the "Cogito" as more about consciousness, more about feeling self-aware than logic. Probably explains why we're as far away as ever from a compelling formalisation.
I notice I wrote about this in a similar fashion for sciencefiction.com back in 2011.
* Reflective Prolog and "Reflection in logic, functional and object-oriented programming".