Wednesday, November 06, 2024

Some of those alive today will never truly pass...


I live in the moment. I don’t experience yesterday’s me, or the me of ten seconds ago. Those people are only memories in the here-and-now: partial and selective.

The me of tomorrow is speculative.

I feel myself a sliver of consciousness in this moment, surrounded by a sterile past and future; a present-spark in the wasteland of non-being in the rest of eternity.

I don’t worry about it.

Let me rephrase: that intellectual truth carries no affective charge.


You tell me I will be shot to death tomorrow. That prospect, truly believed, updates my present internal state and induces fear. How uncomfortable! 

But will I really be shot tomorrow? Who knows? Nobody.

I die and am resurrected in a very advanced AI system, a distant descendant of the chatbots and Replikas of today. It's loaded with my personality model, and my personal history (reconstructed from documents, posts, pictures, videos, memoirs) is used to create detailed 'memories'.

Perhaps it has a body much like mine so it’s an embodied avatar of myself.

How is it different in kind from my present anticipation of tomorrow-me?

It’s hard to say. I think it would come down to whether it had an inner life, just as what constitutes the me that worries about such things seems necessarily to have an inner life.

Hat tip René Descartes.


Why do we have an inner life at all? Where did that come from, evolutionarily speaking?

Because we struggle to understand ourselves. A single-level architecture, whether a merely reactive robot like an insect or a zombie powered by a purely rational theorem-prover, would have either nothing capable of self-understanding (the insect) or nothing to understand (the zombie).

But humans have a triune architecture, with our cortex-powered superego/ego a late addition to our selfish, instinctual id. The cortex does the work of situating us in an extended, non-immediate world: a world with a past, a future, a web of objectives, means, ends, social relationships, friends, enemies, neutrals and endless politics. No wonder we are conflicted over what we should be doing: the bit that negotiates lives in endless, fraught dialogue with the bit that selfishly wants and doesn't want.

Our mental world is populated with intentional prediction models of others (goals, beliefs, plans, perceptions, actions) and of course, models of ourselves: “Learning to be Me” in Greg Egan’s phrase.

Our chatbots don’t currently have an inner life: there is nothing it is like to be ChatGPT. But that’s a feature of the current architecture. There's not much demand at present for emotional, conflicted, somewhat edgy AI agents. But when that moment comes…

When it comes, I should contemplate my biological death as no longer final on this Earth, provided that at some point my descendants choose to reincarnate a personality much like my own.

How convinced are you? And did you reflect on this deeply enough?
