Richard Feynman famously wrote on his blackboard, "What I cannot create, I do not understand." This simple observation underpins the very nature of scientific inquiry and technological progress. If you cannot replicate a phenomenon through engineering, then your understanding of it is at best incomplete.
But when we apply this insight to consciousness we are confronted with an unnerving reality: despite our increasing competence at building systems that outperform anything in the natural world (aircraft that outfly birds, tanks that would defeat any armoured reptile, computers that outthink humans at a growing range of tasks), we still cannot build consciousness itself.
This profound inability suggests a paradigmatic gap in our understanding. It seems that we are missing something fundamental about what consciousness truly is and how it might have emerged.
Modern AI systems converse, navigate, and play games at superhuman levels. Yet they are not conscious. They may simulate understanding, but they do not feel; they lack subjective experience.
To date, no engineering specification has required, or led to the emergence of, consciousness. While we have designed systems to meet every requirement we could imagine - intelligence, adaptability, problem-solving - none of these systems exhibit the subjective what-it’s-like quality of conscious experience. Have we overlooked some crucial aspect of what makes human beings not just intelligent, but conscious?
We turn to introspection. Take the example of driving a familiar route. Once the skill is mastered, the brain processes the task automatically; our conscious mind is elsewhere. However, when something goes wrong - say, an unexpected obstacle appears - we "wake up" to the problem. Our conscious mind kicks in, calling upon a broader model of the world, seeking a solution.
This shift from automatic processing to conscious thought hints at a key feature of cognition: consciousness arises when the system is confronted with a situation that can't be resolved by routine processes - when the 'closed world' assumed by instinct gives way to the 'open world' of coupled reality. The brain reaches out to a wider pool of knowledge, seeking to integrate more complex information - a 'bigger picture' - to formulate a solution.
In these moments of cognitive consciousness, the brain is not simply processing more information, but is actively integrating a more expansive model of the world. This expansion of awareness feels like a conscious step - an intervention from the "bigger world" outside the confines of the immediate task.
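This two-tier pattern is easy to caricature in code. The sketch below is a toy illustration only - the situations, actions, and "world model" are all invented - but it captures the shape of the idea: a fast routine layer handles the closed world of known situations, and control escalates to a slower, broader layer only when routine processing fails.

```python
# Toy sketch of the "routine vs. deliberative" split described above.
# Nothing here models a real brain; all names are invented.

ROUTINE_POLICY = {
    "clear_road": "continue",
    "red_light": "brake",
    "green_light": "continue",
}

def routine_layer(percept):
    """Fast, habitual responses: a closed world of known situations."""
    return ROUTINE_POLICY.get(percept)  # None means "I can't handle this"

def deliberative_layer(percept, world_model):
    """Slow, broad reasoning, invoked only when routine processing fails."""
    if percept in world_model["hazards"]:
        return "stop_and_assess"
    return "slow_down_and_observe"

def act(percept, world_model):
    action = routine_layer(percept)
    if action is None:
        # The "wake up" moment: the routine layer cannot cope, so
        # control escalates to the bigger picture.
        action = deliberative_layer(percept, world_model)
    return action

world = {"hazards": {"fallen_tree", "child_in_road"}}
print(act("green_light", world))  # handled on autopilot -> "continue"
print(act("fallen_tree", world))  # escalation -> "stop_and_assess"
```

Of course, the sketch reproduces only the functional switch; nothing in it explains why, in us, that switch should be accompanied by experience.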
The crucial question, however, is why this transition is accompanied by a conscious experience. Why does the mind 'wake up into self-awareness' when faced with complexity?
And this is not even the full explanation of consciousness. It only touches on the functional aspects - the cognitive process of integrating new information. It does little to explain the affective experience of consciousness - the "hard problem" that David Chalmers famously identified. Why does any system, biological or otherwise, experience the intense pain of a stubbed toe, or the exquisite joy of a Led Zeppelin riff? This is the domain of qualia - the raw, subjective feel of experience that, as far as we can guess, seems to arise from the interplay of lower and higher brain systems.
It seems plausible that emotions play a critical role in harmonizing the conflict between primal drives and higher-level cognitive goals. But this does not, in itself, explain why pain feels so profoundly bad, or why joy is so intensely pleasurable. The rawness of these sensations seems to arise from the tension between lower-level survival instincts and the more abstract, deliberate planning processes of the cortex.
Emotions, in this sense, act as a bridge between these competing systems. But why should these conflicts - between the brainstem's imperative to act and the cortex's more detached planning - be felt at all? Roboticists have been building such multilevel 'subsumption architectures' since Rodney Brooks introduced them in the 1980s, without anyone ever suspecting that consciousness was involved.
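To make the point concrete, here is a minimal sketch of a subsumption-style controller in the spirit of Brooks' proposal (the layers and trigger conditions are invented for illustration, not drawn from any real robot). The conflict between levels is arbitrated purely mechanically; nothing in the scheme feels the conflict it resolves.

```python
# Minimal sketch of a subsumption-style layered controller.
# Layer names and conditions are illustrative only.

def avoid_collision(sensors):
    """Reflex layer: highest priority, akin to a primal drive."""
    return "reverse" if sensors.get("bumper_pressed") else None

def avoid_obstacles(sensors):
    return "turn_left" if sensors.get("obstacle_ahead") else None

def seek_charger(sensors):
    """Goal layer: a more deliberate, 'cortical' concern."""
    return "head_to_dock" if sensors.get("battery_low") else None

def wander(sensors):
    return "move_forward"  # default when nothing else fires

# Checked in priority order; the first layer with an opinion
# suppresses (subsumes) every layer below it.
LAYERS = [avoid_collision, avoid_obstacles, seek_charger, wander]

def control_step(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control_step({"obstacle_ahead": True, "battery_low": True}))  # turn_left
print(control_step({"battery_low": True}))                          # head_to_dock
print(control_step({}))                                             # move_forward
```

The reflex wins over the goal whenever the two conflict - exactly the kind of lower-versus-higher arbitration described above - yet no one would suggest the robot suffers when its charging plan is suppressed.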
This is (one of) the unexplained mysteries at the heart of consciousness.
The failure of materialism to account for consciousness appears absolute. AI systems get better and better but they're all p-zombies. ChatGPT? Superhuman competences, human-level dialogue; no consciousness.
Consciousness seems to be a solution in search of a problem. Is this sending us the message that we’re simply operating within the wrong paradigm?
The reductionist, materialist model of consciousness posits that the mind will eventually be explained through an understanding of brain processes. As a hypothesis, this is appealing in its simplicity: no magic. But also no success.
In despair, we might turn to panpsychism, which offers a radically different approach, positing that consciousness is a fundamental property of the universe - something intrinsic to all matter. On this view, consciousness is not an emergent property but an inherent aspect of reality. Atoms and the void - and consciousness.
But panpsychism offers no predictive or explanatory capability. It sells itself as a metaphysical framework, yet provides no practical means of investigating consciousness. A comforting idea, perhaps, but one that presently leads nowhere.
We remain at an impasse. Materialism fails to explain consciousness, and panpsychism fails to provide any productive means of exploring it. Feynman might say: if we can’t yet build consciousness, it simply means we don’t understand it well enough (or indeed, at all).
So consciousness joins the other great questions: why is there something rather than nothing? What is the true nature of reality? And Camus' question: is life worth living?