Ludwig Wittgenstein famously wrote:
"If a lion could speak, we would not be able to understand him". This is on the grounds that language only acquires meaning through a community of speakers using it as part of their 'form of life' (way of life). Hence beings with a radically different way of life would not be able to make sense of the others' utterances."I am unconvinced. The lion inhabits the same spatio-temporal world as we do, lives in a similar planetary environment, has the same mammalian drives. We share all that stuff. No, we'd understand the talking lion only too well.
The real aliens are the ones we're building.
Back in the dawn era of artificial intelligence, researchers worked on chess endgames. They built a database storing every possible position in the final 20-30 moves of the game, marking those from which a win could be forced. Their AI program simply looked up in the database the optimal response to any human move.
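The lookup idea can be sketched in miniature. This is my own illustration, not the original researchers' code: instead of chess, it solves a toy subtraction game (remove 1, 2 or 3 counters; taking the last counter wins), labelling every position win/loss and then "playing" purely by table lookup.

```python
# A minimal sketch of the endgame-database idea, on a toy
# subtraction game (hypothetical example, not the chess programs'
# actual code). Rules: remove 1, 2 or 3 counters per turn;
# whoever takes the last counter wins.
from functools import lru_cache

MOVES = (1, 2, 3)

@lru_cache(maxsize=None)
def is_win(n):
    """True if the player to move at position n can force a win."""
    # A position is winning iff some move leads to a position
    # that is losing for the opponent.
    return any(m <= n and not is_win(n - m) for m in MOVES)

# The "database": every position's win/loss label, precomputed,
# just as the endgame programs tabulated every final position.
DATABASE = {n: is_win(n) for n in range(0, 31)}

def optimal_move(n):
    """Look up a move marked optimal: one leaving the opponent lost."""
    for m in MOVES:
        if m <= n and not DATABASE[n - m]:
            return m
    return None  # every move loses; the position itself is lost
```

From position 7, for instance, the lookup returns 3, leaving the opponent the lost position 4; from position 4 it returns `None`, since every move leads to a won position for the opponent. The program "knows" nothing beyond these stored labels.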
It was a curious experience playing against such a program. The computer made moves that defied understanding, but which by some strange alchemy led inexorably to its victory. It was impossible to work out what the program was thinking.
If that program could have talked, all it would have said was:
"My last move was marked optimal in my database of quite a few possible moves."It would have had to say that sentence every time, and its human opponent's understanding would never have improved. Donald Michie coined the term 'Human Window' - and these programs were outside it.
These days we do better. Our AI programs no longer look up their actions in mammoth, static databases - they learn features, they chunk the phenomena.
Chunking it may be, but not as we know it.
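The shift from lookup to learned features can be sketched on the same toy subtraction game. This is my own contrast, not anything from the original programs: rather than storing every position's label, we train a tiny logistic-regression evaluator on hand-chosen features - here a one-hot encoding of n mod 4, which happens to be the "chunk" that matters in this game. A real learning system, of course, induces its own features, which is exactly where the opacity comes in.

```python
# A toy contrast to the database approach (my illustration):
# learn a feature-based evaluator instead of storing every position.
# Assumptions: positions 0..30; label = "player to move wins";
# features = one-hot of n mod 4, the relevant "chunk" for this game.
import math
import random

def is_win(n):
    # Ground truth for the 1-2-3 subtraction game: the player to
    # move wins unless n is a multiple of 4 (known closed form).
    return n % 4 != 0

def features(n):
    return [1.0 if n % 4 == r else 0.0 for r in range(4)]

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(4)]

# Plain logistic regression trained by gradient descent.
for _ in range(2000):
    for n in range(31):
        x, y = features(n), 1.0 if is_win(n) else 0.0
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i in range(4):
            w[i] += 0.1 * (y - p) * x[i]

def predict_win(n):
    # Evaluate from learned weights - no position table anywhere.
    return sum(wi * xi for wi, xi in zip(w, features(n))) > 0
```

After training, the four weights reproduce the database's verdict on every position - but the "knowledge" now lives in learned parameters over features, not in an enumerated table. When the features are induced by the machine rather than chosen by us, we lose even this much visibility.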
Here's "fhe" on Hacker News:
"When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively I learn the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking), and enable the players to think at a higher level.Steve Hsu sardonically quotes DeepMind CEO Demis Hassabis:
"For example, we're taught of considering connected stones as one unit, and give this one unit attributes like dead, alive, strong, weak, projecting influence in the surrounding areas. In other words, much like a standalone army unit.
"These abstractions all made a lot of sense, and feels natural, and certainly helps game play -- no player can consider the dozens (sometimes over 100) stones all as individuals and come up with a coherent game play. Chunking is such a natural and useful way of thinking.
"But watching AlphaGo, I am not sure that's how it thinks of the game. Maybe it simply doesn't do chunking at all, or maybe it does chunking its own way, not influenced by the physical world as we humans invariably do. AlphaGo's moves are sometimes strange, and couldn't be explained by the way humans chunk the game.
"It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). and much to our surprise, it's a new way that's more powerful than ours."
"Over the summer DeepMind will look at the internal representations used in the valuation engine to see how they correspond to expert human intuitions about Go."If we design minds which induce features from spaces which do not share our human geography and agency, their mental concepts will massively fail to intersect with our own. We will have no referents for their words; it will be like talking general relativity to a four year old.
"This is like peeking into the mind of an alien creature that evolved fighting for territory in a 2D world with discrete spacetime :-)"
If such an AI could speak, we would not be able to understand it.*
* You might be inclined to say, '... without considerable effort.'
I might agree with you ... up to a point.