What the kitchen catcam sees
His food, already somewhat depleted, can be seen on the blue tray. He really doesn't like the hard tack, and this may explain why he vomited so much of it up in our absence ...
The Economist has a feature this week on 'Neuromorphic Computing' - designing machines that faithfully replicate the neural design of the human brain. The principles of this new (and increasingly well-funded) discipline include:
- low power consumption (human brains run on about 20 watts, whereas the supercomputers currently used to try to simulate them need megawatts);
- fault tolerance (losing just one transistor can wreck a microprocessor, but brains lose neurons all the time); and
- no need for explicit programming (brains learn and change spontaneously as they interact with the world, instead of following the fixed paths and branches of a predetermined algorithm). A toy illustration of the last two principles follows below.
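The second and third principles are easier to see in code than in prose. Below is a minimal sketch in Python/NumPy, my own illustration rather than anything from the article or from real neuromorphic hardware: a single linear unit trained with Oja's variant of the Hebbian rule discovers the dominant structure of its inputs with no task-specific program, and a redundant population of units barely notices losing 10% of its members.

```python
import numpy as np

rng = np.random.default_rng(0)

# "No programming needed": a single linear unit trained with Oja's rule
# discovers the dominant direction of its input stream purely from exposure.
stretch = np.array([[3.0, 0.0], [0.0, 0.5]])   # inputs vary mostly along x
data = rng.normal(size=(2000, 2)) @ stretch

w = 0.1 * rng.normal(size=2)
eta = 0.01
for x in data:
    y = w @ x
    w += eta * y * (x - y * w)                 # Hebbian growth, self-normalising

print("learned direction:", w / np.linalg.norm(w))   # ~ (±1, 0)

# "Fault tolerance": spread the same readout over many redundant noisy units,
# then silence 10% of them at random; the population average barely moves.
n_units = 1000
weights = np.tile(w, (n_units, 1)) + 0.05 * rng.normal(size=(n_units, 2))
x = np.array([1.0, 0.2])
outputs = weights @ x
alive = rng.random(n_units) > 0.10             # "lose" 10% of the neurons
print("intact population output: ", outputs.mean())
print("damaged population output:", outputs[alive].mean())
```

The second half is the whole trick of fault tolerance: the answer lives in the population average, so no individual 'transistor' is critical.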
The article concludes:
There remains, of course, the question of where neuromorphic computing might lead. At the moment, it is primitive. But if it succeeds, it may allow the construction of machines as intelligent as—or even more intelligent than—human beings. Science fiction may thus become science fact.
Moreover, matters may proceed faster than an outside observer, used to the idea that the brain is a black box impenetrable to science, might expect. Money is starting to be thrown at the question. The Human Brain Project has a €1 billion ($1.3 billion) budget over a decade. The BRAIN initiative’s first-year budget is $100m, and neuromorphic computing should do well out of both. And if scale is all that matters, because it really is just a question of linking up enough silicon equivalents of cortical columns and seeing how they prune and strengthen their own internal connections, then an answer could come soon.
Human beings like to think of their brains as more complex than those of lesser beings—and they are. But the main difference known for sure between a human brain and that of an ape or monkey is that it is bigger. It really might, therefore, simply be a question of linking enough appropriate components up and letting them work it out for themselves. And if that works, perhaps, as Marvin Minsky, a founder of the field of artificial intelligence, put it, they will keep humanity as pets.
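The quote's 'prune and strengthen' line is worth unpacking. Here is a deliberately crude caricature, again my own toy rather than the article's mechanism or anything a real chip does: connections between co-active neurons are strengthened, every connection decays slowly, and the weakest links are periodically pruned, so the wiring ends up reflecting the activity it was exposed to.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                   # two 25-neuron "assemblies"
w = 0.05 + 0.01 * rng.random((n, n))     # weak, dense random wiring to start
np.fill_diagonal(w, 0.0)

for step in range(400):
    active = rng.integers(2)             # one assembly fires at a time
    rate = np.zeros(n)
    rate[active * 25:(active + 1) * 25] = 1.0
    w *= 0.99                            # slow decay of every synapse
    w += 0.002 * np.outer(rate, rate)    # strengthen co-active connections
    np.fill_diagonal(w, 0.0)
    if step % 100 == 99:
        w[w < 0.04] = 0.0                # prune connections that fell behind

within = int((w[:25, :25] > 0).sum())    # links inside the first assembly
across = int((w[:25, 25:] > 0).sum())    # links between the two assemblies
print("surviving within-assembly links:", within)   # ~all of them
print("surviving cross-assembly links: ", across)   # ~none
```

Run it and the random all-to-all wiring collapses into two self-contained assemblies, with no programmer having specified that outcome.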
It's not science-fictional to imagine that we might have a conscious computer system within a decade or so, and within most of our lifetimes for sure. How do we ethically debug it - or turn it off?