Derivative, dreary, uninspiring and not very original
As promised a few days back, I have now force-marched my way through this tedious book, admittedly skimming the last few chapters. There has been a fair amount of hype about the alleged dangers of future-AI. I could never see it myself, but thought it fair to give the proponents of these scare-stories their best shot, which seems to be Nick Bostrom.
Alas, Professor Bostrom of Oxford University says nothing in this book which hasn't been dealt with more profoundly in a dozen science-fiction stories. I learned literally nothing of value. The book is so badly written - endless lists of not-very-insightful conceptual categories & pointlessly over-abstract language - that my eyes glazed over at the prospect of writing a detailed review. Thankfully, someone else with a PhD in artificial intelligence has done the job over at Amazon and I can quote their thoughts verbatim in this vitriolically-amusing review.
"As someone who holds a PhD in AI, I was super-excited to get Nick Bostrom's book "Superintelligence". Finally, mainstream discussions on a topic I care about from a highly ranked academic institution, with a cool looking owl to boot. Yea, owls! Fantastic, right? Wrong. After the first few chapters I had to force myself to finish it; it's miserable."

I like to think that if this were my review, I would have been just a tad politer.
"This book is a 260 page tribute to Eliezer Yudkowsky that does not appreciate concepts at all in the entire field of constraints nor does it take into account common sense or any practical basis in AI. At best, this is a grammatically well-written sensationalist book designed to inspire irrational fear of a fictional form of AI, but more likely it is but one of many examples of the distilled essence of naivety of people writing on a topic they know nothing about.
"At one point Nick quotes an idea postulated by Eliezer as a practical / credible scenario in the rise of malevolent AI. Eliezer suggests an advanced AI could assemble a biomimetic computing device using a naive human and a stereo speaker to force certain chemical reactions to create a 'nanosystem' threat inside a glass beaker (p. 98). Oh, but first it needs to 'crack the protein folding problem'. Never mind that the problem has been shown to be at least NP-hard, and possibly NP-complete - a clear demonstration of a lack of understanding of the protein folding problem in the first place - but the notion of using basic physics to assemble nano-particles in a liquid substrate through a non-uniformly shaped vessel made of glass seems "far-fetched", to put it mildly. I've read more plausible science fiction.
"At one point Nick states that by adding more money, growth in computational capabilities increases linearly. I guess he's never heard of order of operations -- sometimes referred to as associativity. You see, we have things like division and subtraction that take precedence in computer science. (Most people are introduced to these concepts before 5th year maths, folks.) This means we cannot just pass along as many computations as we'd like evenly across a giant grid of processors as Nick seems to assume.
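The reviewer's terminology here is muddled (precedence and associativity concern the parsing of expressions, not parallel hardware), but the underlying point is sound and is usually stated as Amdahl's law: the serial portion of a computation caps the speedup that more processors can buy. A minimal sketch, with the 95% parallel fraction chosen purely for illustration:

```python
def speedup(p, n):
    """Amdahl's law: speedup on n processors when a fraction p
    of the work can be parallelised and (1 - p) must run serially."""
    return 1.0 / ((1.0 - p) + p / n)

# With p = 0.95, speedup can never exceed 1 / (1 - p) = 20,
# no matter how many processors money buys.
for n in (10, 100, 10_000):
    print(n, round(speedup(0.95, n), 2))
```

So even before worrying about communication costs, throwing money at hardware yields sharply diminishing returns, not linear growth.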
"This book is riddled with purple prose and denotatively 'weird' use of terms -- such as "recalcitrance". Ever been next to that person at the party who uses unnecessarily large words when a simplistic explanation will do? You'll get a lot of that sensation in this. You'll also see the occasional spelling error, which is nice -- like at the bottom of page 56, amongst others.
"In addition the citations are dubious at best - e.g. 'World Robotics 2011'. Was there some big consensus at a conference regarding all of one topic in AI? That'd be a first in history, I'm pretty sure, for any conference on any topic; I guess I missed one hell of a conference.
"The head of one of Oxford's philosophy departments might be an intelligent person but he's absolutely unqualified to speak on any topic in practical AI - by his own demonstration. I can honestly say that purchasing this book was literally the worst money I've ever spent - and I've bought an Oasis album.
"There is one redeeming quality of this book: The survey of scientists in AI-related fields who believe SGI might become a reality that describes when, at a 50% threshold, that might occur (p. 19), and the general description of the 'types' of SGI and the mediums in which those have been thought to occur in (Chapter 2) -- e.g. whole brain emulation versus evolutionary models, etc, but these come with serious errors in assumptions. Basically, if you had a PhD student and this was their thesis, it'd be s*** except for the second chapter of their work.
"TL;DR: If you want to be told that AIs will take over space and destroy humanity using a stereo speaker then this is the book for you. If you like reason and don't want to waste £20, then may I suggest something of equivalent intelligence such as Where's Wally, or maybe a nice collection of toilet roll? At least the latter would be fit for a higher use. Oxford should be ashamed to have given this guy his own department, but proud he's free of any teaching responsibilities. Who knows how much damage he'd do to graduates."
The topic (and indeed danger) of 'superintelligence' is not stupid per se. Evolution tells the story of species being wiped out by changes in their niches or by invaders who can exploit their niche more effectively. Nothing says that humans can't design agents which will out-compete us, even if bio-hacked viruses seem more of a risk than super-AIs.
Nevertheless we will no doubt at some point design very competent competitors. That competence is not reducible to 'superintelligence', however, which properly refers only to a general competence and efficiency of concept management (see here).
The road to systems with the overall competence to take us all out is so complex and multifaceted that the idea that some hacker with a PC is going to come up with the killer algorithm in their bedroom, or that it could all happen over the weekend, is completely fanciful. Might as well write a book about the dangers of black magic.