From the MIT Technology Review.
"It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Gary Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.But Matthew Lai at Imperial College, London has a new approach. His artificial intelligence machine called Giraffe has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.
But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.
Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master."
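To make "brute force" concrete, here is a toy sketch of minimax search with alpha-beta pruning, the family of techniques conventional engines build on. The game tree and leaf scores below are invented for illustration; this is not Deep Blue's code, and real engines add far more sophisticated pruning and evaluation.

```python
# Toy sketch of minimax search with alpha-beta pruning, the brute-force
# technique conventional chess engines build on. The game tree and leaf
# scores below are invented for illustration.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the best achievable score from `node`, pruning branches
    that cannot influence the final decision."""
    if depth == 0 or not node.get("children"):
        return node["score"]          # leaf: apply the evaluation function
    if maximizing:
        value = float("-inf")
        for child in node["children"]:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:         # opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node["children"]:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:         # we will never choose this line
                break
        return value

# A tiny hand-built tree: two moves for us, two replies each.
tree = {"children": [
    {"children": [{"score": 3}, {"score": 5}]},
    {"children": [{"score": 2}, {"score": 9}]},
]}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 3
```

Note how the second branch is abandoned after one reply: once the opponent can force a score of 2 there, it cannot beat the 3 already guaranteed elsewhere. That pruning is what lets engines search so deeply.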
"Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.Decades ago an AI researcher made this comparison:
The technology behind Lai’s new machine is a neural network. This is a way of processing information inspired by the human brain. It consists of several layers of nodes connected in ways that change as the system is trained. This training process uses lots of examples to fine-tune the connections so that the network produces a specific output given a certain input, to recognize the presence of a face in a picture, for example.
In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks."
[Read more].
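To make "training" concrete, here is a minimal sketch of a one-hidden-layer network learning XOR by gradient descent. Everything here (layer sizes, learning rate, the task itself) is made up for illustration; Giraffe's network is far larger and was trained on chess positions, not toy Boolean functions.

```python
# Minimal sketch of neural-network training: a one-hidden-layer network
# learning XOR by gradient descent. Sizes, learning rate and task are
# invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # "fine-tune the connections" a little on every pass
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())  # close to [0, 1, 1, 0] once trained
```

The network ends up computing the right answers, but the "knowledge" lives in those weight matrices, which is exactly the inscrutability complained about below.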
Decades ago an AI researcher made this comparison:

Suppose many years ago you had wanted to understand flight, as in birds and insects. So you played around with a sheet of paper until you fashioned, essentially by clever trial-and-error, a paper aeroplane. You have not understood flight, you have merely emulated a rather simple aspect of it. To understand flight you need fluid dynamics and a theory of aerofoils. And so it is with the neural networks of deep learning. The simulated neurons, axons and dendrites with their super-high-dimensional weight-vector spaces achieve minor miracles in selected domains ... but there is no theory.
"Look," you said, "I have recreated flight, at least the gliding version. Science has advanced!"
It flies, in a gliding version, but we still don't know why.
Classic (that is, symbolic) AI, for me, only ever had two ideas. One was automated inference over formal languages and the other was heuristic search.
The former covers expert systems, knowledge representation, planners, scripts, natural language understanding systems and automated theorem provers.
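For a flavour of that first idea, here is a toy forward-chaining rule engine of the expert-system sort. The facts and rules are invented for illustration; real systems of the era had thousands of rules and far richer representations.

```python
# Toy forward-chaining inference engine of the expert-system kind.
# The facts and rules below are invented for illustration.

rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]
facts = {"has_feathers", "lays_eggs", "cannot_fly", "swims"}

changed = True
while changed:                        # keep firing rules until a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)     # rule fires: assert the new fact
            changed = True

print(facts)   # now includes 'is_bird' and 'is_penguin'
```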
The latter covers chess programs (and a hundred other games) and provides the control mechanism for many of the former systems, using tree and graph traversal algorithms with clever pruning (as in the alpha-beta sketch above), allied with sophisticated state evaluation functions.
These two ideas conceptualised intelligence as abstract reasoning in a large space of symbolic options, selecting what was good and useful via a domain-specific evaluation function. It's not a bad theory for certain highly intellectualised tasks, but it falls apart for tacit, common-sense knowledge and everyday competences. The problem is that we're only able to properly formalise microworlds; the real world is just too interconnected, fast-moving and messy.
So I think it's fair to say that we've pretty well reached the limits of applicability of classical AI. As diminishing returns set in, the grown-ups (Google, Facebook, even Mattel for God's sake) have moved to deep learning for the real goods, based on those inscrutable neural networks.
Hello Barbie is coming to town in November 2015. It's an example of what you can do with a 1965 idea (ELIZA) if you throw unbounded cash and people at it, and leverage the latest WiFi, Internet and deep-learning speech understanding technologies.
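For anyone who never met it, the ELIZA idea is just keyword patterns plus canned reflections. A toy sketch follows; the patterns are invented, and this is nothing like Weizenbaum's actual DOCTOR script, let alone Hello Barbie's speech pipeline.

```python
# Toy ELIZA-style exchange: keyword patterns plus canned responses.
# The patterns are invented; Weizenbaum's DOCTOR script was richer
# (it also swapped pronouns, e.g. "my" -> "your").
import re

patterns = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*mother.*", "Tell me more about your family."),
    (r"(.*)", "Please go on."),          # fallback keeps the chat moving
]

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in patterns:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())

print(respond("I am worried about my exams"))
# -> Why do you say you are worried about my exams?
```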
---
Thanks to Marginal Revolution and Slate Star Codex for some of these links.