From Singularity Hub.
"In an interview with Singularity Hub, Ray Kurzweil provides an update about his first two months as Director of Engineering at Google. During the interview Kurzweil revealed that his team is collaborating with other groups at Google to enable computers to understand and speaking language just like humans. Kurzweil also tells us how Larry Page personally recruited him to join Google to pursue the goal of creating machines that can think and reason like the human brain.
Speaking with Singularity Hub Founder Keith Kleiner, Ray explained that “My project is to get the Google computers to understand natural language, not just do search and answer questions based on links and words, but actually understand the semantic content. That’s feasible now.” To successfully do this will involve employing technologies that are already at Google like the Knowledge Graph, which has 700 million different concepts and billions of relationships between them. His team will also develop software as part of a system that will be “biologically inspired” and can learn in a way analogous to the way the human brain is designed, that is, in a hierarchical structure."
Here's the interview (six minutes or so).
Kurzweil comes across as a nice guy, but judging from the reviews of his recent book, he is neither a tremendously deep thinker nor fantastically well-read in the sprawling field of AI. Still, Google has plenty of smart people to help him out.
Automated Natural Language Understanding Systems of human-equivalent ability currently don't exist and are astonishingly hard to design. Example: if you write a Google query:
"Why is the sky blue?"
Google today will search for web pages which have these words (or equivalent terms) closely spaced within their text and will then serve you a prioritised list. This is not understanding. To understand the query, Google would have to translate it into an internal 'mentalese', something like
"Query(Reason(Colour(Sky, Blue)), Asker(<YOU>))" *
and then know what to do with each of the emboldened words (Query, Reason, Colour, Asker). Google therefore needs mini-theories for each of the concepts associated with them.
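To make the shape of such a term concrete, here's a minimal sketch in OCaml. Every name in it is my own invention for illustration - nobody outside Google knows what their internal representation actually looks like:

    (* A toy 'mentalese' term as an algebraic datatype. All names invented. *)
    type entity = Sky | Blue | Agent of string

    type mentalese =
      | Colour of entity * entity        (* Colour(Sky, Blue) *)
      | Reason of mentalese              (* the reason why something holds *)
      | Query  of mentalese * entity     (* a question, plus who is asking *)

    (* "Why is the sky blue?", asked by <YOU> *)
    let why_is_the_sky_blue : mentalese =
      Query (Reason (Colour (Sky, Blue)), Agent "<YOU>")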
To answer the query, Google would have to have read the pages of the web (not a problem, it's already stored them) and translated those also into 'mentalese'. Now it has to figure out how to use this mountain of unreliable and inconsistent conceptual information (theory piled upon theory) to answer your question.
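One way to picture the difficulty: every assertion harvested from the web would arrive tagged with its provenance and some crude estimate of reliability, and contradictory assertions would have to be weighed against each other. A toy sketch, with the record fields and the scoring scheme entirely invented by me:

    (* Web-derived assertions carry provenance and a crude reliability score. *)
    type assertion = {
      claim       : string;   (* a 'mentalese' term, flattened to a string here *)
      source      : string;   (* the URL of the page it was read from *)
      reliability : float;    (* 0.0 = junk .. 1.0 = gospel; but who decides? *)
    }

    (* Pick the best-supported claim among mutually inconsistent candidates.
       Real consistency-checking would itself require theorem proving. *)
    let best_supported (candidates : assertion list) : assertion option =
      match candidates with
      | [] -> None
      | a :: rest ->
          Some (List.fold_left
                  (fun best c ->
                     if c.reliability > best.reliability then c else best)
                  a rest)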
Finally, Google really needs a theory of you. What did you intend by the question? Is it a physics answer you want and, if so, how much science do you know already? Are you an artist - do you want a poetical answer? Are you a cultural historian - do you want to know the different explanations people have come up with? And so on.
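A 'theory of you' might begin life as nothing grander than a record of who you are and what you already know. Again, every field here is an invented placeholder:

    (* A toy user model; the real thing would be vastly richer. *)
    type background = Physicist | Artist | Cultural_historian | Unknown

    type user_model = {
      who            : background;
      science_level  : int;           (* 0 = none .. 10 = professional *)
      inferred_goals : string list;   (* what the asker is really after *)
    }

    (* Choose an answer style from the model. *)
    let answer_style (u : user_model) : string =
      match u.who with
      | Physicist          -> "Rayleigh scattering, equations included"
      | Artist             -> "a poetical answer"
      | Cultural_historian -> "the explanations people have offered over time"
      | Unknown            -> "a plain answer, hedged"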
If Google succeeded, it would be emulating a veritable army of call-centre workers, each of whom knew everything published on the Internet, and a lot about you personally. These are the 'guys' you'd be dealing with every time you pointed your browser/app at Google.
I would be impressed ... and a little nervous.
---
* A note for experts.
"
Query(
Reason(
Colour(
Sky,
Blue)),
Asker(<
YOU>))" seems like a relatively innocuous term in first-order predicate calculus (FOPC): nothing in fact could be further from the truth. FOPC is a highly-constrained declarative formal language much loved in AI because of its assumed tractability for automated theorem proving. Unfortunately it's a straitjacket when it comes to describing how humans linguistically interact.
"
Query" is like a question-mark, semantically interpreted in the domain of
Pragmatics (cf Speech-Act Theory). Pragmatics is founded upon agents with states of mind, beliefs and intentions - all completely absent from FOPC.
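The gap can be stated in types. In Speech-Act terms the same propositional content may be asserted, asked or commanded, and its interpretation depends on the mental state of the agent performing the act. A sketch, reusing the formula type above (the names are mine):

    type force = Assert | Ask | Command

    type mental_state = {
      beliefs    : formula list;    (* what the agent takes to be true *)
      intentions : string list;     (* what the agent is trying to achieve *)
    }

    type speech_act = {
      force   : force;
      content : formula;            (* FOPC covers only this slot ... *)
      speaker : mental_state;       (* ... and says nothing about this one *)
    }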
"
Reason" is worth a Ph.D. thesis of its own. It's a cousin to the idea of "proof", formalised in metamathematics (cf Gödel's Incompleteness Theorem). Proofs (and reasons) look a little bit syntactic - formulae which justify other formulae via chains of inference. In fact the concept is clearly a semantic one (think of the way
Standard ML manipulates proofs). There are some arguments omitted in the "
Reason" term shown here, as all reasons start from some set of
grounding assumptions - but what are they?
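The Standard ML aside refers to the LCF tradition, where a proof is not a string of symbols but an abstract value of type thm, which can only be manufactured by applying inference rules exposed as functions. A caricature of such a kernel, again reusing the formula type above:

    (* An LCF-style kernel in miniature: because thm is abstract, the only
       way to obtain one is through the exported inference rules. *)
    module type KERNEL = sig
      type thm                                (* proofs are first-class values *)
      val assume       : formula -> thm       (* introduce a grounding assumption *)
      val modus_ponens : thm -> thm -> thm    (* from A and A => B, obtain B *)
      val conclusion   : thm -> formula       (* inspect what has been proved *)
    end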
"
Colour(Sky, Blue)" is a formula of FOPC but we need its denotation not to be TRUE or FALSE but, like Situation Semantics, to be a semantic object which connects the sky to a colour. Queries in general will require at least temporal, intentional and modal operators which are not present in FOPC.
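Again the contrast is visible in the types: a classical interpretation sends a formula to a truth value, while a Situation-Semantics-style one sends it to a structured object relating the participants. A sketch with invented field names, reusing the term type above:

    type situation = {
      relation     : string;          (* e.g. "Colour" *)
      participants : term list;       (* e.g. [Func ("Sky", []); Func ("Blue", [])] *)
      time         : string option;   (* temporal anchoring, absent from FOPC *)
    }

    (* Classical semantics: a formula denotes a truth value.
       Situation-style semantics: it denotes a structured object. *)
    type classical_denotation   = bool
    type situational_denotation = situation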
<YOU> is a deictic signifier but that is the least of it: the reference is to an
agent with a state of mind, namely
you - with
what you know and
what you want clearly implicated. Another Ph.D. thesis in waiting.
To the Google team led by Ray Kurzweil I therefore commend Dorothy's famous remark: "Toto, I've a feeling we're not in Kansas any more."