Posted at Amazon.com here.
The Oxford Handbook of Computational Linguistics (Oxford Handbooks) (Paperback).
This ‘handbook’ needs both hands to lift it! At 700+ pages and 38 chapters, detailed chapter-by-chapter review is impossible. Let me start with the top-level structure, which divides the book into three parts: Fundamentals; Processes, Methods and Resources; and Applications.
Part one, ‘Fundamentals’, walks through the standard sub-disciplines of computational linguistics with chapter headings: phonology, morphology, lexicography, syntax, semantics, discourse, pragmatics and dialogue, formal grammars and languages, and complexity theory. Each chapter is a short introduction to and overview of its topic, aimed at the informed newcomer (i.e. it helps if you have a computer science/maths background and know about predicate logic and state machines).
Part two, ‘Processes, etc.’, covers a number of problem areas and techniques: text segmentation, part-of-speech tagging, parsing, word-sense disambiguation, anaphora resolution, natural language generation and so on. There is little commonality between the chapters, but they are all informative.
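To make one of these techniques concrete, here is a minimal sketch of part-of-speech tagging using a toy ‘most frequent tag’ baseline. This is my own illustration, not an algorithm from the book, and the tiny training data and tag names are invented for the example:

```python
from collections import Counter, defaultdict

# Toy tagged corpus of (word, tag) pairs. Real taggers train on large
# hand-annotated corpora; this data is invented for illustration.
training = [
    ("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
    ("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB"),
    ("a", "DET"), ("dog", "NOUN"), ("sleeps", "VERB"),
]

# Count how often each word carries each tag.
counts = defaultdict(Counter)
for word, tag in training:
    counts[word][tag] += 1

def tag(sentence):
    """Assign each word its most frequent training tag; guess NOUN for unknowns."""
    return [(w, counts[w].most_common(1)[0][0] if w in counts else "NOUN")
            for w in sentence.split()]

print(tag("the dog sleeps"))
# [('the', 'DET'), ('dog', 'NOUN'), ('sleeps', 'VERB')]
```

A unigram baseline like this ignores context entirely; the statistical taggers the handbook discusses add exactly that, for instance by modelling tag sequences with hidden Markov models.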
The final part, ‘Applications’, covers areas such as machine translation, information retrieval, text summarisation, computer-assisted second-language learning and spoken dialogue systems.
As a comprehensive and relatively recent review of the whole field, the book is excellent. Some points which caught my interest:
1. Speech and written language are hugely different, due to noise, self-repair, speech acts and discourse functions, accents and the strange ‘grammaticality’ of utterances (p. 521).
2. The distinction between simpler finite-state dialogue models (machine-centric) and more dynamic planning-based dialogue managers, which can deal with mixed-initiative dialogue (chapter 7); a toy sketch of the finite-state style follows this list.
3. The controversial role of real-world knowledge. This is different from semantics, which is more about representational and inferential adequacy. Chapter 25 on Ontologies surprisingly states “it is not clear to what extent NLP technology, in its current form, needs such ontologies and their complex knowledge representation systems”. Apparently “large scale vocabularies with very limited reasoning are preferred”. Interesting.
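To illustrate the finite-state end of the distinction in point 2, here is a minimal sketch of a machine-centric, system-initiative dialogue manager. It is my own toy example rather than anything from chapter 7, and the states, prompts and slot names are invented:

```python
# Minimal finite-state dialogue manager: the system drives the dialogue
# through a fixed sequence of states, filling one slot per state. The
# user can only answer the current question; there is no mixed initiative.
# States, prompts and slot names are invented for illustration.
STATES = {
    "ask_origin":      ("Where are you travelling from?", "origin", "ask_destination"),
    "ask_destination": ("Where are you travelling to?", "destination", "ask_date"),
    "ask_date":        ("What day do you want to travel?", "date", "done"),
}

def run_dialogue(ask=input):
    slots, state = {}, "ask_origin"
    while state != "done":
        prompt, slot, next_state = STATES[state]
        answer = ask(prompt + " ").strip()
        if answer:                 # any non-empty answer fills the slot
            slots[slot] = answer
            state = next_state     # advance along the fixed path
        # an empty answer re-asks the same question (a self-loop)
    print("Booked: {origin} -> {destination} on {date}".format(**slots))

# run_dialogue()  # uncomment to try interactively
```

A planning-based manager replaces the fixed state graph with reasoning about which goals (here, unfilled slots) remain, so the user can volunteer several pieces of information at once; that flexibility is what makes mixed-initiative dialogue possible.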
Human-to-human conversation seems, in performance, to be a unitary phenomenon. For scientific purposes, however, it has to be analysed into sub-fields, as in the chapter headings of part one. That analysis then brings the twin problems of tunnel vision and scope creep: we see, for example, syntactic approaches expanding into the spaces of semantics and pragmatics in, to my mind, an unbalanced way.
I was most interested in Spoken Dialogue Systems, as these attempt to combine the state of the art in the separate disciplines into a unified architecture and implementation to address the original problem: a powerful constraint on one-sided development. The solution architectures seem to show that a modular approach works, with bottom-up statistical techniques performing well at the speech-recognition level, and symbolic processing techniques, such as automatic planning to achieve agent goals, working at the dialogue level. The latter seems to be the least developed, however, as linguistics merges into a more general theory of social agents.
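As a rough picture of that modular architecture, here is a sketch of the classic pipeline with every stage stubbed out. The function names and their canned behaviour are my invention, standing in for the real statistical and symbolic components:

```python
# Sketch of the classic modular pipeline: speech recognition -> language
# understanding -> dialogue planning -> generation. Every stage here is a
# stub; in a real system the early stages are statistical and the dialogue
# manager is symbolic (a planner working toward agent goals).

def recognise_speech(audio):
    # Statistical component: audio in, word string out (stubbed).
    return "i want to fly to paris"

def understand(words):
    # Parsing/semantics: word string in, intent and slots out (stubbed).
    return {"intent": "book_flight", "destination": "paris"}

def plan_next_move(state):
    # Symbolic component: choose the next dialogue act from the agent's goals.
    if "date" not in state:
        return {"act": "request", "slot": "date"}
    return {"act": "confirm"}

def generate(act):
    # Natural language generation: dialogue act in, sentence out.
    if act["act"] == "request":
        return "What date would you like to travel?"
    return "Booking confirmed."

state = understand(recognise_speech(b"...audio bytes..."))
print(generate(plan_next_move(state)))  # -> What date would you like to travel?
```

The appeal of this modularity is exactly what the chapter suggests: each box can use whatever technique works best at its level, with the dialogue planner sitting on top as the symbolic, goal-directed component.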