Link to Amazon page here
David Mindell trained both as an electronics engineer, working on deep-water submersibles, and as a social scientist whose ethnomethodological studies have looked in detail at how people construct and rework social relationships to integrate new technologies. If there’s one take-home message here, it’s that people create artefacts – technologies – to solve problems, and that all such tools, even those with substantial AI in the control loop, have to operate successfully embedded within a matrix of human processes.
Mindell’s case studies include underwater crewed and remotely-controlled vehicles; advanced aviation automation and military RPVs; space systems such as the Mars rovers and the ISS; and finally innovative proposed systems such as driverless cars. I read one comment on his work which argued that there could be no issue in principle with fully autonomous systems: existing ‘autonomous’ systems always sit in a loop with human specialists, the argument ran, only because it has not yet proven cost-effective to automate those specialists too. This spectacularly misses the point that all such ‘autonomous’ systems are in fact embedded within human systems: all such artefacts are tasked by people, their operations are monitored by people, and their results are delivered into wider human systems and contexts. You can never escape the issue of the human-machine interface.
And this issue has been around for a long time. A generation ago, people worried about the inscrutability of expert systems: the bafflement humans felt when a system veered off into some unexpected course of deduction or action. Was this intended, or was it a glitch? In either case, how could humans continue to engage with the ‘rogue’ system? Mindell has many examples of autonomous subsystems (autopilots and autolanders come to mind) where the automation fails inscrutably, throwing control back to an unprepared user/driver/pilot with consequent disaster. This is the stuff of newspaper headlines.
The automation that seems to work best is augmented reality. Here the automation maps a high-complexity environment (eg engine and system status, outside terrain) into a visualisation that makes the task easy to perform (eg keep the craft icon on the guidance icon as you come onto target). The human stays in the loop, with enhanced powers.
Autonomous systems work where environmental behaviour is largely predictable and where unanticipated failure modes have few, and acceptable, negative consequences. In military affairs, guided missiles and mines come to mind, although unintended consequences in the latter case have proven controversial. Put a putative autonomous system into an open environment with a great deal of under-constrained human interaction and unpredictable environmental variation (snow, weather, building work, crashes, diversions), and no AI system currently foreseeable can cope. The system requirement would be emulation of the full common-sense and conversational capabilities of a socialised adult human. No, we are nowhere near that point.
These seem good reasons to be sceptical about the prospects for fully-automated driverless cars, despite their great desirability and correspondingly high levels of hype. David Mindell finishes his book with a well-researched assessment of the prospects for the Google car and similar automotive innovations. I can summarise by saying: don’t hold your breath.
The author has written an excellent and thought-provoking book, a welcome attempt to see beyond the limitations of a pure systems-engineering approach to advanced AI automation: essential reading.