Thursday, March 30, 2017

Open systems meet closed automation

Let me start with this rather intriguing story (via Bruce Schneier).



"Prior to World War II, Abraham Wald was a rising mathematician in Europe. Unable to obtain an academic research position in Austria due to his Jewish heritage, Wald eventually made his way to the U.S. to become one of the most important statisticians of the 20th century.

"One of Wald’s most prominent works was produced for the U.S. government’s World War II-era Statistical Resource Group. The project examined aircraft that had returned from their combat missions and the locations of armor on the planes. Placement was, of course, no trivial matter. Misplaced armor would result in a negatively balanced, heavier and less maneuverable plane, not to mention a waste of precious wartime resources.

"Tasked with the overall goal of minimizing Allied aircraft losses by placing additional armor in strategic locations on the plane, Wald challenged the natural instincts of military commanders. Conventional wisdom suggested that the planes’ survival rates might benefit from additional armor placed in the areas that suffered the highest volume of direct hits. But Wald found that was not the case.

"Leveraging data stemming from his examinations of planes returning from combat, Wald made a critical recommendation based on the observation of what was not actually visible: He claimed it was more important to place armor on the areas of the plane without combat damage (e.g., bullet holes) than to place armor on the damaged areas. Any combat damage on returning planes, Wald contended, represented areas of the plane that could withstand damage, since the plane had returned to base.

"Wald reasoned that those planes that were actually hit in the undamaged areas he observed would not have been able to return. Hence, those undamaged areas constituted key areas to protect. A plane damaged in said areas would not have survived and thus would not have even been observed in the sample. Therefore, it would be logical to place armor around the cockpit and engines, areas observed as sustaining less damage than a bullet-riddled fuselage.

"The complex statistical research involved in these and Wald’s related findings led to untold numbers of airplane crews being saved, not only in World War II, but in future conflicts as well."
---
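The survivorship trap in that story is easy to reproduce with a toy simulation. To be clear, everything below is invented for illustration (the section names, hit counts and survival probabilities); Wald's actual analysis was far more sophisticated.

    import random

    random.seed(1)

    # Invented survival chance per hit: engine and cockpit hits are far more
    # likely to bring a plane down than fuselage or wing hits.
    SURVIVAL = {"fuselage": 0.95, "wings": 0.90, "engine": 0.40, "cockpit": 0.35}

    observed = {s: 0 for s in SURVIVAL}   # hits seen on planes that made it back
    actual = {s: 0 for s in SURVIVAL}     # hits across the whole fleet

    for _ in range(10_000):
        hits = random.choices(list(SURVIVAL), k=random.randint(1, 4))
        survived = all(random.random() < SURVIVAL[s] for s in hits)
        for s in hits:
            actual[s] += 1
            if survived:
                observed[s] += 1

    for s in SURVIVAL:
        print(f"{s:8s}: {observed[s]:5d} hits observed, {actual[s]:5d} hits in reality")

Counting holes only on the returners makes the fuselage look like the dangerous place to be hit; the engine and cockpit hits are missing from the sample precisely because those planes never came back - which is Wald's point.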

As designers we always have a theory of our proposed artefact in its intended environment. Sometimes we capture that theory in a formal specification; sometimes it's implicit in the examples we feed to an artificial neural net; frequently it's some fuzzy understanding incorporated into a plain-language requirements document plus some test data.

In any event, the final engineered artefact embodies a theory - the theory of the environment in which it works correctly. That environment is often the real world, and here we hit a problem: the real world is not a precisely-specified closed system*. Inevitably the artefact will encounter an event outside its design envelope - and then it will fail.
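A trivial illustration of an artefact's embedded theory (the scenario is invented): a heating controller whose implicit theory is that temperature readings arrive in degrees Celsius. The moment the environment violates that assumption, the artefact fails without even noticing.

    def cabin_heating(reading: float) -> str:
        """Toy controller whose implicit theory is: readings are in degrees Celsius."""
        return "heat" if reading < 18.0 else "idle"

    print(cabin_heating(10.0))  # within the theory: 10 degrees C, heating comes on
    print(cabin_heating(50.0))  # a Fahrenheit sensor slips in (50 F is 10 C):
                                # the controller says "idle" and the cabin stays cold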

A good example of this is driving. Here, you are the artefact. Initially, structured lessons teach you how to control the car and the tactics for navigating the streets safely.

As you gain experience, the anomalous events you encounter become statistically rarer. If you are lucky, the mistakes they provoke will not be too serious. You update your protocol and become a better driver. But you will never be perfect.

Driving is an open system. There are (porous) boundaries around the theory of driving, but as all experienced drivers know, that theory incorporates a great deal of real-world social knowledge - it's more than seeing the white lines in the rain.**

---

When we classify a human social role as routine, we're saying that the wider system into which the role is enrolled is effectively closed and can be pre-specified. No real system is truly closed, so we always provide an escalation route to a competent (i.e. better-informed) authority. For truly routine roles, we don't expect that escalation to occur too frequently, or to be problematic when it does.

Bruce Schneier's excellent article is about countering cyber-attacks. This is far from routine: the adversary is using intelligence, novel tools and unpatched vulnerabilities to get you. That's pretty much the definition of an open system. Schneier describes the problem like this:
"You can only automate what you're certain about, and [...] when an uncertain process is automated, the results can be dangerous."
The right answer is to use automated systems within the manageably closed subsystems (like antivirus routines), under the broader oversight of a computer-augmented human response team.
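As a sketch of that division of labour (the alert fields, thresholds and responses below are all invented for illustration): automate the closed, certain cases and escalate the uncertain remainder to the human team.

    from dataclasses import dataclass

    @dataclass
    class Alert:
        source: str
        signature_match: bool   # a known-bad signature fired
        confidence: float       # classifier's threat confidence, 0.0 to 1.0

    def triage(alert: Alert) -> str:
        """Automate only what we're certain about; escalate the rest."""
        if alert.signature_match:
            return "auto-quarantine"            # closed subsystem: known threat, known response
        if alert.confidence < 0.05:
            return "auto-dismiss"               # confidently benign
        return "escalate to human analysts"     # the open-system remainder

    for a in (Alert("mail-gw", True, 0.99),
              Alert("endpoint-17", False, 0.02),
              Alert("vpn-edge", False, 0.60)):
        print(a.source, "->", triage(a))

The point is not the particular thresholds; it is that the automated branch only ever handles cases the designers could close off in advance, while everything else lands with people who can bring open-world judgement to bear.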

Perhaps one day we will have human-socialised AIs with the intuitions, general knowledge and motivational insight that humans possess. Then we can hand things over to them, confident they will make no more mistakes than we would in these incredibly challenging, not-sufficiently-closed systems.

---

*   Arguably it is, from the point of view of modern physics - but that doesn't buy you anything.

** Here's a review of the implications for driverless cars.
