Thursday, February 15, 2018

Disrupting über-surveillance: a five point plan

Yesterday I wrote about the potential for new sensor, effector and AI technologies to create a totalitarian surveillance state (after Charles Stross). What could be done to defy such measures?

At the extremes there is, plainly, no solution. If you're thrown naked into a hardened cell and they junk the key, never to visit you again .. you're going nowhere.

But real-world security systems are not like that. They're constructed of real, fallible and resource-limited components. Think of the surveillance system as a security agent, as shown below.

A functional diagram of the surveillance-enforcement system - with countermeasures

The surveillance systems are top right: cameras, microphones, pressure pads, .. whatever.

Sensor data is interpreted into symbolic form via low-level primary processing. Interpretation can be informed by higher-level hypotheses fed back down - as in the predictive processing model.

In the context of the system's current beliefs and goals, the perceptual world-view is acted upon by a planning system to determine the appropriate response. This is the point where humans are most likely to be in the loop.

Finally, moving down to the mid-bottom of the diagram, resources are chosen, marshalled and tasked to execute the operational response: "stop and search", "arrest", "kill" ..
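To make that flow concrete, here is a minimal sketch of the loop in Python. Every name, stub and data structure in it is a hypothetical illustration mirroring the diagram - sense, interpret, plan, execute - not any real system's API.

```python
# Minimal sketch of the sense -> interpret -> plan -> execute loop from the
# diagram. All names, stubs and data structures are hypothetical illustrations.

from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class WorldView:
    """Symbolic interpretation of the raw sensor feeds."""
    detections: List[str] = field(default_factory=list)


def sense(sensor_feeds: List[str]) -> List[str]:
    """Collect raw readings (stubbed here as plain strings)."""
    return list(sensor_feeds)


def interpret(raw: List[str], hypotheses: Set[str]) -> WorldView:
    """Primary processing into symbolic form. Higher-level hypotheses bias
    what gets recognised, loosely in the spirit of predictive processing."""
    detections = [r for r in raw if r in hypotheses or "person" in r]
    return WorldView(detections=detections)


def plan(view: WorldView, watchlist: Set[str]) -> dict:
    """Choose a response given current beliefs/goals - the stage where human
    operators are most likely to sit in the loop."""
    for d in view.detections:
        if d in watchlist:
            return {"action": "stop and search", "target": d}
    return {"action": "monitor", "target": None}


def execute(response: dict) -> None:
    """Marshal and task resources to carry out the operational response."""
    print(f"dispatching: {response['action']} -> {response['target']}")


if __name__ == "__main__":
    raw = sense(["person at gate 3", "vehicle at barrier"])
    view = interpret(raw, hypotheses={"person at gate 3"})
    execute(plan(view, watchlist={"person at gate 3"}))
```

Each of the attack points listed below maps onto one of these stages, or onto the beliefs and goals they consult.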

Each of these modules offers a possible attack point from the adversary's viewpoint.

  1. Sensors can be physically attacked - cameras can be painted over or depowered.

  2. Interpretation can be confused: some AI vision classifiers have been fooled by adversarially patterned spectacles (see the sketch after this list); there are also opportunities with disguises and bogus roles.

  3. Planning and resource-assembly can be disrupted by physical attacks and/or resource-intensive diversions. Or directly by an insider.

  4. The beliefs and goals of the system can be subverted: from the outside by subtle misdirection (eg use of an apparently-benign front organisation); or internally by hacking.

  5. The final execution stage can be met with misdirection, attacks or diversions.
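To illustrate point 2, here is a sketch of the standard fast gradient sign method (FGSM) for perturbing an image so that a classifier misreads it. It assumes an arbitrary PyTorch image classifier and is only indicative of the class of attack; patterned spectacles and adversarial patches are physical-world analogues of the same idea.

```python
# Sketch of a fast gradient sign (FGSM) adversarial perturbation - the digital
# cousin of the patterned-spectacles attack on vision classifiers.
# Assumes an arbitrary PyTorch image classifier; illustrative only.

import torch
import torch.nn.functional as F


def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged in the direction that most increases
    the classifier's loss, so the model is likely to misread it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The interesting part for the attacker is not the maths but the consequence: the symbolic world-view handed to the planner no longer matches what is physically in front of the camera.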

As the surveillance and enforcement systems get ever more ubiquitous and sophisticated, the rigidities of AI turn into vulnerabilities. Absent AGI (and it will be absent), security systems are baffled by human subtleties while human overseers flounder in alert-trivia - as the back-of-envelope calculation below suggests.
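The alert-trivia problem is just base-rate arithmetic. The numbers below are purely illustrative assumptions, but they show how even a very accurate classifier drowns its operators in false alarms when genuine targets are rare.

```python
# Back-of-envelope base-rate arithmetic behind "alert-trivia".
# All numbers are illustrative assumptions, not claims from the post.

population = 10_000_000        # people passing the sensors
true_targets = 100             # genuine persons of interest among them
sensitivity = 0.99             # fraction of true targets correctly flagged
false_positive_rate = 0.001    # fraction of innocents wrongly flagged

true_alerts = true_targets * sensitivity
false_alerts = (population - true_targets) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"alerts raised: {true_alerts + false_alerts:,.0f}")   # ~10,099
print(f"fraction that are genuine: {precision:.1%}")         # ~1.0%
```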

Adversary cleverness, preparation and resources on the ground make for a more even contest than one might imagine.

---

You can usefully think of the security state as a vast, distributed, rather rigid and not-terribly-bright personality. An ESTJ, most likely. Then consider how you might fool or con such a person.

Most of the time this will work, but beware: eventually - if you are successful - they will put someone smart and imaginative on your case.

The security state can change its personality on a sixpence if sufficiently provoked.
