Monday, February 06, 2017

A battlespace AI



When contemplating AI-controlled cruise missiles, as in yesterday's post, there is a tendency to hear 'AI' and think magic, or imagine we're in the foothills of the Butlerian Jihad.

Still, even without access to classified information, there's a lot we can say about this kind of battlespace AI just by comparing it with technologies we already understand.

An anti-carrier cruise missile is basically a suicide drone. Its mission is threefold:
  • navigate to the target
  • identify the best choice of target to crash into (and blow up)
  • cope with an extraordinarily hostile environment.
All three of these mission priorities are amenable to current artificial neural net technology:
  • navigation can leverage autonomous vehicle/reconnaissance-drone tech
  • target identification is not dissimilar to existing object/facial recognition tasks (a toy sketch follows this list)
  • the hostile environment can be addressed by massive simulation-training.
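
To make the second bullet concrete, here is a minimal sketch of the kind of convolutional classifier behind modern object recognition, written in PyTorch. Everything specific here (the class name, the single IR-style channel, the 64x64 input, the four target categories) is an illustrative assumption, not a description of any real weapon system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetClassifier(nn.Module):
    """Toy convolutional net: classify a sensor image patch as one of
    carrier / escort / decoy / clutter. All shapes are illustrative."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5)   # 1-channel (e.g. IR) input
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)
        self.fc1 = nn.Linear(32 * 13 * 13, 128)
        self.fc2 = nn.Linear(128, n_classes)

    def forward(self, x):                              # x: (batch, 1, 64, 64)
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)     # -> (batch, 16, 30, 30)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)     # -> (batch, 32, 13, 13)
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)                             # raw class logits

logits = TargetClassifier()(torch.randn(1, 1, 64, 64))
```

Structurally, terminal target identification is the same supervised classification problem the civilian world has already industrialised; the hard parts are the training data and the countermeasures, not the architecture.
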
I imagine AI prototypes are flying drones through simulated attack scenarios right now. Steve Hsu has more information on this.

---

Three interesting issues arise from the specifics of a combat environment.

1. Target identification/selection

Unlike targets in relatively benign civilian environments, enemy carriers and other ships will actively work to make location and targeting difficult. There will be smoke, perhaps battle damage, explosions, and defensive measures such as laser dazzle, jamming, false targets and chaff.

The solution appears to involve multiple sensing platforms illuminating or imaging the target space: satellites; aircraft- or drone-borne loitering radars; multiple attack weapons sharing sensor data over a local net.

Sensor data fusion is a complex but well-researched topic amenable to AI (satirised here).
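
At its very simplest, fusion can be illustrated as inverse-variance weighting of independent position fixes, a standard minimum-variance combination rule. The platforms, coordinates and variances below are invented for illustration; a real system would add tracking filters, time alignment, outlier rejection and much else.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent position estimates:
    more confident sensors (smaller variance) get more weight."""
    w = 1.0 / np.asarray(variances)
    fused = (w[:, None] * np.asarray(estimates)).sum(axis=0) / w.sum()
    fused_variance = 1.0 / w.sum()
    return fused, fused_variance

# Three platforms report the same target with differing confidence:
# satellite (coarse), loitering drone radar (better), sibling missile (best).
est = [(102.0, 48.0), (100.5, 50.2), (100.1, 49.9)]   # x, y in km (invented)
var = [25.0, 4.0, 1.0]                                # per-platform variance
print(fuse(est, var))
```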

2. The hostile environment

The carrier group will engage the incoming missiles with everything it has: anti-missile missiles, guns, lasers. Who can devise an optimal set of tactics for surviving such an assault?

An AI system which has trained on millions of simulations.

It is somewhat similar to AlphaGo, which honed its play over millions of simulated games.
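
For a mechanical feel of what 'trained on millions of simulations' means, here is a toy tabular Q-learning loop against a one-line, made-up engagement model. The state/action discretisation, the survival odds and the reward are all hypothetical stand-ins; the real thing would be deep reinforcement learning against a high-fidelity simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretisation: 16 threat-geometry buckets, 4 manoeuvres.
N_STATES, N_ACTIONS = 16, 4
Q = np.zeros((N_STATES, N_ACTIONS))

def simulate_step(state, action):
    """One-line stand-in for an engagement model: survival odds improve
    when the manoeuvre happens to suit the threat geometry."""
    survived = rng.random() < 0.5 + 0.4 * (action == state % N_ACTIONS)
    reward = 1.0 if survived else -1.0
    return int(rng.integers(N_STATES)), reward, not survived

alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration
for episode in range(100_000):         # stand-in for "millions of simulations"
    s, done = int(rng.integers(N_STATES)), False
    while not done:
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = simulate_step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2
```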

What we know of AI adversaries is that they often exhibit brilliant but quite counter-intuitive behaviour. That's mostly a plus in the last few kilometres.

3. Autonomy

There's a stupid point here, and an intelligent one.

The stupid argument demands that AI weapons systems have 'no autonomy' - that there will always be a human in the loop.

So ... like with mines, then?

Plainly, if the carrier has already been sunk by the time the cruise missile arrives, the AI will make a fast call on the optimal secondary target. There will be no human in the loop - in fact, real-time communications will undoubtedly be 'very difficult'.
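
At bottom, that fast call is a utility-maximising choice. A deliberately crude sketch, with entirely invented target values and reach probabilities:

```python
def pick_secondary(targets):
    """Hypothetical fallback rule: strike the surviving target whose value,
    discounted by the estimated chance of reaching it, is highest."""
    return max(targets, key=lambda t: t["value"] * t["p_reach"])

print(pick_secondary([
    {"name": "destroyer",   "value": 0.6, "p_reach": 0.7},   # 0.42
    {"name": "supply ship", "value": 0.3, "p_reach": 0.9},   # 0.27
]))
```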

However, at the larger scale of strategic autonomy we do need to worry about the unpredictability and opacity of current neural net technology. If, for some obscure tactical reason, an AI weapon concludes that it needs to attack a friendly vessel, then - absent a superhuman common sense (right!) - we have a genuine problem.

An emerging research area is the design of human-machine interfaces which hold explicit, communicable and actionable knowledge about the operations of their powerful but opaque neural net subsystems.
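
One ingredient of such interfaces is explanation extraction, for example gradient saliency: asking which input pixels most moved the network's score for a decision. A minimal sketch (it would work with the toy classifier above):

```python
import torch

def saliency(model, image, target_class):
    """Gradient saliency: per-pixel influence on the model's score for
    target_class. A simple, well-known way to surface *some* explanation
    from an otherwise opaque net."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=1).values   # (batch, H, W) importance map

# e.g. with the toy classifier sketched earlier:
# heatmap = saliency(TargetClassifier(), torch.randn(1, 1, 64, 64), 0)
```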

Humans have one of those too: it's called consciousness.
