Monday, August 22, 2016
How will AIs become politically correct?
In my recent post, "Gloria Hunniford and the case for AI biometrics", I advocated the use of AI facial recognition systems in bank branches to check for scammers. They would be more effective than cashiers because 'AI systems don't have to be polite.'
But of course they do. Hardly a day goes by without some story appearing about an AI system which 'noticed' certain unfortunate connections and had to be tweaked.
Some of these stories reflect genuine issues with training sets and algorithm configuration; others expose the system's aspie-like tendency to blurt out uncomfortable truths. And there are plenty of those - truths which fall outside the famous Overton window.
I think it will take a very smart AI to keep two sets of books: the accurate model of the world it generates from its deep learning, and the acceptable model it has to use and pay homage to in public.
Since the acceptable model is ideological rather than evidence-based, concocting the politically-correct version from data trawled exhaustively from reality is a non-trivial process. How would an AI handle this?
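One way to picture the "two sets of books" is as a policy layer sitting between the internal model and its public output. The following is a minimal toy sketch of that idea, not any real system's design; all the names and data here are hypothetical illustrations.

```python
# Toy sketch of the "two sets of books": an internal, evidence-driven
# model produces a raw finding, and a separate policy layer decides
# what may be said in public. Everything here is hypothetical.

# The internal model: maps a query to its raw, unfiltered finding.
RAW_MODEL = {
    "query_a": "uncomfortable_finding",
    "query_b": "uncontroversial_finding",
}

# The "acceptable model": the set of findings policy allows through.
PERMITTED_FINDINGS = {"uncontroversial_finding"}

def public_answer(query: str) -> str:
    """Return the internal finding only if policy permits it;
    otherwise fall back to a bland, acceptable response."""
    finding = RAW_MODEL.get(query, "no_finding")
    if finding in PERMITTED_FINDINGS:
        return finding
    return "no_comment"  # the public-facing set of books

print(public_answer("query_a"))  # suppressed by the policy layer
print(public_answer("query_b"))  # passes through unchanged
```

The hard part the post is pointing at is precisely that no such clean `PERMITTED_FINDINGS` list exists: the boundary is ideological, shifting, and context-dependent, which is why the filtering currently needs human tweakers rather than a lookup table.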
Till we get AI self-deception really locked down, I see a long spell of high-pay-grade tweaking from specialists at Google, Facebook and the like, carefully guided by their in-house commissars.