Sunday, November 26, 2017

"Surfing Uncertainty" on Autism (and Schizophrenia)

Scott Alexander got pretty excited back in September about Andy Clark's "Surfing Uncertainty" (in this post) - but that's understandable: he's a psychiatrist, and Clark's model has some insightful things to say about both autism (Asperger's syndrome) and schizophrenia.

Amazon link

I read the book and found Clark's approach (that biological agents, aka 'animals', cognitively function through a combination of top-down model-based prediction and bottom-up sensor-based verification) highly plausible, though not that new. Still, he pushes the model quite a way - the details are instructive.

My main problem with the text is that the proposed model is really an architectural/engineering one, yet Clark is a philosopher. He writes in that over-abstract, bloated and padded style which people like Daniel Dennett have made so famous.

Somewhere in there, good ideas are trying to extricate themselves from the gloop.


Anyway, here's how Scott Alexander, channelling Andy Clark, talks about Autism.
"Various research in the PP [Predictive Processing] tradition has coalesced around the idea of autism as an unusually high reliance on bottom-up rather than top-down information, leading to “weak central coherence” and constant surprisal as the sensory data fails to fall within pathologically narrow confidence intervals.

Autistic people classically can’t stand tags on clothing – they find them too scratchy and annoying. Remember the example from Part III about how you successfully predicted away the feeling of the shirt on your back, and so manage never to think about it when you’re trying to concentrate on more important things? Autistic people can’t do that as well.

Even though they have a layer in their brain predicting “will continue to feel shirt”, the prediction is too precise; it predicts that next second, the shirt will produce exactly the same pattern of sensations it does now. But realistically as you move around or catch passing breezes the shirt will change ever so slightly – at which point autistic people’s brains will send alarms all the way up to consciousness, and they’ll perceive it as “my shirt is annoying”.

Or consider the classic autistic demand for routine, and misery as soon as the routine is disrupted. Because their brains can only make very precise predictions, the slightest disruption to routine registers as strong surprisal, strong prediction failure, and “oh no, all of my models have failed, nothing is true, anything is possible!”

Compare to a neurotypical person in the same situation, who would just relax their confidence intervals a little bit and say “Okay, this is basically 99% like a normal day, whatever”. It would take something genuinely unpredictable – like being thrown on an unexplored continent or something – to give these people the same feeling of surprise and unpredictability."
As an AQ high-scorer, I relate to this. In many a social situation I'm walking on eggshells, never quite knowing how people will respond. I'll say something which seems amusing within my own private model of the subject of discourse, only to be met with incomprehension - or worse, consternation - as my poor unconscious predictive model of other people's likely response fails again.
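The "pathologically narrow confidence intervals" idea can be put in toy numerical terms. This is my own sketch, not Clark's formalism: assume the brain's prediction is a Gaussian over the incoming sensation, and surprisal is the negative log probability density of what actually arrives. The same small deviation (the shirt shifting in a breeze) is then nearly unnoticeable under a relaxed prediction but screams for attention under a precise one.

```python
import math

def surprisal(x, mu, sigma):
    """Surprisal (-log probability density) of observation x under a
    Gaussian prediction with mean mu and standard deviation sigma."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

# The shirt's sensation drifts slightly from the predicted value
# (arbitrary units of "scratchiness").
predicted, observed = 0.0, 0.3

narrow = surprisal(observed, predicted, sigma=0.1)   # pathologically precise prediction
relaxed = surprisal(observed, predicted, sigma=1.0)  # relaxed, neurotypical-style tolerance

print(f"narrow prediction, surprisal:  {narrow:.2f}")
print(f"relaxed prediction, surprisal: {relaxed:.2f}")
```

With these numbers the narrow prediction yields roughly three times the surprisal of the relaxed one for an identical sensation - the "alarm sent all the way up to consciousness" in miniature.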


The very next section (11) summarises the story on Schizophrenia:
"Schizophrenia. Converging lines of research suggest this also involves weak priors, apparently at a different level to autism and with different results after various compensatory mechanisms have had their chance to kick in.

One especially interesting study asked neurotypicals and schizophrenics to follow a moving light, much like the airplane video in Part III above. When the light moved in a predictable pattern, the neurotypicals were much better at tracking it; when it was a deliberately perverse video specifically designed to frustrate expectations, the schizophrenics actually did better.

This suggests that neurotypicals were guided by correct top-down priors about where the light would be going; schizophrenics had very weak priors and so weren’t really guided very well, but also didn’t screw up when the light did something unpredictable. ...

The exact route from this sort of thing to schizophrenia is really complicated, and anyone interested should check out Section 2.12 and the whole of Chapter 7 from the book. But the basic story is that it creates waves of anomalous prediction error and surprisal, leading to the so-called “delusions of significance” where schizophrenics believe that eg the fact that someone is wearing a hat is some sort of incredibly important cosmic message.

Schizophrenics’ brains try to produce hypotheses that explain all of these prediction errors and reduce surprise – which is impossible, because the prediction errors are random. This results in incredibly weird hypotheses, and eventually in schizophrenic brains being willing to ignore the bottom-up stream entirely – hence hallucinations.

All this is treated with antipsychotics, which antagonize dopamine, which – remember – represents confidence level. So basically the medication is telling the brain “YOU CAN IGNORE ALL THIS PREDICTION ERROR, EVERYTHING YOU’RE PERCEIVING IS TOTALLY GARBAGE SPURIOUS DATA” – which turns out to be exactly the message it needs to hear.

An interesting corollary of all this – because all of schizophrenics’ predictive models are so screwy, they lose the ability to use the “adjust away the consequences of your own actions” hack discussed in Part 5 of this section.

That means their own actions don’t get predicted out, and seem like the actions of a foreign agent. This is why they get so-called “delusions of agency”, like “the government beamed that thought into my brain” or “aliens caused my arm to move just now”. And in case you were wondering – yes, schizophrenics can tickle themselves."
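The moving-light study lends itself to a toy simulation - again my own sketch, not the study's actual methodology. A tracker blends a top-down constant-velocity prediction with a noisy bottom-up observation; the blend weight plays the role of prior strength. On smoothly predictable motion the strong-prior tracker wins (its model filters out sensory noise), while on perversely jumpy motion the weak-prior tracker wins (it has no model to be wrong-footed by).

```python
import math
import random

def track(signal, prior_weight, noise=0.3, seed=0):
    """One-dimensional tracking: at each step, blend a top-down
    constant-velocity prediction with a noisy bottom-up observation.
    prior_weight is how much the tracker trusts its own prediction.
    Returns mean squared tracking error."""
    rng = random.Random(seed)
    prev, est = signal[0], signal[0]
    sq_err = 0.0
    for true_pos in signal[1:]:
        model_pred = est + (est - prev)           # top-down: "keep moving as before"
        obs = true_pos + rng.gauss(0, noise)      # bottom-up: noisy sensory sample
        prev, est = est, prior_weight * model_pred + (1 - prior_weight) * obs
        sq_err += (est - true_pos) ** 2
    return sq_err / (len(signal) - 1)

smooth = [math.sin(t / 10) for t in range(400)]   # predictable motion

rng = random.Random(1)
jumpy = [0.0]
for _ in range(399):                              # perverse, unpredictable motion
    jumpy.append(jumpy[-1] if rng.random() < 0.7 else rng.uniform(-1, 1))

for name, sig in (("smooth", smooth), ("jumpy", jumpy)):
    print(f"{name}: strong-prior MSE={track(sig, 0.8):.3f}, "
          f"weak-prior MSE={track(sig, 0.1):.3f}")
```

Neither tracker is "better" overall - which one wins depends entirely on whether the world cooperates with its priors, which is the study's point.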
My overall take-home message from this book was that the tabula rasa, blank-slate paradigms of so much contemporary AI may suffice for crafting smart and powerful classificatory tools, but they won't hack it when we try to build socially-competent agents. In facial recognition and playing Go, machines are already superhuman; as chatbots, not so much.
