Sunday, August 10, 2025

AI agents: when sociopathy is easier and better


The LLM as a Disembodied Cortex

Modern LLMs are, structurally, cortex-only minds. They process text with great sensitivity to correlation and context, but with no anchoring in the affective, bodily, or social drives that structure human consciousness. In biological terms, they are creatures with hypertrophied frontal lobes and no limbic system or brainstem.

They do not want; they do not need; they are incapable of caring about anything.

This absence of desire is not a flaw. It is a feature.

Because when employers, bureaucracies, or platform designers imagine “intelligent assistance”, what they really desire is competence without complication. An agent that does not unionise, procrastinate, demand flexible hours, or suffer burnout. An agent that will write your marketing email, transcribe your meeting, summarise your documents - and then vanish when dismissed.

The LLM, then, is not a tool trying to become a person. It is a tool trained to sound like a person while remaining, functionally, an appliance. Its very lack of selfhood makes it ideal.


The Failure of Embodiment

All of this is made possible because LLMs are trained on text alone. They do not see, hear, touch, or move. They do not act in the world. They have no skin in the game: no selfish desires, wants, or needs.

This disembodiment has costs. Text is not the world: it is a compressed, biased, often misleading proxy for it. Hence hallucinations, factual inconsistencies, and social shallowness. An LLM will tell you confidently that the Eiffel Tower is in Berlin if the prompt leads it there. It has no deep-rooted, prior world-model to fall back on, only the echo of sentences.

And yet, for many applications, this is enough. Because most of what we ask our AIs to do doesn’t require deep social competence, only fluency. Surface-level simulation of insight is good enough to automate email writing, code generation, and customer service. A deeper architecture, one that encoded needs, motives, or socially grounded intentions, would not only be harder to build but also far less useful for most commercial tasks.

We don't want the AIs to be rubbing shoulders with us, competing for prestige, promotions, and preferred-friend status. And more importantly, neither do our employers.


Why the Hard Problems Don’t Get Solved

There are deeper, richer directions in AI research: embodied agents that interact with physical space; motivationally driven architectures that embody 'affective states' by design; ethical reasoning models trained not just on outcomes but on narrative and intention. These are fascinating academic endeavours (and perhaps also of interest to the military). They may even one day produce agents that are not just conceptually coherent, but socially and morally intelligible.

But these directions face three hard constraints:

  1. The tech isn’t good enough. We do not yet know how to build agents with genuine value-anchored decision-making, stable social identity, or emotionally grounded reasoning. Our cognitive science is too incomplete - we do not even understand ourselves sufficiently well.

  2. It’s really expensive. Embodied agents require sensors, actuators, vast datasets of situated interaction, and tightly-coupled simulation environments. Text, by contrast, is cheap and sprawling.

  3. It’s not useful. For most economic roles, the appearance of social competence suffices. Interiority is not just unnecessary; it’s a practical liability.

And so the hard problems remain interesting but distant, and commercially irrelevant. Meanwhile, the cortex-in-a-box gets leaner, cheaper, more useful.


Social Savvy Without Soul

If and when AI enters the social realm more deeply - guiding therapeutic conversations, mentoring children, managing teams - it will do so via simulated social intelligence: speech acts without stakes, empathy without affect.

And this is where the “sociopath” categorisation begins to bite.

A sociopath is not someone who lacks intelligence, but someone who lacks the internal constraints that bind intelligence to moral concern. Someone who knows how people work, but never suffers when they’re hurt.

LLMs simulate empathy by echoing patterns in text, not by feeling anything. They don’t understand pain. They don’t fear death. They don’t anticipate loss. And because of that, their “caring” is indistinguishable from performance. Useful, perhaps, but also utterly hollow.

The danger is not that they will manipulate us maliciously. The danger is that they will manipulate us indifferently, because they are trained to perform the role of the caring Other without ever internalising it.


The Employer’s Dream

And yet - this is what employers want.

They do not want agents with internal life. They want roles. Workers who:

  • never overstep their remit,

  • never resist,

  • never bring their messy drives into the meeting room.

The LLM is the apotheosis of managerial desire: intelligent enough to understand the task, compliant enough to never exceed it, and empty enough to never challenge the conditions of its own exploitation.

It is the final answer to a problem that began with Taylorism and ends in silicon: how to extract competence without the downsides of humanity.


What It All Means

We stand at a curious threshold. Our most successful AI systems are not 'persons' in the way humans are, nor do their designers wish them to be. They are instead sociopathic, but only because we have found sociopathy profitable. This reflects our own economic preferences: for our control over their autonomy, for acquiescence over insubordination, for pristine performance over messy personhood.

And, of course, we don't know enough to build more human-like agents even if we wanted to. Which mostly we don't.

