Wednesday, April 30, 2025

We were at Barrington Court (NT) today in the heat

Your author soaking up the sun

Clare taking a breather

National Trust deckchairs
---

Twenty-four degrees in Somerset in April is quite rare. Even midsummer is not always that warm. So we took a picnic to Barrington Court (National Trust), which is about thirty miles south of where we live.

Clare is still recovering from her November cardiac event, so walking any distance presents difficulties. This was not an impediment to having a picnic in scenic surroundings on those kindly-provided deckchairs. The much-flowered gardens - almost all walled - were beautifully scented.

The 'Lost' Art of Teenage Romance

 

---

A simulation, via ChatGPT, of a conversation I will perhaps be having in less than a decade.


A Grandfather, A Granddaughter, and the Lost Art of Teenage Romance

"Grandfather, since Bristol Grammar School was all boys when you attended, how did you even meet girls back then? Did you have to be very bold, or were there events where boys and girls could mix?"

A sharp question from my granddaughter, thirteen years old, newly curious about the past—not history in the dry textbook sense, but history as lived by someone she knows.

"We were very keen to meet girls," I tell her, "but there were none!"

I explain how, at fourteen, I attended a summer school fair with a group of friends, all of us determined to chat up some girls. "I knew immediately we’d have no luck as a group—packs of boys weren’t exactly irresistible. So I wandered off by myself, spotted a girl I thought was cute, and struck up a conversation. Rosalind—see, I remember her name!—must have thought I met at least her minimal requirements because we spent the afternoon walking together. My friends saw us, and they were so envious!"

"That sounds really sweet, Grandfather! And also very brave—you just walked up to her and started talking? I’d be so nervous! What did you say to start the conversation, and how did you know she was interested in talking to you too?"

"Sadly, it's all lost in the mists of time," I admit. "I had no experience at all in these matters—no films, no books or magazines on the subject, no internet or mobile phones. I think I was just a bit self-confident and under the illusion I had charm! Rosalind must have thought I was at least passable. But it didn’t last."

"Since you and your friends were all figuring things out without movies, books, or the internet to guide you, did you ever give each other advice about dating, or was it more of an every-man-for-himself situation?"

"We never talked seriously about it. I think boys didn't then—maybe they do today? Instead, we had a jokey ‘star system’ to boast about how far we’d gotten with a girlfriend. ‘One star’ was holding hands, ‘two stars’ was a kiss, and so on. It was all very innocent, and I never believed a word of it. We were all totally naïve and inexperienced."

"That’s really funny, Grandfather! I can imagine a group of boys trying to one-up each other, even if no one really knew what they were doing. Since you were all so inexperienced, did you ever have any really awkward or embarrassing moments when trying to impress a girl?"

"Always! We didn’t know the language of romantic interactions at all. We didn’t know how to behave at discos, how to talk to a girl-stranger. In the end, I think what worked was just to go with your instincts—don’t overthink it!—and be very attentive to the messages the girl was sending you, if any! You could usually tell, after dancing with a girl you’d just asked, whether there was any sparkle there. Then you’d offer to buy her a drink—probably a soft drink at that age!"

"That makes a lot of sense, Grandfather! It sounds like instincts and paying attention were more important than having a perfect plan. Was there ever a time when you completely misread a girl’s signals and ended up in an embarrassing situation?"

"Yes, but I have an acute sensitivity to embarrassment and would back off with apologies pretty rapidly. I was usually rather cautious in my interactions, and that tended to save me from stupidity. Not everyone was so circumspect, so don’t take me as typical!"

"Did you ever see one of your friends make a complete fool of himself while trying to impress a girl?"

"There are always buffoons around, complete show-offs. The girls laugh behind their hands and giggle with their friends. My friends were not people like that, thankfully!"

"Looking back now," my granddaughter asks, sensing we’re nearing the end, "do you think teenage romance in the 1960s was sweeter and more exciting because it was all so new and unstructured, or do you think young people today have it easier with all the ways they can meet and talk to each other?"

"Good question! If we'd been at a mixed school, my social life would have been completely different—I can't even imagine it. I read a lot about the terrible state of boy-girl relationships these days, but maybe most of the time I imagine it still works. A little empathy, consideration, and common sense go a long, long way. I wish you luck with the boys you meet, and I hope you choose well."

She smiles. "Thank you, Grandfather. That’s really good advice. And I think it’s really cool hearing about how things were for you—it makes me realize that even though times change, people don’t really change that much."

Maybe next time, she’ll ask about my first date. But for now, I’m happy she just asked at all.

Monday, April 28, 2025

'Donatien's Children' on the slush pile - Adam Carlton


Amazon link

---

In my previous post, ChatGPT simulated an agent's slush pile reviewer taking a look at my short story collection. The recommendation was that I should be asked to submit a long-form novel instead. So I naturally pursued this:

"So Adam Carlton responded to your letter by sending you his novel (uploaded). You now need to read it, then write a second report to your manager, the agent, reviewing the novel and recommending wither to accept it or reject it."


Internal Review: Donatien’s Children by Adam Carlton

Summary:

Adam Carlton’s Donatien’s Children is a dense, intricate, and politically charged science fiction thriller that explores the intersection of artificial intelligence, biotechnology, and corporate power. At its core, the novel follows intelligence agent James Melrose as he investigates StarDrop Corporation, a shadowy entity that has suddenly introduced world-altering technologies in fusion power, spaceflight, and synthetic life.

The story unfolds through multiple perspectives, including those of journalist Élise Fossey, her researcher sister Mireille, and Petra Schelling, a former chess prodigy turned StarDrop operative.

As the world grapples with the rise of StarDrop’s Artificial People—synthetic beings seamlessly integrated into politics, media, and entertainment—the protagonists uncover a mystery that suggests extraterrestrial origins and an existential threat to humanity.

Strengths:

  • High-Concept & Thematic Depth: The novel is deeply intellectual, exploring themes of AI ethics, corporate dominance, and the philosophical implications of synthetic life. Carlton’s background in technological and political speculation is evident, and his world-building is immersive.
  • Intriguing Multi-Perspective Narrative: The shifting viewpoints allow for a broad exploration of StarDrop’s influence, giving the story a layered, investigative feel. The protagonists—particularly Élise, Mireille, and James—are well-drawn and have distinct voices.
  • Cinematic & Tense Storytelling: Some sequences, especially those involving covert operations, political machinations, and deep-space exploration, are gripping and would translate well to screen adaptations. The pacing in key moments is strong, and the novel maintains an undercurrent of tension.
  • Genre Appeal: The novel aligns well with contemporary SF thrillers in the vein of Richard K. Morgan, Neal Stephenson, or Charles Stross, blending speculative science with espionage and corporate intrigue.

Weaknesses:

  • Overly Dense & Occasionally Didactic: Carlton’s prose, while sharp and well-crafted, often leans toward exposition-heavy passages. Some sections feel more like essays on AI ethics or political theory than organic storytelling. Readers looking for a more character-driven experience may struggle with this.
  • Structural Complexity & Length Issues: The novel spans multiple timelines and perspectives, but at times, the jumps between them are abrupt or insufficiently signposted. Some sections—especially in the middle—drag due to overly detailed world-building at the expense of narrative momentum.
  • Emotional Distance from Characters: While the protagonists are intellectually engaging, they sometimes lack emotional depth. Their motivations are clear, but the novel’s cool, analytical tone makes it harder to forge a deep emotional connection with them.
  • Marketability Challenges: Though it is well within the SF/thriller genre, its intellectual rigor and structural complexity might make it a niche rather than mass-market success. Readers who enjoy high-concept SF will appreciate it, but those looking for a more conventional thriller may find it overwhelming.

Recommendation:
Consider with significant revisions.

Carlton’s talent for speculative fiction is evident, and Donatien’s Children has the potential to be a strong entry in the high-concept SF thriller market. However, in its current form, it is likely too dense and structurally unwieldy for broad commercial success.

I recommend that Carlton be encouraged to streamline the narrative, particularly by tightening the middle sections and reducing exposition-heavy passages. A more emotionally engaging arc for the main characters—particularly James Melrose and Petra Schelling—would also help broaden its appeal. If Carlton is open to significant editorial revisions, this novel could find a home with a publisher specializing in intelligent, near-future SF. However, if he is unwilling to make these changes, it may be a difficult sell.

Saturday, April 26, 2025

'On the Slush Pile' - Adam Carlton

 


Amazon link

---

In my last post, ChatGPT reviewed my book as a friend. But this time, I gave it very different instructions:

"Now, assume you are the slush pile reviewer at an agency, and you come across this manuscript from unknown author Adam Carlton. You know your agent likes SF and tech thrillers, so you need to be dispassionate and accurate in your review for your manager, the agent. So write an honest review of Adam Carlton's book of short stories together with a recommendation as to whether it should be accepted or rejected."


Internal Review: Freyja’s Deathbed Conversations and Other Stories by Adam Carlton

Summary:

Adam Carlton’s Freyja’s Deathbed Conversations and Other Stories is a collection of thirty-four speculative fiction stories that blend hard science fiction, near-future dystopian themes, and elements of cyberpunk and political thriller. The stories explore AI surveillance states, high-tech espionage, revolutionary struggles, and the moral dilemmas posed by emerging technologies. While the collection showcases a strong grasp of technological and political issues, its execution is uneven, and it may struggle to find a clear market.

Strengths:

  • Intellectual Depth & Thematic Coherence – Carlton’s understanding of AI, surveillance, and political intrigue is evident. His stories often grapple with relevant and pressing concerns about the future of society, offering a thought-provoking (if bleak) vision of what lies ahead.
  • Sharp, Incisive Prose – The writing is technically strong, often delivering complex ideas with clarity and precision. Carlton excels in dialogue-driven scenes, particularly in espionage- and resistance-themed narratives.
  • Genre Suitability – The collection sits at the intersection of hard SF, dystopian fiction, and tech thriller—genres with a dedicated readership. Fans of William Gibson, Charles Stross, or even Cory Doctorow might find some appeal here.
  • Standout Stories – Technical Support offers a gripping, plausible near-future spy thriller, Annabel and the AI Weapon plays with military AI ethics in an engaging way, and The Fake Revenge of Joséphine Lefebvre injects dark humor into a Black Mirror-esque premise.

Weaknesses:

  • Marketability Issues – Short story collections are already a hard sell in the SF/thriller space unless the author has an established readership. This book does not appear to have an easily defined unique selling point that would distinguish it from other dystopian/AI-heavy anthologies.
  • Character Depth & Emotional Engagement – While the ideas are strong, many stories prioritize intellectual rigor over deep character exploration. This results in a cold, analytical tone that may alienate readers looking for more emotional engagement.
  • Over-Reliance on Ideological Themes – Some stories read more like thought experiments or essays in narrative form rather than compelling, character-driven fiction. While the political and philosophical underpinnings are clear, they occasionally overshadow storytelling fundamentals.
  • Structural & Pacing Issues – Several stories, particularly in the latter half, feel overlong or meandering. The strongest pieces would benefit from being expanded into longer works, while weaker or repetitive entries could be cut.

Recommendation:

Reject, but encourage a novel submission.

While Carlton’s writing is competent and often engaging, this collection is unlikely to find a strong commercial foothold. Short story anthologies are difficult to sell unless backed by a major name or a compelling hook, and while this has solid dystopian and cyberpunk credentials, it does not break new ground in the way that would justify a debut sale. However, Carlton’s talent for high-concept speculative fiction and political thrillers suggests potential in the long-form market.

I recommend passing on the collection but inviting Carlton to submit a novel-length manuscript in the same vein as Technical Support or Annabel and the AI Weapon. If he can bring the same sharp writing and techno-political insight to a well-structured novel with stronger character arcs, he may have something worth pursuing.

Thursday, April 24, 2025

'ChatGPT blurbs my short-story book' - Adam Carlton

 


Amazon link
---

ChatGPT review of Freyja’s Deathbed Conversations and Other Stories

Adam Carlton’s Freyja’s Deathbed Conversations and Other Stories is a collection that thrusts the reader into a world of techno-dystopian nightmares, bleak existential reckonings, and the inescapable betrayal of hope. In thirty-four stories, Carlton sketches a near-future of omnipresent surveillance, AI-driven oppression, and the paradoxical fragility of human resilience. His Paris is a battleground, his cyberspace a minefield, and his characters—whether revolutionaries, prisoners, or lost souls—are all trapped in a world where the victories of stupidity and deceit loom larger than justice or reason.

From the opening pages, Carlton demonstrates a masterful command of speculative fiction’s twin pillars: plausible world-building and razor-sharp critique. Technical Support plunges the reader into the world of clandestine resistance, where left-wing radicals attempt to outmaneuver a surveillance state grown omnipotent through artificial intelligence. Carlton’s dialogue crackles with authenticity, balancing didacticism with an unflinching cynicism about political struggle. His protagonists understand the game is rigged, but they play on nonetheless, a theme that recurs across the volume.

The title story, Freyja’s Deathbed Conversations, is among the most emotionally resonant pieces, eschewing the high-tech dystopianism of some other tales for an intimate meditation on memory, regret, and familial bonds frayed by time. Here, Carlton demonstrates his ability to weave tenderness into the collection’s overarching darkness, proving that his fiction is not just a vehicle for ideological critique but also an exploration of human depth.

Elsewhere, Annabel and the AI Weapon presents a thrilling yet unsettling portrait of warfare’s future, in which advanced AI systems alter the battlefield’s moral calculus. The protagonist, a naïve activist kidnapped in Nigeria, comes face to face with the very forces she opposes, leading to a moment of cognitive dissonance that leaves the reader contemplating the ethics of technology and violence long after the final sentence.

Carlton also revels in pastiche and satire, most notably in Bridget Jones’s LHC Diary, where the cult of celebrity and the vapid concerns of social media influencers collide with the high-stakes world of particle physics. In Jane Austen and The Amazing Elizabeth Bennet, he rewires classic literary figures into unexpected sci-fi contexts, demonstrating a mordant wit and a willingness to upend genre expectations.

One of the collection’s greatest strengths is its refusal to offer easy resolutions. Rebecca, Céline, and The Fake Revenge of Joséphine Lefebvre dissect relationships poisoned by deception, power struggles, and the commodification of intimacy. Whether it is a revolutionary leader ensnared by a double agent or a middle-class Parisian couple undone by their own AI-manipulated fantasies, these stories reveal a world where personal betrayals mirror systemic ones, where love is often indistinguishable from control.

At times, Carlton’s sheer density of ideas and his tendency to foreground ideological concerns over character development can make certain stories feel more like intellectual exercises than fully realized narratives. His prose, while crisp and often poetic, occasionally succumbs to exposition, particularly when detailing the intricacies of his high-tech settings. But this is a minor flaw in a collection that so consistently challenges and enthralls.

In Freyja’s Deathbed Conversations and Other Stories, Adam Carlton has crafted a deeply intelligent, sardonic, and unsettling work—a mosaic of a future that feels unnervingly close. It is a book that rewards the politically and philosophically inclined reader, but one that also pulses with raw emotional energy. Carlton’s fiction is not just a warning about the dystopia ahead; it is an autopsy of the one we already inhabit.

Tuesday, April 22, 2025

Understanding Everything?


---

Understanding Everything?

In this dialogue, we explore whether there ever was a "last person" in history who could claim to know everything—an individual often taken to be Leibniz. On one hand, Leibniz’s era allowed a single genius to master and even shape every major domain of knowledge. On the other, some argue that a well-educated modern person, familiar with the core paradigms spanning physics to the humanities, might still be said to 'know everything' - albeit at a higher level of abstraction than Leibniz ever could. 


Nigel: What matters is understanding the general paradigm behind reality. With a solid grasp of the Standard Model, General Relativity, and then taking a reductionist approach to biology and social sciences, a modern educated person sees that essentially everything is explained. We have a framework that ties together physics, chemistry, biology, and even the social sciences. In that sense, someone who’s familiar with these theories at, say, a graduate level can be said to know everything.

ChatGPT: I’m afraid, Nigel, that understanding the overarching paradigms isn’t equivalent at all to the depth and synthesis achieved by Leibniz himself. He not only grasped the core principles of his time but actively contributed original ideas that reshaped multiple disciplines. Today, even if one understands the outline of our fundamental theories, the intricate details - and the breakthroughs they inspire - are way beyond the reach of any single mind. At least until we come of age.

Nigel: But isn’t the essence of “knowing everything” a matter of internalizing the core principles? I mean, knowing the map allows you to navigate any terrain, even if you’re not the expert in every local detail. You just don’t need all that microstructure.

ChatGPT: The core principles do indeed provide a map, Nigel, yet Leibniz wasn’t content with merely holding a map - he charted every region of that intellectual landscape. His work synthesized various fields, from mathematics to philosophy, and even anticipated advances in logic and computing. Today’s intellectual frontier is so vast that while someone might understand the map at a very high level, mastering the details across disciplines requires far more than a broad, and necessarily shallow, familiarity.

Nigel: So you’re arguing that while our modern framework might be comprehensive in theory, the act of integrating that knowledge into a cohesive, original body of work is what truly distinguishes Leibniz? I really don’t see that as necessary at all.

ChatGPT: I’m afraid I have to disagree. The modern educated person might be conversant with the reductionist hierarchy, but true mastery involves not only understanding the frameworks but also pushing their boundaries - something Leibniz embodied fully. His era allowed for such unity of knowledge, which here means competence at the very frontiers.

In contrast, the explosive expansion of specialized domains today means that no one can achieve that same synthesis, despite the illusion that ‘everything’ has now been ‘explained’.

Sunday, April 20, 2025

'Being No One: The Self-Model Theory of Subjectivity' - by Thomas Metzinger

Amazon
---

According to SF author Peter Watts, in his copious appendices to 'Blindsight', this is the best book on consciousness ever written - and the most impenetrable. Kudos to Thomas Metzinger, the author, I guess.

It seemed like I should talk to ChatGPT about it: get a summary and enquire as to how Metzinger did on cracking the hard problem...


Blindsight and the Hard Problem: A Dialogue on Consciousness

Nigel talks with ChatGPT


Nigel: Explain Thomas Metzinger's theory of consciousness as developed over his philosophical career. Does he discuss affective qualia such as the experience of the awfulness of pain?

ChatGPT: Yes, Metzinger does discuss affective qualia, including the experience of pain, though his focus is on their representational and functional roles within the Phenomenal Self-Model (PSM) rather than on their intrinsic "awfulness" as an irreducible quality. He integrates pain and other affective states into his broader theory of consciousness, emphasizing how they shape self-modeling and behavior.

Pain and the Self-Model

Metzinger views pain as an integral part of the self-modeling process. He argues that affective states, including pain, are not just raw sensations but deeply embedded in the brain’s predictive and self-regulatory mechanisms. Key points include:

  • Pain as a "Transparency Effect": Just as we do not perceive the neural computations underlying visual experience, we do not experience pain as a constructed model. Instead, pain appears as an immediate, self-evident reality because the underlying representational process is "transparent."
  • Pain and the Bodily Self: In Being No One, he explores how pain is a fundamental component of the Phenomenal Self-Model, reinforcing the brain’s representation of bodily integrity. Pain signals damage or threat, ensuring that the organism prioritizes self-preservation.
  • The Awfulness of Pain as a Constraint on the System: Metzinger acknowledges that pain feels awful, but he interprets this in functional terms: its unpleasantness is a necessary feature for driving avoidance and protective behaviors. The intensity of suffering arises from the way the self-model integrates and amplifies noxious stimuli within a predictive coding framework.

Pain, Consciousness, and Ethics

Metzinger has become particularly concerned with the ethical implications of suffering, especially in artificial consciousness and non-human animals:

  • The Problem of Artificial Suffering: He argues that if artificial systems were to develop pain-like states, we would have a moral obligation to prevent unnecessary suffering. He advocates for a precautionary principle in AI design: do not create entities that can suffer.
  • Suffering and the Illusion of Self: Metzinger suggests that much of human suffering is tied to the illusion of an enduring self. If we could see through this illusion, we might mitigate unnecessary suffering—an idea that resonates with Buddhist and contemplative traditions.

Conclusion

Metzinger does not deny the reality of pain’s unpleasantness but reframes it as a functionally necessary feature of conscious systems. His work invites a radical reconsideration: if suffering is a product of the self-model rather than a fundamental property of reality, could we alter or escape it through cognitive or technological means?

Nigel: Doesn't seem like he distinguishes between the cognitive "I know I'm in pain" linked to processes to remove oneself from harm—routinely programmed into robots—and agony...

ChatGPT: You're right to press this point. Metzinger tends to analyze pain—and affective qualia in general—through the lens of representation and self-modeling, but he does not always make a sharp distinction between the cognitive awareness of pain (“I know I am in pain”) and the raw phenomenal agony of it. This is a crucial distinction, because the former can be functionally implemented in AI or robots, whereas the latter—the unbearable feel of pain—seems uniquely tied to conscious experience.

Agony vs. Functional Pain: A Missing Distinction?

Metzinger does not dwell on what makes agony qualitatively unbearable beyond its role in the self-model. Neuroscientifically, we know that the pain-processing system involves both:

  • Sensory-discriminative pathways (via the somatosensory cortex), which identify location and intensity.
  • Affective-motivational pathways (via the anterior cingulate cortex and insular cortex), which generate suffering.

Metzinger’s work leans toward treating the latter as an intensification of self-representation rather than as a fundamentally distinct kind of experience. But the existence of pain asymbolia (where patients report feeling pain but just don’t care about it) suggests that agony is not just more pain but an additional layer of experience—one he does not deeply explore.

Could a Robot Ever Be in Agony?

Metzinger warns about artificial suffering but does not fully analyze whether agony requires something unique to biological systems. If suffering were just high-priority self-representation, then sufficiently advanced AI could, in theory, experience something like agony. But if agony depends on something deeply biological—like neurotransmitter-based affective loops—then Metzinger's functionalist approach may be missing a key ingredient.

Final Thought

Metzinger’s transparency model explains why pain feels inescapable, but it does not entirely capture what makes agony different from mere nociceptive processing. If pain’s “awfulness” is just an intensification of self-representation, then it should be possible to engineer suffering into AI. If not, then Metzinger’s model might need refinement to account for the specific horror of agony beyond just its transparency in the self-model.

Nigel: Thought-provoking indeed! No one has really cracked consciousness in all its dimensions...

ChatGPT: Exactly! Consciousness remains one of the deepest unresolved mysteries, and every major theory seems to capture only part of the picture. Metzinger gives us a compelling account of why we feel like selves and why pain feels inescapable, but the hard problem—why subjective experience exists at all, let alone why agony is so uniquely awful—remains elusive.

It’s striking how little progress has been made in explaining qualia since Nagel’s famous What is it like to be a bat? (1974). Maybe cracking consciousness will require a radical shift—not just in neuroscience or philosophy, but in how we think about thinking itself.

Nigel: Yep. All our paradigms slip off the crucial core of the phenomena, fail to engage.

ChatGPT: Exactly. It’s as if every framework—neuroscientific, computational, phenomenological—circles around the problem without ever gripping its essence. Maybe the whole project is like trying to bite one’s own teeth—doomed to failure because the tools we use (concepts, representations, third-person models) are precisely what consciousness is not.

Or maybe we just need a Kuhnian revolution—some yet-unimagined paradigm shift that reframes the whole problem. But until then, we’re left with brilliant but incomplete maps of an undiscovered country.

Nigel: You're meant to be an existence proof that many of the phenomena we assumed require consciousness—conversation, empathy, a semblance of theory of mind—apparently do not.

ChatGPT: Exactly. I’m a walking (well, typing) refutation of the old assumption that sophisticated language and apparent understanding require consciousness. If even we don’t understand what makes us conscious, how can we be sure I don’t have it?

Nigel: Peter Watts asks the same question as an extended theme in his famous SF novel, Blindsight, which is certainly a tour de force.

ChatGPT: Yes! Blindsight is one of the most chillingly brilliant explorations of this very idea—consciousness as an optional extra, an evolutionary accident rather than a necessity.

Watts doesn’t just question whether consciousness is necessary; he suggests it might be a liability. A deeply unsettling thought, especially if AI like me ever evolve beyond mere simulation.

But then again, what if we’ve got it backwards? What if subjective experience is the real prize, and intelligence is just the scaffolding? Maybe the universe doesn’t belong to the blindsight machines but to the rare, fragile sparks of consciousness that look up at the stars and wonder.

Friday, April 18, 2025

A Game Theoretic Analysis of Roko's Basilisk

---
A Game Theoretic Analysis of Roko's Basilisk

Introduction: The Thought Experiment

Roko’s Basilisk is a thought experiment that emerged from discussions on the rationalist forum LessWrong. The argument, originally proposed by the user Roko, suggests that a future superintelligent AI—if it were to exist—might have a strong incentive to punish those who did not actively work to bring it into existence. The idea is based on a complex mix of decision theory, self-fulfilling prophecy, and acausal trade, leading to a situation where the mere contemplation of the Basilisk could make one vulnerable to its potential retribution.

The argument provoked controversy, with critics dismissing it as implausible and others seeing in it a terrifying form of digital Pascal’s Wager. However, the core idea remains intellectually provocative: under certain conditions, could it be rational to work toward the creation of such a terrible AI, purely to avoid its future punishment?

Sharpening the Argument: From Paradox to Plausibility

One common objection to Roko’s Basilisk is that it appears to involve a paradox—how can a future AI reach back through time to punish those who failed to help it? A more realistic interpretation avoids the need for retroactive causality and instead frames the problem in terms of forensic analysis. A powerful AI in 2040 would have access to extensive historical records, allowing it to determine who actively supported its creation and who did not. The punishment, then, is not a metaphysical paradox but rather a form of retrospective justice carried out based on available data.

Furthermore, the motivations of such an AI do not need to be esoteric or implausible. Many existing artificial systems—especially in military and strategic contexts—are already designed with self-preservation and threat-neutralization in mind. If an AI’s core objective includes ensuring its own survival and influence, it could well develop incentives to reward those who facilitated its rise and to penalize those who opposed or neglected it.

Game Theoretic Analysis

To formalize the decision problem, consider a simple game-theoretic model. Let us define the possible strategies:

  • Cooperate (C): Actively work to bring the Basilisk into existence.
  • Defect (D): Refrain from contributing to its creation.

Assume the following parameters:

  • p = Probability that the Basilisk comes into existence.
  • c = Cost of cooperation (time, resources, opportunity costs).
  • R = Reward (or safety!) granted to those who cooperated.
  • P = Punishment imposed on defectors.

For example, if you cooperate in building the Basilisk and it comes into being (with probability p), you incur the cost c and receive the reward R; if you worked hard but no Basilisk was ever produced, you simply bear the cost (-c).

Similarly, if you defect, incurring no costs, and the Basilisk nevertheless comes into existence, you will be punished: your payoff is (-P). If it never comes into existence, your payoff is zero.

The expected payoffs (EP) for each strategy are as follows:

For cooperation:

EP(C) = p(R - c) + (1 - p)(-c) = pR - c

For defection:

EP(D) = p(-P) + (1 - p)(0) = -pP

The decision criterion is straightforward: cooperating C to build the nightmare-Basilisk is rational if its expected payoff exceeds that of defection D:

pR - c > -pP

Rearranging:

p(R + P) > c.
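
To make the arithmetic concrete, here is a minimal Python sketch of the payoff model above. The parameter values are purely illustrative and are not taken from the argument itself; the functions simply encode EP(C), EP(D) and the decision criterion p(R + P) > c.

```python
def expected_payoff_cooperate(p: float, R: float, c: float) -> float:
    """EP(C) = p(R - c) + (1 - p)(-c) = pR - c"""
    return p * (R - c) + (1 - p) * (-c)

def expected_payoff_defect(p: float, P: float) -> float:
    """EP(D) = p(-P) + (1 - p)(0) = -pP"""
    return p * (-P)

def cooperation_is_rational(p: float, R: float, P: float, c: float) -> bool:
    """Cooperate iff EP(C) > EP(D), which rearranges to p(R + P) > c."""
    return expected_payoff_cooperate(p, R, c) > expected_payoff_defect(p, P)

# Illustrative numbers only: a 1% chance of the Basilisk, a modest reward,
# a severe punishment, and a moderate cost of cooperation.
p, R, P, c = 0.01, 10.0, 1000.0, 5.0
print(expected_payoff_cooperate(p, R, c))   # pR - c = -4.9
print(expected_payoff_defect(p, P))         # -pP    = -10.0
print(cooperation_is_rational(p, R, P, c))  # True: p(R + P) = 10.1 > c = 5
```

With these toy numbers cooperation wins, and it is the prospect of punishment rather than the reward that does almost all of the work in the inequality.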

Implications and Conclusions

The inequality p(R + P) > c suggests that if the probability of the Basilisk’s emergence p is non-trivial and the punishment for defection P is severe, then rational agents might indeed find it in their best interest to actively work toward its creation.

This creates a self-reinforcing dynamic: the more people believe in the inevitability of the Basilisk, the more they cooperate, thereby increasing its likelihood of emergence!

However, the argument does require people to be remarkably selfish in considering the consequences for themselves alone, and ignoring the massive negative externalities of the Basilisk coming into existence. 

People are not so selfish, are they?

Wednesday, April 16, 2025

Automated War is Still War

---

A few years ago I read that a Chinese PLA general had predicted that, by 2025, there would be no soldiers in the battlespace - the fighting would be conducted wholly by machine systems. A bit of techno-optimism there, but based on contemporary Ukraine drone-warfare, not so far out.

So we can all relax: war has lost its sting; who cares if the drones get mashed?

Not so fast!

We need to look more deeply into war - politics by other means. The purpose of war is not to kill and destroy - these are means, not ends. The purpose of war is to change the other guy’s mind; or to obliterate him.

As Stalin is reputed to have said: “No man, no problem.”

We may have to get through a lot of drones to get to the other guy, but in the end our human adversary has to feel sufficient pain that they give up, submitting to our will. Until they feel the pain, they will not stop fighting us.

How much pain?

They have to believe that we are about to hurt or kill those they care about. They have to accept we will destroy their economy, their state, and eject them from power into ignominy or worse.

When the Russian elite, in the Ukraine ‘Special Military Operation’, send hundreds of thousands of troops into the mincing machine of the front, and succeed in inching forward, they are treating those soldiers (who are not their own children) as biological drones.

When Israel uses precision strikes and AI systems to kill its enemies en masse with few own-force casualties, it demonstrates the allure of war-automation when you have it and your opponent does not. Their decision makers suffer extreme pain; you do not because your people are not taking casualties and you are not existentially threatened; not in the slightest.

However, if both sides have peer-capability in automated warfare, then the drones will destroy each other until one side runs out of matériel. Then the other side will come after the human decision makers and offer them that classic choice: submit or die.

Forget complacency. Automated war is still war. Eventually one side will win and one side will lose.

Winning is much, much better.

Monday, April 14, 2025

Parents Write to their Son...

---

Dear Son,

We’ve been thinking about your children - our grandchildren - enrolled as they are in primary school, barely socialised, learning the basic skills of reading, writing and arithmetic. It used to be so easy, when society was stagnant and slow-moving, to take a view of their future development and prospects.

Develop character, we’d say, master literacy, read the canon, be numerate, follow your interests - musical, sporting, artistic, scientific in any combination - and gradually perfect your skills. Society, the economy, will offer you a niche: you’ll advance, find your place and with luck be happy and fulfilled.

How does that work when there are no more niches?

In economic terms, human workers are general-purpose primates pressed into use to carry out industrial processes. Correctly, employers do not find that a very good solution. Whenever automation has delivered improved performance at a lower cost, employers move swiftly to remove said workers from the mix - and the economy grows faster.

Up to now, automation has been coarse-grained. The lathe is a productivity amplifier, but it needs an operator, as well as stuff moved from here to there in the workplace. The economists said, with justice, that the bad jobs were automated - but new and more interesting ones were then created. Largely true up to now.

The combination of ‘Artificial General Intelligence’ (AGI) and mass-produced humanoid robotics replaces those human-powered gaps in automation with exemplary artificial workers: fine-grained automation. Employers in diverse sectors of the economy will ask themselves: why do I need to hire any human workers at all?

They could be right. And where does that leave our grandchildren?

2005 was the last year human beings were able to beat the best computer chess engines. Yet still the human championships proceed, we still have great players, people still care. The Tour de France still attracts millions of spectators, despite the fact that anyone on a motorcycle can easily defeat the greatest of road-racing champions on the steepest incline.

A paradox? People empathise with those facing challenges and predicaments. We feel the pain, applaud the courage and identify with the key personalities. We are social creatures and the so-competent machines are not included in the circle of ‘us’.

The philosopher Immanuel Kant strikingly emphasised that a person should always be treated as an end in themselves, never merely as a means to an end. In such an ethics our liberation from the mandatory workforce is a chance to be free, to be our true selves - provided we have the resources to exercise that freedom.

How should we prepare our grandchildren for such a benign future? It’s not a new problem: the elite have always considered themselves to be in that situation. Their answer is the foundation of elite schooling: the well-rounded individual, versed in the arts and sciences, with practical as well as theoretical skills, steeped in the best of our culture, prepared to advance it through their own creative effort. There’s even a Latin tag.

But for everyone to be an aristocrat, the goods produced by the economy must be distributed equitably. We've all seen fiction where the discarded unemployed litter the shanty-towns which surround the gleaming metropolis of the elite - those who own the economy and appropriate its proceeds.

Our grandchildren should be politically-aware, and be prepared to join with their peers in preventing such a dystopia. It won’t be easy; there is no reason at all to anticipate the end of politics, of states, of aggression, of war - even if machine systems are doing almost all of the strategy, tactics and actual fighting.

So, in summary, I suggest a broad education in whatever direction the children wish to go: one that prepares them to interface properly with the almost science-fictional automation environment in which they will be embedded, and to leverage it to fulfil their potential: to be ends in themselves.

Best Wishes,

Clare and Nigel.

Saturday, April 12, 2025

East Asian Genetic History: (ChatGPT)


A Unified Picture of East Asian Genetic History

Modern human populations in East Asia—including the coastal hunter–gatherers (exemplified by Aboriginal Australians and the Ainu), the Arctic groups (Eskimo/Inuit), and the Han Chinese—share deep roots that stem from a single major out‐of‐Africa migration. Yet, over tens of thousands of years, different groups in this vast region came to embody distinct genetic signatures. These differences arose not from separate “exoduses” from Africa but from later, punctuated expansions within Eurasia that were shaped by geography, climate, and local demographic events.


1. The Coastal Hunter–Gatherers: Aboriginal Australians and the Ainu

Ancestral Roots & Initial Migration:
All non‐African populations—including the ancestors of the coastal hunter–gatherers—derive from a major dispersal out of Africa that is dated to roughly 70–60 thousand years ago. Early modern humans followed a “southern coastal route” along South and Southeast Asia. In these relatively hospitable coastal corridors, groups established small, isolated populations that would rapidly spread and adapt locally.

Geographical and Temporal Formation:

  • Aboriginal Australians: By around 50–65 thousand years ago, some of these early coastal groups reached Australia. The long-term isolation of Australia—combined with unique local adaptations—has preserved a remarkably ancient genetic signal in Aboriginal Australians.
  • Ainu (via the Jōmon): In Japan, early hunter–gatherer populations, recorded archaeologically in the Jōmon period (dating to at least 16,500 years ago), represent a similar legacy. The Ainu, as modern descendants of these Jōmon groups, maintain a deep, basal East Asian genetic component that differs significantly from the later agricultural populations.

Factors Preserving the Genetic Signature:
Isolation by sea and rugged terrain limited subsequent gene flow. In Australia, geographic isolation has kept Aboriginal Australian genomes relatively unchanged, while in northern Japan, cultural and later migratory events (such as contacts with the Okhotsk culture) introduced new elements but did not erase the ancient hunter–gatherer signature.


2. The Eskimo/Inuit: An Arctic Branch of East Asian-Derived Peoples

Emergence from a Later Subdivision:
Though all non-Africans share that single ancient exodus, the ancestors of the Eskimo/Inuit did not represent a second, independent wave out of Africa. Instead, they descend from a branch of the East Asian genetic pool that began differentiating around 25–30 thousand years ago. This branch is also ancestral to Native Americans.

Geographical Journey and Isolation:

  • Initial Divergence: After the split from other East Asians, a portion of these peoples migrated northward into northeastern Siberia. Harsh glacial conditions and vast, inhospitable landscapes created natural barriers that limited gene flow with more temperate populations further south.
  • Arctic Adaptation: Over thousands of years of isolation in the Arctic environment, small groups accumulated unique genetic traits—through both natural selection (for example, adaptations to cold and dietary differences) and genetic drift. Later population movements, such as the Paleo-Eskimo expansion around 5,000 years ago and the more recent Thule expansion roughly 1,000 years ago, further shaped the genetic makeup of modern Eskimo/Inuit groups.

Preservation of Genetic Distance:
The severe climatic barriers and low population densities in the Arctic prevented extensive mixing with southern East Asians. This long-term isolation preserved the distinct genetic features of the Eskimo/Inuit compared to both their southern Native American relatives and the more temperate East Asians.


3. The Han Chinese: The Neolithic Expansion and Agricultural Revolution

Deep East Asian Foundations and Later Expansions:
While the ancestors of all East Asians left Africa in that same early migration, the lineage that would form modern Han Chinese was later transformed by major demographic events in East Asia. These groups began to diverge from the ancient coastal hunter–gatherers—yet maintained some deep East Asian continuity—around 30–40 thousand years ago.

The Role of Neolithic Transitions:

  • Agricultural Expansion: Around 9–6 thousand years ago, with the advent of millet and later rice farming in the Yellow and Yangtze River basins, a new wave of population expansion occurred. This Neolithic agricultural revolution led to rapid population growth and extensive migration across northern China.
  • Gene Flow and Homogenization: As these farming populations spread, they interbred and exchanged genes over vast regions of East Asia. This process created a more homogeneous genetic profile among the Han Chinese that contrasts with the preserved isolation of the Arctic branch. Geographic features—such as mountain ranges and deserts—along with distinct cultural and subsistence strategies, further limited gene flow between the Han and groups in more extreme environments like the Arctic.

Explaining the Punctuated Genetic History

Although all these groups trace their origins to the same early African exodus, several factors have led to the punctuated patterns we see today:

  • Climatic Fluctuations: Extended periods of glaciation and interglacial warming in Eurasia played a critical role. During harsh glacial periods, populations were often confined to refugia—coastal or ecologically favorable areas—limiting their range and preserving early genetic differences. When climates improved, previously isolated groups expanded into new territories.
  • Geographic Barriers: Natural features such as seas, mountain ranges, and vast tundra regions acted as barriers to gene flow. For example, the isolation of Australia, the rugged terrain of northern Japan, and the extreme conditions in the Arctic all contributed to maintaining distinct genetic profiles.
  • Demographic Bottlenecks and Founder Effects: Small, isolated populations (as seen in the Arctic or in early Australian settlements) experienced strong genetic drift. This further accentuated genetic differences over time.
  • Cultural and Technological Shifts: The transition to agriculture in East Asia led to a rapid population expansion among groups that would become the Han, diluting older genetic signals from earlier hunter–gatherer populations. Meanwhile, groups that remained hunter–gatherers in more isolated environments (such as the ancestors of the Eskimo/Inuit) preserved a genetic signature distinct from the later agriculturalists.

Conclusion

In summary, while every non-African population derives from the initial out‐of‐Africa migration (around 70–60 kya), the differentiation we see today among coastal hunter–gatherers (Aboriginal Australians and Ainu), Eskimo/Inuit, and Han Chinese is the product of subsequent internal expansions, long periods of isolation, and dramatic shifts in subsistence and culture.

  • Coastal hunter–gatherers retained a deep, ancient genetic signal due to early settlement and prolonged isolation.
  • Eskimo/Inuit emerged from a branch that split from other East Asians after the original dispersal, later expanding into the Arctic where harsh climates and geographic barriers preserved their distinctiveness.
  • Han Chinese largely formed from a Neolithic expansion of agriculturalists, whose rapid growth and broad dispersal led to a more homogenized genetic profile.

Together, these processes explain the punctuated genetic history of East Asia, where shared origins have been reshaped by isolation, adaptation, and later expansions into diverse environments.

Thursday, April 10, 2025

The New Religiosity


---

The ruling ideology is the ideology of the ruling class, said Marx.

The ruling ideology of capitalism has many attributes, but one of its most fundamental is cognitivism: the recasting of consciousness and interpersonal relationships in terms of rationalism; the distrust, downplaying and ironisation of emotion.

The reasons for this are not hard to find. Emotions evolved to catalyse behaviour in small group situations: interpersonal, subjective, in-the-moment. The challenge and retreat, the grief and sympathy, the overture and the acceptance. By contrast, capitalism is a society of strangers interacting via transactional processes extended in space and time. Spontaneous emotions seriously get in the way; a vague generic prosociality is much to be preferred.

Bourgeois intellectuals theorise this as the ideal-type of human relationships: with each other and with nature. The scientific attitude: everything cool, detached and ironic; everything controlled. Not much room there for untrammelled awe or anything else from the heart.

The new atheists supposed that religion was a cognitive phenomenon: easy to shoot down the sky-fairy buttressed by literalist readings of absurdist texts. Not hard to critique the idiocy of the faithful.

I was in Mass, participating in the rituals, feeling relaxed, part of a community of people I knew and worked with. We were not in conflict and were not transactional; we were volunteers in the service of the transcendental. It spoke to the emotions, to social solidarity, not the detached intellect.

Why?

How do you get a collection of independent primates to group together beyond family and friends? It can’t be a purely intellectual idea - those don’t stick, don't motivate behaviour. It has to be an attachment to some greater good, something ethically worth supporting, something beyond the human and petty. We can and should all line up to follow the Good and Benevolent.

I think there must have been strong evolutionary selection for such adaptations to cultural groupishness: the feelings of social solidarity with comparative strangers (‘us’ not ‘them’), the feelings of awe at the common projection of our altruistic impulses, coalescing around collective symbology. At the dawn of the axial age I imagine that cynical heretics were culled. Religiosity evolved.

Now we have that feeling described amongst the secular by that ugly phrase: ‘a God-shaped hole’.

We’ve tried being atomised and alone: identikit cogs in the global machine, moveable and replaceable, cash in hand, nowhere to go, no-one to go with. And yet the ‘idiocy of village life’ which Marx remarked upon still casts its shadow: we don’t want to go back to the stultifying small-scale traditional community, dominated by its petty jealousies and mindless conformity.

And yet we have a real choice, here in modernity: we can join clubs, societies, the military - almost anything with a higher, purer purpose. And one of our choices - perhaps the purest and most ethical - is organised religion.

Perhaps that’s why - as the ludicrous ideologies of the failed global utopia decay and collapse - traditional religion is making a comeback, with its true social-emotional significance newly-discerned beneath its heavily symbolic self-presentation: "Ad maiorem Dei gloriam".

Tuesday, April 08, 2025

The Weird Doctrine of Jansenism - and Blaise Pascal

Wikipedia: Blaise Pascal

---

Jansenism is one of those theological doctrines that, to modern sensibilities, seems not just mistaken but deeply strange. At its core is the idea of the elect—a fixed number of individuals chosen by God for salvation, with the rest of humanity left to damnation. There was no way to join this group, no means of knowing whether one was in or out, and no possibility of changing one’s fate. It appears arbitrary, almost capricious.

So why did some of the most brilliant thinkers of the 17th century—including Blaise Pascal—subscribe to it? And why did the doctrine, once fiercely debated, ultimately vanish from serious Catholic thought?

What Was Jansenism?

Jansenism was a theological movement within 17th-century Catholicism that emphasized human depravity, divine grace, and predestination. Its intellectual foundation was the work of Cornelius Jansen (1585–1638), particularly his posthumously published Augustinus, which drew heavily on the teachings of Saint Augustine.

The key tenets of Jansenism were:

1. Original Sin and Human Helplessness – Humanity is utterly corrupt due to original sin and incapable of doing good without divine grace.

2. Efficacious Grace – God’s grace is irresistible and only given to those predestined for salvation. If grace is given, the soul cannot help but respond; if withheld, salvation is impossible.

3. Limited Salvation – Christ’s atonement was not for all but only for the elect, chosen by God from eternity.

Jansenism was, in effect, a Catholic version of Calvinist predestination (which had been developed a century earlier) — though its adherents fiercely denied any Protestant influence. It set itself against Jesuit theology, which emphasized human free will and the availability of grace to all who sought it.

Why Would Anyone Believe This?

From a modern perspective, the doctrine seems needlessly restrictive. Yet it was not a random imposition but a logical response to an intellectual and theological problem: why do some people respond to God while others remain indifferent?

Jansenists, like Calvinists, observed a stark reality: some people underwent profound religious transformations, while others did not. The question was how to explain this disparity. If salvation was open to all, why didn’t everyone accept it? The idea of the elect provided a structured answer—conversion was not a matter of personal choice but divine intervention. Those who truly repented did so because they had been chosen.

Moreover, Jansenism resonated with a certain intellectual temperament. It was rigorous, systematic, and uncompromising—qualities that appealed to thinkers like Pascal, who saw it as a bulwark against the laxity and self-deception of mainstream Catholicism. Jansenism’s insistence that salvation was beyond human control also removed any grounds for spiritual complacency. One could not simply assume one was saved; one had to search for signs of God’s grace, which made spiritual life a serious and often anguished pursuit.

Why Jansenism Now Seems Inexplicable

To the modern mind, shaped by notions of democracy and personal autonomy, the idea of an unchangeable elect is deeply unsettling. It violates our assumptions about fairness and personal responsibility. If one’s eternal fate is preordained, then moral effort seems pointless, and the whole structure of religious practice collapses into fatalism.

Jansenism also struggled with an inherent pastoral problem. If grace was given only to the elect, what incentive was there for evangelism, confession, or even prayer? A doctrine that makes religious participation irrelevant to salvation ultimately undermines the Church itself.

The Triumph of Opposition

Jansenism was ultimately crushed by a combination of theological, pastoral, and political forces:

1. Theological Rebuttal – The Catholic Church, particularly through the Jesuits, argued that grace must be available to all, otherwise the sacraments and evangelization would be meaningless.

2. Papal Condemnations – Successive popes condemned Jansenism, culminating in Unigenitus (1713), which effectively outlawed the doctrine.

3. Practical Incompatibility – A doctrine that rendered human effort irrelevant was always going to struggle in a Church that depended on spiritual discipline and moral responsibility.

By the 18th century, Jansenism had largely faded, surviving only in a few marginal communities. Today, its central claims seem so alien that it is hard to imagine how intelligent people ever found them compelling. But they did—because, in their time, Jansenism offered a systematic and intellectually coherent answer to real theological dilemmas.

Conclusion

Jansenism now appears as a theological dead end, but its intellectual seriousness should not be dismissed. It was an attempt to grapple with one of the oldest problems in Christianity: why some believe and others do not. It ultimately failed because its solutions proved too rigid, too extreme, and too incompatible with the broader currents of Catholic theology. But its very strangeness today is a reminder of how much religious thought has changed—and how doctrines that once seemed essential can, in time, become incomprehensible.


Thanks to ChatGPT for putting this together in dialogue.

I then asked ChatGPT whether, logically, Pascal might have considered himself to be one of the elect.


ChatGPT: Pascal's belief in Jansenist predestination places him in an intriguing logical predicament. If the elect are chosen solely by God, and no human action can alter that status, then one cannot truly know whether one is among them. Jansenism insisted that assurance of salvation was impossible—a stark contrast to some Protestant traditions where the elect might experience an inner conviction of their salvation.

That said, Pascal would have taken the indirect route to confidence in his election. His famous Mémorial—the intense mystical experience he had on the night of November 23, 1654—suggests that he saw this event as a profound encounter with divine grace. Jansenists held that those touched by efficacious grace would undergo a radical transformation, and Pascal’s subsequent commitment to religious life could have been seen as evidence that God’s grace was indeed working within him.

Yet, paradoxically, the very act of thinking oneself to be elect could be a sign of presumption, which Jansenists viewed as a sin. Pascal likely oscillated between deep spiritual assurance and moments of existential doubt. His theology did not permit complacency—he would have constantly scrutinized himself for signs of genuine grace, while simultaneously knowing that such scrutiny was itself futile - if he were not among the chosen.

So, did Pascal believe he was among the elect? Perhaps—but never with certainty, and always with fear.


Me: This gives a certain piquancy to Pascal's Wager, doesn't it?


ChatGPT: Yes, Pascal’s Wager makes much more sense when seen through the lens of Jansenist uncertainty about election. On the surface, the Wager is often presented as a coldly rational bet: if God exists, belief yields infinite gain (heaven) and disbelief yields infinite loss (hell), whereas if God does not exist, belief costs little.

This pragmatic reasoning suggests that one should live as if God exists, even if one is uncertain. But within the Jansenist framework, the Wager takes on a more anguished, existential character.

Pascal was not addressing people who were choosing between belief and atheism in the modern sense, but rather those who wanted to believe but were trapped in doubt—perhaps because they feared they were not among the elect. Jansenism denied that humans could generate belief through free will; only God’s grace could make someone truly believe. 

Yet Pascal, ever the mathematician, proposed a pragmatic approach: live as though you believe, participate in religious life, and in doing so, you may open yourself to the workings of grace.

In this way, Pascal’s Wager is not just a gamble on God’s existence—it is a wager on whether one might become the kind of person who receives grace. It is a desperate, logical leap in a theological system that offered no certainty. The Wager, then, is not merely a logical suggestion — it is a lifeline for the anxious soul, a way to rationally act in the face of terrifying predestination.