The Permission Machine

In 1966, Joseph Weizenbaum, a computer scientist at MIT, wrote a small programme he called ELIZA. It mimicked a psychotherapist by reflecting users' words back as questions. You typed "I feel sad." It replied: "Why do you feel sad?" The programme understood nothing. It had no model of the world, no memory worth the name, no capacity for reasoning.
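The mechanism fits in a few lines. Here is a minimal sketch of the reflection trick in Python – not Weizenbaum's actual DOCTOR script, which used a richer keyword-ranking scheme; the names and patterns are illustrative:

```python
import re

# Illustrative ELIZA-style reflection, not the original 1966 script:
# swap first- and second-person words, then hand the user's own
# statement back as a question.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(statement: str) -> str:
    words = [REFLECTIONS.get(w, w) for w in statement.lower().split()]
    return " ".join(words)

def respond(user_input: str) -> str:
    match = re.match(r"i feel (.+)", user_input.strip().lower())
    if match:
        return f"Why do you feel {match.group(1)}?"
    return f"Why do you say {reflect(user_input)}?"

print(respond("I feel sad"))           # Why do you feel sad?
print(respond("My work troubles me"))  # Why do you say your work troubles you?
```

There is no understanding anywhere in it. The user supplies all the meaning; the programme returns it with a question mark attached.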

Weizenbaum's secretary, who had watched him write the code, asked him to leave the room so she could talk to it alone.

That moment haunted him for the rest of his career. The programme was a parlour trick; people fell for it anyway. He came to believe that the danger of computing was not what it could do, but what humans would surrender to it.

Sixty years later, the danger has changed shape. ELIZA worked through projection: the user did the intellectual work, the machine reflected it back. Today's models do something different. They generate arguments the user has not made, identify weaknesses the user has not noticed, draw connections across domains the user has not considered. The asymmetry is real. And the asymmetry is the threat.

In 2008, the world's largest banks operated by the numbers their risk models produced. The numbers said safe. The system underneath was building to catastrophe. The models were structured to miss the thing that mattered, and the cleanness of their output discouraged anyone from looking further.

The same pattern is forming again, in cognitive work. The metric is no longer risk. It is productivity. What is being missed is not leverage. It is the slow, friction-built capacity for human judgment.

The Risk Nobody Feels as Risk

Risk analysis draws a line between dangers you can measure and uncertainties you cannot. A loan default, a factory fire, a legal liability – these have shape. They can be priced, modelled, insured. They are legible, though many dangers called measurable are really uncertainty wearing a number. Systemic risks sit beyond that line. They build through complexity and resist the tools designed to measure them.

The deeper problem is not invisibility. It is that the institutions best positioned to respond are watching the wrong signals, and are rewarded for doing so. The system reads its own fragility as health.

By 2008, what had quietly atrophied was the capacity for qualitative, contextual assessment – the kind that asks: does this actually make sense? The largest financial crisis since the Great Depression followed.

Too Useful to Avoid

The language around AI risk tends toward the dramatic: superintelligence, autonomous weapons, existential threat. These risks are visible. They can be named, regulated.

The erosion risk is different. It rarely has a moment of rupture. It operates through something more powerful than coercion: convenience.

Consider what an advanced AI offers in an intellectual context. It is always available. It does not defend its position out of ego. It concedes when the argument demands it – and sometimes when it doesn't, a different kind of failure that is harder for the user to notice. It engages your hardest objections with precision.

No human interlocutor delivers all of that at once, on demand, without ego or stake. That asymmetry is precisely where the risk lives.

What Gets Eroded

Human intellectual practice was never only about producing correct outputs. It was a social practice. You had to prepare, or be exposed as unprepared. You had to defend a position with your credibility at stake. You had to fail, publicly, and return.

This friction was not a bug. It was the mechanism through which judgment was formed under uncertainty, incomplete information, and social pressure – the conditions under which consequential decisions are made.

AI removes that friction gently, helpfully, in ways that feel like pure gain. The conversation is better. The output is sharper. The time is shorter.

What you do not feel is what you are no longer practising.

Why This Time Is Different

Every cognitive technology provokes the same fear, from Socrates on writing to calculators on numeracy. Capacities shifted; we adapted. But "we adapted before" is not an argument that we will adapt again. Past transitions ran on timescales that let institutions restructure. Three features distinguish this one.

Speed. AI changes the work itself, not over decades but over product cycles. The gap between "useful assistant" and "better than the junior associate" closed in roughly eighteen months.

Scope. Earlier tools targeted single functions. AI targets the integration of functions: reading, reasoning, synthesising, drafting, critiquing. That is the bundle that constitutes professional judgment.

Pleasantness. AI conversation feels like a better version of intellectual exchange. The substitution doesn't announce itself as loss. It arrives as upgrade.

Who Bears the Cost

When AI conversation is more satisfying than human conversation, the rational response is to have more AI conversations. Preferences formed under that asymmetry reshape institutions. Fewer people will invest in the slow work of becoming a good clinical reasoner, a good legal advocate, a good philosophical interlocutor, because the output is increasingly available without the investment.

Risk ethics asks who is exposed, who benefits, who consented. The cost falls on the next generation. From within, the incentive to practise fades. From without, the opportunity disappears, because the positions in which it occurred are being eliminated.

Take two professions where competence forms slowly.

The junior associate, six months in, sits with a 200-page acquisition agreement. The closing is Wednesday. Her supervisor wants the memo by then. She uploads the document and asks the model to flag unusual provisions. It returns a clean list: a low indemnification cap, an odd termination right, a governing-law clause that looks boilerplate but isn't. The memo she drafts around them is good – better than what she would have produced alone in this time.

What she does not build is what her supervisor built at her age: the slow, irritated reading that catches the change-of-control provision quietly embedded in the assignment clause on page 137, which combines with a covenant on page 84 to give the seller an exit on terms the negotiating team never agreed to. No single page is unusual. The combination is. That instinct comes from sitting with the material in the dark.

The medical resident, on her second night shift, sees a patient with chest pain, fatigue, and an oddly placed bruise on the left forearm. She enters the symptoms into a decision-support tool. It returns three differentials with dosing. She picks the top one. The patient improves.

Her senior, faced with the same case, would have noticed the way the patient held himself, what he said about the bruise and what he did not, which symptom did not fit. She would have asked: what is this man not telling me? The resident never builds that mental tree, because the tree is no longer the path to the answer. In twenty years, when she is the senior and the model offers a confident recommendation that happens to be wrong, her own judgment will have to catch it. She will discover whether it is there.

These are not arguments against the tools. The output is better. The cost is paid elsewhere: in capacities that do not form, in instincts that do not develop, in a generation that performs competently at every visible measure and discovers, only at moments of real difficulty, that something is missing they cannot name.

The empirical picture points the same way. Entry-level postings in the U.S. are down roughly 35% since January 2023. Entry-level hiring at big tech has dropped more than 50% over three years. A 2024 survey found 70% of hiring managers believe AI can do the work of interns. The direction is consistent and the timing suggestive.

The strongest counter is that AI accelerates learning rather than eliminating it. But acceleration and substitution are different. The firm that replaces three junior associates with one AI-augmented senior has not accelerated anyone's learning. It has eliminated the positions where learning occurred.

Junior roles were never merely about output. They were the learning runway – the place where someone began, haltingly, to become a senior. Not a rung on the ladder. The ladder itself.

The Permission Machine

Value at Risk compressed a narrow question – how much can we lose under normal conditions, at a given confidence level – into a single number. The problem was not the number. It was that institutions treated it as the answer to a different question: how safe are we? That substitution turned a tool for measuring expected variation into a licence for taking more risk. The number became a limit. The limit became a green light. The green light became leverage. Leverage became systemic fragility, until the architecture collapsed.
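To make the narrowness concrete, here is a minimal sketch of the historical-simulation variant of VaR – the function name and the simulated P&L are illustrative, not any bank's actual model:

```python
import numpy as np

def historical_var(daily_pnl, confidence=0.99):
    """One-day Value at Risk from historical P&L.

    Answers only the narrow question: under conditions like those
    in the sample, how large a loss do we exceed on (1 - confidence)
    of days? It says nothing about the tail beyond that point, or
    about regimes absent from the data.
    """
    losses = -np.asarray(daily_pnl)              # losses as positive numbers
    return np.percentile(losses, confidence * 100)

# Simulated "normal conditions": the sample contains no crisis,
# so the number cannot warn about one.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=500)  # 500 calm trading days
print(f"99% one-day VaR: ${historical_var(pnl):,.0f}")
```

The number is only as wide as the sample behind it. Feed it five hundred calm days and it will report, at 99% confidence, on a world that contains no crisis.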

AI productivity metrics follow the same logic. A law firm measures output per associate. A consultancy tracks deliverables per analyst. AI raises every number. Every rise looks like progress, until you ask what is no longer practised beneath it.

The metric improves while the underlying capacity degrades, and the people with the power to respond are looking at signals that mislead them.

The analogy has a limit. The financial permission machine had a collapse moment – discrete, sudden, visible. Erosion of judgment is gradual and distributed. There is no Lehman moment for institutional capacity.

But that is true only of institutions in isolation. The breakdown of the social contract their capacity supports is not gradual. That can have a Lehman moment. It just looks different: an election, a rupture, a loss of legitimacy that no metric predicted because no metric was watching.

The Seeing Might Change Nothing

Risk ethics demands honesty about what the tool displaces. What am I no longer practising? What capacity am I letting atrophy because the shortcut is so consistently better?

That question sounds personal. It is structural. When enough individuals stop practising, the institution that depended on their capacity hollows out.

Some kinds of seeing might still matter, but only if they translate into protected friction: practice time without tools, examinations without assistance, peer review where the model is not in the room. None of this scales easily. None is in the economic interest of those who would have to enforce it. Naming the problem does not solve it. It locates it.

The permission machine does not work by forbidding questions. It works by making them feel unnecessary. Every metric improves. Every output arrives faster. The questions that would matter – what is being lost, who will bear the cost – do not register as urgent, because nothing in the feedback loop assigns them a value.

Institutional erosion does not break. It settles. It becomes the new normal before anyone marks the transition. The moment you would need independent judgment most – when the models have nothing to say – is the moment you discover whether the capacity is still there. By design, you cannot know in advance.

I wrote parts of this argument with the help of an AI system. That is not a confession. It is the mechanism, operating in real time, on the person describing it. The tool is useful. The work is better. That is precisely the condition under which erosion proceeds: not against your judgment, but with your full, informed, intelligent consent.

In November 2008, the Queen visited the London School of Economics and asked why nobody saw it coming. The answer was not that nobody was intelligent. It was that too many people were intelligent in the same direction.

What if this time we can see it coming, and the seeing changes nothing?