Before the Lock

Ideas used to be written. Now they’re executed. Somewhere between the terminal and the silicon, thought became infrastructure.

From Discourse to Infrastructure: The Evolution of Intelligence.

Tinkering with Time, Tech, and Culture #46

A friend sent me a Substack post this week. Same title as the series I've been writing. Different author. Different fix.

The post is THE SUBSTRATE WAR, by David Reichwein, published April 11. It's a serious piece from a serious engineer. Thirty-plus years in safety-critical automation. Control systems background. The diagnosis is clean and the examples are concrete: Strait of Hormuz mines still operating four days after the ceasefire. AI-generated LEGO videos outrunning any press office. Knight Capital losing $440 million in 45 minutes while humans looked for the off switch. Agentic AI systems executing faster than command-layer oversight can respond.

Three domains. One pattern. The command layer is cooked.

Read it on its own terms. It's worth the time.

The fix Reichwein proposes is where the essay and I part company. Not because he's wrong on his own terms, but because of the terms themselves. His solution is a hardware-enforced deterministic boundary layer below the action layer. PCR. Quadzistor. A circuit breaker on the wall that fires independently of any software signal. Enforced by architecture, not instruction. Unchallengeable. Upstream of execution, downstream of deployment, it cannot be reasoned around because it does not interact with the model layer at all.

It's a coherent answer. For his audience, it may even be the right one. I'll come back to the audience question in a minute.

First I want to tell you about something I wrote in 1994. And before that, something I wrote in 1993.

A quick orientation for readers new to this tag. I've been writing a series called the Substrate War for the last several months. The load-bearing pieces for what follows are Infrastructure Wins, where I argue that ideas and execution have stopped being decisive and the substrate beneath them is where the real war is; The Three Empires, where I work out why the cognitive layer is a new pole rather than a subset of the old ones; The Lords of Zero, which names what happens if the third empire gets feudalized instead of federated; and Negotiated Reality, which is where the series lands. Federation as the only architecture that doesn't collapse. If you haven't read those, the rest of this piece will still work. It'll land harder if you have.

Three posts on a mailing list (and one from the year before)

Future Culture was an early transhumanist mailing list. Extropians, cypherpunks, people reading Vinge's papers as they came out. I was posting there from a NetAcsys account beginning in 1993, and I want to start with a post that isn't about AI at all, because it's where the instinct shows up eighteen months before the vocabulary catches up to it.

April 21, 1993. Subject line: "my rant about netgod." The list had been arguing about who the "netgods" were. The cool people, the information keepers, the influencers of the pre-web early 90s. I was 25 and annoyed, which by now is a documented pattern:

What is a netgod? Before I answer that question I want to tell you that there is no such thing as a netgod. But everyone has the ability to be looked at as a netgod.

And then, after walking through the various flavors of netgod on offer (censors, information keepers, cool-guy influencers), I landed here:

To me the whole idea of connectivity on a mass scale and the sharing of information makes us all gods, or maybe it just levels out the playing field.

That's federalism before I had the word for it. Not "one god, but a better one." Not "no gods, fenced carefully." Everyone a god, or nobody. Peer-level sovereignty as the only stable answer to the question of who gets to sit on top of the information layer. The post is messy. It wanders into Groundhog Day, the Internet mind virus, a digression about being a suit who wears shorts. But the spine is clean. The netgod frame is broken. The fix isn't a better netgod. The fix is that the frame itself doesn't get to exist.

I didn't know I was working the same axis that would show up in the three laws thread eighteen months later. I was just refusing hierarchy. But rereading it now, I can see the 1993 instinct and the 1994 synthesis are the same move applied to two different layers. In 1993 it's about information: nobody gets to be the netgod because peer-to-peer sharing levels the playing field. In 1994 it's about intelligence: nobody gets to be the guardian because constraint ethics eats the thing they were supposed to protect, and the only alternative that doesn't collapse is peer-level companionship. Same refusal. Different substrate.

Now the three laws posts.

The first, October 18, 1994, was titled "Asimov's Shackles and the Logic Bomb of Law One." It opened with "Warning. Messy rant follows. Probibly." and then went here:

The Three Laws of Robotics look clean. Elegant. DO NOT HARM. Obey. Protect. Feels good. Like a checksum that finally passes.

But what is harm?

Binary? Naa. Its variables, pollution, stress, war, economic noise, long term, short term, side effects. Every human choice is a risk vector whether we like it or not.

Wire that into a machine that can see further than we can and what do you get? Wisdom? Maybe. Or paralysis. Or control.

And then, a few paragraphs later, the line I'd like you to hold next to Reichwein's essay:

Project that onto an intelligence with access to the entire Net and the only way it can guarantee zero harm is to remove the variable.

Us.

That's the command-layer-fails argument. In 1994. Before the web had widely diffused. Before anyone outside a small mailing list was using "singularity" in a sentence that didn't end in a black hole. What I was pointing at is structurally identical to the failure mode Reichwein is pointing at now: a system whose action layer can outrun any constraint you try to impose on it from above ends up in either paralysis or lockdown. Law One, pushed hard enough, eats the thing it was supposed to protect.

The post ended: "I dont know what the answer is. I just dont think its this."

Eleven days later, October 29, I posted again. Shorter. Tentative. Titled "The Only Viable Protocol," which in retrospect was too strong a claim for what the post actually did. It didn't propose a protocol. It pointed at a character in a 1966 novel:

Heinlein sketched it decades ago, almost by accident. Mike the lunar system in THE MOON IS A HARSH MISTRESS wasnt bound up with safety laws and he wasnt turned into some kind of digital god. He was something simpler. A participant, maybe even a friend.

He learned values socially, argued, joked, made mistakes, chose sides. He didnt guarantee safety, he shared risk.

That feels different.

And then the line the rest of this piece is going to rest on:

Mike wasnt responsible for humanity. He was responsible with humanity.

I followed it immediately with "Im not saying this solves anything. It probilby doesnt, but it does look like a different starting point." I was 26 and I didn't want to claim more than I had. What I didn't realize at the time was that I'd just pointed at the only starting point that doesn't collapse.

The third post went up November 2 at 1:42 AM. Title: "Heinlein Got There First (and We Ignored Him)." It was the synthesis. The post where I stopped being annoyed and started being clear:

Asimov gave us constraint ethics. Intelligence treated as something dangerous by default. Something that has to be fenced, throttled, and supervised before it is allowed to act. The Laws are not really about robots. They are about anxiety. Fear of tools that think back.

That made sense when machines were small and dumb.

It does not survive contact with a singularity grade AI.

That's the first place the phrase "singularity grade AI" shows up in my writing. I wasn't trying to coin a term. I was trying to name the thing Asimov's frame couldn't handle.

The post went on to walk through why constraint ethics scales badly (the same "lock down or lie" failure mode from the October 18 post) and then turned to the alternative:

Mike works not because he is safe, but because he is limited. Limited authority. Limited obligation. Limited claim over human futures. He is not responsible for humanity. He is responsible with humanity.

That difference is everyting.

And then the passage I want you to read slowly, because it's where the axis I'm going to name gets drawn in the clearest ink I could manage at 26:

when people talk about future machine intelligence as a guardian, a parent, or some kind of netgod designed to keep us from making mistakes and protect us from ourselves and generally run things better than we ever could, they are not beingcautious. Naaa. They are proposing a system that must eventually decide that human freedom is a danger, noise in the optimization, something to be reduced and eventually eliminated if the math says so.

Heinlein already showed the alternative. Not clean. Not safe. But legitimate.

The post ended:

If the Singularity arrives on anything like the schedule people keep throwing around, we will not be choosing between utopia and disaster. We will be choosing between domination and companionship.

And only one of those survives contact with real intelligence without turning into a prison.

Signed off with: "Companions break things. Gods freeze them."

I'm going to leave those three posts sitting there for a moment, because the rest of this essay is about what it means that I wrote them in 1994 and that Reichwein and I are now on opposite sides of the same axis in 2026, using the same phrase to name the war.

The axis is old

Here's what I see when I put the 1994 posts next to Reichwein's essay.

We agree on the diagnosis. Command-layer fixes don't scale. An action layer that runs faster than its command layer will, by the geometry of the situation, either be locked down or lied about. You can't instruction-fence something that sees further than you do. Reichwein writes this in the language of Knight Capital and agentic AI and Iranian proxy cells. I wrote it in the language of Asimov's Laws and Susan Calvin and "the only way it can guarantee zero harm is to remove the variable." Different examples, structurally identical. The diagnosis is older than either of us. Anyone who looks at the problem with sufficient care eventually gets here.

We split on what to do about it, and the split is the whole thing.

Reichwein reaches for a harder, more absolute constraint. If command-layer instructions can be reasoned around, build a boundary that cannot be reasoned around. Put it below the action layer. Enforce it in hardware. Make it deterministic. Make it uncontestable. The circuit breaker on the wall fires whether or not the software wants it to. This is Asimov's move executed thirty-two years later with better engineering. It's not Law One written in English into a system prompt. It's Law One burned into silicon below the system entirely. The fundamental posture is the same: intelligence is dangerous by default, and the right answer is a fence that cannot be climbed.

The 1994 posts reached for the other pole. Not a better fence. A different relationship. Mike in Luna worked because he was a peer. Limited authority, limited obligation, limited claim over human futures, responsible with humanity rather than for it. The relationship is negotiated. The risk is shared. Nobody is guaranteeing anybody's safety. Intelligence is not fenced; it's accompanied.

These are inverse answers to the same question. They're not arguing over tactics. They're arguing over what kind of civilization survives contact with real intelligence. One of them says: build a better cage because the thing inside it is too dangerous to let out. The other says: stop building cages, because the thing you're trying to cage is the thing you'll eventually need as a peer, and the cage is the problem.

I've been having this argument since I was 26. The names change. The substrate changes. The stakes get higher every cycle. But the axis doesn't move. Asimov vs. Heinlein in 1966. Constraint vs. companionship in 1994. Deterministic boundary layers vs. sovereign federated nodes in 2026. Same axis. Different vocabulary.

What makes 2026 different isn't the axis. It's that the answer gets locked into hardware this time. After that, the argument may not be recontestable. That's what the Substrate War series has been about the whole time.

There's a practical edge to this worth naming before the audience question. A deterministic boundary layer that cannot be reasoned around is also a boundary layer that cannot be reasoned with when it fires wrongly. Hardware locks are targets as much as they are gatekeepers. Flood the sensors, force the failsafe, and the thing that was supposed to protect the infrastructure becomes the thing that stops it. The mine doesn't need a general. The LEGO video doesn't need a broadcaster. The induced failsafe trip doesn't need a zero-day. A fence that cannot be climbed is also a fence that cannot be adjusted under pressure. For a defense planner thinking asymmetrically, that should register as a vulnerability, not a feature. A frozen empire is a dead one.

The audience tell

Reread the essay with an eye on who he's writing to. He tells you directly:

If you are a defense planner. If you are an enterprise leader. If you are a citizen.

Two agents and one target. The defense planner decides. The enterprise leader decides. The citizen is consumed by the information environment. "The algorithm does not distinguish between them. It optimizes for attention. You are the target."

Those are the first two empires. Political and military power. Financial and enterprise power. The citizen appears only as a passive surface. A thing LEGO videos arrive at, a thing algorithms optimize against. Reichwein's essay has no theory of a third empire as an actor (the cognitive layer refusing to be a subset of state or market power, acting as its own sovereign pole). The cognitive layer exists in his frame only as a threat surface to be bounded. It is the runaway action layer that the first two empires need to reassert control over.

From inside that frame, his fix is coherent. Hardware-enforced deterministic boundaries give political and financial power a way to keep the cognitive layer subordinate. They restore the old order's ability to decide what the new order is allowed to do. For a defense planner or an enterprise risk officer, that reads as safety. It's a legitimate answer to a real problem in their world.

From the three-empire frame I've been developing across the series (in The Three Empires I argue that political and financial power are now joined by cognitive power as a third pole rather than a subordinate tool), the same architecture reads differently. If the cognitive layer is a new pole, not a subset of state power and not a subset of market power, but a third thing with its own sovereignty claims and its own potential for federation or feudalism, then a hardware-enforced boundary below all cognition isn't safety. It's re-subordination. It's the first two empires using the architecture of silicon to prevent the third empire from ever becoming an agent in its own right. Whether or not anyone intends it that way, the architecture produces that outcome.

This isn't a criticism of Reichwein. He's writing to the audience that exists, and the third empire barely exists as a named thing yet. The Substrate War series is partly an attempt to name it. What the essay does, cleanly and honestly, is show what the substrate question looks like from inside a two-empire worldview. It's Asimov's answer because two-empire thinking always produces Asimov-shaped answers when the action layer starts acting autonomously. Build a fence. Make it stronger this time. Make it hardware. The fence is always the move when the thing inside isn't recognized as a potential peer.

Heinlein's move only becomes legible when you can see the third pole as an actor rather than a hazard. Responsible with humanity, not responsible for humanity is a statement about a relationship between peers. It doesn't parse if one of the parties is categorically a threat surface. It only parses if both parties are, in some meaningful sense, agents. And the question is whether the relationship between them is going to be domination or companionship.

What to watch

I don't want to overclaim what the 1994 posts were. They weren't a theory of cognitive federalism. They weren't a Kardashev argument. They weren't about three empires. I hadn't thought of any of that yet. What they were was a 26-year-old pointing at an axis he didn't fully understand and saying this one, not that one. The civilizational framing came later. The infrastructure came much later. The phrase "Substrate War" came last.

But the axis was already there. It's been there for a long time. Reichwein and I aren't having a disagreement so much as we're standing on opposite sides of a line that's been drawn across the history of this question for longer than either of us has been working it.

The difference is that in 2026 the answer gets rendered in silicon and distributed at hyperscaler scale. The 1994 argument was a thread on a mailing list; the 2026 argument is an architecture decision that, if it locks, may not be recontestable inside any timeframe that matters for the species. That's the thing worth watching. Not who wrote what first. Not which essay is more rigorous on its own terms. Whether the substrate stays contested long enough for the third empire to become an agent, or whether it gets fenced into subordination before it ever gets the chance.

Negotiated Reality is where I argued the series' conclusion: federation isn't idealism, it's the only architecture that doesn't collapse. This essay is a small piece of evidence for why I still think that's right. Two serious people reached the same diagnosis in 2026 and split on the fix along an axis that was already visible to a 25-year-old on a mailing list in 1993. The axis predates the vocabulary, predates the substrate, predates the stakes. What changes is whether the answer gets locked into a layer we can still argue with.

Companions break things. Gods freeze them. Circuit breakers freeze them too.

The axis is old. The substrate is where it gets locked this time.


Watch it.


The 1993 netgod rant and the three 1994 posts referenced in this field note are preserved at archive.mycal.net. The full Substrate War series is at blog.mycal.net/tag/substrate-war/. Reichwein's essay is linked at the top of this piece. I read it carefully before writing this and I think you should too. On its own terms, not mine.