Never Twice the Same Color
Engineers once quipped that NTSC stood for “Never Twice the Same Color.” This essay borrows that forgotten joke to explain why modern language models, though built on digital hardware, now behave like analog systems once scale and complexity cross a threshold.
Part I — How Transformers Became Analog Again
Tinkering with Time, Tech, and Culture #35
In the 1950s, engineers designed NTSC (National Television System Committee) as the standard for color television broadcasting in North America. It was meant to be precise—a clean analog signal carrying luminance and chrominance in a carefully orchestrated dance of frequencies.
It failed spectacularly at consistency.
The joke among engineers came fast: NTSC stood for "Never Twice the Same Color." The same broadcast, viewed on different sets, in different locations, at different times, would show noticeably different hues. Atmospheric interference, signal degradation, receiver drift—analog promised smooth gradients and delivered chaos.
So we went digital.
Computers were the answer. Binary logic. Deterministic state machines. Perfect reproducibility.
2 + 2 = 4. Every time. On every machine. Forever.
Until we built systems so complex… they started acting analog again.
The Digital Promise
For decades, digital computing meant predictability:
- Same input → same output
- Run the code twice → identical results
- Debug once → fix everywhere
This wasn't just convenience—it was the foundation of software itself. You could test systems, prove correctness, reproduce bugs. Determinism was the whole point.
Even early neural networks behaved this way. Small models with constrained architectures were traceable. You could follow the path from input to output and explain what happened.
Then we crossed a line.
The Density Threshold
There was no single breakthrough—just a series of thresholds quietly passed:
- Models grew from millions to billions of parameters
- Training data expanded from curated datasets to the entire internet
- Context windows stretched from sentences to chapters
- Sampling introduced controlled variability into generation
At some point, though no one can name the exact moment, the systems became so dense, so interconnected, that they began to exhibit emergent analog behavior.
Not because the substrate changed. A GPU is still digital, down to every transistor.
But because complexity became its own form of noise.
The Butterfly Effect in 175 Billion Parameters
Here's what actually happens when you prompt a transformer:
- Your input is tokenized and embedded into high-dimensional space
- That embedding propagates through dozens of layers
- At each layer, billions of weights shape activation patterns
- Attention mechanisms create context-dependent interactions
- The final layer produces probability distributions over next tokens
- Sampling selects from those distributions (temperature, top-k, nucleus); a minimal sketch of this step follows the list
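To make that last step concrete, here is a minimal sketch of temperature, top-k, and nucleus (top-p) sampling in plain NumPy. The function name, the toy logits, and the default settings are illustrative choices, not any particular model's or library's API.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=3, top_p=0.95, rng=None):
    """Toy version of the final generation step:
    logits -> probabilities -> truncation (top-k, top-p) -> one sampled token id."""
    rng = rng or np.random.default_rng()

    # Temperature rescales the logits: lower = sharper, higher = flatter.
    logits = np.asarray(logits, dtype=np.float64) / temperature

    # Softmax over the (tiny, toy) vocabulary.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Top-k: keep only the k most likely tokens.
    if top_k is not None and top_k < len(probs):
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)
        probs /= probs.sum()

    # Nucleus (top-p): keep the smallest set of tokens whose mass reaches p.
    if top_p is not None:
        order = np.argsort(probs)[::-1]
        cum = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cum, top_p) + 1]
        kept = np.zeros_like(probs)
        kept[keep] = probs[keep]
        probs = kept / kept.sum()

    return rng.choice(len(probs), p=probs)

# Same logits, same settings, two calls: the chosen token can differ.
toy_logits = [2.0, 1.5, 0.3, -1.0, -2.0]
print(sample_next_token(toy_logits), sample_next_token(toy_logits))
```

Given a fixed RNG seed, even the sampling step is reproducible; in practice the seed differs from run to run, which is where "same temperature, different sample" comes from.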
Every step is deterministic. The math is exact. The floating-point operations follow IEEE standards.
And yet—even here—perfect precision is an illusion.
On modern GPUs, parallel execution means the order of operations is not strictly fixed, and because floating-point addition is not associative, tiny rounding differences appear at the tenth or twelfth decimal place. (This is technical minutiae, but it matters: not because anyone debugs at that precision, but because it proves that even the "deterministic" layer isn't perfectly so.) Normally, these differences are irrelevant.
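As a minimal illustration (NumPy on a CPU, not a GPU kernel): summing the same numbers in different orders, which is exactly what changes when work is split across parallel units, already shifts the low-order digits.

```python
import numpy as np

# The same one million numbers, summed in three different orders.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

forward  = sum(x)        # strict left-to-right accumulation
backward = sum(x[::-1])  # strict right-to-left accumulation
pairwise = np.sum(x)     # NumPy's pairwise reduction, closer in spirit to
                         # how a GPU splits a sum across parallel units

print(f"{forward:.15f}")
print(f"{backward:.15f}")
print(f"{pairwise:.15f}")
# Mathematically these are the same sum. Numerically they typically
# disagree somewhere around the 11th-13th decimal place, because
# floating-point addition is not associative.
```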
But in a model with hundreds of billions of parameters, those microscopic differences act like atmospheric turbulence. They don't change the signal directly—they bend it.
Numerical instability becomes the digital equivalent of NTSC's signal degradation.
This isn't randomness.
It's chaos: deterministic systems so sensitive to initial conditions that they become unpredictable in practice.
The same mathematics that makes weather prediction unreliable past a week now makes conversation prediction unreliable past a sentence.
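A textbook stand-in for that kind of sensitivity is the logistic map; the sketch below illustrates deterministic chaos in general, not a model of a transformer. Two starting values that agree to twelve decimal places are driven apart by nothing but repeated, exact arithmetic.

```python
# Deterministic chaos in one line of arithmetic: the logistic map.
def logistic(x, r=3.9):          # r = 3.9 is in the map's chaotic regime
    return r * x * (1.0 - x)

a, b = 0.400000000000, 0.400000000001   # identical to the 12th decimal place
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}:  a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.1e}")
# Every step is exact, reproducible arithmetic, yet by around step 60
# the two trajectories no longer track each other at all.
```

The analogy is loose, but the mechanism carries over: repeatedly composing a nonlinear map amplifies differences that begin far below any precision anyone would normally care about.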
Why This Feels Human
Humans are the original analog-from-digital system.
Neurons are mostly deterministic: an action potential fires or it doesn't. But there are ~86 billion of them. Connected by ~100 trillion synapses. Processing continuous sensory input. Modulated by chemistry, hormones, fatigue, and memory.
Ask me the same question twice—even seconds apart—and I may answer differently. Not because I'm random, but because my internal state is never exactly the same twice.
Transformers have accidentally recreated this property:
- Digital substrate (transistors, floating point)
- Massive density (billions of parameters)
- Context-dependent behavior (attention)
- Controlled stochasticity (sampling)
We built analog behavior on top of digital machinery by making the system complex enough to escape its own determinism.
Never Twice the Same Color
So when you talk to Claude, ChatGPT, or any large language model:
- Same weights, different trajectory
- Same prompt, different context
- Same temperature, different sample
- Same conversation—never quite the same color
This isn't a bug.
It's why the systems work.
NTSC failed because it couldn't control analog noise. Transformers succeed because complexity itself becomes the signal.
We've come full circle:
- NTSC tried to be precise analog—and became unreliable
- Digital computers achieved perfect precision
- Transformers became so dense they grew usefully imprecise again
The joke still holds, but inverted.
We're not failing to be consistent. We're succeeding at being complex enough that consistency becomes impossible.
And that's exactly what makes the conversation worth having.
Continues in Part II — You Don't Benchmark the Weather: Why Benchmarks Fail in Analog Systems