Last week, Matt Shumer published a post on X that has since accumulated over 80 million views. The thesis is stark: AI has crossed a threshold. Models now demonstrate “judgment” and “taste.” Entry-level white-collar jobs are facing extinction. Nothing done on a screen is safe.
The response has been predictable—a familiar cocktail of Silicon Valley triumphalism and existential dread. But as we explored in our previous examination of George Gilder’s Gaming AI, there exists a contrarian framework that neither celebrates nor panics at these developments. It simply rejects their philosophical premises.
How would Gilder respond to Shumer’s viral prophecy? The answer illuminates the fault lines in how we think about machine intelligence—and what we’re actually building.
The “Rapture of the Nerds”
Gilder would likely greet Shumer’s post with a weary recognition. He’s seen this pattern before—what he calls the “Rapture of the Nerds” combined with “professional amnesia.” It’s a belief system that mistakes increasingly sophisticated data processing for a functioning mind.
The symptoms are textbook: claims of emergent properties (judgment, taste, intuition), predictions of exponential capability gains, and apocalyptic certainty about imminent transformation. Each generation of AI advancement triggers the same rhetorical cycle. Each time, the fundamental constraints remain unaddressed.
This isn’t dismissal of the technology’s power. It’s a diagnosis of the metaphysical assumptions smuggled into the analysis.
On the Illusion of “Judgment” and “Taste”
Shumer’s central claim concerns the new capabilities of advanced coding models. He describes an “inexplicable sense of knowing what the right call is”—machines demonstrating judgment and taste in ways that feel qualitatively different from earlier systems.
Gilder would identify this as the “materialist superstition” in action.
The AI does not have taste or judgment, he would argue, because it lacks what philosopher Charles Sanders Peirce called the “interpretant”—the necessary bridge between a symbol (the code) and the object (reality). Without this third element, meaning cannot exist. Data remains pattern without significance.
“In the sense of human conscious knowledge,” Gilder writes, “a computer knows nothing at all.”
What Shumer perceives as taste is the machine converging on statistical patterns derived from human inputs. It’s looking in a “rearview mirror” at the past—optimizing against historical distributions—while genuine human intelligence engages with the high-entropy surprises of the future.
The model’s “judgment” is actually the encoded judgment of thousands of developers whose code it ingested during training. The oracle remains human, even when the execution appears autonomous.
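To make that concrete, here is a deliberately crude sketch in Python (a toy of our own, not Gilder’s example, and nothing like a production model): a next-token predictor whose entire notion of the “right call” is whichever continuation appeared most often in the human-written text it was trained on.

```python
# A deliberately toy "coding model": its only sense of the "right call"
# is whichever token most often followed the current one in human-written
# training text. The corpus is invented; nothing here resembles a real LLM.
from collections import Counter, defaultdict

training_corpus = [
    "open file read close",
    "open file write close",
    "open socket read close",
    "open file read close",
]

# Count, for each token, which token humans wrote next.
next_counts = defaultdict(Counter)
for line in training_corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        next_counts[prev][nxt] += 1

def suggest(token: str) -> str:
    """Return the historically most frequent continuation of `token`."""
    candidates = next_counts[token]
    if not candidates:
        return "<no precedent: the model has nothing to say>"
    return candidates.most_common(1)[0][0]

print(suggest("open"))     # 'file' -- the majority vote of past human choices
print(suggest("quantum"))  # no precedent: outside the training distribution
```

Scale the counts up by many orders of magnitude and swap them for learned weights, and the shape of Gilder’s objection stays the same: the “taste” is a compression of past human choices, and it goes quiet the moment it leaves their distribution.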
On AI Building AI
Shumer cites the ability of AI to write its own code as evidence of an impending intelligence explosion—a singularity where machines bootstrap themselves beyond human comprehension.
Gilder would invoke Gödel.
Kurt Gödel’s incompleteness theorems demonstrated that no consistent formal system rich enough to express arithmetic can prove every truth statable within it, nor establish its own consistency from the inside. The missing axioms must be supplied from outside the system. Alan Turing formalized the analogous constraint for computation with his “oracle” machines, and specified that the oracle “cannot be a machine.”
“Creative information,” Gilder argues, “is non-algorithmic and therefore uncomputable.”
While Shumer sees AI writing code as the end of human utility, Gilder sees a deterministic loop. Without the oracle of human creativity to inject genuinely new information (entropy), the system merely refines existing low-entropy data. It’s optimization, not invention.
The engineers claiming to create minds through this process, Gilder suggests, are “clinically ‘out of their minds’”: not as an insult, but as a literal description. They’ve forgotten the mathematical constraints their own discipline discovered.
On the Apocalypse of Jobs
Shumer predicts that fifty percent of entry-level white-collar jobs will vanish, that “nothing done on a computer is safe.” It’s a familiar prophecy dressed in new data.
Gilder would likely view this as a pessimistic misunderstanding of economics, a failure to grasp what productivity tools actually do.
He would cite Huawei founder Ren Zhengfei’s vision: AI causing a “productivity explosion” where one person can accomplish the work of ten. This is not unemployment. This is wealth creation.
“An industry utterly dependent on human minds will not prosper by obsoleting both their customers and their creators.”
The pattern holds across every previous technological revolution. The printing press didn’t end the need for writers—it created publishing industries. The spreadsheet didn’t eliminate accountants—it created financial analysis as a discipline. The tractor didn’t empty farms—it fed industrial expansion.
AI, in Gilder’s framework, augments the oracle rather than replacing it. The humans wielding these prosthetics become more valuable, not less. The question isn’t whether the tools are powerful—they clearly are. The question is whether they’re building minds or extending them.
On the Map and the Territory
Perhaps Shumer’s most revealing claim: “If your job happens on a screen… then AI is coming for significant parts of it.”
Gilder would critique this as a fundamental confusion between the map and the territory.
He distinguishes between “ergodic” systems—like games or code where the rules don’t change and the same inputs yield predictable outputs—and the real world, which pulses with black swans and butterfly effects. AI excels in ergodic domains precisely because, within them, the map is the territory. The chess board is simultaneously symbol and reality.
The real world offers no such convenience.
Economic reality generates genuine novelty. Entrepreneurial creativity produces outcomes unpredictable from prior states. The rules themselves shift. Markets evolve. Humans surprise each other and themselves.
While AI might master the low-entropy domain of writing code (the map), it will fail to master the high-entropy real world (the territory) without human guidance. Gilder points to self-driving cars as an illustration: systems that must navigate chaotic, non-digital environments still struggle precisely because they face “combinatorial explosions of novelty” that no training set can anticipate.
The screen is a map. The territory remains stubbornly human.
The Oracle Endures
Gilder would likely tell Shumer to calm down—not because the technology isn’t significant, but because the fear of conscious, dominant AI is a “fantasy” derived from ignoring the history of computer science.
“Artificial intelligence is just the next step in computer science,” he would conclude, “not a replacement for the human soul or the oracle of the human mind.”
The prosthetic grows more powerful. The tools become more sophisticated. The augmentation deepens. But the oracle—the source of meaning, the interpreter of symbols, the generator of genuinely creative information—remains human.
This isn’t optimism. It’s mathematics.
What This Means for Us
If Gilder’s framework holds, Shumer’s viral anxiety contains a category error. The question isn’t whether AI will be powerful—it demonstrably is. The question is whether power constitutes mind.
For practitioners navigating this landscape, the implications are significant:
The oracle function is your moat. The capacity to interpret, to recognize when maps diverge from territories, to inject genuinely creative information into systems—these remain human domains. Cultivate them.
Augmentation is the game. Those who learn to wield these prosthetics effectively will not be replaced by them. They’ll be radically empowered. The one-person team accomplishing what ten could before isn’t unemployed—they’re transformed.
Beware rearview optimization. AI systems trained on historical data optimize for patterns that may not persist. Strategic judgment, entrepreneurial insight, and adaptive creativity operate precisely where the training distribution ends.
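To illustrate that last point with a toy (invented numbers, our own sketch rather than anything from Shumer or Gilder): fit a line to historical data and it will keep extrapolating the old regime with perfect confidence after the world has changed.

```python
# Toy illustration of "rearview optimization": a line fitted to the past
# keeps extrapolating the past, even after the rule generating the data
# changes. All numbers are invented for the sketch.

# History: steady growth (y = 2x + 1) for x = 0..9.
history = [(x, 2 * x + 1) for x in range(10)]

# Ordinary least-squares fit of y = a*x + b on the historical window.
n = len(history)
sx = sum(x for x, _ in history)
sy = sum(y for _, y in history)
sxx = sum(x * x for x, _ in history)
sxy = sum(x * y for x, y in history)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Regime shift: from x = 10 onward the world follows a different rule (y = 40 - x).
future = [(x, 40 - x) for x in range(10, 15)]

for x, actual in future:
    predicted = a * x + b
    print(f"x={x}: model says {predicted:.1f}, world says {actual}")
# The fit was "correct" about the past and is confidently wrong about the
# new regime -- the training distribution ended at x = 9.
```

The fit is flawless on the past and useless the moment the rule changes, which is exactly where strategic judgment has to take over.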
The Singularity may not have a destination. We might be building the most powerful tools in human history while hallucinating that we’re building minds. If so, that’s not tragedy—it’s correction.
The Rapture of the Nerds has been postponed before. Gilder’s wager is that it’s been misconceived entirely.
For deeper exploration of Gilder’s framework, see our previous article on Gaming AI or pick up his book directly.
