Pedro Pavón

A response to Matt Shumer's AI Essay

Matt Shumer's viral essay "Something Big Is Happening" has been read over 70 million times. There's no doubt he used AI to write it. It's also sloppy in ways that matter.

Shumer compares the current AI moment to February 2020, the weeks before COVID locked the world down. The analogy sounds powerful, but it falls apart on contact. COVID was fast, biological, and indifferent to human institutions. It didn't need a 5,000-word essay to convince you it was real. AI is a technology mediated by markets, regulation, procurement cycles, liability frameworks, and organizational inertia. Pretending otherwise inflates the stakes.

His evidence is almost entirely personal: he tells AI what to build, walks away, comes back to finished work. A lawyer friend uses it to outperform junior associates. Fine, but a founder who has spent six years inside AI and evaluates these tools for a living is about the least representative user on earth. The real world has legacy systems, compliance requirements, union contracts, and VPs who constantly change expectations. None of that shows up in his essay.

He treats exponential progress as a given. AI got faster in 2025, then faster again, and he projects that forward without engaging with any of the real technical debates: data constraints, diminishing returns on compute, architectural limits. Exponential curves in technology hold until they don't.

He also presents the fact that GPT-5.3 helped build itself as some kind of inflection point toward recursive self-improvement. Software has been used to build software since compilers were invented. The real question is whether AI building AI creates a runaway feedback loop or just large, linear gains. Shumer assumes the dramatic answer without evidence.

Then there's the claim that the latest model displayed "judgment" and "taste." In law, philosophy, and life, judgment means applying principles to particular circumstances in context-sensitive and accountable ways. What Shumer actually means is that the model made design decisions he liked. That's a subjective reaction, not an empirical finding. I teach AI law. The gap between "this felt like judgment to me" and "this system exercises judgment in a legally meaningful sense" is not a quibble.

Finally, the "adapt or die" framing. If we're truly on the edge of an apotheosis moment, then "spend an hour a day experimenting with AI" is laughably inadequate advice. The modesty of his recommendations quietly undercuts the extremity of his predictions, which tells you something about what this essay actually is.

Shumer runs an AI company. His valuation rises with the perception of urgency. That doesn't make him wrong, but it's a conflict readers should weigh.

Use the tools. Pay the $20. Experiment. But the public deserves more than hype dressed as straight talk. If the industry can't deliver something universally useful soon, the backlash will be swift and earned. You can't sustain trust on marketing vibes and AI pic-edits forever.