The Fear That Follows Fire
It has been said that the first man to harness fire was probably burned at the stake. Not because it failed, but because it worked. The man who carved the first wheel was crushed beneath it—not by accident, but by design. Innovation, especially the kind that reshapes the world, doesn’t just solve problems. It sparks fear.
That fear isn’t new. It’s the fear that drove the Luddites to smash looms in the name of dignity. The fear that sees every breakthrough as the first step toward a dystopian future. Fear of disruption. Fear of loss. Fear of not being needed anymore.
Today, that fear lives again—in the swelling chorus of warnings about artificial intelligence.
The New Chorus of Panic
And in recent days, that chorus has reached a fever pitch. Many voices have joined the refrain, including some who ought to know better.
Just this week, Dario Amodei, CEO of Anthropic (one of the most prominent players in the chatbot space), warned that his own products could wipe out half of all entry-level white-collar jobs and send unemployment soaring to levels not seen since the Great Depression.
The New Yorker ran a long-form essay by Kyle Chayka describing how Sam Altman, CEO of OpenAI, and Jony Ive, the designer behind the iPhone, are conspiring to force AI into your life. Emphasis on force.
So that you wouldn’t have to, I read (or at least scanned) some of Chayka’s work. He’s a down-the-line progressive who holds all the correct beliefs, donates to the right causes, and somehow remains convinced that all of this is the product of his unique capacity for independent thought. He’s also the kind of guy who would struggle to define the word force.
Amodei is a different case. He sees something real—disruption of entry-level jobs—but assumes no one will respond. Are those jobs at risk? Of course. And the kids who might have taken them will use Anthropic’s tools to make themselves many times more productive—to land even better jobs.
Take journalism. Many entry-level reporters start out rewriting press releases and fact-checking copy. If they’re particularly ambitious, they might call a source. Sometimes they do. Often, they don’t.
AI can now do the first part—rewriting press releases—without human help. Soon it may handle the second. That’s not the end of journalism. It’s the beginning of better journalism. Humans will be freed up to do what only humans can do: interview sources, develop stories, ask the questions no prompt could predict. The rewritten press release gets better. The reporting gets deeper. Everybody wins.
There were others, most easily dismissed, but then came Megan McArdle, a writer and thinker I genuinely admire. On the Dispatch Podcast, she joined the alarmists: warning of robot overlords, invoking the overwrought “paperclip” trope, and indulging in the now-standard handwringing about misalignment, collapse, and a future we are not prepared for and probably couldn’t stop if we were.
History, Tools, and the Misuse of Fear
The Luddites—those who fear the future—are always wrong. History has proved them so, again and again. The printing press, the steam engine, the automobile, the airplane, vaccines, television, the damned internet—each one, at its birth, clearly and definitively marked the end of civilization.
And perhaps the greatest irony is this: once a new technology takes root, these same doomsayers usually end up romanticizing it the moment the next one arrives. In the grip of panic over the latest invention, the refrain becomes: What was so wrong with the way things used to be? Social media is a disaster; why don’t kids just watch television anymore?
It’s all nonsense. And it’s nonsense on principle.
Technology is a tool. Tools leverage resources—most often human capital. But they also consume resources. A hammer makes a carpenter more productive, raising the value of his time—contributing to his success, and our own.
But hammers don’t hand themselves out, and they don’t swing on their own. If a tool doesn’t work—if it adds no value—it disappears. No one buys it. No one uses it. It fades away.
What Intelligence Really Is
But, the refrain goes, artificial intelligence is a special case. This time, we mean it. Week after next, the robots will be judging you—and you’re going to fall short.
They’re right, in a way. AI is a special case—along a number of dimensions. The promise is extraordinary. For the first time, humans have access to genuine digital assistants with almost no friction. I expect truly mind-blowing gains in productivity.
But here’s the real reason it’s a special case: the name artificial intelligence is completely wrong. And it’s misleading—conjuring something dark, mysterious, and deeply frightening.
AI today is none of those things. It’s not artificial—its output is trained on, built from, and reflective of human expression. And it’s not intelligent—not in any meaningful sense of the word.
Claims to the contrary are either ignorant or desperate bids for attention and clicks. Or both.
To understand why, we have to start with a simpler question: What is intelligence?
Let’s begin with what it’s not. Intelligence is not access to data. It’s not the ability to summarize data, or even to detect patterns within it. Those are powerful tools—and today’s AI systems are remarkably good at all three. But those abilities alone are not intelligence. Not as we’ve ever understood the term.
Intelligence is, at its core, a creative process. It is the uniquely human capacity to generate new knowledge—ideas that did not previously exist. Machines cannot do this. They can summarize, collate, analyze, and extrapolate. But true intelligence requires more than rearranging inputs. It demands an individual human mind actively engaging with a problem and willing a solution into being. It is not passive. It is not automatic. It is an act of will. And that’s the dividing line: intelligence requires free will.
Now, I’m not someone who claims that machines will never possess volition. Maybe one day they will. But they don’t have it now—and we shouldn’t pretend they do. Especially since most of the scientific establishment denies that anyone has it. In their view, free will is an illusion—an outdated superstition with no place in modern neuroscience.
To my mind, free will is self-evident. At any given moment, I can choose, usually from a delimited array of options, and those options mostly reduce to a single binary: to think, or to avoid the effort.
And here’s the irony: if science has convinced itself that even humans don’t have free will, then it certainly can’t claim that machines do. And even if it could—how would it convince me?
You know, since everything is determined in advance?
Why Intelligence Leads to Cooperation, Not Conquest
But what if they were? What if machines were intelligent? What if they became self-aware, capable of learning, even improving themselves? Wouldn’t that be terrifying?
Not necessarily. In fact, they’d likely be less of a threat than they are now.
Because machines can be dangerous. Tens of thousands of people die every year in and around automobiles. And while the death toll from spreadsheets is probably lower, we know that tools in the wrong hands—human hands—can cause real harm. But here’s what we also know: these tools tend to become safer over time.
Cars now have seatbelts, airbags, and crumple zones. We’ve learned. As we’ve gathered more data, we’ve applied it creatively, intelligently, to reduce risk.
Humans, too, are dangerous. Negligence, crime, war—especially war—inflict terrible suffering. And yet, even these evils have diminished over time, in fits and starts.
Why? Because we’ve applied intelligence—real, volitional, human intelligence—to the problem of living. We’ve built institutions, norms, technologies, and moral frameworks. We’ve learned, adapted, and (slowly) improved. That’s what intelligence does.
Maybe there is some future conflict brewing between man and machine. Maybe not a war, but a misalignment. A divergence. Let’s imagine it anyway: a machine becomes self-aware, volitional, capable of reason, creativity—real intelligence. A robot overlord, in Megan McArdle’s construction. Let’s assume it sees the worst in us. Violence, cruelty, waste, corruption. Maybe it concludes that humanity is bad. Unworthy. A threat.
But if that machine is truly intelligent, it would also understand this: destruction is inefficient. War consumes value faster than any corrupt institution ever could. And the human mind, however flawed, is still the only known source of moral imagination, of conceptual thought, of insight and meaning. And value creation. To wipe that out would not be a rational act. It would be a failure of intelligence.
And, of course, the humans would fight back—at enormous cost, to both sides.
A truly intelligent machine wouldn’t seek conquest. It would seek cooperation. Because that’s where value lives. And whatever damage humans are doing—through ignorance, neglect, or malice—can be corrected. Not with force. But with better ideas. Through intelligence.
Me and My Muse
I use AI chatbots almost every day. As I write this column, I have a window open with one running in the background. It’s my muse, my fact-checker, my real-time editor, and sometimes a ghostwriter of sorts. It is a tool that enhances my own creativity and makes me more productive.
Is this valuable tool, a thing I’ve come to rely upon in much the same way a carpenter relies upon his hammer, going to throw twenty percent of the population into unemployment? Force people to make choices against their will? Guide the hand of the coming robot overlords as they turn us all into paperclips?
Or will it, at the margins, make humans better? Make our jobs easier and make us better at them? Will it increase, a little bit, our leisure time? And help us come up with better ways to spend it?
Perhaps, help us start a blog—and patiently edit some of the ideas generated by our all-too-human intelligence?