Everything is Exhausting
AI panic, moral certainty, and opting out of the outrage machine
I keep seeing the same Instagram story this winter, and every time it makes me laugh and then immediately feel a little sick.
A small child looks up at their parent and asks something like, “Mommy, why do we have extra presents this year?” or “Daddy, how come Santa brought so much stuff?”
And the parent — usually a lawyer, or married to one — smiles and says:
“Because a bunch of people used AI as a lawyer, ruined their lives, and had to pay me (or your Daddy) a lot of money to fix it.”
This is very funny.
It is also extremely not a joke.
Because for the last several years, I’ve been making a prediction on this Substack that sounded melodramatic at the time but is aging beautifully: that I will eventually make a very good living unfuckening systems that people enthusiastically fucked up by trusting AI in places where trust was never warranted.
Not by using it badly, exactly. By using it too confidently.
Asking ChatGPT how many units of a product to order for Q3.
Asking Grok to draft a P&L without understanding what assumptions it’s making.
Asking Claude which employee to fire to “optimize performance.”
Letting models generate reports that sound right and then building real, consequential decisions on top of them.
This phase of the experiment — the “sure, let’s just ask the robot” phase — is going to be extraordinarily lucrative for the people who know how to clean up afterward. Probably starting around 2027, once the errors have had enough time to compound quietly and metastasize.
At the moment, though, we’re still stuck in an earlier, stupider phase.
Everyone seems to fall into one of two camps. Either they’re wildly impressed by how fluent these systems sound, or they’re morally shrieking — accusing everyone they don’t like of writing with AI, and performatively slapping “AI was not used in the production of this comment/note/tweet/post/text/email” on everything like an allergy warning.
AI plagiarism checkers think the U.S. Constitution was over 80% AI-generated. These tools are so unreliable that it’s almost comical, but people either don’t know that or don’t care.
Certainty feels better than nuance, and vibes are cheaper than understanding.
Then there are the doomers.
I can’t get any large language model to reliably verify an undergraduate-level number theory proof — something with a fixed structure and an unambiguous notion of correctness — and yet there are people who are convinced these systems are about to achieve omniscience, consciousness, and/or the capacity to kill us all.
The same tool that confidently hallucinates legal citations and invents book titles is, depending on who you ask, either our savior or our extinction event.
This is not a serious position.
It’s an anxiety response with a sci-fi aesthetic.
And it’s exhausting.
Not just the AI discourse — everything.
Everything is fucking exhausting.
E-V-E-R-Y-T-H-I-N-G.
Here’s the thing I’m increasingly unwilling to do anymore: scream about this online.
Not because it isn’t real.
Not because it isn’t dangerous.
But because I can feel, in my own body, what the constant outrage is doing to me.
I am so tired of AI naïveté and AI doomerism. I am so tired of the culture war. I’m tired of having good takes that make me feel worse afterward. I’m tired of watching the right reinvent the left’s worst habits in real time — the same sanctimony, the same moral panic, the same reward structure for being loud instead of careful.
There’s a story out of Oklahoma that’s been making the rounds lately, and it is such a perfect mirror of peak woke bullshit that I honestly want to crawl into a hole rather than discuss it.
A student — who happens to be a Christian conservative — turns in substandard work that fails to address the assignment at all. Gets a zero. Deserves the zero.
The student and her lawyer parent frame this as “discrimination.”
The teacher gets fired.
If this story had circulated two years ago, I would already have written three essays about it. I would have carefully unpacked it, drawn the parallels, traced the ideological rot, and felt extremely justified the entire time.
Now? I mostly feel exhausted.
And a little ashamed that the outrage machine still knows exactly how to get a rise out of me.
I don’t want to spend the next decade becoming the thing I claim to hate.
And that goes double now that “my” side is becoming what we used to mock — at warp speed.
Part of the problem is the emotional weather we’re all living in.
Everything feels darker and more brittle than it did a few years ago, even for people whose lives look fine on paper. There’s a constant background hum of institutional failure, economic anxiety, and technological whiplash.
It reminds me uncomfortably of the early weeks of COVID. I have that same sense that the ground rules are shifting and no one is really in charge.
And floating on top of all of it is anger.
We don’t like to call anger an emotion. We especially don’t like to do so when it shows up in its more traditionally male-coded forms.
Female-coded emotions are just emotions, but anger? Anger gets reframed as realism. Toughness. “Just telling the truth.”
But it’s still an emotion.
And we are absolutely drowning in it. We are marinating in anger: ours, other people’s, both real and imagined, both justified and not.
I am not exempt from this. Not even close.
The problem with living in a permanent anger bath isn’t just that it feels bad. It’s that it makes us dumber. Narrower. More certain and less accurate.
Anger collapses nuance and rewards confidence over care — which makes it an ideal emotional environment for systems that sound authoritative but don’t actually understand what they’re saying.
Which brings us back to AI.
What worries me most about AI isn’t the technology itself. It’s the posture people are being trained to take toward it.
We are being encouraged — implicitly and explicitly — to treat fluent output as authority.
This is why people are so quick to accept answers without verifying them.
Why even smart people who should really know better are fully prepared to outsource judgment to systems that do not know what they’re saying, do not care whether they’re wrong, and do not bear the consequences when they are.
That’s not a sci-fi apocalypse.
That’s an epistemic failure.
And it intersects very badly with a culture already saturated in anger, exhaustion, and learned helplessness.
Because when people don’t trust institutions, don’t trust one another, and increasingly don’t trust themselves, the temptation to hand the wheel to something that sounds confident is enormous.
Especially if you’ve spent your adult life being trained to believe that thinking should be fast, frictionless, and externally validated. Participation trophies, yes. But also the ability to google anything and have an answer in 0.2 seconds, with almost no effort.
I don’t think the central danger of AI is that it will become evil.
I think the danger is that we will become passive.
All That To Say This
Lately, I’ve been feeling a pull in the opposite direction.
Toward things that are slower. More checkable. More grounded.
Things that make it harder — not infinitely easier — to lie to myself about what I understand.
Things that reward effort instead of performance.
Things that improve with discipline instead of resting on identity.
And I’ll be honest: I don’t think commentary gets us there.
Insight doesn’t either. Even good writing doesn’t. Would that it did; I’m a decent writer who is far and away the worst writer in my friend group.
If good writing fixed things, I’d be fixed.
What changes people — what changes me — is practice.
Real practice. Bounded practice. Practice where there’s a difference between “this works” and “this doesn’t,” and no amount of righteous certainty can paper over the gap.
Practice with reality-based feedback.
I’m working on something concrete that grows out of that shift: a new series, with the first several installments mostly done. It’s AI literacy, but it’s more than that.
It’s about rebuilding judgment in a world that quietly undermines it, and about learning how to live alongside powerful tools without surrendering your thinking to them.
As I said, it will include explicit work on AI literacy — not in the breathless, doom-scrolling way most people mean by that phrase.
I have some professional experience here, and enough hands-on familiarity to know what these systems are good for, what they’re bad at, and where people get into trouble.
But this is not about setting myself up as an oracle or an expert to be deferred to. Quite the opposite.
It’s about learning how to check — yourself and the tools you use.
I’m not ready to share the details yet.
So why did I bother to send this?
Because I wanted those of you who share my exhaustion to understand where I’m coming from, since I think I’ve hit on something that will actually help.
I’ll launch it in a couple of days, with the first piece available to everyone. After that it will live behind the paywall, not because I want exclusivity or clout, but because this kind of work takes time, and because I want some evidence that other people are willing to step off the outrage dopamine treadmill with me.
Hope, it turns out, is easier to sustain when it’s shared.
If you’re as tired as I am — tired of being angry, tired of watching anger masquerade as insight — then you’re not alone.
I don’t have a hot take for you today.
I have a practice to offer soon. One focused on less outrage, more agency; less authority, more checking; less noise, more competence.
More on that next time.


