Not that long ago on Substack Notes, someone haughtily informed me that she knew more about LLMs than I do.
Why? She’d been reading about them for almost two whole years.
She proceeded to make several false claims about what they’re “designed to do” and yelled at me when I tried to explain what she had wrong.
She was, if I recall correctly (and I may not), a romance writer.
I’m a data scientist who is literally customizing the training corpus for an LLM at my company — and the author of one of the clearest, most accessible guides to how LLMs work.1
I don’t claim to be an expert. The number of actual experts is minuscule.
But I do know a few things, which is a funny place to be — apparently we’ve reached a point in the culture where only comedians can weigh in on the Middle East, and only romance writers feel qualified to lecture data scientists about artificial intelligence.
Ah, 2025. Where everyone is an expert (except those of us with relevant professional experience) and nobody reads past the headline.
The Latest Cool Kids Club
Lately, I’ve noticed a trend — in both Substack posts and Substack Notes.
People include some kind of disclaimer that their post is entirely free from AI content.
It functions as both a virtue signal and a declaration of disdain — a way of saying “I’m not one of those people,” while giving themselves a pat on the back for their moral purity.
If that’s how you feel about it, I’m not here to talk you out of it.
But I am going to point out a few things you may not have considered — including the ways you’re probably already using AI without realizing it.
And I’m going to try to make the case that rather than cultivating performative disgust, you should be developing a personal ethic around AI.
You should be consciously and purposefully deciding how and when to use it with intention and integrity, rather than just getting high off your own ethical supply.
You’re Already Using AI (Whether You Mean To or Not)
Let’s start with the obvious: if you’ve Googled anything lately, you’re already using AI.
Search results are now packed with summaries and scraped answers written by LLMs — sometimes helpfully, sometimes incoherently, often without clear disclosure.
Even when you're clicking on a "real" article, there's a good chance it was partially written or edited by AI. Many sites (especially the SEO-churn farms masquerading as lifestyle blogs) are pumping out AI content around the clock, optimized for clicks, ad revenue, and ranking.
Significant parts of the internet that appear to be written by humans — Twitter, anyone? — are in fact written by AI.
The internet, in other words, is no longer just tainted with AI — it's soaked in it.
If you’re online much at all, declaring yourself free of AI is as silly as a fish starting up a chapter of Water Haters.
And that’s just the obvious part.
If you use Gmail’s autocomplete, that’s AI.
If you let your phone suggest text replies or correct your spelling, that’s AI.
If you rely on grammar tools like Grammarly or even Word’s built-in editor, that’s AI, too.
Spotify recommendations? AI.
Netflix thumbnails tailored to your watching habits? AI.
The TikTok algorithm that knows you better than your therapist? Absolutely AI.
You are not escaping this. You’re swimming in it.
So the question isn’t whether you use AI — you do.
The question is whether you want to pretend you don’t, or whether you want to engage with it consciously, thoughtfully, and on your own deliberately chosen terms.
Yes, Some People Have Outsourced Their Brains
Yes, I’ve seen it too.
There are people who have effectively turned themselves into meat puppets for ChatGPT. They let it do all their thinking, all their writing, all their problem-solving — and it shows. They’re not engaging with ideas, they’re just outsourcing them.
It’s hollow.
It’s obvious.
Worst of all, it’s boring.
We are, undeniably, in the worst part of the adjustment period.
It’s like when smartphones first hit the mainstream and everyone thought it was fine to hand a toddler an iPad to keep them quiet at dinner. What harm could it possibly do?
Only recently, about twenty years in, are parents fiiiiiiiiiinally starting to reckon with the fallout. They’re waking up to the reality that smartphones are developmental wrecking balls.
They wreck the self-esteem of girls — constant comparison, filtered perfection, the inescapable gaze of the algorithm. And they wreck damn near everything for boys. There are now 18-year-olds who need Viagra, not because of any medical condition, but because six years of unfiltered internet access and algorithmic porn have made real intimacy impossible.2
We are only just beginning to understand what we did. And we will look back on it the way we now look back on other bygone mistakes — like smoking while pregnant, or letting children ride in cars without seatbelts. We’ll shudder. We’ll ask each other, “What the hell were we thinking?”
Because that’s the American pattern: a tech breakthrough hits, we embrace it with manic enthusiasm, and only start asking hard questions after the damage is done. It’s either a miracle or a menace — nothing in between.
My friend and I have a name for this: APBPD, short for American Political Borderline Personality Disorder.
We’re a nation of extremes.
Hope and change? Elect Obama.
Burn it all down? Elect Trump.
Back to brunch? Time for Biden.
We ping-pong between utopian optimism and apocalyptic doom, with very little capacity for moderation, or ambiguity, or slow, careful calibration.
One minute we’re declaring something the savior of civilization, and the next we’re calling for its banishment and blaming it for everything wrong with the world.
Maybe this time, with AI, we could do it differently?
Maybe, instead of the usual moral panic or blind obsession, we could treat this tool like adults.
With nuance.
With boundaries.
With awareness.
Because here’s the thing: there are use cases — good ones, powerful ones — that don’t zombify you. That don’t make you dumber, but smarter.
That help you do more, learn faster, and stretch further.
Not by doing the work for you, but by partnering with you in the doing.
So before I lay out the personal ethic I use to guide my own work with AI, I’m going to walk you through a few of those use cases — the ones that make me better, not worse.
And if you still want to slap a big “NO AI USED” banner on your next Substack post, go for it.
Just maybe take a beat to ask yourself what you’re really rejecting — and who you’re performing that rejection for. Because you’re not actually free from AI.
Nobody is.
Good Use Cases
There are plenty of ways to use AI that don’t rot your brain.
In fact, these use cases make mine work better. They help me learn more, create faster, and stay focused on the parts that actually require me.
The key: it works this way not because I don’t know what I’m doing, but because I do.
And I want to conserve energy for the parts that matter.
Here are a few of the ways I use it.
1. Reviewing things I’ve already learned but need to brush up on.
I have a math degree, but like anyone who isn’t actively teaching or using every single topic daily, I get rusty. When I needed a refresher on Rolle’s Theorem, I asked ChatGPT to walk me through it. That’s a good use case. It’s not doing the thinking for me — it’s just helping me remember what I already know, faster than flipping through a textbook.
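In case it’s been a while for you too, here’s the standard statement (a textbook fact, not something the chatbot produced for me):

$$
\text{If } f \text{ is continuous on } [a,b], \text{ differentiable on } (a,b), \text{ and } f(a)=f(b), \text{ then there exists } c \in (a,b) \text{ with } f'(c)=0.
$$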
2. Visual editing and mockups.
When I take or find a photo I love and want to draw — say, of an owl — but want a richer context, like placing the owl inside a wooden barn with a clear light source, I can use AI to generate a reference image. The owl stays exactly the same. I’m not asking it to make art — I’m using it like a staging assistant, to create the reference photo I want.
3. Learning color theory and testing palettes.
I’ve been expanding my artistic repertoire from pure graphite to selective coloring: isolating one element of an image — like an owl’s eyes — and rendering it in colored pencil while the rest stays graphite.
When I do that, I’ll tell ChatGPT what pencils I have (Luminance, Polychromos, Prismacolor, full or partial sets, etc.), describe the lighting and color I’m going for, and get step-by-step guidance on how to layer and blend them to get the eyes the way I want them.
Which is sometimes exactly like the photo, or sometimes brighter, or dimmer, or moodier, or whatever I’m going for.
I’m still making the choices. I’m still doing the drawing.
But it’s like having a highly trained assistant who knows all 370 pencil hues by heart—and instead of throwing away fifteen or twenty attempts and spending eighteen hours on a drawing, I throw away one or two false starts and spend eight or nine hours. That’s pretty great.
4. Writing boilerplate data vis or data wrangling code.
When I need to write a basic bar graph in Python or reformat a CSV for a data project, I could absolutely do it myself from scratch. But why waste time typing out import statements, axis labels, and repetitive formatting when I already understand what they do? The AI writes the scaffolding, and I build the logic. It’s a time-saver — not a mind-replacer.
To be crystal-fucking-clear: LLMs absolutely suck at writing complex code, generating original approaches, or having good ideas. They’re not going to design an elegant algorithm or debug something subtle.
But they’re fantastic at the mindless stuff.
And when I’m testing out a data vis and the bar graph doesn’t feel right, I can ask it to turn the same data into a line graph or a scatterplot — and it does, in seconds. That might’ve taken me an hour and a half from scratch, especially if I was starting cold.
It’s not that I can’t write the code. It’s that I’d rather spend my energy on what the code is for — and especially on the rest of the project.
AI writes what I call the “monkey stuff,” while I spend my time and cognitive energy on the parts of the code that actually need a human’s insight and ideas.
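To make that concrete, here’s roughly what the “monkey stuff” looks like — a minimal matplotlib sketch with made-up data and labels, not code from an actual project. Note how mechanical it is, and how swapping the bar graph for a line graph or scatterplot is a one-line change:

```python
# Boilerplate chart scaffolding -- the kind of thing I'd rather not type by hand.
# Data and labels are invented for illustration.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
values = [112, 134, 98, 157, 171, 160]

fig, ax = plt.subplots(figsize=(8, 4.5))

ax.bar(months, values, color="steelblue")
# Swapping chart types is trivial:
# ax.plot(months, values, marker="o")   # line graph
# ax.scatter(months, values)            # scatterplot

ax.set_title("Example metric by month")
ax.set_xlabel("Month")
ax.set_ylabel("Value")
for side in ("top", "right"):           # drop the chart-junk spines
    ax.spines[side].set_visible(False)

fig.tight_layout()
plt.show()
```

None of that requires insight; it just requires remembering matplotlib’s argument names. That’s exactly the layer I’m happy to delegate.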
5. Finding new books, articles, or ideas.
I’ve mentioned books I love, and gotten spot-on recommendations for others I might like. Not because AI “understands” my taste in some deep way, but because it’s good at making lateral connections I might not have thought of.
It’s like a librarian who doesn’t sleep and remembers everything I’ve ever said I enjoyed.
6. Recipes, search terms, and ethical sourcing.
I’ve asked for keto recipes. I’ve gotten help brainstorming search terms when I’m trying to find a particular type of image. In fact, because it’s so good at generating search terms, I’ve shifted from Midjourney to Unsplash for photo references — while I appreciate what AI can do, I also care about supporting human-created work where it matters. I can paste in an essay and say, “Give me an Unsplash search term for an image that’ll fit the mood,” and voilà: a human’s artistic work gets spread. I even paid for an Unsplash membership because this is working so well.
7. Managing PTSD triggers.
This one might be the most personally important.
When I’m thinking about watching a movie or reading a book, I sometimes need to know if I’m going to walk straight into a landmine. I have C-PTSD, and depending on what kind of day I’m having, some things might be ok and others might never be ok.
So I explain my limits clearly — I spell out exactly what I can’t handle on a given day — and I ask ChatGPT to tell me if the content crosses that line. I don’t need a scene-by-scene breakdown. I just need a heads-up.
The film Sleepers is a good example. On the surface, it looks like a courtroom drama or a mafia story. But it includes some of the most harrowing depictions of sadistic institutional abuse I’ve ever heard of — exactly the kind of thing I’m likely never going to watch. And while a trailer or brief synopsis might not mention that, ChatGPT will. I can ask it directly, “Is this going to fuck up my sleep for a week?” and it will tell me: yes, it will. This one’s not for you.
In that moment, it’s not an entertainment assistant or a plot summarizer.
It’s a buffer between me and something I don’t want my nervous system to have to survive again.
8. Argument stress-test and steelmanning.
AI is great at poking holes in my reasoning — especially when I ask it to. I’ll write out a take or an idea and then tell it, “Push back. Find flaws. Steelman the opposing argument,” or “Assume a jerk who’s smarter than I am is going to pick apart my argument. Help me strengthen it in advance.”
It helps me get out of my own confirmation bias and anticipate what smart, critical readers might say. Not because it’s infallible, but because it can simulate objections quickly, without the social friction of an actual person disagreeing with me in real time.
Note that this isn’t reasoning; it’s pattern-matching based on debate structures it’s seen before. It doesn’t understand the argument. It just recognizes that arguments of this type often meet with counterpoints of that type.
And that’s fine. That’s useful.
I don’t need it to reason — I just need it to fight like someone who reads the comments.
9. Planning and time management.
When I’m overwhelmed, AI helps me get my head above water. I’ll describe everything I’m juggling, what deadlines are approaching, what I’ve already done, and what’s still haunting me from the to-do list. Then I ask for a plan. A triage map. A suggested schedule. I don’t always follow it exactly, but seeing things sorted by urgency and estimated time helps cut through the fog.
It’s like borrowing someone else’s executive function when mine is fried.
10. Clarity on complex topics.
There are things — lots of them — about which I don’t know what I don’t know.
When my windshield had a fast-spreading crack not long ago, the various choices were confusing AF. Repair vs replace. Recalibrate vs not bothering. The place that said it would take two days vs the place that said it would take two hours — what the fuck was that about?
I didn’t know enough to ask good questions, which is scary when something is potentially very expensive.
But I can describe my situation in plain terms and get back a likely diagnosis, as well as a list of questions to ask — which is extremely helpful.
That’s what good AI use looks like: a map, not a shortcut.
Something that points, nudges, assists — not something that replaces.
And every one of those examples is shaped by a particular ethic — one that I try very hard to follow.
And that’s what I’m going to share with you next.
AI Is Neither A God Nor A Moron
I’m lucky. My company has an AI policy that’s actually sane.
It goes like this: Use whatever you want. But maintain data secrecy, and you’re responsible for the accuracy of the results.
That’s it. That’s the whole policy.
It assumes we’re grown-ups.
That we can be trusted with tools because we’re also trusted with consequences.
And the fact that we are trusted is, in large part, thanks to our CEO — who is easily a top 1% leader. He’s the kind of guy who knows that exceptionally talented people are often also exceptionally screwed up — and hires us anyway.
The kind of guy who inspires actual loyalty. Like, it would take life-changing money to even potentially lure me away.
(And even then I’d probably ask him if I should take it.)
So that company policy — that ethic of trust and accountability — sets the tone for how I use AI at work and beyond.
Most of my professional use centers on ChatGPT’s Deep Research, which is a specialized interaction mode for LLMs that dramatically reduces hallucinations.
It takes forever — often fifteen to twenty minutes for a single question. It takes that long because it’s not bullshitting; it emphasizes grounded reasoning, context retention, and citation-backed output. It’s basically the opposite of vibes-based chat. You give it a specific research goal, feed it relevant documents and context, and it helps you synthesize the answer.
It’s like hiring a speed-reading intern with a photographic memory and no ego.
Even so, I literally never treat it like gospel.
If I use Deep Research for something important, I still verify the sources and double-check the results.
Usually, though, I’m not relying on it to settle debates or make high-stakes decisions. I’m using it as a research helper. A learning companion.
Someone who knows more about color theory than I do.
Someone who’s read more books than I have.
Someone who can help me see the shape of something faster than I could on my own.
That is, I believe, the right mindset.
Because if you start treating it like an all-knowing oracle, you're going to get burned.
It can’t reason. Not even a little.
Its mathematical reasoning is laughably bad — I wouldn’t trust it to check the simplest proof.
And it completely collapses under complexity. Like the time I asked it to find and list all 251 towns in the Vermont 251 Club. Should’ve been straightforward.
But it couldn’t do it; it kept apologizing for missing some, then missed different ones on the next attempt, and the next. I ended up typing most of the list myself, muttering the whole time like an 1800s schoolmarm with a chalkboard and a migraine.
So here’s my actual ethic, in summary:
Never treat it as an all-knowing God.
Never treat it as capable of reasoning.
Always treat it like a really well-read assistant with no original thought but a damn good memory.
Used that way — with eyes open, hands on the wheel, and some pride in your own abilities — AI can be incredibly helpful.
Not because it replaces you, but because it helps you get a little better—and faster—at being you.
That’s not cheating. That’s not hollow.
And it’s not the beginning of the robot apocalypse.
That’s just smart tool use.
The people slapping “NO AI USED” banners on everything they publish might mean well — but they’re missing the point.
The question isn’t whether you use AI. You do. We all do.
The question is whether you’re using it on purpose.
Because the real danger isn’t in letting AI think with you.
It’s in pretending it isn’t here at all — and letting it think for you without realizing it.
So stop performing your purity. Stop wringing your hands.
And start deciding — like an adult — how to use this thing in a way that serves you, not the other way around.
That’s the ethic.
That’s the whole point.
1. I say that not out of hubris but because it’s widely read, at least judging by its views compared to my usual posts.
2. See Chris Rock’s Netflix special, “Tambourine,” for a really powerful discussion of this.