AI Customer Service Is Here
In this post, I’ll share a frustrating experience I had when I needed help from my cellular provider. Instead of reaching a human, I found myself stuck communicating with an AI customer service agent. Based on its responses, I’m fairly certain it was a Large Language Model (LLM) trained on company procedures. It was definitely not a human.
If you’re unfamiliar with LLMs—such as Grok or ChatGPT—I’ve written a simple, non-technical guide that explains how they work. I originally wrote it for homeschooling parents I advise on math curriculum, after I noticed some were starting to trust these models' answers—a potentially dangerous mistake.
The short version? Don’t trust LLMs for anything that requires true understanding. They’re great at sounding confident but lack the ability to reason or comprehend nuance. Avoid relying on them for tasks like doing math, writing code, or making decisions. And be skeptical of anyone claiming to "train AI" without access to the kind of massive supercomputing resources only a handful of companies possess. Such claims usually reveal more about what they don’t understand than what they do.
What Was Going Wrong With My iPhone
I rely on my iPhone SE for almost everything. It controls my hearing aids, which are essential to my day-to-day life, and I use the brain.fm app to stay focused, often for over twelve hours a day. Between work, reading, and writing my novel, my iPhone is always in use. Not actively (I'm not staring at it), but it's either controlling my hearing aids or playing brain.fm's concentration audio.
The hearing aid app is a memory hog, but it’s worth it because it gives me a life as close to normal as possible. So when my phone’s battery started to fail, it wasn’t just an inconvenience — it was a major problem.
I logged onto my cellular provider’s website to explore my options. If they had a good deal on a new iPhone, I’d upgrade. If not, I’d head to Best Buy and let the Geek Squad replace the battery. Either way, I needed a solution quickly.
Because of my situation, being without my phone for even a day would cause serious disruptions. On top of checking for deals, I needed to speak to a human to verify shipping times and see if expedited options were available. The stakes were high, and I couldn’t risk miscommunication or delays.
AI Is Now Programmed to Lie
This story is insane, and I’m reconstructing it from memory. I didn’t take screenshots at the time; I had no idea I’d want to write about it until hours later, when I was recounting the experience to my therapist.
To the best of my recollection, I realized pretty quickly that I wasn’t dealing with a human. The responses I received were formulaic and appeared almost instantly, far faster than any human could type.
So, I asked to speak to a human. The AI agreed, and I waited about three minutes. Then “Jessica” appeared. But her welcome message was a huge red flag: it was long, overly polished, and arrived the instant she “joined” the chat.
Highly suspicious.
I decided to test this “Jessica.” I asked, “Are you a human?”
She responded with something like: “I understand how frustrating it is to try to reach a human, but no worries! You’ve reached one now!” The answer was generic, overly empathetic, and, again, instant—much faster than any human could have typed.
That confirmed my suspicion.
So I replied, “Great! Then you should have no trouble telling me what year in school you learned the multiplication tables.”
A real human might have answered, “Third grade,” “Fourth grade,” or even something personal, like, “I didn’t learn them as a separate task. In my school we learned them as we went along.”
But “Jessica” replied with the same canned response: “I understand how frustrating it is to try to reach a human, but no worries—you’ve reached one now!”
At that moment, it hit me: this AI wasn’t just pretending to be human—it had been programmed to lie about it. Someone, somewhere, had decided that deceiving customers about whether they were interacting with an AI was acceptable.
And that is where we are now. AI agents, programmed not just to assist, but to flat-out lie.
Audacity Beyond Words
After exhausting all patience with “Jessica,” I finally got a phone number to call. It took nearly an hour of navigating endless prompts and infuriating loops back to their “automated system” before I figured out a workaround. Pretending to need information about coverage for overseas travel—something I hoped still required a human touch—finally connected me to a live agent.
To her credit, the human agent was polite and acknowledged how terrible my experience had been so far. She even offered me a good deal on a new iPhone, which momentarily felt like progress.
But then came the catch.
To take advantage of the deal, I’d have to change my phone number. Temporarily, she swore. I could change it back as soon as the new phone arrived. Really and truly! She repeatedly assured me that changing my number back was easy and painless.
Her words sent a wave of dread through my entire nervous system. I rely on two-factor authentication for everything—banking, email, even logging into random apps. I get text codes a dozen times a day, if not more. The thought of changing my phone number, even temporarily, was unthinkable.
If I had to choose, I’d probably take a nasty flu over changing my phone number.
And yet, after the audacity of having their AI agents lie to me—pretending to be human—the company somehow expected me to trust them with something as critical as my phone number. The suggestion was so absurd, I thought I must have misunderstood.
I hadn’t.
I politely declined the “deal,” walked into Best Buy, and had the Geek Squad replace my battery.
Problem solved, no thanks to my cellular provider.
Predictions For What Comes Next
I discipline myself to think, not listen to podcasts or music, during most drives. Living in rural Vermont, most drives are long and beautiful, and facilitate thinking. For the past few months, my driving-thoughts have been in fairly intense territory, rooted in my past, which I wrote about recently.
But since this experience, I’ve mostly been thinking about how insane it is that we’re voluntarily using AI for so much.
Companies are replacing humans, and this is going to be another one of those American pendulums, another case of swinging from one extreme to the other.
I do believe the pendulum will swing back. It’s inevitable. An LLM will hallucinate something, a person who doesn’t understand that LLMs aren’t using reason will trust it, and something tragic will happen. It might be as simple as an idiot in the medical profession asking an LLM for a dose and killing a patient. Or it might be even worse.
The resulting lawsuit will cause the pendulum to start to swing back, and if we’re lucky it’ll land somewhere reasonable.
But so many people are doing this now, even people who know better. I used an LLM to edit my written work for a few months. I’d paste in a draft and say something like “please make a list of suggested changes — leave my tone and style alone, focus on grammar or other legit errors, but also suggest changes to tighten it up a bit and improve the flow.” Typically I’d integrate about half the suggestions.
I’m not doing that anymore, and for the same reason that I make myself try to do most arithmetic in my head. I also try to solve problems from either the Daily Epsilon math calendar or this Twitter account several times a week. I love math, so this is usually fun for me, but I’ve committed to doing this no matter how I feel or how busy I am. Even when the darkness is close, which it definitely is at present, I am making myself keep this up.
Why? Because I want my brain to stay in shape.
We have a whole industry, gyms, built around keeping our bodies fit now that we no longer get exercise from physical labor.
How long until there’s real demand for the mental equivalent of a gym, and a market ready to meet it?
Will that be enough to wake us up?
I am shocked at how quickly companies got bamboozled into using AI for customer service like this, or, if they did it knowingly, at how little they seem to care about the impression that outright lying leaves on their customers.
I’ve noticed a very interesting phenomenon with LLMs and writing: they’re great at the initial stages, but once you start editing, rather than getting more precise with each edit cycle, they get worse unless you are extremely explicit about the changes you want. When I was working on a resume, their “suggestions” were made-up facts, and they reduced the precision of the language throughout the entire document.