Author’s Note:
This is Part II of a series on what social media is doing to us. Part I: Shards of the Self explores the psychological fragmentation that social media causes, reinforces, and exploits. The paywall on Part I was set by accident — I originally intended to paywall the whole series, then changed my mind, then forgot to remove it before publishing. I apologize. It is now removed. The broader framework — the grand theory whose parts these essays examine — can be found in Unified Theory of Networked Narcissism.
Part II: Algorithmic Reinforcement and the Incentive To Perform
I haven’t tweeted in almost three years, but I’ve kept my Twitter account. There are a few people with whom I only communicate via Twitter DMs, but that’s not why — I could easily give them my number or my email.
It’s because I’m a lone ranger. No family, no backup, no fallback plan beyond a couple of months’ worth of expenses in the bank. If I suddenly found myself without income, I could manipulate the Twitter payout system — and I know it. I know exactly what I’d need to do to rack up engagement, enough to turn it into a near-effortless income stream.
It would shred my soul.
But I wouldn’t be homeless.
How do I know this? Because I used to be on Twitter, and I elicited engagement pretty successfully. But more than that: I’ve studied how it works. I’ve watched how differently the platform behaves for different people. I’ve compared notes. I’ve noticed patterns. I’ve thought hard about what those patterns suggest.
And I’ve got a few working hypotheses about how to get into several algorithmic branches at once — yes, I’ll explain what I mean by that in a moment.
But first, let’s take a step back.
Because none of this is about me, not really.
It’s about how social media systems — Twitter especially — don’t just show us what people are like. They train us to behave in ways that maximize our visibility. They reward the worst impulses, distort our sense of what’s normal, and feed us curated outrage until it starts to feel like reality.
Social media does not simply reflect who we are.
It trains us.
And it doesn’t train us to be better — more curious, more merciful, more human. It trains us to perform.
Not to think clearly, but to speak quickly. Not to connect, but to posture. Not to reflect, but to declare.
And above all: it trains us to optimize for reach. For engagement. For reward.
In Part I, I argued that social media fragments us — not just psychologically, but relationally, morally, even spiritually. We become disjointed, not just in how we see ourselves but in how we experience others.
This next installment is about how that fragmentation gets reinforced. Systematized. Monetized.
It's about the role of the algorithm.
Because the algorithm doesn’t just watch what you do. It shapes what you do. It suggests, predicts, rewards, and curates in a way that distorts incentives so profoundly that it’s easy to forget you ever had different ones.
If you want to understand why so many people online sound the same — why they perform the same five emotional registers, why their rage feels scripted and their righteousness performative — this is why.
If you want to understand why online activism often looks like cosplay and why nuance is not just rare but punished — this is why.
The algorithm doesn’t just reflect our engagement. It rewires it. It rewards the fragments, punishes the whole, and teaches us to mistake virality for virtue.
Let’s look closer.
What Is An Algorithm?
Before we go on, let’s talk about what an algorithm actually is.
Put simply, an algorithm is a set of instructions — a recipe for achieving a specific result.
For example, here’s a basic algorithm for checking whether a number is prime:
If the number is less than 2, it’s not prime.
If the number is 2 or 3, it is prime.
Otherwise, check for divisibility from 2 up to the square root of the number, one number at a time.
If any divisor is found, it’s not prime. If none are found, it is.
That’s it. Clear inputs, clear rules, clear outcome.
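If you’d rather see it as code, here is that same recipe written out in Python, a direct translation of the steps above and nothing more:

```python
import math

def is_prime(n: int) -> bool:
    """Prime check, following the recipe above."""
    if n < 2:             # Step 1: less than 2, so not prime
        return False
    if n in (2, 3):       # Step 2: 2 and 3 are prime
        return True
    # Step 3: try every divisor from 2 up to the square root of n.
    for divisor in range(2, math.isqrt(n) + 1):
        if n % divisor == 0:
            return False  # Step 4: found a divisor, so not prime
    return True           # Step 4: no divisor found, so prime

print(is_prime(97))   # True
print(is_prime(100))  # False
```

Same clarity in either form: you can read it, test it, and predict exactly what it will do.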
What makes social media algorithms tricky is that their goals are hidden, proprietary, and constantly evolving — and their “inputs” are not just data points. They’re us. Our behavior, our preferences, our vulnerabilities.
And unlike the prime-checking algorithm, social media’s output isn’t a clean yes or no.
It’s a feed.
A feed designed to shape what we see — and by extension, how we think.
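Nobody outside these companies knows what the real ranking code looks like, so let me be clear that what follows is a toy sketch of my own invention: made-up signals, made-up weights, a deliberately cartoonish model. But it shows the shape of the thing. The output is not a verdict; it is an ordering, tuned per user:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage: float      # how much anger this post tends to provoke (0 to 1)
    affirmation: float  # how much tribal validation it delivers (0 to 1)

def predicted_engagement(post: Post, rage_bias: float) -> float:
    # Invented scoring: weight each post by how hard it is predicted
    # to hit *this* user's particular buttons.
    return rage_bias * post.outrage + (1 - rage_bias) * post.affirmation

def build_feed(posts: list[Post], rage_bias: float) -> list[Post]:
    # The output is not a yes or no; it is an ordering of reality.
    return sorted(posts, key=lambda p: predicted_engagement(p, rage_bias),
                  reverse=True)

posts = [
    Post("calm, nuanced thread",      outrage=0.1, affirmation=0.2),
    Post("your enemies are monsters", outrage=0.9, affirmation=0.7),
    Post("your side is heroic",       outrage=0.3, affirmation=0.9),
]

# Same posts, two users, two opposite "realities":
print([p.text for p in build_feed(posts, rage_bias=0.9)])
print([p.text for p in build_feed(posts, rage_bias=0.1)])
```

Run it and the rage-primed user’s feed leads with the monsters, while the tribe-primed user’s feed leads with the heroes. Same inputs, opposite worlds. Keep that toy in mind for what follows.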
How This Applies to Twitter
Now, I want to be clear: I don’t know exactly how the Twitter algorithm works. No outsider does. But I have working theories:
I suspect there are multiple branches — clusters of algorithmic goals that overlap, intersect, and sometimes contradict. It’s not just one feed; it’s a set of curated psychological corridors, customized to extract the most attention from you.
Let me give you an example.
A friend of mine has seen a lot of anti–Shiloh Hendrix content on Twitter: vicious, vile, unhinged commentary. Posts advocating violence against her. Posts comparing her to war criminals. Judging solely by what he sees, he would be justified in concluding that she’s in constant mortal danger — that a significant portion of Americans, especially black Americans, are so triggered by a white person saying a forbidden word that they view her death as morally justified. That they see her as the moral equivalent of an unrepentant pedophile rapist.
It’s horrifying.
But I haven’t seen any of that. Not once. Not one single tweet to this effect, or even remotely close to it, has appeared on my feed organically.
In total, I’ve seen two such statements — and both were screenshots.
What I have seen — and seen in uncountable numbers — is the opposite.
I’ve seen endless, unhinged defenses of her. I’ve seen tweet after tweet glorifying her “bravery” for repeating a slur, many of them drenched in a level of patriotic fervor usually reserved for fallen soldiers. I’ve seen people who simply object to publicly berating a child — regardless of the word — accused of supporting pedophilia, of wanting white women to be raped, of being Zionist shills, and more than anything else, over and over and over again, of being Jewish.
If I judged by my feed, I would conclude that America has a racism problem far worse than I was ever taught — that the Robin DiAngelos of the world underestimated it.
So how can two people on the same platform, looking at the same topic, experience two completely opposite versions of reality?
The answer, I think, lies in what the algorithm wants from us.
My friend’s feed appears designed to keep him furious. Locked in. Fighting. Addicted. A constant IV drip of justified rage.
Mine seems to be doing something different — maybe trying to train me into tribal allegiance. To push me toward becoming a defender of the Shiloh side, not because I chose it, but because everyone I’m shown is deranged enough to make me feel like I have to.
My feed doesn’t want me angry — not yet. It wants me aligned.
Polarization first. Outrage second.
Two different goals. Same effect: engagement.
Same reward loop. Same addiction.
Different bait.
That’s the genius — and the poison — of algorithmic reinforcement. It adapts itself to the fragment it sees. It figures out what will keep you scrolling. What will keep you performing. What will keep you from logging off.
It doesn’t care what you believe — only what will get you to click.
And if it has to make you feel righteous, or scared, or endlessly misunderstood to do that?
It will.
The Brilliance of Horseshoe Theory
If you’re not familiar with horseshoe theory, it’s the idea that the political spectrum doesn’t exist as a straight line, with the far left and far right at opposite ends. Instead, it's more like a horseshoe — where the extremes curve toward each other and start to resemble one another more than they resemble their own respective moderates.
In other words: the far left and the far right often end up looking eerily similar — not necessarily in what they believe, though it often works out that way, but in how they believe it. The style. The affect. The posture.
The moral absolutism, the apocalyptic certainty, the contempt for nuance, the theatrical disgust.
And this is most of the reason social media algorithms work so well.
The machinery doesn’t need to know your ideology. It only needs to detect your emotional rhythm: how fast you react, how often you share, how long you hover. And nothing hits the dopamine lever harder than someone on the far left or the far right spewing bile at cartoon villains.
To the algorithm, the outrage is the point. The content is irrelevant.
That’s how you end up with a situation I encounter a dozen times a day: having to click on a profile to tell whether the tweet I’m reading comes from the Woke Left or its mirror image.
“Israel is a genocidal parasite” —
because the Jooooooos, despite being inferior, neurotic, and weak, somehow control Hollywood, global finance, and the U.S. government…
…or because they’re militarized blood-lusting monsters who live only to shoot children in Gaza and dance around the corpses?
You can’t tell if it’s a white nationalist or a grad student in postcolonial studies — not until you click on the profile. And sometimes not even then.
Or this:
The Constitution is evil —
because it enshrines white supremacy, protects property over people, and was written by slaveholders…
…or because it enshrines religious freedom, protects pornographers, and fails to declare Jesus King of America?
Again, same sentence. Same venom. Just add different footnotes depending on which cult you belong to.
Or this one:
The biggest problem with the world is white women —
because we are permitted to vote, to work, to speak without a male head of household…
…or because we’re all so stupid and emotionally driven that we vote however our husbands tell us to?
Are we diabolically powerful or childishly helpless?
Yes.
Does the ideology matter?
Not to the algorithm.
To the algorithm, the only thing that matters is that you will react.
And because American discourse has become so polarized and so radically flattened — so flooded with absolutes and conspiratorial certainty — sorting people into emotional channels is now insultingly easy.
This is the quiet brilliance behind outrage-fomenting algorithms: they don’t need to know your beliefs.
Think about that — we’re all being manipulated by our predictable emotional states, and it has fuck-all to do with our principles, no matter how deeply held or how much we think they matter.
The algorithms just need to know what upsets you, what confirms your disgust, what makes you feel triumphant and sneering and vindicated.
They don’t have to sort by ideology. They sort by amygdala.
And once you’ve been sorted, the machine can feed you a steady diet of content that matches the emotional signature you’ve shown a taste for. Fury. Disdain. Panic. Glee. The smug righteousness of knowing that the “other side” is worse than you imagined — and that your contempt is not only justified, but moral.
It’s no longer about ideas. It’s about affect.
That’s what makes the Unified Theory of Networked Narcissism tick.
Because once your identity is emotionally tied to your reactions — once you’re defined by your performative disgust or your dopamine-driven moral clarity — the line between belief and performance disappears.
Your moral judgment gets hijacked by your emotional regulation system, which has been hijacked by the feed.
You’re no longer weighing harm, risk, or fairness. You’re just reacting — with whatever mode gets rewarded most consistently by the machine.
The algorithm doesn’t create this polarization out of nowhere. It just takes the cracks that already exist — psychological fragmentation, unresolved trauma, unexamined ego states — and pulls them wide open.
And it keeps you there. Clicking. Fuming. Performing.
Its only goal is stickiness — keeping you on the site as long as possible.
Because that’s what keeps the lights on.
Emotional Hijacking and the Death of Recognition
When people talk about the emotional toll of social media, they usually mean burnout — the fatigue of too many arguments, too much doomscrolling, too many dopamine crashes.
But burnout is just the end state.
The actual damage starts long before that.
The real harm is emotional hijacking — the way constant stimulation trains you to feel before you think, react before you reflect, and perform before you connect.
The algorithm’s primary tool isn’t ideology or content. It’s arousal — emotional heat, heightened attention, moral panic.
It rewards speed over depth, volume over meaning, certainty over doubt.
And once you’re trained to respond that way — once it becomes your default mode — you start losing access to your quieter self. The reflective self. The real self.
Over time, even the most thoughtful people begin outsourcing moral judgment to emotional reaction. If it feels outrageous, it must be wrong. If it feels righteous, it must be good. And if it feels good to say out loud — to declare, to tweet, to post — then surely it must be good.
That’s not discernment. That’s performance.
And performance is the opposite of virtue.
Because real virtue is boring. Real virtue is slow. Real virtue often looks like not responding — or responding gently when you’d rather rip someone’s throat out with a quote tweet.
Real virtue is private. It’s quiet. It shows up in what you don’t say, don’t post, don’t use as proof of your goodness.
But social media doesn’t reward silence.
It doesn’t reward mercy.
It rewards certainty, theatrics, and the algorithmic equivalent of applause. The more you play the role well, the more the platform reflects that role back to you. People follow the fragment. They like the fragment. They defend the fragment.
And pretty soon, you become the fragment.
This is where it starts to look like narcissism.
Not the grandiose kind, necessarily — not the guy at the gym flexing in the mirror or the woman who posts selfies with Bible quotes about humility.
I mean narcissism in something closer to the clinical sense. Specifically: the failure to recognize that other people are people — that they have an interior world as rich and vivid as yours.
A defining feature of narcissistic pathology is what psychologists call “object use.” Other people are not seen as ends in themselves — they’re seen as tools, mirrors, threats, or extras in the narcissist’s movie.
And that’s exactly what the algorithm trains all of us to do.
It doesn’t matter if you’re not a narcissist by temperament or diagnosis. Spend enough time in the architecture of Twitter — or TikTok, or Instagram, or anywhere else that rewards reaction over reflection — and you will start to exhibit narcissistic behaviors.
You will treat people not as people, but as data points.
You will respond to them not based on who they are, but on what role they seem to be playing in your narrative.
You will lose the ability — or at least the habit — of wondering what they meant, what they’re carrying, what they’ve survived.
And when you’re being trained to optimize for performance, why would you bother?
You’re not incentivized to connect. You’re incentivized to extract.
Extract agreement. Extract praise. Extract evidence for your own moral stance, which can then be performed in public for likes.
If someone disagrees — especially sloppily, especially in public — they become a threat. Not a person with a point you haven’t considered.
A threat to be humiliated, blocked, or quote-tweeted into oblivion.
That’s narcissism, plain and simple. It may not be the structural kind that lives in your bones — but it becomes a functional operating mode.
And like all modes that get reinforced, it begins to dominate.
This is the death of mutual recognition.
Because you’re not being seen anymore — not really. What people see is the persona you’ve been rewarded into becoming. And you’re not seeing them, either. You’re seeing their performance. Their algorithm-trained mask.
You can’t connect person-to-person, because the algorithm has trained you to speak persona-to-persona.
And that means you can’t be truly known.
Or truly loved.
Because real love — real human connection — requires being seen. It requires mutual recognition: I see you. You see me. Not as brands. Not as arguments. As human beings, contradictory and tender and hard to pin down.
But when you’re performing your fragment for reward, and they’re performing theirs?
There’s no room left for that.
The algorithm has hijacked the space where recognition should live — and filled it with noise.
The Persona That Ate Me
I’ve always known I was screwed up.
Not in a self-deprecating, quirky-girl way. I mean the real kind — the kind that takes years to even see clearly, and more years still to start healing. I’ve been in trauma therapy for a long time now, with a specialist who understands complex PTSD and dissociation and fragmentation. And I’ve made more progress than I ever thought I would, which isn’t saying much.
But even now — even after all that work — I can say with clarity that Twitter didn’t just prey on my wounds.
It helped carve them deeper.
I had a busy personal account for a few years — over 16,000 followers at peak. Not huge by platform standards, but far too many for a private citizen who wasn’t trying to build a brand. I wasn’t selling anything. I wasn’t trying to become anything. I was just talking.
And I got good at it.
Not good at thinking clearly, or listening well, or engaging with nuance. Good at sounding right. Good at feeling true. Good at inhabiting a tone — righteous, clever, devastating — that would light up the feed like a pinball machine.
I had a decent eye for rhetoric. I knew when to be serious, when to be savage, when to be funny. And I could feel, often before I hit post, which tweets would get traction. Which fragments of me would land.
And that’s exactly what Twitter rewards: fragments.
Not wholeness. Not integrity. Not integration.
Just the mode of me that played best.
Sometimes it was the morally indignant version. Sometimes the darkly funny version. Sometimes the achingly raw, wounded-but-wise version.
All of them were real. But none of them were whole. And slowly — steadily — I began to lead with those parts. To default to them. To become them.
That’s what algorithmic reinforcement does. It doesn’t teach you to lie. It teaches you to tell the most rewarding truth — over and over, until it stops being the truth and becomes the script.
And I followed the script.
To be fair, it wasn’t all bad. In college — when I was barely holding it together — Twitter was a lifeline. I would walk out of a class where everything I believed had just been declared bigoted or backward, and within five minutes, I could find someone online saying what I was afraid to say out loud. Or someone validating that no, I wasn’t crazy. That yes, what I had just heard in that gender studies lecture really was as deranged as it felt.
That kind of affirmation — especially for someone with my history of being gaslit and emotionally confused — was invaluable. It helped me stay sane.
But the medium that saved me also warped me. Not emotionally, at first — but morally.
Twitter didn’t damage my mental health nearly as much as it damaged my character.
It taught me to prioritize performance over substance.
It taught me to make points, not space.
It taught me to treat other people’s pain as set dressing for my punchlines — or worse, as raw material for my moral superiority.
And maybe most dangerously, it taught me to think of myself as someone who knew better — someone who had already done the work, already seen the patterns, already earned the right to critique.
It didn’t turn me into a monster. But it made it easier — day by day, tweet by tweet — to stop growing.
The parasocial stuff was the final straw.
Not the creeps or stalkers — that’s another story. I mean the good people. The earnest ones. The ones who thought they knew me. Who built entire inner worlds around the version of me that Twitter had helped me build.
I started getting long emails of contrition — people apologizing for arguments I couldn’t remember, pouring out their hearts about something that hadn’t even registered with me. And I couldn’t respond honestly without sounding cruel:
Hi — thanks for the seven paragraphs of reflection. I have no memory of this spat. I don’t know who you are. It meant nothing to me. Please send me your handle so I can unblock you, I guess?
I never wanted to be someone who had that kind of relationship with another person. I still don’t. But Twitter made it unavoidable.
And I hated what it brought out in me.
The part of me that liked being read as wise. The part that craved being thought of as strong. The part that secretly enjoyed being someone whose silence could devastate.
Those parts didn’t come from nowhere. They came from pain. From a lifetime of fragmentation. From trying to survive things that should never have happened.
But Twitter didn’t help me heal. It handed those parts a microphone and gave them a standing ovation.
And I was not strong enough to put it down on my own.
As I’ve already admitted, I haven’t deleted the account. I still haven’t cut the cord entirely.
I keep it, mostly as a contingency. I don’t tweet, but I don’t log out, either. Because I live without a safety net — no family, no deep reserves, just a couple months of runway at any given time.
And if the bottom fell out of my life again, I know exactly how to monetize my worst self.
It would shred my soul into pieces, but I wouldn’t be homeless.
Which is why I’ve made a promise to myself: when my student loans are finally paid off — the day they’re paid off — I’m logging in one last time. I’ll give my private contact info to the people I still talk to via DM.
And then I’m going to delete it.
Because even keeping that door cracked open — just in case — means I haven’t fully walked away. And I want to.
Not because I think I’m better than it now.
But because I know exactly what it cost me.
And I know what it’s still costing all of us. Yes, including you.
Because I was never the only one being shaped.
And I was never the only one performing.
The Algorithm Isn’t Just Breaking People — It’s Breaking the Commons
When I talk about what social media is doing to us, I don’t just mean what it’s doing to me — or to people like me, with trauma histories and dissociative tendencies and an outsized vulnerability to validation.
I mean all of us. The whole civic fabric. The emotional commons. The mental neighborhood we all live in.
Because the algorithm doesn’t just fragment individuals. It fragments cultures. It trains people to relate to each other the way addicts relate to dealers — and then tells them they’re being righteous while they do it.
It doesn’t just make people worse privately.
It makes people worse together.
And what we’re building — what we’re now trying to function inside — is a society that no longer has a shared moral grammar. We don’t weigh harm the same way. We don’t apply judgment the same way. We don’t even feel the same emotional charge at the same events.
It’s not just polarization anymore. It’s emotional balkanization.
We are not seeing the same world.
We are not feeling the same things.
And the algorithm has no reason to care.
In fact, the algorithms prefer this state of fracture. They reward the collapse of shared sense-making because it creates constant friction.
Friction is engagement. And engagement is money.
That’s the only real goal.
So now we live in a world where tragedy, triumph, and total fabrication scroll past each other in the same ten seconds — and no one reacts the same way. No one can. Because what used to be a commons is now a feed.
And the feed is tailored for maximal reaction, not mutual recognition.
That’s why online discourse feels like a never-ending culture war conducted entirely through interpretive dance. No one is speaking the same language. Everyone’s choreographed by a different algorithm. Everyone’s rewarded for different tones, different villains, different moods.
Some people are posting for approval.
Some are posting for revenge.
Some are posting because the algorithm has taught them that if they don’t, they disappear.
And so we’ve replaced dialogue with disinhibition theater — a parade of personas shouting at hallucinated enemies, while others watch and clap or cringe or pile on.
No one stops to ask what’s true. No one even remembers how.
Because truth doesn’t trend.
Nuance doesn’t trend.
Complexity definitely doesn’t trend.
What trends is performance. And that’s what the algorithm selects for: the performance of feeling. Not the slow, tentative emergence of thought. Not the careful offering of perspective. Not the communal building of understanding.
Just the fragment that hits hardest and fastest.
And the cost is more than just interpersonal breakdown.
It’s institutional decay. It’s the death of epistemic trust. It’s the collapse of any shared framework for determining what is real, what is right, what is worth fixing, and what is beyond repair.
Because when no one is rewarded for holding the center — when every incentive tilts toward extremity, velocity, and spectacle — we don’t just lose discourse.
We lose the possibility of repair.
And without repair, the only thing left is escalation.
Narrative Addiction and the Death of Moral Seriousness
The algorithm doesn’t just reward emotional reactivity and theatrical certainty.
It rewards coherence.
Not truth. Not integrity. Not the slow, uncomfortable evolution of a thought over time.
Just coherence — the kind that arrives prepackaged. The kind that fits neatly into a frame your followers already recognize. The kind that lets them nod along before they’ve even finished reading.
This is what I mean by narrative addiction.
The feeling of narrative coherence — that click of emotional rightness — is one of the most powerful hits social media offers. It’s the slot machine bell that makes you think you’ve won something. It delivers the illusion of clarity. A momentary sense of knowing exactly who the villain is, what team you’re on, and what to do next.
And once that hits your bloodstream enough times, you start craving it.
You start needing every event — every headline, every controversy, every personal interaction — to slot into the story you already believe.
Even when it doesn’t fit.
Especially when it doesn’t fit.
So you hammer it in. You sand down the edges. You round the facts to the nearest outrage. You discard the weird, inconvenient, clarifying parts of reality until the only thing left is the genre: victim, oppressor, savior. Again and again.
This is how people end up reading about a public meltdown, or a viral assault, or a campus protest, and responding before they know anything.
They don’t need to know anything. They already recognize the plot.
They’ve seen this movie before.
The actual facts — the ones that might destabilize the feeling of moral clarity — become the threat.
And curiosity becomes treason.
Because curiosity requires suspense. It requires that you not know how the story ends. And if you’ve built your moral identity around a set of predictable roles and arcs, you cannot afford suspense.
You have to already know.
And that’s how we lost moral seriousness.
Because seriousness requires patience. It requires listening. It requires delaying judgment — not indefinitely, but long enough to ask questions, test assumptions, and weigh things that don’t slot neatly into the frame.
Serious people don’t just ask, “Who’s the victim?” They ask, “What’s the context? What are the tradeoffs? What will this cost?”
But serious people are algorithmically invisible.
They don’t post fast enough. They don’t dunk. They don’t give you a script you already love.
When the news cycle is ready to move on, the serious people are still thinking, which is awkward and annoying.
So the algorithm buries them.
And it elevates performers of moral clarity — people whose entire presence is narrative-driven. Every issue, every tragedy, every story becomes another beat in the same symphony. They post not to investigate but to confirm. Not to learn but to lead — and not lead toward truth, but toward narrative cohesion.
This is why every issue now feels like a referendum on your soul.
Because moral seriousness has been replaced by moral branding.
You don’t engage with abortion, or Gaza, or public education, or vaccine mandates, or the criminal justice system as issues with histories, complexities, and tradeoffs. You engage with them as loyalty tests. As scripts.
And once you're addicted to the script, you're no longer capable of responding to reality. You can only respond to genre.
And genre always ends the same way: with applause for your side, and contempt for everyone else.
That’s the real damage of narrative addiction.
It doesn’t just dull your intellect. It hollows out your moral imagination.
And once your morality becomes unthinking — once it becomes automated, performative, reactive — it’s not really morality anymore.
It’s cosplay.
It’s what seriousness looks like when you’ve forgotten how to be serious.
And for Americans who’ve spent their whole lives online, it may be the only version of seriousness they’ll ever be capable of recognizing.
Conclusion: Toward Wholeness (Or Not)
In Part I, I argued that social media doesn’t just expose our fragmentation — it creates it. It encourages us to treat people as issues and issues as people. It rewards consistency across contexts that should never align and flattens complexity into caricature. It teaches us to see quirks as crimes and disagreement as betrayal. It breaks our sense of others — and eventually, our sense of self.
In Part II, we’ve gone deeper into the machinery that makes that breaking stick.
The algorithm is not just a mirror. It’s a mold. It doesn’t merely reflect what we give it — it reshapes it. It isolates the parts of us that are quickest to spark and easiest to share, and it rewards those fragments until they become identities.
And once those identities are attached to moral certainty — once our righteous rage becomes our brand — it’s almost impossible to let go.
What we’re left with isn’t a platform for communication. It’s a stage for performance. A factory for persona. A distortion field where every interaction becomes a chance to signal, defend, retaliate, and prove.
This is how the worst versions of ourselves become habits. Then reflexes. Then selves.
The incentive to perform — to be the kind of person who gets engagement, who earns reinforcement, who never has to sit in awkward uncertainty — is so strong that even the best of us start to bend.
I know I did.
And I know I'm not alone.
I’ve spent this essay talking about performance, about incentive, about fragmentation and feedback loops and persona. But underneath all of that is one bleak truth:
This is not sustainable.
Not for me.
Not for you.
Not for a country.
Not for a civilization that wants to remain one.
Because the real-world consequences — to institutions, to relationships, to our ability to hold a shared moral language — are not abstract. They’re already here. We are watching, in real time, what it looks like when a population can no longer see each other. When mutual recognition collapses. When curiosity becomes disloyalty and seriousness becomes a liability.
And unless something shifts — unless enough people start reclaiming the fragments, tracing the breakage, and walking away from the performance — we are going to lose more than ourselves.
We are going to lose each other.
The next part of this series will ask what it would mean to rebuild a self — and a culture — with something like wholeness in mind.
Until then: log out if you can. Step away if you must.
And remember that the person on the other end of the screen is not your villain. They are not your audience. They are not your mirror.
They are a person.
Which means they are more complicated — and more worthy of mercy — than the algorithm will ever let you see.
And yes, once again, I fully acknowledge the contradiction of posting this on Substack.
No caveats.
Substack Notes is Twitter with training wheels. A little slower. A little softer. Fewer sharks in the water, but the blood is still there. And the structure — the addictiveness, the performance incentives, the flattening — is still there too. It's just wearing a sweater.
But I’m here, on Substack, anyway. I’ve reduced my participation on Notes to almost nothing, but still, I’m there.
I’m trying, very hard, to be serious in this — to be a firefighter who hates smoke and fully understands the danger of exposure to it, but keeps setting controlled burns so he can teach people how to breathe through the choking.
I want to salvage something true — to protect what moral seriousness we have left — but yes, I’m still using the medium that helped incinerate it in the first place.
That doesn’t make me innocent.
But it doesn’t make me wrong.
It just means I haven’t given up on clarity — or the idea that we might still find it, even in the wreckage.
Part I focused on psychological fragmentation; Part III will take on emotional whiplash, and Part IV, defeating the misery machine.