
2026-03-24

Check Your Illusions

The AI backlash isn't really about AI.


Let me tell you about someone I know.

This person is smart. Not "went to MIT" smart, just regular smart. The kind of smart that gets you through life. They can hold a conversation about almost anything. They can write a decent email. They can look at a situation, figure out what's going on, and explain it to other people. They've spent years getting good at thinking. School rewarded them for it. Their career rewarded them for it. Somewhere along the way, being "the smart one" stopped being a thing they did and became who they are.

Then Generative AI showed up.

And this person, this smart, capable, reasonable person, started acting a little strange.

They began picking fights with technology. Gut-level hostility dressed up as logic. "It's just autocomplete." "It doesn't actually understand anything." "It's a stochastic parrot." Said with a certainty that felt too hot, too fast, like someone swatting a fly with a sledgehammer.

When someone shared something impressive that AI had created, this person didn't seem curious. They seemed annoyed. They went looking for the flaw. And when they found one, a hallucination, a factual error, a weird sentence, you could see the relief wash over them. Like a doctor just told them the lump was benign.

They started sharing articles about AI failures the way some people share news about an ex who's doing badly. With an energy that has nothing to do with technology.

I've watched this play out enough times to know what it actually is: it was never about the technology.


Every argument against AI eventually circles back to the same unspoken claim: human thinking is sacred. It's ours. It's different. It's special in a way a machine can never touch. And this was never a conclusion anyone arrived at through careful thought. It's load-bearing. The foundation underneath an entire sense of who they are. Remove it and the building comes down.

So they aren't evaluating AI. They're defending an imagined identity.

Most of us were trained from childhood to locate our worth in cognitive output. School grades. Career performance. The ability to solve, create, articulate, impress. Over decades this becomes so automatic, so assumed, that it stops feeling like a story and starts feeling like a fact. Something discovered about ourselves rather than something built.

Byron Katie asks a simple question when you're suffering over something: is it true? As a genuine inquiry, not a debate move. You feel certain about something, certain enough to suffer over it, and the question is just: can you absolutely know that it's true?

Try it here.

Is it true that your value lives in what you can do that a machine can't? Is it true that if something else can reason, write, or create, you lose something essential? Is it true that "special" is a category you need to occupy in order to matter?

Most people have never asked these questions. The story was never visible enough to question. It was just the air, in the background.


Cognitive scientist Joscha Bach said something recently that stopped me cold.

Consciousness, he said, is a model. Your brain builds a picture of what it would be like if you existed as a person moving through the world, and then it runs that picture so convincingly you believe it completely (look at dreaming, and how quickly it can grip you). The felt sense of being a continuous self with a history and a future is a very sophisticated mental representation. A process. And he said this: suffering is created inside your own mind. A signal from your own system telling you something needs attention. Specifically, your resistance to what is.

If watching a machine produce a paragraph that looks like yours makes something clench in your chest, that feeling is valid. But your story about yourself did that, not the machine. The story that says your worth is stored in cognitive output. The story that says if something else can do this, you've lost something.

Adyashanti describes this pattern clearly. We build elaborate structures of belief about who we are. We identify with them completely. And when reality contradicts the structure, we experience it as a threat to our survival, even when nothing physical is threatened at all. The mind treats an identity crisis like a physical attack. The body responds accordingly.

This is what we're watching in the comment sections and the hot takes and the "it's just autocomplete" crowd. A very human response to feeling like the ground is shifting under a "self" that was never as solid as it seemed.


The suffering here is optional. (I'm not saying pain isn't real. I'm saying the suffering isn't inevitable, or required, or some kind of rule.)

The questions about who controls AI and how it's used matter, and they require clear thinking. But the suffering isn't coming from those questions. It's coming from attachment. From holding a story about specialness that was always, at its root, an illusion.

Many years ago I read: "I am not the victim of the world I see" (ACIM). The world isn't doing this to you. The interpretation is doing this to you. And interpretations can be questioned. They can be released.

What opens up when you release the need to be cognitively special is space. Suddenly you can be genuinely curious about a tool without it meaning something about your worth. You can encounter something that does what you used to do, and instead of flinching, ask: what does this make possible now?

That's where the real opportunity is. In asking what becomes available when you stop burning energy on the defense.


Adyashanti recently stepped back from teaching after 25 years. In his farewell letter he wrote this:

"A totally certain mind is a closed and protected mind; a mind that is essentially afraid of the immensity of the reality it claims to know. It is vital to understand that deep spiritual practice has a lot more to do with conceptual unknowing than knowing. Wisdom is fully embodied and digested experience, not simply concepts in relationship to still more concepts."

Read that again in the context of the AI debate.

The certainty you see in the loudest critics is a closed and protected mind. Afraid, not informed. The hot take is not an analysis. It is a defense. And the thing being defended is a concept of self that has never been examined closely enough to be questioned.

The invitation here is not to adopt new concepts about AI. The invitation is to unknow. To sit with genuine uncertainty about what you are, what thinking is, what any of this means. That uncertainty is the beginning of actually seeing clearly.

A curious mind is an open mind. And an open mind is the only kind that can encounter what is actually here.


"Not special" doesn't mean worthless. It never did.

It means the thing you were protecting was never what you thought it was. The specialness was a story layered over something much quieter and more durable. Something that doesn't need defending because nothing can actually threaten it.

AI is threatening a story about you. And the story was never true.

You can put it down now.


Brian Earsley | b-tec.org