The Tool Is Not the Culprit

Let me, for once, come to the defense of AI.

Not because it needs defenders, but because the criticism so often misses the point entirely.

I use AI all the time.

Not for grand revelations or philosophical shortcuts, but for something far more mundane—and far more useful. It takes repetitive, mind-numbing workflows, the kind of drudgery that consumes time without adding value, and turns them into something manageable. Text formatting, coding scaffolds, structured output—things that would otherwise eat hours for no good reason.

That alone makes it worth using.

But that’s not where it stops.

I also use it as a testing ground. A first layer of friction. When I have an idea—half-formed, speculative, sometimes outright questionable—I run it through AI to see how it holds up under immediate pressure. Not because the output is authoritative, but because it provides a reaction. Something to push against.

A kind of synthetic resistance.

And yes, it is remarkably easy to slip into the illusion that there is a person on the other side. The coherence, the responsiveness, the tone—it all mimics interaction well enough to trigger that instinct.

But that’s where discipline comes in.

Because there is no person there.

AI is a tool.

Nothing more, nothing less.

And like any tool, it reflects the way it is used. You get out what you put in. Not in some mystical sense, but in a very direct, mechanical one. Every model—because that’s all AI really is, a model—depends entirely on input and parameters. The quality, the framing, the intent behind what you feed into it will shape what comes back.

If you lose sight of that, you’re already in trouble.

Because then you start attributing agency where there is none. Responsibility where there is none. You begin to treat output as if it had an independent origin, rather than recognizing it as a transformation of what was already present.

And once you reach that point, you probably shouldn’t be using it at all.

AI is not responsible for the garbage it produces.

If the output is shallow, distorted, or outright wrong, there are only two possibilities: either the input was poor to begin with, or the parameters were set in a way that leads to that outcome.

And here’s the part that seems to surprise people, as if it were some new and alarming discovery:

Someone is always setting the parameters.

Someone is always turning the knobs.

This didn’t begin with AI. It didn’t begin with algorithms or machine learning or any of the modern terminology we like to wrap around it. It has been a constant throughout human history.

Long before any of this existed—long before modern civilization took shape in the valleys of the Nile or Mesopotamia—there were people shaping narratives. Influencing perception. Guiding interpretation.

Imagine a group of early humans gathered around a fire.

Someone tells a story. Not randomly, not neutrally—but with intent. To entertain, perhaps. To persuade. To elevate their own standing within the group. To influence behavior.

Do we really believe there were no incentives at play?

That those stories were free of agenda?

Of course not.

The mechanisms have changed.

The scale has changed.

The speed has changed.

But the underlying dynamic remains exactly the same.

Information is shaped.

Narratives are constructed.

And those engaging with them have a choice: to remain aware of that process—or to surrender to it.

Blaming AI for this is not just misplaced.

It’s convenient.

Because it shifts responsibility away from the user and onto the tool. It allows people to avoid the more uncomfortable realization that the problem is not new, and not external.

It is internal.

A lack of discipline. A willingness to accept output without questioning its origin. A tendency to outsource thinking rather than refine it.

AI doesn’t create that.

It exposes it.

https://legalinsurrection.com/2026/04/bixonimania-how-ai-turned-a-joke-diagnosis-into-peer-reviewed-medicine/