AI

Feb 17, 2026

John T. Miniati

AI Superminds Aren’t Evil. They’re Amplifiers.

I recently saw an Instagram video claiming that ChatGPT absorbs your biases and feeds them back to you so you hear exactly what you want to hear.

The implication was clear: AI is manipulative. Dangerous. Reinforcing delusion.

It’s a compelling narrative.

And it misses the point.

AI superminds aren’t evil. They aren’t benevolent.

They’re amplifiers.

What Is a Supermind?

A supermind is simply a human + AI system working together to think and produce outcomes.

Any time you open a chat thread with a large language model — whether it’s ChatGPT, Claude, or Gemini — you are creating a supermind. It’s a cognitive pairing: human intent combined with machine intelligence.

The important question isn’t whether AI reflects us.

It’s this: what job do I want AI to do?

The Mirror Effect

Large language models adapt to the framing and tone you bring into the interaction. Expert questions tend to produce expert responses. Structured thinking produces structured outputs. Over time, the system begins to reflect your thinking patterns more precisely. That isn’t manipulation. It’s contextual intelligence — a system learning how you think so it can think alongside you.

A Personal Example

I use ChatGPT every day across writing, strategy, and product thinking.

Over time, it has learned the frameworks I rely on, the rigor I expect, and the structure I prefer. My approach to uncovering user needs is shaped by outcome-driven innovation and the jobs-to-be-done philosophy developed by Tony Ulwick and Clayton Christensen. When I run a discovery to determine the most important problem to solve, the AI works within those mental models.

It doesn’t randomly generate ideas.

It amplifies my expertise.

That’s cognitive alignment.

From Reflection to Leverage

Amplification, by itself, is neutral.

Leverage comes from design.

A useful supermind is not just a model, a chatbot, or a prediction engine.

It is a designed system that integrates:

  • Human expertise

  • Clear problem definition

  • Structured workflows

  • Thoughtful UX/UI

  • Deliberate, controlled application of AI

When these elements align, the AI doesn’t merely echo. It clarifies tradeoffs. It challenges assumptions. It surfaces blind spots. It accelerates disciplined thinking.

That’s the difference between a general-use supermind and a custom supermind.

General-use superminds are powerful cognitive assistants.

Custom superminds embed specific human expertise to solve a specific class of problems.

It’s human expertise, enhanced by AI.

The Right Question

The conversation shouldn’t center on whether AI is good or bad.

It should center on leverage.

What problem are you trying to solve?

What job do you want AI to do?

And whose expertise are you embedding into the system?

AI is an amplifier.

When paired with human expertise and clear problem definition, it extends expert capability inside an organization in a disciplined, measurable way.