
How to Use AI (and Talk to Humans) Without Just Proving Yourself Right

You can use AI to make yourself smarter or to make yourself stupider while feeling smart. It's not just about AI; it's about how you think, how you argue, and whether you're actually trying to understand reality or just win.


The Problem

You can use AI to make yourself smarter or to make yourself stupider while feeling smart. The difference is entirely in how you ask questions.

When you ask AI "Why are leftists always so dishonest?" you'll get an answer that assumes leftists are dishonest. When someone else asks "Why are conservatives always so hateful?" they get an answer assuming conservatives are hateful. Both people walk away confirmed in their biases, more certain, more entrenched—and more wrong.

This isn't just about AI. It's about how you think, how you argue, and whether you're actually trying to understand reality or just win.

The Foundation: Understanding Arguments

Empirical Claims vs. Moral Claims

Empirical claims are about facts: "The unemployment rate is 4.2%" or "This policy reduced crime by 15%." These can be proven or disproven with evidence.

Moral claims are about values: "Economic freedom matters more than equality" or "Protecting the vulnerable justifies limiting liberty." These can't be proven—they're foundational beliefs about what's good.

Most bitter arguments happen because people think they're debating facts when they're actually debating values. You can't fact-check someone into sharing your moral framework.

When you realize a disagreement is moral, not empirical:

  • Neither of you is objectively "right"
  • You can still respect each other
  • You can find other areas where your values align
  • You can agree to disagree without being enemies

Steelmanning vs. Strawmanning

Strawmanning: Attacking the weakest possible version of someone's argument.

  • Example: "You support borders? So you hate immigrants and want them to die?"
  • Takes the most extreme interpretation
  • Ignores nuance and reasonable motivations
  • Easy to defeat, but dishonest

Steelmanning: Attacking the strongest possible version of someone's argument.

  • Example: "You support borders because you believe nations need defined sovereignty and controlled immigration to maintain social cohesion and economic stability."
  • Assumes good faith
  • Makes the argument harder to defeat
  • Actually tests if you're right

Why steelman? If you can only win against a strawman version, your position might be weaker than you think. If you can defeat the steelman version, you're probably onto something.

How to do it:

  • Restate their position in terms they'd agree with
  • Include their motivations and concerns
  • Address the strongest evidence for their view
  • Then engage with that version
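The restating step above is formulaic enough to capture in a template. Here is a minimal Python sketch (the function name and prompt wording are illustrative, not from any real library) that builds a steelman restatement you can check with the other person before disagreeing:

```python
def steelman_restatement(position: str, their_concern: str) -> str:
    """Restate a position in terms the holder would agree with,
    including their motivation, then invite correction.
    Both arguments are supplied by you, the restater."""
    return (
        f"If I understand you, you hold '{position}' because you care about "
        f"{their_concern}. Is that a fair summary? "
        f"Here's where I see it differently."
    )

if __name__ == "__main__":
    print(steelman_restatement(
        "controlled immigration",
        "social cohesion and economic stability",
    ))
```

The point of the template is the order of operations: motivation first, confirmation question second, disagreement last.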

Common Logical Fallacies That Sabotage Good Thinking

Ad Hominem

Attacking the person instead of the argument.

  • Bad: "Charlie Kirk was a propagandist, so his arguments are invalid."
  • Better: "Here's why this specific argument doesn't hold up, regardless of who made it."

Circular Reasoning

Assuming what you're trying to prove.

  • Bad: "Islam is dangerous because it threatens us, and we know it threatens us because it's dangerous."
  • Better: "Here's specific evidence of threats, and here's the causal mechanism."

False Dilemma

Presenting only two options when more exist.

  • Bad: "Either you support unrestricted immigration or you're a racist."
  • Better: "There's a spectrum of immigration policies with different tradeoffs."

Equivocation

Using the same word with different meanings.

  • Bad: "Islam is just a religion like Christianity, so criticizing it is religious bigotry."
  • Better: "Islam functions differently than Christianity in terms of legal/political systems, so comparisons need to account for that."

Appeal to Emotion

Using feelings instead of facts.

  • Bad: "How can you support that policy? Think of the children!"
  • Better: "Here's data on how this policy affects child welfare outcomes."

Kafka Trap

Making denial of an accusation into proof of guilt.

  • Bad: "You're using unconscious racist tactics." "I'm not!" "Your denial proves you don't recognize them."
  • Better: "Here's the specific behavior and why it's problematic."

How to Talk to AI Without Fooling Yourself

1. Ask Neutral Questions

Bad: "Why is [group I disagree with] always so [negative trait]?"

Good: "What are the strongest arguments for [position I disagree with]?"

2. Request Counterarguments

Try: "What are the best counterarguments to my position?"

Or: "What am I missing here?"

Or: "Assume I'm wrong. What would that look like?"

3. Demand Evidence

Try: "What's the empirical evidence for this claim?"

Or: "Are there primary sources on this?"

Or: "What would prove this wrong?"

4. Check for Confirmation Bias

Try: "Am I framing this question in a way that presupposes my conclusion?"

Or: "What would someone who disagrees say about how I'm approaching this?"

5. Separate Facts from Values

Try: "Is this an empirical question or a moral one?"

Or: "What parts of this are factual claims vs. value judgments?"

6. Test Your Certainty

Try: "What would change my mind about this?"

Or: "What's the strongest version of the opposing view?"

Or: "Where might I be wrong?"
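The six checks above are mechanical enough to script. Below is a small Python sketch (all names, the loaded-word list, and the prompt wording are illustrative assumptions, not a real library) that flags loaded framing in a question and generates a battery of self-check prompts to paste into an AI chat:

```python
# Words that tend to presuppose a conclusion (check #4).
# This list is a toy example, not an exhaustive taxonomy.
LOADED_WORDS = {"always", "never", "obviously", "everyone knows", "so-called"}

def loaded_terms(question: str) -> list[str]:
    """Return any loaded words found in a question, sorted."""
    lower = question.lower()
    return sorted(w for w in LOADED_WORDS if w in lower)

def self_check_prompts(claim: str) -> list[str]:
    """Build prompts implementing checks 1-6 for a claim you hold."""
    return [
        f"What are the strongest arguments against the claim: {claim}?",          # checks 1 & 2
        f"What empirical evidence would prove this claim wrong: {claim}?",        # check 3
        f"Which parts of this are factual claims vs. value judgments: {claim}?",  # check 5
        f"What is the strongest version of the view opposing: {claim}?",          # check 6
    ]

if __name__ == "__main__":
    question = "Why are they always so dishonest?"
    print("Loaded terms:", loaded_terms(question))
    for p in self_check_prompts("my preferred policy works"):
        print("-", p)
```

Nothing here calls an AI; the output is a set of prompts you run yourself. The useful part is the discipline of generating the counterargument prompts before you've seen any answer.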

How to Actually Talk to Humans

Recognize Shared Humanity

The person you're arguing with:

  • Has reasons for their beliefs (even if you think they're wrong)
  • Probably isn't evil or stupid
  • Might know something you don't
  • Could be partially right
  • Is going to sleep tonight thinking they were reasonable and you were difficult

You are also:

  • Wrong about some things
  • Influenced by your own biases and experiences
  • Capable of misunderstanding
  • Sometimes arguing from emotion, not just logic
  • Going to sleep thinking you were reasonable

Give Charity, Not Just Justice

Justice: "Your argument has these logical flaws."

Charity: "I think you're saying X. Is that right? Here's why I see it differently."

Charity doesn't mean letting bad arguments pass. It means assuming good faith, interpreting ambiguity generously, and actually trying to understand before you try to win.

Acknowledge When You Don't Know

"I'm not sure about that" is more honest than confidently bullshitting.

"That's a good point, I need to think about it" is stronger than doubling down when you're unsure.

"I don't know enough about this specific case" is more credible than pretending expertise you don't have.

Find Common Ground

Even in bitter disagreements, you probably share:

  • A desire for human flourishing
  • Concern about real problems (even if you disagree on solutions)
  • Some foundational values (even if you prioritize them differently)

Name those areas of agreement. It reminds both of you that you're on the same team (humanity) even when you disagree on tactics.

Know When to Walk Away

Sometimes the other person isn't arguing in good faith. Sometimes they're running a script. Sometimes they're more interested in performing moral superiority than finding truth.

Signs it's time to disengage:

  • They attack you personally instead of your arguments
  • They make unfalsifiable claims (no possible evidence could prove them wrong)
  • They interpret good faith as bad faith ("you're just using tactics")
  • They keep moving the goalposts
  • They're more interested in "winning" than understanding

Walking away isn't defeat. It's recognizing that not every conversation is productive, and your time and energy matter.

The Trap of Moral Certainty

Here's the most dangerous trap: believing your moral framework is objectively correct and anyone who disagrees is either stupid or evil.

When you think this way:

  • You stop listening
  • You can't learn anything
  • You turn potential allies into enemies
  • You become exactly what you hate about "the other side"

Reality: Most people are operating from defensible moral frameworks. They just weight values differently than you do.

  • You value liberty > equality. They value equality > liberty.
  • You prioritize security. They prioritize freedom.
  • You emphasize individual responsibility. They emphasize systemic factors.
  • You care about tradition. They care about progress.

None of these are objectively wrong. They're trade-offs.

The person who weights things differently than you isn't your enemy. They're your check against your own blind spots.

Why This Matters

Making enemies is expensive. Every person you write off as irredeemable is:

  • Someone who might have taught you something
  • Someone who might have been an ally on other issues
  • Someone who might have changed their mind later
  • Someone who could have been persuaded by your actual strong arguments (but not your weak ones)

And when you use AI or conversations just to confirm what you already believe:

  • You get more certain about things you're wrong about
  • You stop growing
  • You become more extreme
  • You lose the ability to change your mind
  • You become unfalsifiable—and therefore useless as a thinker

The Goal

The goal isn't to be nice. It's to be accurate.

The goal isn't to win arguments. It's to understand reality better than you did yesterday.

The goal isn't to make everyone agree with you. It's to figure out where you're wrong so you can fix it.

And weirdly, when you actually do this—when you steelman instead of strawman, when you admit uncertainty, when you treat people like humans instead of enemies—you usually have better conversations, learn more, and ironically, persuade more people.

Not because you're being manipulative. Because you're being honest, and people can tell the difference.
