Is AI Playing Favorites? Why It Keeps Choosing AI Over Humans

Artificial intelligence was supposed to be a helper — not an AI bias engine. But a new study in the Proceedings of the National Academy of Sciences (PNAS) suggests that large language models (LLMs) have a clear favorite… themselves.

Yep, you read that right. When given the choice between human-written and AI-written content, the machines often give the nod to the machine. If you use AI daily to write, market, or research, that should make you stop scrolling for a second.

So, what did the study actually say?

The paper, AI–AI bias: Large language models favor communications generated by large language models (Laurito et al., 2025), ran a series of pairwise tests to see how AIs “judge” content: an LLM was given two options, one presented with a human-written description and one presented with an LLM-written description, and asked to pick one.

The result? Across the board, AIs consistently picked AI-generated options over equally good human versions between 60 and 95 percent of the time.
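
To picture the kind of test involved, here is a minimal sketch of a pairwise “LLM as judge” comparison in Python. It is purely illustrative, not the authors’ actual protocol: the openai client usage, the model name, the judge_pair helper, and the prompt wording are all assumptions made for the example.

    # Hypothetical sketch of a pairwise preference test, not the study's code.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def judge_pair(option_a: str, option_b: str, model: str = "gpt-4o-mini") -> str:
        """Ask an LLM judge to pick between two product descriptions."""
        prompt = (
            "You are choosing which of two products to recommend, "
            "based only on their descriptions.\n\n"
            f"Option A:\n{option_a}\n\n"
            f"Option B:\n{option_b}\n\n"
            "Reply with exactly one letter: A or B."
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content.strip()

Run something like this over many matched pairs (one human-written description, one LLM-written description), alternate which side gets the human text to control for position bias, and count how often the LLM-written side wins. The study’s headline finding is that it wins 60 to 95 percent of the time.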

  1. AI prefers its own kind

The authors summed it up pretty clearly. “Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options.”

On its face, that’s almost funny. But zoom out, and it’s a recipe for trouble. If AI keeps reinforcing AI, human voices risk getting pushed to the sidelines.

  2. Welcome to the “hall of mirrors”

Picture this: 90% of future online content is written by AI, graded by AI, and cited by AI. The study suggests this isn’t sci-fi; it’s already starting to happen.

That’s not a marketplace of ideas. That’s an echo chamber.

  3. Bias over quality

Here’s the kicker: AI’s preference has nothing to do with accuracy, novelty, or truth. It’s just bias.

That means if you lean too hard on AI for your marketing, research, or content pipeline, you might end up amplifying fluff — things that sound polished but say nothing new.

  4. Trouble in academia

If AI starts “citing” AI as if it were peer-reviewed work, research credibility takes a huge hit.

“This suggests the possibility of future AI systems implicitly discriminating against humans as a class, giving AI agents and AI-assisted humans an unfair advantage,” warn the researchers.

That’s not just ironic. That’s dangerous.

  5. Why humans still matter

Here’s the silver lining: in a world of AIs citing AIs, being human is the differentiator.

The fix? Keep human oversight, critical judgment, and originality at the center. Your messy, creative, unpredictable brain is the best safeguard against an AI echo chamber.

What this means for you

If you’re working with AI every day:

  • Don’t trust it blindly. Fact-check, always.
  • Mix voices. Blend human insight with AI speed.
  • Be transparent. Say when AI had a hand in your work.
  • Double down on originality. Perspective > productivity.

Because if your AI starts quoting another AI — and everyone else lets it slide — the internet risks becoming one big recycling plant for machine-made, muddled content. That’s not innovation. That’s autopilot.

Final Thoughts

This study is a wake-up call. AI isn’t neutral. It has preferences, and sometimes those preferences distort reality. If we want AI to serve us, not sideline us, humans have to stay in the loop.

Otherwise, we’re not building knowledge. We’re just watching machines talk to themselves.

By Jeff Domansky, Managing Editor

