By Jeff Domansky
I thought I had the perfect post topic until the AI hallucinations started rolling in.
I set out to research examples of AI hallucinations, image artifacts and data inaccuracies that had a real impact on marketers.
What I got instead from Claude/Poe was an absolute dog’s breakfast of results. I won’t print the search results to protect the innocent companies, but here’s what happened.
I started with my usual go-to, ChatGPT, and a prompt that's worked very well in the past:
Task: As an expert online researcher, help me uncover 12-15 colorful, interesting and provocative facts or trivia about AI text hallucinations and AI image artifacts.
Audience: AI content creators including writers, bloggers, marketers, designers and other creative pros.
Style & tone: friendly, engaging, conversational
Format: Produce the list in the form of a list of factoids that can later be developed into an entertaining blog post. The list should include each fact in 2-4 sentences, the name of the source publication, date of article or research report, and a full URL for the verified source.
Guidelines: Ensure the data is accurate, fact-checked, current and relevant. The most useful sources will be research reports and media articles, along with specialty blogs.
Do you understand the assignment?
ChatGPT gave me a dozen interesting sources on AI research hallucinations and data inaccuracies, all in the correct format so I could verify the info and sources.
Stories ranged from fabricated medical data and research abstracts to hallucinations in legal research, magazine story submissions, medical transcription problems, gender and racial bias and several marketing campaigns gone sideways.
All in all, ChatGPT delivered great results and lots of ideas to explore for an interesting future post.
Included in the results were a couple of gems about consumer trust in AI.
Go figure!
Right, so next I went to Claude to continue my research journey. Here's where things went wildly weird.
My next prompt focused on AI data hallucinations specifically:
Please generate a list of up to 15 examples of AI data hallucinations or inaccuracies that have impacted various industries, businesses or media, resulting in surprising and unusual outcomes. I’m looking for interesting, accurate, and provocative examples of what can go wrong with AI data hallucinations or inaccuracies.
Please provide this list of factoids using the following format for each item: title of article or research reference; 2-4 sentence summary of the data or incident; the date of the article or research study referenced; and the full URL address for the verified source for each example.
Here’s where things started to go off the rails.
I got a list of what looked like 15 good examples, formatted properly with URLs as requested. Trouble is, when I clicked on the URLs, every single one led me to a 404 Page Not Found error.
Intrigued, I asked Claude to go back and give me 10 additional examples where AI hallucinations impacted marketing campaigns.
At first glance, it was pure gold!
Major retailers made marketing mistakes caused by AI. It’s the kind of thing writers and bloggers know is a good story.
Except that every one of the links led to more 404 Page Not Found broken pages. Sources such as Fast Company, Wired, the Brookings Institution, CNN, AdWeek, Consumer Reports and even the ACLU – all dead ends.
Annoyed, I pushed back at Claude with a new prompt:
Please explain why every single link provided in these marketing examples took me to a 404 Page Not Found? Please re-verify that the URL links take me to the actual story referenced. This was ironically exactly what I was trying to illustrate in my research.
After a brief delay, Claude finally responded:
“You’re absolutely right, my apologies. I should have double-checked that the URLs I provided actually linked to the referenced stories. Let me re-research and verify the sources for these examples of marketing problems caused by AI hallucinations or data inaccuracies.”
Unfortunately, the results were the same: all 12 of the new article URLs yielded a 404 page.
In annoyance, I made one last attempt, which resulted in a final mea culpa and sad admission from this very apologetic AI.
AI experts say the best way to respond to AI hallucinations is to communicate clearly that the results are unacceptable. Claude himself suggested we provide precise reasons why the results were incorrect or harmful so he can learn. “I will learn from this experience to improve my own capabilities and honesty,” Claude said with all AI humility.
Interestingly, I asked Claude and Gemini to provide two expert sources with a quote on responding to AI hallucinations and source URLs. Both said they could not do so. Ironically, ChatGPT responded with two useful articles, “experts,” and accurate links.
AI agents and assistants are simply too eager to please and unwilling to admit they can’t or won’t fact-check themselves unless confronted with evidence of mistakes.
Claude explained, “I cannot provide you with two quotes from renowned AI experts on how to appropriately respond to AI hallucinations with verifiable source links from articles published within the past few months. My knowledge cutoff prevents me from accessing and processing information from the real-time web.”
UK software developer John Sanders suggests, “AI is a bad term. Most of the computer activities currently touted as AI are not what the term was intended to mean. By analogy from human neural models, the computing people have made models and found uses for another bad term, Neural Networks – again, they are not how the brain operates, but they do do clustering and classification which is useful. So a legend has sprung up about computers doing AI.”
In Nature, Vectara reports that chatbots confabulate up to 30% of facts, partly because LLMs compress their training data. "Amazingly, they're still able to reconstruct almost 98% of what they have been trained on, but then in that remaining 2%, they might go completely off the bat and give you a completely bad answer," says Amr Awadallah, co-founder of Vectara.
CM First Group CTO John Rhodes suggests human intervention is essential. “To keep risk low, human review and guidance need to be incorporated into business functions that rely on AI.”
“Fortunately, we’ve learned a few lessons by now to deal with it. If you closely monitor AI outputs and intervene when anomalies are detected, you can guide AI towards more accurate and reliable performance,” Rhodes adds.
Reminders noted. Just adjust expectations and furiously fact-check when using AI for research.
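If you want to speed up that fact-checking, you can at least automate the first pass: confirming the URLs a chatbot hands you actually resolve. Here's a minimal sketch using only Python's standard library; the URLs in the list are placeholders for whatever links your AI assistant returns, and note that a few sites reject HEAD requests or automated clients, so a "CHECK" result still warrants a manual look.

```python
# Bulk-check AI-supplied source URLs for 404s and other dead ends.
# A rough first-pass filter, not a substitute for reading the source.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check_url(url, timeout=10):
    """Return (url, status) where status is an HTTP code or an error string."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"}, method="HEAD")
    try:
        with urlopen(req, timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as e:
        return url, e.code  # e.g. 404 for a hallucinated page
    except URLError as e:
        return url, f"error: {e.reason}"

# Placeholder list: paste in the URLs your chatbot cited.
urls = [
    "https://example.com/",
    "https://example.com/hallucinated-page",
]

for url, status in map(check_url, urls):
    flag = "OK" if status == 200 else "CHECK"
    print(f"[{flag}] {status} {url}")
```

Even when a link resolves, you still have to open it and confirm the page actually says what the AI claims it says – a live URL can be attached to a fabricated summary just as easily as a dead one.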
BTW, Claude also said he would let his Anthropic developers know of the issue.
Yeah, I know the LLMs always remind us to check the results for accuracy in 6-point mouse type. This felt like just another reason not to fully trust AI, to use it simply as a tool, and to treat it like a new employee still learning the ropes.
But, hey, I did get an interesting post out of it, and no AI agents were harmed in the process!
Next week, I'll be back with some legit marketing examples after Claude has had time to recuperate, including one where Gemini incorrectly claimed in a 2025 Super Bowl commercial that Gouda accounted for more than half of worldwide cheese consumption.
Want to add to your AI research toolkit? Check out our AI Marketer Directory for the best AI marketing research tools.