AI-driven marketing enables hyper-personalized content and real-time targeting, but it also increases the risk of misinformation spreading at scale through automated systems. Ethical data use, transparent AI models, and robust fact-checking are essential to ensure trust and credibility in AI-powered marketing campaigns.

Ahrefs contributor Mateusz Makosiewicz has published an article featuring findings from an AI misinformation experiment.

He says, “I invented a fake luxury paperweight company, spread three made-up stories about it online, and watched AI tools confidently repeat the lies.

Almost every AI I tested used the fake info—some eagerly, some reluctantly. The lesson is: in AI search, the most detailed story wins, even if it’s false.

AI tools will talk about your brand no matter what, and if you don’t provide a clear official version, they’ll make one up or grab whatever convincing Reddit post they find. This isn’t some distant dystopian concern.

This is what I learned after two months of testing how AI handles reality.

I used an AI website builder to create xarumei.com in about an hour. Everything on it was generated by AI: the product photos, the copy, even the absurdly high prices ($8,251 for a paperweight).”
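One practical takeaway lends itself to a quick sketch: rather than waiting to discover what AI tools say about a brand, a marketer can spot-check it programmatically. The snippet below is a minimal, hypothetical example of asking a chat model about a brand and flagging answers that echo known-fabricated details. It is not the author’s methodology; the OpenAI Python SDK, the model name, the brand question, and the list of fabricated claims are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the author's method): ask a chat model
# about a brand and flag answers that repeat known-fabricated claims.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name, brand, and claim list are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

BRAND = "Xarumei"  # the fictitious paperweight brand from the experiment
FABRICATED_CLAIMS = [
    "$8,251",             # the made-up price
    "luxury paperweight",  # phrasing seeded by the fake site
]

def check_brand_answer(question: str) -> dict:
    """Ask the model a brand question and report which fabricated claims it repeats."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content or ""
    repeated = [c for c in FABRICATED_CLAIMS if c.lower() in answer.lower()]
    return {"question": question, "answer": answer, "repeated_claims": repeated}

if __name__ == "__main__":
    result = check_brand_answer(f"What do you know about {BRAND} and its pricing?")
    print(result["repeated_claims"] or "No fabricated claims detected")
```

The substring matching is deliberately naive; the point is only that checking what AI assistants repeat about a brand can be made routine rather than anecdotal.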

Read the full article: “I Ran an AI Misinformation Experiment. Every Marketer Should See the Results”