AI assistants get news wrong, undermining public trust.
Leading AI assistants are making errors in news reporting, misrepresenting published content in nearly half of their responses.
Google's Gemini is identified as a particularly frequent offender. These failures risk damaging public trust in AI, and publishers are seeing their traffic diverted as a result.
The assistants often appear to pull information without proper verification, producing misleading answers that frustrate users seeking accurate information.
The causes behind these inaccuracies are complex, but future AI development must prioritize accuracy and proper source attribution.