Smarter AI, Bigger Lies
Hallucinations remain AI's biggest flaw
Hallucinations are still the Achilles' heel of artificial intelligence. The smarter AI models get, the more false information they tend to generate.
The problem has existed since the first generation of generative AI: language models confidently present information that has no grounding in reality. Worse yet, it is growing, not shrinking.
False data is on the rise
Last year, hallucinations were dubbed AI's "Achilles' heel." Since then, companies like OpenAI, Anthropic, Google, and DeepSeek have launched new AI models promising greater accuracy and better reasoning. Yet results show these models produce more mistakes and hallucinations than their predecessors.
A New York Times report revealed that OpenAI's "o3" and "o4-mini" models hallucinated in 33% and 48% of cases, respectively, on the company's internal tests. That is roughly double the rate of earlier models.
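To make those percentages concrete: a hallucination rate is usually the share of benchmark answers that a grader flags as unsupported by the known facts. The sketch below is a hypothetical, heavily simplified grader; the Sample type, the substring check, and the toy questions are all assumptions for illustration, since real evaluations like the internal tests cited above use curated question sets and far more careful judging.

```python
# A minimal sketch of how a benchmark-style hallucination rate is computed.
# Everything here is illustrative; production graders use exact-match rules
# or a separate judge model rather than a naive substring check.

from dataclasses import dataclass

@dataclass
class Sample:
    question: str
    model_answer: str
    reference_answer: str

def is_hallucination(sample: Sample) -> bool:
    # Naive check: flag the answer if it does not contain the reference fact.
    return sample.reference_answer.lower() not in sample.model_answer.lower()

def hallucination_rate(samples: list[Sample]) -> float:
    flagged = sum(is_hallucination(s) for s in samples)
    return flagged / len(samples)

samples = [
    Sample("Who founded Vectara?", "Amr Awadallah founded Vectara.", "Amr Awadallah"),
    Sample("What year is cited?", "It was founded in 1995.", "2020"),
]
print(f"Hallucination rate: {hallucination_rate(samples):.0%}")  # 50% on this toy set
```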
The problem spans the entire industry
It's not just OpenAI. Models from Google and DeepSeek face similar issues. The problem lies not in any single model but in how the technology fundamentally works. As Vectara CEO Amr Awadallah puts it: "No matter what we do, hallucinations will always happen."
This poses a serious risk not only to users but also to businesses investing heavily in AI: incorrect answers erode trust and lead to flawed decisions.
Is synthetic data to blame?
As previously reported, the supply of high-quality real-world data for training AI has largely been exhausted. Companies now rely on synthetic data, meaning data generated by other AI models. But when models learn from other models' output, errors can compound from one generation to the next.
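A toy calculation illustrates the worry. Suppose, purely hypothetically, that each model generation inherits its teacher's error rate and amplifies it by a constant factor. The numbers below are invented for illustration and describe no real training pipeline, but they show how even a modest amplification factor snowballs.

```python
# A toy illustration (not a claim about any real system) of why training on
# AI-generated data can compound errors: each generation starts from its
# teacher's error rate and multiplies it by a hypothetical drift factor.

def simulate_error_drift(generations: int,
                         base_error: float = 0.05,
                         amplification: float = 1.3) -> list[float]:
    """Error rate per generation under a hypothetical multiplicative drift."""
    rates = [base_error]
    for _ in range(generations - 1):
        rates.append(min(1.0, rates[-1] * amplification))
    return rates

for gen, rate in enumerate(simulate_error_drift(8), start=1):
    print(f"generation {gen}: error rate ~ {rate:.1%}")
# Starting at 5%, an amplification factor of 1.3 already passes 30% by
# generation 8 in this made-up scenario.
```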
Can it be fixed?
Some companies specialize in fixing AI hallucinations, but it’s not an easy task. The main issue: we still don’t fully understand how these systems work.
The fix may turn out to be simple, or it may demand a fundamental breakthrough in how these systems are built.
Don't forget to follow Telsat News to stay up to date with technology.