AI Factual Errors vs AI Hallucinations
Written by Gregor Blaj, August 2025
Everyone’s talking about AI these days, and so are we. At Lancom Technology, we use the technology internally and provide a set of Generative AI services. In both cases, understanding AI’s challenges and limitations is essential to achieving lasting value.
Recently, I was asked about errors in AI outputs. The question was whether ‘hallucination’ is the same thing as the AI making errors of fact. While both fall into the broad category of inaccurate outputs, there is a difference, and it can matter depending on the context of use.
Though the terms are sometimes used interchangeably, factual errors are generally smaller in scope. These occur when the model provides incorrect or imprecise details, such as wrong dates, figures, or locations, or when it applies faulty reasoning that leads to inaccurate conclusions.
From a technical standpoint, factual errors often arise from gaps, biases, or overgeneralization in the model’s training data. From a practical standpoint, these errors are usually minor inaccuracies or misunderstandings. Often inconvenient, but rarely entirely fabricated.
However, relying on incorrect facts can still cause embarrassment or reputational damage. It doesn’t look professional, either.
Hallucinations, by contrast, are a much bigger kettle of fish: the AI confidently generates entirely fabricated or ungrounded information. The output can appear highly plausible and include references to non-existent sources, events, or details. Examples include citing ‘research papers’ that don’t exist, inventing lyrics nobody ever wrote, or code generation models referencing realistic but completely fabricated software packages or dependencies.
The key distinction is the confident, creative invention of entire ‘bodies of knowledge’ in hallucinations. How does that happen? Because the AI isn’t thinking in the proper sense of the word; it applies probabilistic next-word prediction. When one plausible-but-wrong choice cascades into the next as the LLM synthesises an output, it gets carried away, and you get a hallucination.
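To make that concrete, here is a minimal, purely illustrative sketch of next-word prediction. The phrases and probabilities below are invented for the example; real models work over tokens and billions of parameters, but the principle is the same: each step picks a statistically plausible continuation, with no check that the resulting sentence is true.

```python
import random

# Toy, hypothetical next-word probabilities -- invented for illustration only.
NEXT_WORD_PROBS = {
    "The study was published in": {"Nature": 0.4, "2019": 0.35, "the": 0.25},
    "The study was published in Nature": {"in": 0.6, "by": 0.4},
    "The study was published in Nature in": {"2019": 0.5, "2021": 0.5},
}

def generate(prompt: str, steps: int = 3) -> str:
    """Repeatedly append a 'likely next word' -- plausibility, not truth."""
    text = prompt
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(text)
        if not probs:
            break
        words, weights = list(probs.keys()), list(probs.values())
        # Sample the next word by probability; nothing here verifies facts.
        text = f"{text} {random.choices(words, weights=weights)[0]}"
    return text

print(generate("The study was published in"))
# May print "The study was published in Nature in 2021" even if no such
# paper exists: each step only asks "what word usually comes next?"
```

Run it a few times and you get different, equally confident completions. That is the mechanism behind a hallucination: fluent continuation without grounding.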
Take a look around online and you’ll find plenty of examples where AI hallucinations have caused problems for lawyers, doctors, and other professionals. The major issue? They didn’t spot the mistakes before using the output in their own work.
Avoiding factual errors and spotting hallucinations are therefore emerging as key skills for those using AI.
A good rule of thumb when getting started with AI is to use it for help in domains where you are already an expert. As a subject matter expert, you are equipped to notice when the AI outputs are weird, out of place, or flat-out wrong.
This, of course, leads to a more important realisation: AI should be used by experts, to help them get through more work, faster.
Don’t, in other words, expect the AI to think for you. And do always scrutinise its outputs.
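Scrutiny can also be mechanical. The fabricated software dependencies mentioned earlier are a good example: before installing a package an AI suggested, a single request can check whether the name even exists on PyPI. This is a rough sketch using PyPI’s public JSON endpoint; the package names in the example are just placeholders.

```python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows about this package name.

    A quick sanity check before installing a dependency an AI suggested;
    hallucinated package names typically come back as a 404 here.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

print(package_exists_on_pypi("requests"))                      # a real package
print(package_exists_on_pypi("definitely-not-a-real-pkg-123")) # likely invented
```

A check like this doesn’t replace expert review, but it catches one common class of hallucination before it reaches your codebase.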
About Gregor Blaj
Gregor Blaj is the Technical Director at Lancom Technology with expertise spanning systems engineering, project management and customer relationship management. Gregor is also an AWS Certified Solutions Architect Professional and an Azure Solutions Architect Expert.