Confabulation (in AI)

In AI, confabulation refers to a model generating information that sounds plausible but is factually incorrect or entirely made up. It's not a bug in the code; it's a byproduct of how language models predict text based on patterns rather than truth. Like an overconfident autocomplete, the model fills in gaps with convincing but false details, as the sketch below shows.
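
To make that mechanism concrete, here is a toy sketch of pattern-based next-word prediction. The corpus and the `predict_next` function are hypothetical stand-ins for a real model's training data and decoding step; the point is that the most frequent pattern wins, whether or not it is true.

```python
from collections import Counter

# Hypothetical "training data": note the repeated misconception.
corpus = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of australia is sydney",
]

def predict_next(prompt: str) -> str:
    """Pick the word that most often followed the prompt in the corpus."""
    continuations = Counter()
    for sentence in corpus:
        if sentence.startswith(prompt):
            rest = sentence[len(prompt):].strip().split()
            if rest:
                continuations[rest[0]] += 1
    # The most frequent continuation wins; nothing here checks truth.
    return continuations.most_common(1)[0][0] if continuations else "<unknown>"

print(predict_next("the capital of australia is"))  # -> "sydney", confidently wrong
```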

Why it matters:
Confabulations can mislead users, skew test results, or introduce subtle errors in automated workflows or code. Always validate critical outputs, especially in regulated domains like healthcare, finance, or law.
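
One lightweight way to act on that advice is to gate critical model outputs behind a check against a trusted source. This is a minimal sketch: `validate_claim`, `trusted_facts`, and `model_answer` are hypothetical, and in practice the trusted source would be a database, API, or documentation rather than an in-memory set.

```python
def validate_claim(claim: str, trusted_facts: set[str]) -> bool:
    """Accept a model's claim only if it matches a trusted source."""
    return claim in trusted_facts

# Stand-ins for a real knowledge base and a real model response.
trusted_facts = {"Paris is the capital of France"}
model_answer = "Lyon is the capital of France"  # a confabulated answer

if validate_claim(model_answer, trusted_facts):
    print("Validated:", model_answer)
else:
    print("Rejected: could not verify against a trusted source")
```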

Hallucination is when an AI generates output that is factually incorrect or nonsensical, even though it sounds plausible. Confabulation is when an AI fills gaps in its knowledge with fabricated but coherent details, often due to missing or ambiguous input. Hallucinations are more common in chatbots, summarisation, and Q&A tasks, while confabulations tend to appear in memory-based models, storytelling, and explanation tasks.