Confabulation (in AI)

In AI, confabulation refers to a model generating information that sounds plausible but is factually incorrect or entirely made up. It's not a bug in the code; it's a byproduct of how language models predict text based on patterns rather than truth. Think of an overconfident autocomplete that fills in gaps with convincing but false details.

Why it matters:
Confabulations can mislead users, skew test results, or introduce subtle errors in automated workflows or code. Always validate critical outputs, especially in regulated domains like healthcare, finance, or law.
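One lightweight way to validate critical outputs is to cross-check the model's answer against a trusted reference before accepting it. The sketch below is a minimal illustration, assuming a hypothetical vetted lookup table (`KNOWN_DRUG_DOSES`) and a free-text model answer; real pipelines would use a curated database and more robust matching.

```python
# Minimal sketch: validate a model's answer against trusted reference data.
# KNOWN_DRUG_DOSES and validate_dose are hypothetical names for illustration.

KNOWN_DRUG_DOSES = {
    "paracetamol": "500 mg",  # values sourced from a vetted reference, not the model
    "ibuprofen": "200 mg",
}

def validate_dose(drug: str, model_answer: str) -> bool:
    """Accept the model's answer only if it agrees with the trusted record."""
    expected = KNOWN_DRUG_DOSES.get(drug.lower())
    if expected is None:
        return False  # no trusted record: treat the answer as unverified
    return expected in model_answer

# A confabulated dose fails the check; a correct one passes.
print(validate_dose("paracetamol", "The standard dose is 650 mg."))  # False
print(validate_dose("paracetamol", "The standard dose is 500 mg."))  # True
```

The point is not the string matching itself but the pattern: never let a generated value flow into a regulated workflow without comparing it to a source of truth.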

Hallucination is when an AI generates output that is factually incorrect or nonsensical, even though it sounds plausible. Confabulation is when an AI fills gaps in its knowledge with fabricated but coherent details, often triggered by missing or ambiguous input. Hallucinations are more common in chatbots, summarisation, and Q&A tasks, while confabulations tend to appear in memory-based models, storytelling, and explanation tasks.