Hallucination
When an AI model confidently generates information that is incorrect, fabricated, or nonsensical.
What It Is
A hallucination occurs when an AI produces output that sounds completely plausible but is factually wrong. The model might cite a study that does not exist, invent a person’s biography, or answer a question confidently despite having no real knowledge of the topic. This happens because language models generate text by predicting the most likely next word based on patterns, not by looking up facts in a database. They have no internal sense of “true” or “false.” If the pattern of language suggests a confident answer, the model will produce one, whether or not the content is accurate.
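The pattern-prediction idea can be made concrete with a deliberately tiny sketch. This is not a real language model: it is a bigram counter over an invented three-sentence corpus, and every name in it (corpus, next_words, predict_next) is made up for illustration. But it shows the core failure mode: the model emits the statistically most likely next word, with no fact lookup and no notion of truth.

```python
from collections import Counter, defaultdict

# Invented toy corpus: "paris" follows "is" twice, "madrid" once.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

# Count which word follows which (a bigram table).
next_words = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def predict_next(word):
    """Return the most frequent follower, however thin the evidence."""
    return next_words[word].most_common(1)[0][0]

# Asked to continue after "is", the model confidently says "paris" --
# even if the question was about Spain. Pattern likelihood, not truth.
print(predict_next("is"))  # prints "paris"
```

A real LLM is vastly more sophisticated, but the underlying objective is the same: produce the most plausible continuation, which is not the same thing as the correct one.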
Why It Matters
Hallucinations are the single biggest risk when using AI for anything that matters. If you trust AI output without verification, you will eventually publish false information, give bad advice, or make decisions based on fabricated data. Understanding that hallucination is a fundamental property of how these models work (not a bug that will be fully fixed) makes you a more careful and effective operator. Every AI workflow needs a verification step, especially for facts, names, dates, and citations.
In Practice
Before publishing any AI-generated content, cross-check claims against primary sources. When building AI tools for others, use techniques like RAG to ground responses in real data and include disclaimers where appropriate. Lower the model’s temperature setting to reduce creative improvisation when factual accuracy is the priority.