Using AI: Understanding Hallucinations and the Need for Verification
Artificial Intelligence (AI) has advanced rapidly, bringing powerful tools capable of writing articles, analyzing data, and answering complex questions in seconds. However, these tools are not perfect. A significant and potentially dangerous flaw is the phenomenon known as AI hallucination, where the technology generates plausible-sounding but entirely false information. These errors are not harmless glitches; they carry real-world risks, making human oversight and verification a necessity before using any AI-generated output.
In part one of this series, we will look at what AI hallucination is, how it can be detrimental, and what users can do to reduce the chances of relying on false information.
What is AI Hallucination?
AI “hallucination” describes a situation in which a large language model (LLM) produces content that is factually incorrect, nonsensical, or unfaithful to the provided source material, all while maintaining a convincing tone. Despite the name, AI models aren’t “seeing things” the way a person experiencing a hallucination would. These errors stem from the models’ core function: predicting the next word in a sequence based on patterns in vast amounts of training data.
This process prioritizes fluency and coherence over truth, meaning a model may confidently generate a fictitious fact or citation because it aligns with patterns it learned during training.
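A toy sketch can make this concrete. The snippet below is purely illustrative (the prompt and probabilities are invented, and real models are far more complex): it shows that generation amounts to choosing a statistically likely next word, with no step that checks whether the resulting claim is true.

```python
# Toy illustration only -- not how a real LLM is implemented. The point is that
# generation picks a statistically likely next word; nothing in this step
# checks whether the completed sentence is true.

# Invented probabilities for continuing the prompt "The case was decided in".
next_word_probs = {
    "1998": 0.41,   # sounds plausible, but may be entirely wrong
    "2003": 0.33,
    "the": 0.18,
    "error": 0.08,
}

def predict_next_word(probs):
    """Return the most probable next word -- optimizing fluency, not accuracy."""
    return max(probs, key=probs.get)

print(predict_next_word(next_word_probs))  # prints "1998", whether or not it's correct
```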
Real-World Examples with Real Consequences
The consequences of unverified AI output can be severe, impacting professional fields where accuracy is paramount. One such case was covered by The Guardian in May 2025.
In that case, a lawyer used ChatGPT to research and draft a brief that cited fictitious legal precedents. When opposing counsel reviewed the document, they wrote:
“It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database and could only be found in ChatGPT and references to cases that are wholly unrelated to the referenced subject matter.”
The cited case could not be found in any legal database, and the lawyer was held accountable for failing to verify the AI’s output.
Even specialized legal AI tools currently in use are not immune; a 2024 Stanford study found that legal AI tools such as Lexis+ AI and Ask Practical Law AI still produced incorrect information 17-34% of the time, including supplying citations that do not support the claims they are attached to.
In critical fields like medicine, hallucinations pose direct safety risks. An AI model might incorrectly diagnose a condition, suggest a non-existent treatment, or fabricate scientific studies, leading to misinformed decisions with potentially harmful outcomes for patients. Studies have shown that AI medical chatbots can repeat and amplify misinformation, highlighting the need for strong safeguards and human review in healthcare settings.
The Non-Negotiable Need for Verification
These incidents underscore a crucial point: AI is just a tool, not an infallible source of truth. The responsibility for the accuracy and reliability of any information rests with the end-user, particularly in professional and high-stakes environments.
Before using any AI-generated content, you should:
- Fact-Check: Cross-reference all key facts, statistics, and claims against multiple, reliable, and primary sources.
- Question Citations: If the AI provides sources or citations, verify that they are real and that they actually support the claims being made (one way to automate part of this check is sketched after this list).
- Employ Human Oversight: Where appropriate, subject matter experts should review AI-generated content for accuracy and relevance. Human review is a critical safeguard against errors.
- Use Specific Prompts: Clear and precise prompts can help guide AI to more accurate responses and reduce ambiguity.
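For citations that include a DOI, part of this verification can be automated. The snippet below is a minimal sketch, assuming Python with the requests library and the public Crossref REST API (api.crossref.org); the DOI shown is a placeholder. It only confirms that a record exists, so a human still needs to read the source and confirm it supports the claim.

```python
# Minimal sketch: check whether a cited DOI resolves to a real record in
# Crossref. Existence alone does not prove the source supports the claim.
import requests

def doi_exists(doi):
    """Return True if Crossref has a record for this DOI, False if it does not."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

if __name__ == "__main__":
    citation_doi = "10.1000/example-doi"  # placeholder DOI from an AI-drafted citation
    if doi_exists(citation_doi):
        print("DOI found in Crossref -- now read it and confirm it supports the claim.")
    else:
        print("DOI not found -- treat the citation as suspect and verify manually.")
```

A check like this catches only one failure mode (a citation that points nowhere); it cannot catch a real source cited for a claim it does not make, which is why human review remains the final safeguard.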
At its current level of maturity, generative AI still produces hallucinations as a normal part of its operation. By applying a rigorous fact-checking process, we can benefit from AI while mitigating the risks posed by its tendency to hallucinate.
As always, NGT is here to help!
Contact ngthelp.com with questions.