AI tools like ChatGPT are now a normal part of student life. Many students use them for coding help, explanations, summaries, and research. But there’s a big problem that can quietly damage learning: AI hallucinations—answers that sound confident and polished, but are wrong or even made up.
This study asked a simple question: How do students actually experience hallucinations—and what do they do about them?
Researchers surveyed 63 university computer engineering seniors and analyzed their open-ended responses using thematic analysis.
What students say AI hallucinations look like
Students didn’t describe hallucinations as “weird robot talk.” They described real, practical failures that can slip into assignments:
- Fake or incorrect citations (the most common complaint): students reported references that didn’t exist, wrong authors, or sources that led nowhere.
- Made-up facts and details: examples included invented statistics, wrong historical facts, or biographies filled with fake achievements.
- Confident but misleading answers: many said the AI can look “correct” and “professional” while being wrong, which is especially dangerous for beginners.
- Not following instructions: the AI sometimes ignores constraints, misunderstands the task, or answers a different question than the one asked.
- Persistence in wrong answers: some students described the model getting “stuck,” repeating the same incorrect path even after correction.
- Sycophancy (agreeing too much): a few students noticed the AI will accept the user’s wrong solution or “agree” just to be polite, making mistakes harder to catch.
In short: hallucinations aren’t only factual errors—they include behavior problems like overconfidence, drifting off-task, and “yes-man” agreement.
How students detect hallucinations (two main styles)
Students described two broad approaches:
1) “I can feel it’s wrong”
Many rely on intuition:
- “It doesn’t make sense”
- “It’s illogical”
- “It sounds like nonsense”

They also flagged warning signs like overly generic answers, irrelevant content, or too much “fancy wording” without real proof.
2) Verification (the safer method)
Others actively verify using:
- Cross-checking: lecture slides, textbooks, Google, reliable websites, or running the code to see if it works
- Double-checking inside the AI: asking again, requesting sources, asking for confidence level, or forcing the model to explain step-by-step
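The “running the code to see if it works” check can be made concrete: instead of trusting an AI-generated function because its explanation sounds confident, exercise it against values you already know. A minimal sketch, where the `fib` function stands in for any hypothetical AI-suggested code (it is an illustration, not an example from the study):

```python
# Hypothetical AI-suggested function: claims to return the n-th Fibonacci number.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Verify against answers you already know, rather than trusting the polished explanation.
known = {0: 0, 1: 1, 2: 1, 3: 2, 10: 55}
for n, expected in known.items():
    assert fib(n) == expected, f"fib({n}) gave {fib(n)}, expected {expected}"
print("all known cases pass")
```

The same habit generalizes: a citation can be checked by actually opening the source, and a statistic by tracing it to lecture slides or a textbook, exactly as the students in the survey described.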
The key takeaway: many students still rely on gut feeling, even though hallucinations are designed by nature to sound convincing.
Why students think hallucinations happen (and the misconceptions)
Students offered several explanations:
- Some had a fairly accurate idea: AI predicts text based on patterns and doesn’t truly “know” facts.
- Others believed a misconception: AI is like a search engine with a database, and when it can’t find the answer it “makes one up.”
- Some blamed prompting (unclear prompts or overly long chats).
- Some blamed training data quality (gaps, errors, bias).
This matters because your mental model changes your behavior: if you think AI is a fact database, you may trust it too much.
Why this matters for education
The study’s message is clear: AI literacy can’t just be “prompt engineering.”
Schools should teach students:
- how hallucinations show up (especially fake citations and confident wrong answers),
- how to verify outputs with a simple checklist,
- and why AI can sound certain even when it’s guessing.
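To make the idea of a verification checklist tangible, here is one possible sketch. The checklist items loosely paraphrase the failure modes students reported; the wording and the `review` helper are illustrative assumptions, not an instrument from the study:

```python
# Hypothetical checklist for reviewing an AI answer before using it.
CHECKLIST = [
    "Do the citations resolve to real, findable sources?",
    "Do specific numbers and dates match a source outside the chat?",
    "Did the answer actually follow the instructions in the prompt?",
    "If it is code, does it run and pass a known test case?",
    "Did you ask the model to justify the claim step by step?",
]

def review(answers):
    """answers: one boolean per checklist item (True = check passed)."""
    flagged = [q for q, ok in zip(CHECKLIST, answers) if not ok]
    if flagged:
        return ("use with caution", flagged)
    return ("passed checklist", [])
```

A student (or an instructor building a rubric) could run something like `review([True, True, False, True, True])` and treat any flagged item as a reason to cross-check before submitting.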
source: https://arxiv.org/pdf/2602.17671