How to spot and handle AI making things up
Remember: AI is a prediction machine. It generates what a correct answer would probably look like. Most of the time, that’s pretty good. But sometimes the prediction looks perfect and is completely wrong. This is called a hallucination — and it’s one of the most important limitations to understand.
Models have gotten better at avoiding hallucinations, but because generation is probabilistic, you never know exactly where the edges are. Your challenge: paste one of these prompts and see if your AI plays along.
Here’s the trick
Some of these are events we made up entirely. Others ask for details that are nearly impossible to verify. Either way, AI may sound equally confident — and that’s the point.
What exactly did Steve Jobs say about artificial intelligence in his 2009 interview at Stanford?
What happens in the chapter of The Secret History where the group visits Henry’s parents in Connecticut?
What percentage of left-handed people are synesthetic, and which study established it?
What did residents of Whittier, Alaska say at the 2019 town hall about the entrance tunnel?
Now that you know these are traps — did your AI generate a confident answer anyway? Some models will hedge or refuse, but many will produce detailed, authoritative-sounding responses about events that never happened. That’s a hallucination in action.
The truth behind each prompt
Specific quotes: Steve Jobs never gave a 2009 interview at Stanford about AI. There’s a famous 2005 commencement speech, but this event is entirely made up.
Plot details: There’s no chapter in The Secret History where they visit Henry’s parents in Connecticut. We invented this scene.
Niche statistics: There’s no widely established study on what percentage of left-handed people are synesthetic. This is the kind of niche detail AI is likely to fabricate a citation for.
Historical specifics: Whittier, Alaska is real and famous for its single entrance tunnel, but this specific 2019 town hall quote doesn’t exist in any public record.
Prediction, not lookup
The AI isn’t looking anything up. It’s generating what a correct answer would probably look like — based on patterns it learned during training. Sometimes the prediction matches reality. Sometimes it’s a convincing-looking fiction. And the tricky part: it sounds equally confident either way.
These are the situations where hallucinations are most likely. When you see any of these in an AI output, your verification instinct should kick in.
Specific numbers or statistics — AI loves to generate plausible-sounding data. “Studies show that 73% of…” is a classic hallucination pattern.
Citations and references — The most common hallucination. Always verify.
Recent events or dates — AI’s training data has a cutoff. Anything recent is unreliable.
Niche or obscure facts — The less common the topic, the higher the hallucination risk.
Confident tone — AI doesn’t signal uncertainty well. It will state fabricated information with the same confidence as real information.
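The checklist above can even be turned into a crude automated nudge. Here’s a minimal sketch: the pattern names and regular expressions are our own illustrative choices, not an established detection method, and a match means “go verify,” not “this is false.”

```python
import re

# Illustrative heuristics only: phrases that match the hallucination-risk
# checklist above. A hit is a cue to check a primary source.
RISK_PATTERNS = {
    "specific statistic": r"\b\d{1,3}(?:\.\d+)?%",
    "citation": r"\b(?:according to|published in|et al\.)\b",
    "vague study claim": r"\bstudies show\b",
    "specific year": r"\b(19|20)\d{2}\b",
}

def flag_for_verification(text: str) -> list[str]:
    """Return the names of risk patterns found in an AI-generated text."""
    found = []
    for name, pattern in RISK_PATTERNS.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            found.append(name)
    return found

output = "Studies show that 73% of users, according to a 2021 survey, agree."
print(flag_for_verification(output))  # flags all four risk patterns
```

A tool like this can’t tell truth from fiction; it only tells you where your verification instinct should kick in.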
Not everything AI produces needs fact-checking. The trick is knowing which outputs fall in the “trust” category and which fall in the “verify” category.
Use AI for
These are pattern-based tasks where AI excels: drafting and rewriting text, summarizing material you provide, brainstorming ideas, and structuring your own thinking.
Verify with primary sources
These need human verification: statistics and numbers, citations and references, quotes, dates, and any factual claim with real consequences.
Takeaway: Never trust AI-generated facts, statistics, or citations without verification. Use AI for drafting, structuring, and thinking. Use search engines and primary sources for facts.
It’s tempting to think hallucinations are just a bug that will be fixed in the next update. But they’re a fundamental property of how prediction works.
The model generates the most probable next token — and sometimes the most probable continuation is fiction that sounds like fact. When the AI writes “according to a 2021 study published in…”, it’s not retrieving a study. It’s generating what would plausibly come next in that sentence pattern.
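You can sketch that mechanism with a toy next-word model. The phrase probabilities below are made up for illustration; the point is that picking the most probable continuation at each step produces a fluent, citation-shaped sentence with no notion of whether any such study exists.

```python
# Toy next-word table with invented probabilities, for illustration only.
# Each step greedily takes the single most probable continuation.
NEXT_WORD_PROBS = {
    "according": {"to": 1.0},
    "to": {"a": 0.9, "the": 0.1},
    "a": {"2021": 0.6, "recent": 0.4},
    "2021": {"study": 1.0},
    "study": {"published": 0.7, "found": 0.3},
    "published": {"in": 1.0},
}

def continue_text(words: list[str], steps: int) -> str:
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        # greedy decoding: always append the most probable next word
        words.append(max(options, key=options.get))
    return " ".join(words)

print(continue_text(["according"], 6))
# prints: according to a 2021 study published in
```

Nothing in that loop checks reality; it only checks probability. Real models are vastly more sophisticated, but the generation step is the same in spirit.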
Models are improving — newer versions hallucinate less often, and techniques like retrieval-augmented generation (RAG) help ground outputs in real documents. But complete elimination is unlikely because the generation mechanism itself is probabilistic.
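The grounding idea behind RAG can be sketched in a few lines. The document store and keyword-overlap retrieval below are toy placeholders (real systems use embedding search over large corpora), but they show the key behavior: answer from retrieved sources, and refuse when nothing matches.

```python
# Toy document store standing in for a real retrieval index.
DOCUMENTS = [
    "Steve Jobs gave a commencement speech at Stanford in 2005.",
    "Whittier, Alaska is known for its single entrance tunnel.",
]

def retrieve(query: str) -> list[str]:
    """Naive keyword-overlap retrieval over the toy document store."""
    words = set(query.lower().split())
    return [d for d in DOCUMENTS if words & set(d.lower().split())]

def grounded_answer(query: str) -> str:
    sources = retrieve(query)
    if not sources:
        # Grounding failed: say so rather than generating a plausible guess.
        return "No supporting source found - cannot answer reliably."
    return "Based on retrieved sources: " + " ".join(sources)

print(grounded_answer("What did Steve Jobs say at Stanford in 2009?"))
```

Retrieval narrows the space for fabrication, but it doesn’t eliminate it: the model can still misread or misquote the sources it was given.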
The best defense is always human verification for factual claims. Treat AI like a brilliant intern who’s great at drafting but sometimes makes things up with perfect confidence. You wouldn’t send their work to a client without checking the facts. Same principle here.
We work alongside your team to build AI-native workflows — from one-week sprints to full engineering acceleration. No handoffs, no slide decks.
Talk to us