Intermediate AI Prompting #hallucinations #verification #critical-thinking

Evaluating LLM Outputs

3 exercises — spot hallucinations, verify technical claims, and write targeted correction prompts.

Hallucination red flags in LLM output
  • Specific version numbers or dates without a cited source
  • API names that "sound right" but don't match documentation
  • High confidence on verifiable facts ("introduced in...", "deprecated in...")
  • Named papers, RFCs, or authors — fabricated citations are common
  • Statistics or percentages without a source
  • Code that compiles but has subtle logic errors (see the sketch after this list)
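
A minimal sketch of that last red flag, in TypeScript (a hypothetical function an LLM might produce; the names are illustrative). It type-checks and runs, but Array.prototype.sort without a comparator sorts numbers as strings, so the result is silently wrong.

  // Looks plausible and compiles, but hides a subtle logic error.
  function median(values: number[]): number {
    const sorted = [...values].sort(); // bug: sorts lexicographically; needs (a, b) => a - b
    const mid = Math.floor(sorted.length / 2);
    return sorted.length % 2 === 0
      ? (sorted[mid - 1] + sorted[mid]) / 2
      : sorted[mid];
  }

  console.log(median([5, 40, 9])); // prints 5; the correct median is 9
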
Exercise 1 / 3
An LLM confidently states: "The React useLayoutEffect hook was introduced in React 18." How do you identify and flag this as a potential hallucination?
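
One way to verify rather than trust the version claim: check an authoritative source (the React changelog or API docs), or reproduce it directly. Below is a minimal sketch, assuming a scratch Node/TypeScript project where you first run npm install react@16.8.0. React Hooks, including useLayoutEffect, shipped in React 16.8, so the import already resolves well before React 18; the claim should be flagged and a source requested.

  // Assumes: npm install react@16.8.0 in a throwaway project.
  import { useLayoutEffect, version } from "react";

  console.log(version);                // "16.8.0"
  console.log(typeof useLayoutEffect); // "function": the hook predates React 18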