What is Hallucination?
An LLM hallucinates when it confidently generates content that looks plausible but is factually wrong, like a student who answers every question, even when unsure, rather than saying "I don't know". Common mitigations: retrieval-augmented generation (RAG), which grounds the model in retrieved data before it answers; cross-validating answers across multiple models; and requiring the model to cite its sources.
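The RAG and citation ideas above can be sketched together: retrieve relevant documents, then build a prompt that forces the model to cite them or admit ignorance. This is a minimal illustration with a toy in-memory corpus and naive keyword-overlap retrieval; the corpus contents, scoring, and prompt wording are all illustrative assumptions, not any specific library's API.

```python
# Sketch of RAG-style hallucination mitigation: ground the model in
# retrieved documents and instruct it to cite sources or say "I don't know".
# Corpus, retrieval heuristic, and prompt text are illustrative assumptions.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, corpus):
    """Build a prompt that restricts the model to retrieved sources with citations."""
    docs = retrieve(query, corpus)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in docs)
    return (
        "Answer ONLY from the sources below. Cite the source id for every claim. "
        "If the sources do not contain the answer, reply 'I don't know'.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Hypothetical two-document corpus for demonstration.
corpus = {
    "doc1": "The Eiffel Tower is 330 metres tall and located in Paris.",
    "doc2": "Mount Everest is the highest mountain above sea level.",
}
prompt = build_grounded_prompt("How tall is the Eiffel Tower?", corpus)
print(prompt)
```

The key design point is that the instruction to cite sources and to admit ignorance lives in the prompt itself, so the model is steered away from inventing unsupported claims.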