AI / Voice
Hallucination
When an LLM-driven agent confidently states something false or unsupported by its inputs. Mitigated with retrieval-augmented generation (RAG), strict prompting, and evals against ground-truth data.
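
A minimal sketch of the last mitigation, an eval against ground-truth data. The question set, the substring-match scoring rule, and the `ask` callable are illustrative stand-ins, not a real benchmark or LLM client:

```python
"""Minimal ground-truth eval loop for catching hallucinations.

Assumes a caller-supplied `ask` function wrapping the actual LLM call;
the questions and matching rule below are illustrative only.
"""
from typing import Callable

GROUND_TRUTH = [
    ("What port does HTTPS use by default?", "443"),
    ("What year was HTTP/2 standardized?", "2015"),
]

def eval_accuracy(ask: Callable[[str], str]) -> float:
    """Fraction of answers containing the expected fact (substring match)."""
    hits = 0
    for question, expected in GROUND_TRUTH:
        answer = ask(question)
        if expected.lower() in answer.lower():
            hits += 1
    return hits / len(GROUND_TRUTH)

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs end to end; swap in a real LLM call.
    canned = dict(GROUND_TRUTH)
    print(f"accuracy: {eval_accuracy(lambda q: canned[q]):.0%}")
```

Substring matching is deliberately crude; production evals typically use exact-match normalization or an LLM grader, but the loop structure is the same.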