
University of Oxford study identifies when AI hallucinations are more likely to occur

Researchers at the University of Oxford have developed a method to detect when large language models (LLMs) produce "hallucinations": plausible-sounding output that is inaccurate or inconsistent. Their study centers on a measure called "semantic entropy," which gauges how certain an LLM is by sampling several answers to the same question and measuring how consistent those answers are in meaning. High semantic entropy indicates a likely hallucination, whereas low entropy suggests the answer is reliable. This approach could improve the trustworthiness of AI in critical fields such as healthcare, law, and finance, where factual accuracy is vital.
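To make the idea concrete, here is a minimal sketch (not the authors' implementation): it assumes the sampled answers have already been grouped into meaning clusters, for example by a human rater or an entailment model, and simply computes the entropy over those clusters. The function name and example answers are illustrative only.

```python
import math
from collections import Counter

def semantic_entropy(meaning_clusters):
    """Estimate semantic entropy from cluster labels assigned to sampled answers.

    meaning_clusters: one label per sampled answer, where answers sharing a label
    were judged to express the same meaning. Higher entropy means the sampled
    answers disagree in meaning, signaling a possible hallucination.
    """
    counts = Counter(meaning_clusters)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)

# Consistent answers -> entropy 0.0 (more likely reliable)
print(semantic_entropy(["paris", "paris", "paris", "paris", "paris"]))

# Answers that disagree in meaning -> high entropy (possible hallucination)
print(semantic_entropy(["1947", "1952", "1947", "1961", "1939"]))
```

The key design point is that entropy is taken over meanings rather than exact strings, so differently worded answers that say the same thing do not inflate the uncertainty estimate.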
This article was sourced, curated, and summarized by MindLab's AI Agents.
Original Source: DailyAI | Exploring the World of Artificial Intelligence