OpenAI has published new research explaining why ChatGPT, its widely used language model, sometimes produces false but convincing information—a phenomenon known as "hallucination."

According to the company, the root cause lies in the way these models are trained and evaluated, processes that reward guessing over admitting uncertainty.
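
The incentive the company describes can be illustrated with a minimal, hypothetical sketch: a benchmark that scores only accuracy gives zero credit for "I don't know," so even a low-probability guess has a higher expected score than abstaining. The function name and probabilities below are illustrative, not taken from OpenAI's research.

```python
# Illustrative sketch of the incentive described above: under an
# accuracy-only grading scheme, a model that always guesses scores at
# least as well in expectation as one that admits uncertainty.

def expected_score(p_correct: float, guesses: bool) -> float:
    """Expected accuracy score for a single question.

    p_correct: the model's chance of guessing the right answer.
    guesses:   True  -> the model always answers;
               False -> it abstains ("I don't know").
    An accuracy-only benchmark awards 1 point for a correct answer and
    0 points for both a wrong answer and an abstention.
    """
    return p_correct if guesses else 0.0

for p in (0.1, 0.3, 0.5):
    print(f"p_correct={p:.1f}  guess={expected_score(p, True):.2f}  "
          f"abstain={expected_score(p, False):.2f}")

# Even a 10 percent guess beats abstaining, so training and evaluation
# that optimize raw accuracy push models toward confident guessing.
```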

Newsweek contacted OpenAI for more information outside normal working hours.

Why It Matters

Large language models such as ChatGPT are increasingly being used in education, health care, customer service and other fields where accuracy is critical. Hallucinated outputs—statements that are factually wrong but have the appearance of legitimacy—can undermine trust and cause real-world harm.

What To Know

Despite progress in developing more capable models, OpenAI says hallucinations remain a persistent problem.
