Artificial intelligence tools like ChatGPT struggle to distinguish belief from fact, a new study has revealed.
A team from Stanford University in the US found that all major AI chatbots failed to consistently identify when a belief is false, making them more likely to hallucinate or spread misinformation.
The findings have worrying implications for the use of large language models (LLMs) in areas where distinguishing between true and false information is critical.
“As language models (LMs) increasingly infiltrate high-stakes domains such as law, medicine, journalism and science, their ability to distinguish belief from knowledge, and fact from fiction, becomes imperative,” the researchers noted.
“Failure to make such distinctions can mislead diagnoses, distort judicial judgments and amplify misinformation.”
