The other day I was brainstorming with ChatGPT when it suddenly launched into a long fantasy story that had nothing to do with my queries. It was so ridiculous that it made me laugh. Lately, I haven't seen mistakes like this as often with text prompts, but I still see them pretty regularly with image generation. These random moments when a chatbot strays from the task are known as hallucinations. What's odd is how confident the chatbot is about the wrong answer it's giving; that misplaced confidence is one of the biggest weaknesses of today's AI assistants. However, a new study from OpenAI argues these failures aren't random, but a direct result of how models are trained and evaluated.

Why chatbots keep guessing when they shouldn’t

Research points to a structural issue causing hallucinations; essentially the
