If you think scrolling the internet all day is making you dumber, just imagine what it’s doing to large language models that consume a near-endless stream of absolute trash crawled from the web in the name of “training.” A research team recently proposed and tested the “LLM Brain Rot Hypothesis,” which posits that the more junk data an AI model is fed, the worse its outputs become. Turns out that’s a pretty solid theory: in a preprint paper published to arXiv, the team shows that “brain rot” does afflict LLMs, producing non-trivial cognitive declines.
To see how LLMs perform on a steady diet of internet sewage, researchers from Texas A&M University, the University of Texas at Austin, and Purdue University identified two types of “junk” data: short social media posts that racked up heavy engagement (likes, retweets, and replies), and posts packed with sensationalized, clickbait-style content.
