
You’ve probably encountered images in your social media feeds that look like a cross between photographs and computer-generated graphics. Some are fantastical – think Shrimp Jesus – and some are believable at a quick glance – remember the little girl clutching a puppy in a boat during a flood?
These are examples of AI slop: low- to mid-quality content – video, images, audio, text or a mix – created with AI tools, often with little regard for accuracy. Making this content is fast, easy and inexpensive. AI slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.
AI slop has been increasing over the past few years. As the term “slop” indicates, that’s generally not good for people using the internet.
AI slop’s many forms
The Guardian published an analysis in July 2025 examining how AI slop is taking over YouTube’s fastest-growing channels. The journalists found that nine of the top 100 feature AI-generated content like zombie football and cat soap operas.
Listening to Spotify? Be skeptical of that new band, The Velvet Sundown, that appeared on the streaming service with a creative backstory and derivative tracks. It’s AI-generated.
In many cases, people submit AI slop that’s just good enough to attract and keep users’ attention, allowing the submitter to profit from platforms that monetize streaming and view-based content.
The ease of generating content with AI enables people to submit low-quality articles to publications. Clarkesworld, an online science fiction magazine that accepts user submissions and pays contributors, stopped taking new submissions in 2024 because of the flood of AI-generated writing it was getting.
These aren’t the only places where this happens — even Wikipedia is dealing with AI-generated low-quality content that strains its entire community moderation system. If the organization is not successful in removing it, a key information resource people depend on is at risk.
Harms of AI slop
AI-driven slop is making its way upstream into people’s media diets as well. During Hurricane Helene, opponents of President Joe Biden cited AI-generated images of a displaced child clutching a puppy as evidence of the administration’s purported mishandling of the disaster response. Even when it’s apparent that content is AI-generated, it can still be used to spread misinformation by fooling some people who briefly glance at it.
AI slop also harms artists by causing job and financial losses and crowding out content made by real creators. The algorithms that drive social media consumption often fail to distinguish this lower-quality AI-generated content, and it displaces entire classes of creators who previously made their livelihood from online content.
What you can do about AI slop
Wherever platforms enable it, you can flag content that’s harmful or problematic and report it through the platform’s tools. On some platforms, you can also add community notes to the content to provide context.
Along with forcing us to be on guard for deepfakes and “inauthentic” social media accounts, AI is now leading to piles of dreck degrading our media environment. At least there’s a catchy name for it.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Adam Nemeroff, Quinnipiac University
Read more:
- From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam
- AI-generated images can exploit how your mind works − here’s why they fool you and how to spot them
- Generative AI is most useful for the things we care about the least
Adam Nemeroff does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.