YouTube plans to change its monetization policy on Tuesday to target "inauthentic" content. The change may be aimed at slowing the flood of AI-generated content on the platform and making it easier for viewers to find higher-quality videos.
The company says in a short support post, "On July 15, 2025, YouTube is updating our guidelines to better identify mass-produced and repetitious content."
In a follow-up video message, YouTube editorial and creator liaison Rene Ritchie downplayed the impact the change will have on creators who may be concerned. "This is a minor update to YouTube's longstanding YPP (YouTube Partner Program) policies to help better identify when content is mass-produced or repetitive," he said.
Ritchie added that this kind of content is already demonetized and is what users would call spam.
On Thursday, YouTube shared additional information in a support post that begins, "Hi creators, We've seen some confusion around a minor YPP update coming July 15 and wanted to share more information and answer top questions we've seen."
In the post, YouTube reiterates that it's not creating a new policy but making updates to a long-standing "repetitious content" guideline. The post says that this doesn't apply to reused content that adds "significant original commentary, modifications, or educational or entertainment value to the original video."
YouTube cites as examples of "mass-produced content" channels that upload narrated stories with only superficial differences between them and channels that upload slideshows that all have the same narration.
What's not mentioned in the post or the video is that YouTube has been battling a problem with AI-generated videos, particularly "AI slop" that is of low value to users but that is inundating social media networks and platforms. YouTube's guidelines don't mention AI slop specifically, but some of its examples of "altered or synthetic content" would seem to include some types of AI-generated videos.
The problem of proliferating AI content has gotten so bad that John Oliver recently devoted an entire episode of his HBO show to the rise of AI slop. (You can find it, incidentally, on YouTube.)
'Crosses the line from automation into manipulation'
YouTube's challenge is to distinguish videos posted for fraudulent reasons (say, to monetize auto-generated content) from those posted legitimately to inform or entertain, said Akli Adjaoute, author of Inside AI and founder of the venture capital firm Exponion.
"When synthetic media is designed to mimic human-made work it crosses the line from automation into manipulation," Adjaoute said.
Demonetization, he added, is a good step, but "the deeper challenge isn't just repetitious content. The core issue is that AI can generate thousands of videos while human moderation and nuanced enforcement cannot keep up. Even the most advanced detection systems struggle to distinguish between transformative reuse and automated spam."
Adjaoute was also the founder of Brighterion, a company acquired by Mastercard that specialized in using AI for fraud detection. "Just as we did in financial systems, platforms like YouTube must focus not only on what content looks like, but on the intent and behavior behind it," he said. "Fighting AI slop should not mean banning AI, but requires new discovery algorithms, better recommendation and monetization incentives for new 'human-created' content."
Pushing too hard with broad enforcement policies could punish creators who are adding value to the platform. "If enforcement relies on superficial signals like reused footage or similar narration, the platform may end up demonetizing thoughtful, high-effort content alongside low-value automation," Adjaoute said.