YouTube is openly lying about a human being on the other side of the screen. You can verify whether it's an actual human from the YouTube team or AI talking to you by looking at the tweet source device. Sprinklr is the AI agent they're using to seed false hope into you. pic.twitter.com/iJAFGIUwjf
— Enderman (@endermanch) November 7, 2025
YouTube has finally spoken up after a chaotic week in which creators accused the platform of letting automated systems erase channels and reject appeals almost instantly. The long community help post tried to steady the situation, but it didn’t do much to calm creators who spent the past several days posting screenshots of instant rejections, cryptic emails, and support messages apparently routed through automated tools. For many, the responses felt more like system messages than actual reviews.
Reports piled up across X and Reddit showing channels vanishing without warning, followed by an appeal denial arriving within minutes. That fast turnaround became the symbol of the whole controversy. Even after tagging YouTube executives, creators kept getting short, template-like replies.
Some users dug into metadata and concluded that support messages were coming through Sprinklr’s automation backend, reinforcing the belief that humans weren’t really involved at all.
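For context, the "tweet source device" Enderman refers to is the client label attached to each post, which the X API can expose even though it's no longer shown in the app. Below is a minimal Python sketch of how someone might check that field; the tweet ID is a placeholder, and whether the source field is still returned on a given API access tier is an assumption, not something confirmed here:

```python
import os
import requests

# Hypothetical sketch: fetch the "source" client label of a tweet via the
# X API v2. The tweet ID is a placeholder, and the availability of the
# "source" field on your access tier is an assumption.
BEARER_TOKEN = os.environ["X_BEARER_TOKEN"]  # your own API credential
TWEET_ID = "1234567890123456789"             # placeholder: a @TeamYouTube reply

resp = requests.get(
    f"https://api.twitter.com/2/tweets/{TWEET_ID}",
    params={"tweet.fields": "source"},
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
source = resp.json().get("data", {}).get("source", "unknown")

# A label like "Sprinklr" suggests the reply went out through a social media
# management platform rather than a native X client a human would likely use.
print(f"Tweet {TWEET_ID} was posted via: {source}")
```

This is what creators mean when they say they "dug into metadata": a Sprinklr source label doesn't prove no human wrote the message, but it does show the reply passed through an automation-capable platform.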
YouTube’s official post says otherwise. The company claims it manually reviewed hundreds of the cases circulating online and upheld most terminations, saying only a small number were overturned. It framed many of the removed channels as mass uploaders of low-effort clips, scraped content, or deceptive formats. It also doubled down on its one-appeal-per-channel policy, explaining that repeated attempts trigger the same template email. For creators who believe their first appeal was barely glanced at, that offer of “one shot only” reads more like a locked door than a safeguard.
Earlier this month, we reported on YouTube’s admission that it uses AI systems to help process creator appeals, and before that, we covered the Enderman case, where a long-time channel was terminated and later restored. Those incidents left the community hyper-alert, so when rejections started arriving in minutes, people connected the dots quickly.
And now, it’s clear that YouTube is in damage control mode. The company’s social media liaison posted a video covering essentially the same points highlighted in the FAQ. So if you’d rather not read the post, you can watch this instead and decide for yourself whether YouTube’s explanation holds up (spoiler: it doesn’t):
I’m being tagged in a lot of questions and comments about content moderation right now, and so is @TeamYouTube. We see it, we feel it, we take it super seriously, and we want to address it: pic.twitter.com/FqyzDd078Q
— YouTube Liaison (@YouTubeInsider) November 13, 2025
Amid all this, a new legal angle is taking shape. A post from creator Caleb C is gaining traction, accusing YouTube not only of mishandling moderation but of potentially crossing into deceptive practice territory. He claims millions of channels were wrongly terminated by automated systems, that his own appeals were rejected in under a minute, and that YouTube misled users about whether AI was involved.
He cites Section 5 of the US FTC Act, which prohibits deceptive practices, and argues that if the FTC finds YouTube’s public statements inaccurate, fines could exceed fifty thousand dollars per violation. The post has sparked a wave of replies from creators saying they experienced the same pattern of instant denials and shifting explanations. Take this with a grain of salt, though: even if YouTube faces genuine legal exposure, any FTC action could take months or even years to reach a final outcome.
The backlash shows no sign of slowing. Replies under Neal Mohan’s recent posts are filled with people demanding their channels back, urging YouTube to fix the automated review process, and asking why human oversight seems to disappear the moment a strike is issued.
YouTube says clearer policy explanations and better communication are coming. Creators say none of that matters until the review process feels like it passes through real hands. Right now, the community is louder than ever, and it’s not letting this go.
