"YouTube is openly lying about a human being on the other side of the screen. You can verify whether it's an actual human from the YouTube team or AI talking to you by looking at the tweet source device. Sprinklr is the AI agent they're using to seed false hope into you."
— Enderman (@endermanch) November 7, 2025
YouTube is currently facing intense fire from its creator community over claims that the company is relying on an “out of control” AI system not only to moderate content and terminate channels but also to handle creator support, often providing canned, automated responses that offer no real help.
The accusations, driven by viral posts on X (formerly Twitter), allege that the support offered by the official @TeamYouTube account is merely a veneer of human interaction, managed by an AI automation tool called Sprinklr Care.
Numerous creators are reporting what they describe as false channel terminations followed by near-instant rejections of their appeals. One creator detailed being sent a Sprinklr survey after an allegedly false termination, only to receive an automated reply rejecting their appeal despite having sent no new messages.
This instantaneous rejection process leads creators to firmly believe that the “manual review” process YouTube promises is an outright fabrication, with AI algorithms managing the entire appeal lifecycle from submission to rejection. As one user put it, YouTube is “openly lying” about manual review of appeals.
The most compelling technical evidence against YouTube’s current support model was brought to light by X user and software engineer Enderman. It’s worth noting that Enderman recently lost his YouTube channel over an alleged link to a Japanese account, but was later reinstated. Still, his latest viral post, which has since garnered hundreds of thousands of views, showed that by checking the source device of replies from the official @TeamYouTube handle, users could trace the messages back to “Sprinklr Care,” a third-party AI-powered messaging platform.
The inference creators draw is immediate and damning: if the support response is coming from a known AI-powered automation tool, then the response itself is likely automated, or at best a quick template fill by a low-level agent relying entirely on pre-written responses. Enderman also points out the severe lack of access to real human help, stating that the only consistent way to reach a person is through a large Multi-Channel Network (MCN).
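For readers curious how such a check works in practice, here is a minimal sketch using the X API v2. The bearer token, the hypothetical tweet ID, and the continued availability of the "source" field on that endpoint are all assumptions, not confirmed details from the posts above.

# Minimal sketch: fetch a tweet's "source" field via the X API v2.
# Assumes a valid bearer token in the X_BEARER_TOKEN environment
# variable, and that the endpoint still exposes the source label
# (the client application used to post the tweet).
import os
import requests

def get_tweet_source(tweet_id: str) -> str | None:
    """Return the client label (e.g. 'Sprinklr Care') a tweet was posted from."""
    resp = requests.get(
        f"https://api.twitter.com/2/tweets/{tweet_id}",
        params={"tweet.fields": "source"},
        headers={"Authorization": f"Bearer {os.environ['X_BEARER_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", {}).get("source")

# Usage (the tweet ID here is hypothetical):
# print(get_tweet_source("1856789012345678901"))  # e.g. "Sprinklr Care"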
Adding to the circumstantial evidence, another creator, @peachy, highlighted a “human here” response from Team YouTube that was immediately followed by an em dash. Given the common observation that large language models and other generative AI tools frequently default to em dashes, the community quickly flagged this as another major “red flag” pointing to bot communication.

YouTube’s response and defense
YouTube has publicly acknowledged using the Sprinklr tool but has been quick to defend its use, arguing that the tool routes and assists with responses rather than writing them.
The company’s defense centers on the idea that Sprinklr is primarily a Customer Relationship Management (CRM) and message-routing system. In this framework, Sprinklr aggregates and manages the massive influx of messages from X and other platforms, and provides agents with pre-approved templates or “snippets” to speed up responses to common queries (e.g., “how to appeal,” “where is my payment”).

Essentially, YouTube argues that while the messages pass through an automation system and may use pre-written blocks, a human agent is still on the other side choosing which template to use or writing a custom response. The claim is that the tool merely facilitates the conversation rather than generating it.
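To make that distinction concrete, here is a hypothetical sketch of the workflow YouTube describes. The names and structure are purely illustrative, not Sprinklr's actual API: the tool matches an inbound message against pre-approved snippets, and a human agent decides what actually gets sent.

# Hypothetical sketch of a CRM-style "agent assist" flow, as YouTube
# describes it: the tool suggests pre-approved snippets, but a human
# chooses what goes out. This is not Sprinklr's real API.
from dataclasses import dataclass

@dataclass
class Snippet:
    keywords: set[str]
    text: str

SNIPPETS = [
    Snippet({"appeal", "terminated"}, "Thanks for reaching out! You can appeal here: ..."),
    Snippet({"payment", "adsense"}, "Payment questions are handled via AdSense support: ..."),
]

def suggest_snippets(message: str) -> list[Snippet]:
    """Rank pre-approved templates by keyword overlap with the inbound message."""
    words = set(message.lower().split())
    return sorted(
        (s for s in SNIPPETS if s.keywords & words),
        key=lambda s: len(s.keywords & words),
        reverse=True,
    )

# A human agent would review these suggestions and pick, edit, or discard
# them; in this framing the tool speeds up the reply but does not author it.
inbound = "My channel was terminated and my appeal was rejected"
for snippet in suggest_snippets(inbound):
    print(snippet.text)

The crux of the dispute is where authorship sits in this loop: creators allege the "human review" step is effectively skipped, while YouTube insists it is always present.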
However, creators remain unconvinced, arguing that the generic tone of the replies, combined with the tweet source metadata, strongly suggests these human agents are at best lightly editing pre-written, AI-informed copy.
The lack of genuine, in-depth support is taking a serious toll, with creators pointing out that AI-driven policy enforcement is costing people their livelihoods, only to be met with automated deflection when they seek resolution. While YouTube’s use of AI for moderation isn’t new, the current uproar underscores a growing disconnect between creators and the platform’s support systems. Automated tools may help manage scale, but when those tools appear to replace human judgment entirely, creators feel left without recourse, particularly those who are not part of larger networks or partnerships.
YouTube’s challenge now lies in restoring trust with creators by offering more transparency around how AI tools like Sprinklr are used, and ensuring that real humans are available to review complex or disputed cases.