YouTube is rolling out a new AI-powered “likeness detection” tool that lets creators in its Partner Program find and request takedowns of unauthorized deepfakes. The system is designed to protect creators from impersonation and misleading endorsements by automatically flagging videos that use their face without permission.
Face/off: The system takes its cues from YouTube’s long-standing Content ID, swapping copyright detection for facial recognition. As detailed in a video on its Creator Insider channel, creators must opt in by providing a government ID and a short selfie video for verification. Once approved, a dashboard shows flagged videos, where they can request removal or submit a copyright claim.
The Hanks precedent: The launch follows a pilot program with the Creative Artists Agency (CAA) and directly responds to the threat from powerful AI generators. The need for such a tool was underscored by incidents like a persistent deepfake ad of Tom Hanks promoting a fake medical treatment, which ran on the platform for months.
The catch: The tool's availability is limited for now, though TheWrap reports that YouTube plans to extend it to all monetized channels worldwide by January 2026. It also currently analyzes only facial likeness, so voice-only clones may still slip through. YouTube admits the tool is a work in progress, the latest move in the escalating cat-and-mouse game between AI generation and detection.
While a necessary step, the tool places the burden of policing deepfakes on creators themselves and represents one more front in the growing battle to distinguish real from synthetic on major platforms.
The new tool arrives as critics point out that Google's own freely available AI models have helped fuel the rise of deepfake content. Meanwhile, YouTube is simultaneously developing its own AI-powered features, including an AI music host to rival Spotify's DJ, highlighting the company's dual role as both a developer and a regulator of artificial intelligence.
