Ben Crump Files FTC Complaint Accusing YouTube and Google of Profiting From AI-Driven Defamation

Civil rights attorney Ben Crump has filed a formal complaint with the Federal Trade Commission (FTC) alleging that Google and its subsidiary YouTube are facilitating — and profiting from — an international misinformation operation that uses artificial intelligence to defame Black public figures and mislead the public.
In a statement and accompanying press remarks, Crump said the complaint was filed on behalf of multiple victims who have been targeted by AI-generated videos and impersonation accounts that falsely present themselves as legitimate Black media outlets. According to the filing, the content is allegedly produced by foreign actors operating from India, Pakistan, and Russia, then monetized through YouTube’s advertising systems.
“This is not accidental misinformation,” Crump said. “This is a multimillion-dollar misinformation economy that survives because major tech platforms continue to monetize content even after it has been proven false.”
The FTC complaint invokes Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices in commerce. Crump argues that Google and YouTube’s continued monetization of demonstrably false AI-generated content — particularly when it impersonates real journalists, judges, and public figures — may constitute an unfair practice that causes substantial consumer and reputational harm.
Crump cited cases involving public figures such as Katie Lattimore and Judge Faith, whose likenesses and names were allegedly used in fabricated videos that spread false claims while generating advertising revenue.
Legal experts note that while platforms often rely on Section 230 of the Communications Decency Act to shield themselves from liability for third-party content, the FTC complaint raises a different legal theory: whether monetization and algorithmic amplification of known falsehoods crosses the line into active commercial participation.
“This shifts the question from content moderation to commercial responsibility,” Crump said. “When you are paid to push lies, that is no longer passive hosting.”
AI, Defamation, and Platform Accountability
The complaint highlights growing legal uncertainty around AI-generated defamation, particularly when synthetic media impersonates real people at scale. Unlike traditional defamation cases, the alleged perpetrators are often anonymous, overseas, and judgment-proof — leaving victims with limited legal recourse.
Crump contends that platform companies are uniquely positioned to stop the spread by disabling monetization, enforcing identity verification, and refusing to profit from content that has been flagged and verified as false.
“This is not just about speech,” he said. “It’s about ethical responsibility and consumer protection.”
Potential Regulatory Impact
The FTC has not publicly commented on the complaint, but the filing comes amid increased scrutiny of Big Tech’s role in AI governance. Regulators in the U.S. and abroad are examining whether existing consumer protection laws are sufficient to address harms caused by synthetic media, impersonation, and algorithmic amplification.
If the FTC determines that monetization of proven false AI content constitutes an unfair practice, it could signal a major shift in platform liability, even without changes to Section 230.
For now, Crump says the complaint is intended to force accountability where traditional litigation has struggled.
“Real people are being harmed in real time,” he said. “Tech companies cannot continue to look the other way while cashing the checks.”
