FTC’s Operation AI Comply Generated in Part by Fear of Scale

October 24, 2024

By: Jordan Briggs

Artificial Intelligence (“AI”) is increasingly visible in our daily lives, from search engines to PDF readers to social media feeds, and its use is becoming commonplace and normalized. In its recent crackdown on five businesses advertising AI services to consumers, dubbed “Operation AI Comply,” the FTC put its enforcement authority behind its belief that AI can “turbocharge deception.”

But AI is, by its nature, a machine trained to learn from inputs that are, at least at first, provided by humans. When it comes to combatting deceptive practices, are the problems AI poses all that different from the ones we already faced? Of the five businesses the FTC brought actions against, only two involve actual uses of AI. The other three, Ascend Ecom, Ecommerce Empire Builders, and FBA Machine, were all ecommerce schemes deceiving their customers with promises of guaranteed income. For decades, such schemes have used whatever buzzword captures the present zeitgeist to attract customers. DoNotPay and Rytr, by contrast, are business models that genuinely depend on AI. Yet even these suits rest on familiar deceptive-practices theories.

DoNotPay advertised itself as an AI-powered lawyer-replacement service that could handle anything from lawsuits to contracts to website compliance reviews. The FTC’s complaint, however, alleges that the company neither hired nor retained any attorneys and did not conduct product testing to compare its AI output to attorney work product: “None of the Service’s technologies has been trained on a comprehensive and current corpus of federal and state laws, regulations, and judicial decisions or on the application of those laws to fact patterns.” The FTC commissioners voted unanimously to bring this complaint.

In the most unusual suit of the five, the FTC’s complaint against Rytr, brought on a 3-2 vote, addressed the company’s alleged offering of AI services to farm reviews. Review farming usually consists of paying people to write fake reviews for online products or services. Instead of alleging deceptive practices directed at the company’s customers, here the FTC alleges that Rytr “provided the means and instrumentalities for the commission of deceptive acts and practices” by allowing paid subscribers to use AI to generate unlimited, detailed product reviews. Such a service, according to the complaint, “is likely to pollute the marketplace with a glut of fake reviews.”

However, the review generator is just one of the forty-three uses Rytr offers its customers.[1] As each of the dissenting commissioners notes, the complaint does not allege that the information in the generated reviews was inaccurate or deceptive, or even that the reviews sampled for the complaint were posted.[2] Further, because it creates a review based on the information it is given, the review generator is not inherently deceptive, and the FTC has not alleged that the company deceptively marketed the tool as anything other than a review generator.[3] While the other four cases in Operation AI Comply appear to follow standard FTC deceptive-practices suits, this one reflects an apprehension, either of AI itself or of its unchecked use, that might test the limits of what constitutes “deceptive acts or practices” under Section 5(a) of the FTC Act. So, what is the real difference between AI-generated deception and human-generated deception, and what is the FTC so afraid of?

Scale.

Fake reviews, and services that sell them, are already clearly deceptive and prohibited by Section 5 of the FTC Act. The FTC’s allegations here, however, conflate AI-generated reviews with fake reviews. Hypothetically, a user could input a true account of their experience with a product as bullet points and generate a neat paragraph that accurately captures that experience. The issue the FTC appears to want to address is how quickly a business could use this product to generate copious fake reviews and drown out legitimate ones.

The difference between this case and the other four is further highlighted by a simplified thought experiment: ask, “what can be done to fix the deceptive practice here?”[4] Generally, one way to avoid accusations of deceptive practices is to provide the end user with true and accurate information about the product and its limitations. That option is not available in means-and-instrumentalities cases. For businesses like Ascend Ecom, Ecommerce Empire Builders, and FBA Machine, the inference from the FTC’s suits is that, if they truly created an AI to help with ecommerce, they could offer it and market it with fact-based claims about its performance, without the income-guarantee statements. For a business like DoNotPay, the main inference is that it would have to support its marketing and its results with legitimate, fact-based comparisons to the professional services it is trying to replace. But for a business like Rytr, the only option to avoid a deceptive-practices suit, according to the FTC, is to stop offering the service,[5] regardless of the legitimate ways to use it.

Perhaps the FTC would have looked more favorably on Rytr had it capped the number of reviews a subscriber could generate, but that raises the question of how such a mitigating factor relates to any “deception” under Section 5. A greater volume of potentially false reviews could cause more harm, but the mere potential for falsehood, without any allegation that anyone was misled or deceived, should not create liability.

[1] See Commissioner Holyoak’s Dissenting Statement, available at: https://www.ftc.gov/system/files/ftc_gov/pdf/holyoak-rytr-statement.pdf.

[2] See Commissioner Holyoak’s Dissenting Statement, supra note 1, and Commissioner Ferguson’s Dissenting Statement, available at: https://www.ftc.gov/system/files/ftc_gov/pdf/ferguson-rytr-statement.pdf.

[3] See Commissioner Ferguson’s Dissenting Statement at 3-4.

[4] Note, these four businesses likely have other legal and business design issues, and this thought experiment only generally addresses the deceptive practices highlighted by the FTC.

[5] Proposed Order and Decision at 5, available at: https://www.ftc.gov/system/files/ftc_gov/pdf/2323052rytracco.pdf.

Jordan Briggs

Jordan Briggs’ experience in government, in-house, and in private practice at one of the country’s most renowned global law firms informs her multi-dimensional approach to risk management and compliance across a broad range of sectors and issues.
