A Review of Top-Rated AI Tech Companies Undermines the FTC’s Warning of Deception

March 22, 2023

By: Abbey Block

               Artificial intelligence (“AI”) was once thought of as science fiction – something we could only see on a movie screen or read about in a comic book. But in recent years, the technology has become both accessible and popular, permeating nearly every sector of society. From healthcare to the legal industry, AI technology has been praised for its ability to cut costs and improve efficiency.

               Despite the technology’s apparent benefits, the Federal Trade Commission (“FTC”) isn’t so eager to jump on the AI bandwagon. In a recent blog post, the FTC warned AI advertisers to keep their AI “claims in check” and refrain from “overpromis[ing] what [their] algorithm or AI-based tool can deliver.”

               The post characterized “AI technology” as a “hot marketing term” that is ripe for abuse by advertisers. Specifically, the FTC warned that the popularity of AI technology may encourage advertisers to overpromise and underdeliver when it comes to the efficacy of their products. In fact, according to the post, “some products with AI claims might not even work as advertised in the first place” – a particular concern for regulators.

The Legal Framework

               The FTC maintains the power to regulate the advertisement of AI technology under the FTC Act (the “Act”), which authorizes the agency to combat “unfair or deceptive acts or practices” in or affecting commerce.[1] Under Section 5 of the Act, a practice is “unfair” if it causes or is likely to cause “substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.”[2] Simply put, the FTC can challenge a business practice if it is likely to cause more harm than good to consumers.[3] This authority includes the power to regulate advertising practices, which, under the Act, “cannot be deceptive or unfair, and must be evidence based.”[4]

A Warning to Advertisers

               With regard to the advertisement of AI technology, the FTC seems most concerned that advertisers will use the flashy label of “AI technology” to entice consumers to buy products that fail to operate as promised. Given the potential for abuse, the FTC outlined four questions that regulators will ask when evaluating the advertisement of products that utilize AI technology:

  • Are you exaggerating what your AI product can do?

               Advertisers must refrain from making exaggerated claims about the capabilities of their technology and the ways in which it can perform. Such performance claims will be considered deceptive if they purport to apply broadly but, in fact, “only apply to certain types of users or under certain conditions.” Further, claims of operational capability generally must be supported by scientific or empirical evidence.[5]

  • Are you promising that your AI product does something better than a non-AI product?

               Advertisers must have “adequate proof” that a product is superior because it utilizes AI technology. This is especially true if the AI-enabled product is offered to consumers at a higher price than comparable products without the technology. Simply put, the implementation of AI technology does not necessarily make a product better for consumers, and advertisers must not falsely claim otherwise.

  • Are you aware of the risks?

               Advertisers must know “about the reasonably foreseeable risks and impact” of their AI products before putting them on the market. Of particular concern is the possibility that AI technology may yield flawed or biased results. Ignorance as to how the AI algorithms actually operate will not shield advertisers from liability. Indeed, the FTC’s post specifically warns that a business cannot avoid penalties simply because the AI technology is a “black box” that is difficult to understand or test.

  • Does the product actually use AI at all?

               The FTC warns advertisers, “[i]f you think you can get away with baseless claims that your product is AI-enabled, think again.” This is because FTC investigators can “look under the hood” to analyze the product being advertised, ensuring that its performance matches the capabilities being promoted. Advertisers should also be aware that in the eyes of the FTC “merely using an AI tool in the development process [of a product] is not the same as a product having AI in it.”

               Are the FTC’s stern warnings really warranted? Are businesses and advertisers really exploiting the flashy “AI” label in a way that would mislead or deceive consumers? Answering these questions requires a review of the AI advertisements currently on the market. Because the FTC’s blog post did not identify any specific AI advertisement as problematic, this blogger surveyed some of the world’s top AI tech businesses to determine whether AI advertising truly is prone to deception.

IBM

               According to its website, IBM offers various AI-powered products for businesses and IT operations. The company’s website makes specific and measurable claims regarding its technology’s performance, including the following:

  • Increase new customer conversion 6x faster;
  • 250% increase in transactions;
  • Up to 470% ROI in less than six months.

               IBM links to specific case studies (admittedly, in small-print footnotes) to support these claims – seemingly acting in accordance with the FTC’s requirement that claims of operational capability generally must be supported by scientific evidence. While not a wide-reaching analysis, the case study provides at least one demonstrable example of the company’s technology at work and the results it can produce. For example, the case study explains that the IBM technology “[i]ncreased banks’ new customers conversion rates 6x,” and “Finologee’s tools reduced a time-consuming 15-page paper form process to 8-10 minutes.” Admittedly, the case study isn’t lacking in praise for IBM. Indeed, at some points, the case study reads more like an advertisement than an empirical source of data: “With IBM API Connect software, Finologee developed an off-the-shelf solution for open banking.” However, it is likely that IBM’s advertisements and accompanying case studies comply with the FTC’s warnings against deception.

Builder.ai

               Builder.ai offers custom AI-powered software to help developers build apps. The company claims that its software is superior to manual coding, explaining that “other development agencies waste time coding [app] components from scratch for each project. More time also means more money.” Builder.ai explains that its AI technology – a bot named “Natasha” – uses “machine learning algorithms” to recommend app features based on the type of app being built. “Natasha also creates an instant prototype” to help consumers “visualize [their] idea.” Builder.ai claims that this automated process saves consumers money, calling itself the “most cost-effective solution on the market.” Builder.ai also boasts that its AI technology is more effective than traditional coding, claiming an “almost 0% failure rate” versus the “78% [rate of failure which is the] industry standard.”

               The website features “case studies” which provide some statistical support for Builder.ai’s claims of superiority. For example, a case study of a company called Siam Makro provides the following:

Siam Makro are growing fast, but their SaaS solution couldn’t scale with them. We helped them build a new order management system tailored to their business needs – over 5 years it came in at 1.67% of the previous cost.

               The case study provides other statistics, including the number of orders processed by the company since 2019 (1.5 million) and the claim that Builder.ai made interface development 98.3 percent cheaper than the development systems Siam Makro previously used – a figure that mirrors the case study’s statement that the new system came in at 1.67 percent of the previous cost.

               Like IBM’s case studies, Builder.ai’s case studies are undoubtedly curated to appeal to potential customers. However, the case studies offer a relatively in-depth discussion of the ways in which Builder.ai improved each customer’s app-building experience, including the app features it helped to develop. Thus, at a minimum, potential customers can review the business’s website to gain insight into the features offered by the technology.

               The product offered by Builder.ai is not entirely AI-based – a fact that the company does not shy away from. Customers are matched with a human expert who helps to manage the app-building project. Further, although the AI technology combines components to assemble the app, human designers and developers “tailor” the app’s features to suit the customer’s needs. Builder.ai seems to comply with the FTC’s warnings given that it does not claim to be entirely AI-powered. To the contrary, the company characterizes its human experts as a value-adding feature.

OpenAI

               OpenAI, the company most recently in the news, offers several AI programs. The most widely publicized is ChatGPT – a chatbot that uses artificial intelligence to write essays, solve math problems, and even mimic human language. Since the launch of ChatGPT, OpenAI has released a new and improved AI-powered system called GPT-4, which is marketed as OpenAI’s “most advanced system.” According to OpenAI’s website, GPT-4 is “82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses” than earlier versions of the technology. Like IBM and Builder.ai, OpenAI also offers “customer stories” addressing the various ways in which the technology is being utilized by customers.

               The OpenAI website is unique because it features a separate tab for “Safety.” This section of the website boldly warns that “[a]rtificial intelligence has the potential to benefit nearly every aspect of our lives – so it must be developed and deployed responsibly.” This page features quotes from OpenAI’s Chief Technology Officer, Mira Murati, explaining that the company focuses on ensuring that its technology aligns with “human intentions and values.” In line with these lofty aspirations, the page features blog posts detailing the company’s collaboration with industry leaders and policy makers to “ensure that AI systems are developed in a trustworthy manner.” Similarly, the page links directly to the company’s “Safety Product Standards” and explains that it has developed “risk mitigation tools” and “best practices for responsible use.”

               OpenAI’s Safety section would likely receive a gold star from the regulators at the FTC given that it acknowledges the risks associated with AI technology and offers solutions for risk mitigation. Indeed, one post even explicitly addresses the company’s initiative to reduce bias in one of its products, DALL·E: “Today we are implementing a new technique so that DALL·E generates images of people that more accurately reflect the diversity of the world’s population.”

               OpenAI’s initiative aligns with an FTC discussion of the ways in which inherent biases can be baked into AI technology. The post warned that “apparently ‘neutral’ technology can produce troubling outcomes – including discrimination by race or other legally protected classes.” Thus, the FTC advised that it is essential for businesses to test their algorithms for discrimination and maintain transparency as to the type and source of data being processed. By testing its products for bias, identifying the technology’s limitations, and taking steps to improve the technology, OpenAI should be held out as a prime example of what it means to be aware of “the reasonably foreseeable risks and impact” of its AI products.
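               To make the FTC’s advice that businesses “test their algorithms for discrimination” more concrete, the sketch below shows one simple form such a check can take: comparing an algorithm’s favorable-outcome rates across demographic groups. It is a minimal illustration only; the sample data, group labels, and the four-fifths (80%) threshold are this author’s assumptions, not details drawn from the FTC’s guidance or from any company’s actual testing methodology.

# Hypothetical sketch of a basic disparity check: compare an algorithm's
# favorable-outcome ("approval") rates across groups. The data and threshold
# are illustrative assumptions, not any regulator's or company's actual method.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, approved) pairs; returns approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical model outputs: (demographic group, favorable outcome?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    ratio = disparate_impact_ratio(rates)
    print(rates, f"disparate-impact ratio = {ratio:.2f}")
    if ratio < 0.8:  # common "four-fifths" rule of thumb
        print("Potential disparity - investigate before advertising unbiased results.")

               A real compliance review would of course go further, looking at statistical significance, intersectional groups, and the quality of outcomes rather than just their rate, but even a simple, documented check of this kind reflects the awareness of “reasonably foreseeable risks” that the FTC describes.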

Conclusion

               If IBM, Builder.ai, and OpenAI are viewed as representative of the larger AI-technology market, the FTC’s warnings may seem slightly alarmist. Each website advertises the business’s AI technology in a way that is generally user-friendly and accessible. Indeed, each provides in-depth explanations of how the technology works and how it is implemented in the product or service being offered.

               All three businesses offer statistics and case studies to back up their claims of superior performance and capability. Although these case studies are not necessarily objective, peer-reviewed scientific research, at a minimum they provide some evidentiary basis for the companies’ claims through a user-friendly interface. To strengthen compliance with the FTC’s guidance, these companies could employ an objective third party to analyze their claims. This would provide consumers with an unbiased source of information regarding the product’s operational capabilities and benefits over non-AI products.

               Each website has its own unique strengths. For example, OpenAI’s discussion of risk mitigation and safety strategies most directly addresses the FTC’s concern that AI technology can produce flawed and biased results. Similarly, Builder.ai directly addresses the role that human experts, rather than AI technology, play in its products and services. This transparency complies with the FTC’s directive that companies must not claim that their products utilize AI technology if they do not actually do so. Thus, while no website checked every single box provided by the FTC’s guidance, it appears that AI-technology companies are, at a minimum, attempting to provide their customers with ample information about the products and services being offered and the technology being utilized.

               Perhaps the FTC’s stern warnings can be seen as an attempt to preemptively scare tech companies into compliance. However, the regulator’s “scared straight” approach may be overkill. The three websites examined evidence a trend toward transparency, risk management, and evidence-based advertising. Thus, although “AI technology” may be a flashy advertising term, it seems that modern tech companies are working toward compliance with the principles provided by the FTC Act.

 

[1] 15 U.S.C. § 45(a)(2); see also F.T.C. v. Sperry & Hutchinson Co., 405 U.S. 233, 242 (1972) (holding that the “Commission has broad powers to declare trade practices unfair”).

[2] Fed. Trade Comm’n, Commission File No. P221202, Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act (Nov. 10, 2022), available at https://www.ftc.gov/system/files/ftc_gov/pdf/P221202Section5PolicyStatement.pdf.

[3] Elisa Jillson, Aiming for truth, fairness, and equity in your company’s use of AI, Fed. Trade Comm’n (Apr. 19, 2021), https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

[4] Advertising and Marketing, Fed. Trade Comm’n, https://www.ftc.gov/business-guidance/advertising-marketing.

[5] F.T.C. v. Direct Marketing Concepts, Inc., 624 F.3d 1, 8 (1st Cir. 2010) (an advertisement is deceptive as a matter of law when advertisers lack any reasonable basis to substantiate their claims).

Abbey Block

Abbey Block found her path in law as a journalism major, coupling her passion for advocacy through writing with her litigation experience to create persuasive, effective arguments.

Prior to joining Ifrah Law, Abbey served as a judicial law clerk in Delaware’s Kent County Superior Court, where she was exposed to both trial and appellate court litigation. Her work included analyzing case law, statutes, pleadings, depositions and hearing transcripts to draft bench memoranda and provide recommendations to the judge.
