
America cracks down on AI advertising tools

The US Federal Trade Commission warned on Wednesday that it could sue companies for building generative AI tools used to scam consumers, even if those tools were not intended to be abused. 

The potential for deception and fraud from AI is increasing as the technology continues to advance. Commercial software makes it easy for people to generate fake images, text, videos, and voices, and these tools can, and will, be used for illegal activities. Michael Atleson, an attorney for the FTC's division of advertising practices, warned that generative AI is already being used to carry out scams.

"Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals," he said in a blog post.

Large language models like ChatGPT can write convincing phishing emails and fake website content, and voice-cloning software can mimic people's voices to extort targets or access accounts for financial fraud. The FTC has previously cracked down on miscreants misleading people with false advertising using celebrity deepfakes or scamming people with fake dating profiles.

"The FTC Act's prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that's not its intended or sole purpose," Atleson warned.

He urged companies to consider, before development begins, how their generative AI tools could be misused for fraud or to inflict harm, and whether the risks are severe enough that the products should not be made or sold at all. 

If companies do go through with building and selling their software, however, they need to take safety precautions before their products are commercially available. Simply issuing a warning about potential risks is not good enough, the FTC said.

"Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal."

Companies should not rely solely on software designed to detect fake AI-generated media, and urging consumers to check the authenticity of content themselves is unfair, Atleson said. The FTC is also looking into how generative AI harms children, teenagers, and other vulnerable groups.

"Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns," he concluded.

Last month, Atleson warned advertisers against overhyping the capabilities of AI products. ®

Source: The Register
