FTC’s rule update targets deepfake threats to consumer safety

The FTC aims to combat deepfakes with an updated regulation, making it illegal for AI platforms to offer products or services that may harm consumers through impersonation.

Citing the increasing danger of deepfakes, the United States Federal Trade Commission (FTC) is seeking to update a regulation that prohibits the impersonation of businesses or government agencies, extending it to cover artificial intelligence (AI) and protect all consumers.

The updated regulation — subject to final language and public feedback received by the FTC — could make it illegal for a generative artificial intelligence (GenAI) platform to offer products or services it knows may be used to harm consumers through impersonation.

In a press release, FTC Chair Lina Khan said:

“With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever. Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”

The FTC’s updated government and business impersonation rule empowers the agency to initiate federal court cases directly to compel scammers to return funds acquired through impersonating government or business entities.

The final rule on government and business impersonation will take effect 30 days after its publication in the Federal Register. The public comment period for the supplemental notice of proposed rulemaking will be open for 60 days following its publication in the Federal Register, which will include details on how to submit comments.

Deepfakes use AI to create manipulated videos or images by altering someone’s face, body or voice. While no federal laws address the sharing or creation of deepfake images, some lawmakers are taking steps to address this issue.

Related: EU committee greenlights world’s first AI legislation

Celebrities and individuals who are victims of deepfakes can, in theory, use established legal options like copyright laws, rights related to their likeness and various torts (such as invasion of privacy or intentional infliction of emotional distress) to seek justice. However, pursuing cases under these diverse laws can be lengthy and demanding.

On Jan. 31, the Federal Communications Commission banned AI-generated robocalls by reinterpreting a rule that forbids spam messages made by artificial or pre-recorded voices. This move came just after a phone campaign in New Hampshire that used a deepfake of President Joe Biden to discourage people from voting. Without action from Congress, various states across the country have passed laws making deepfakes illegal.

Magazine: Crypto+AI token picks, AGI will take ‘a long time’, Galaxy AI to 100M phones: AI Eye