US bans AI-generated voices used in scam robocalls after Biden deepfake

The FCC has made AI-generated voices in robocall scams illegal under U.S. telemarketing laws.

Artificial intelligence-generated voices used in unwanted robocalls, or automated phone calls, are now officially illegal in the United States following a new Federal Communications Commission (FCC) decision.

“Today the Federal Communications Commission announced the unanimous adoption of a Declaratory Ruling that recognizes calls made with AI-generated voices are ‘artificial’ under the Telephone Consumer Protection Act (TCPA),” the agency said in a Feb. 8 statement.

“This would give State Attorneys General across the country new tools to go after bad actors behind these nefarious robocalls.”

The FCC’s ban came just weeks after New Hampshire residents received fake voice messages imitating U.S. President Joe Biden advising them against voting in the state’s primary election.

Robocall scams are already illegal under the TCPA, the U.S. law governing telemarketing. The latest ruling also makes the “voice cloning technology” used in such scams illegal and takes immediate effect, the FCC said.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” said FCC chair Jessica Rosenworcel.

The FCC first proposed outlawing AI robocalls on Jan. 31 under the TCPA, a 1991 law regulating automated political and marketing calls made without the receiver’s consent.

The TCPA’s primary aim is to protect consumers from unwanted and intrusive communications, or “junk calls.” It restricts telemarketing calls, the use of automatic telephone dialing systems, and artificial or prerecorded voice messages.

FCC rules also require telemarketers to obtain written consent from consumers before robocalling them. The ruling ensures that calls using AI-generated voices are held to the same standards.

The FCC said in its recent statement that calls using AI-generated voices have escalated over the last few years and warned that the technology can now mislead consumers with misinformation by imitating the voices of celebrities, political candidates and close family members.

It added that while law enforcement has previously been able to target the outcome of an unwanted AI-voice robocall, such as the scam or fraud it seeks to perpetrate, the new ruling allows law enforcement to go after scammers simply for using AI to generate the voice in robocalls.

Related: Security researchers unveil deepfake AI audio attack that hijacks live conversations

Meanwhile, the Biden robocalls sent in mid-January have been traced back to a Texas-based firm named Life Corporation and an individual named Walter Monk.

New Hampshire’s Election Law Unit issued a cease-and-desist order to Life Corporation for violating the 2022 New Hampshire Revised Statutes Title LXIII, which covers bribery, intimidation and voter suppression.

The order demands immediate compliance, and the unit reserves the right to take additional enforcement actions based on prior conduct.

Magazine: $830M fraud arrests, Nobody’s 3,000% premium, Binance snitches get riches: Asia Express