The Federal Communications Commission (FCC) made a significant announcement on Thursday, declaring robocalls that use AI-generated voices illegal.
FCC Chair Jessica Rosenworcel emphasized the need to crack down on such deceptive practices, stating that bad actors are exploiting AI-generated voices in unsolicited robocalls to manipulate vulnerable individuals, impersonate celebrities, and spread misinformation. The ruling provides state attorneys general with enhanced tools to pursue legal action against the perpetrators behind these fraudulent calls.
Previously, state attorneys general could only target the outcomes of unwanted AI-voice-generated robocalls. However, this latest action from the FCC explicitly prohibits the use of AI to generate voices in robocalls, making the practice itself illegal.
A fake robocall imitating President Joe Biden, which has been traced back to Texas-based Life Corp., has prompted swift action from authorities. New Hampshire Attorney General John Formella said that a cease-and-desist letter has been issued to the company, led by Walter Monk, and that a criminal investigation is underway.
Democratic FCC Commissioner Geoffrey Starks highlighted the grave implications of voice cloning, noting its potential to deceive recipients into taking actions they would not otherwise consider.
This move by the FCC follows previous enforcement actions, including a $5.1 million fine imposed on conservative activists for making over 1,100 illegal robocalls ahead of the 2020 U.S. election. These calls aimed to discourage voting by spreading false information about the consequences of voting by mail.
Although robocall volumes have risen recently, with U.S. consumers receiving nearly 4.3 billion robocalls in January, that total still marks a decline from previous years. The figures, provided by YouMail, a robocall-blocking app and call-protection service, show a 5.2 percent decrease from January 2023, signaling ongoing progress against this pervasive problem.