Is a $7 mn fine enough for AI-powered robocalls mimicking President Biden?

The Federal Communications Commission (FCC) on Thursday finalized a $6 million fine against political consultant Steven Kramer for disseminating fake robocalls that mimicked President Joe Biden’s voice. The calls, which urged New Hampshire voters to skip the state’s Democratic primary, drew significant attention for their deceptive nature.

US FCC Chair Jessica Rosenworcel

Earlier, the FCC said that Lingo Telecom, the carrier that transmitted the robocalls, agreed in August to a $1 million settlement and will implement a compliance plan to adhere to FCC caller ID authentication rules.

According to the FCC, the carrier has already paid the $1 million and has put policies in place to stop fake robocalls. Combined with Kramer’s fine, the total penalty in this case comes to $7 million. Given that these robocalls had the potential to alter an election result, the $7 million penalty seems too low.

Steven Kramer, a Louisiana-based Democratic consultant, had been indicted in May for these robocalls, which falsely appeared to feature Biden instructing voters to wait until November to cast their ballots.

At the time, Steven Kramer was working for Biden’s primary challenger, Representative Dean Phillips, who quickly condemned the robocalls. Steven Kramer later claimed he paid $500 to have the calls sent to voters as a way to raise awareness about the risks of artificial intelligence in political campaigns.

The FCC determined that the calls used AI-generated deepfake technology to mimic Biden’s voice, violating regulations that prohibit the transmission of misleading caller ID information. Steven Kramer has been ordered to pay the fine within 30 days, or the case will be referred to the Justice Department for further action.

FCC Chair Jessica Rosenworcel highlighted the dangers posed by AI in elections, saying, “It is now cheap and easy to use Artificial Intelligence to clone voices and flood us with fake sounds and images. This technology can illegally interfere with elections, and we must act swiftly to stop this fraud.”

“I firmly believe that the FCC’s enforcement action today will send a strong deterrent signal to anyone who might consider interfering with elections, whether through the use of unlawful robocalls, artificial intelligence, or any other means,” Jessica Rosenworcel said.

The FCC is also considering a proposal that would require political advertisements on broadcast radio and television to disclose whether AI was used to generate content, though the rule is still pending approval.

How Common Is AI Fraud?

The use of AI in fraudulent schemes is on the rise, with deepfake fraud alone expected to cost $250 million annually by 2027, according to a report by Gartner. Financial institutions, social media companies, and law enforcement agencies are increasingly reporting cases of AI-driven fraud across sectors.

Cybersecurity firm Pindrop found that voice AI fraud grew 350 percent from 2021 to 2022, signaling a rapid escalation in the use of these techniques.

The FBI and Europol have both issued warnings about the increasing prevalence of AI in fraud, particularly in areas like identity theft, deepfake fraud, and financial crimes. As AI technologies become more accessible, the number of criminals using these tools for scams and fraud is expected to grow exponentially.
