Using AI-Generated Voices In Robocalls Now Illegal In US, Rules FCC

The move clarifies where these calls fall under an existing law that limits junk calls.


Francesca Benson

Copy Editor and Staff Writer

Francesca Benson is a Copy Editor and Staff Writer with an MSci in Biochemistry from the University of Birmingham.

Edited by Laura Simmons

Editor and Staff Writer

Laura is an editor and staff writer at IFLScience. She obtained her Master's in Experimental Neuroscience from Imperial College London.

[Image: 3D rendering of a white humanoid android wearing a headset, typing on a laptop and staring at the camera. Caption: Did you see what I did there? It's a scam. Image credit: Phonlamai Photo/]

Worried about AI-generated voices being used to con you or your loved ones over the phone? On February 8, the United States Federal Communications Commission (FCC) ruled, effective immediately, that robocalls using AI voices fall under the restrictions on "artificial or prerecorded voice" calls in the Telephone Consumer Protection Act (TCPA), the law that also restricts telemarketing calls and automatic telephone dialing systems.

“It seems like something from the far-off future, but it is already here. Artificial Intelligence-generated voice cloning and image creating tools are now more accessible and more likely to be used for fraud,” FCC Chairwoman Jessica Rosenworcel said in a statement.


Callers using AI voices must first get consent before making a non-emergency call, and a real person selecting the AI-generated messages to be played “does not negate the clear statutory prohibition against initiating a call using a prerecorded or artificial voice,” the declaratory ruling reads. As CNET notes, these calls can be reported using a form on the FCC’s website.

These calls must provide identification and disclosure information about who is making the call. If the call falls under the umbrella of telemarketing or advertising, it must also offer an opt-out method.

“This technology can confuse us when we listen, view, and click, because it can trick us into thinking all kinds of fake stuff is legitimate,” Rosenworcel continued. “Already we see this happening with Tom Hanks hawking dental plans online, a vile video featuring Taylor Swift, and calls from candidates for political office that are designed to confuse us about where and when to vote.”

“We also have stories of grandparents who are led to believe that it is really their grandchild on the phone begging for funds, only to learn later it was a bad actor preying on their willingness to forward money to family.”


“In January, potential primary voters in New Hampshire received a call, purportedly from President Biden, telling them to stay home and ‘save your vote’ by skipping the state’s primary. The voice on the call sounded like the President’s, but of course it wasn’t. Those were voice cloning calls,” added Commissioner Geoffrey Starks. “The use of generative AI has brought a fresh threat to voter suppression schemes and the campaign season with the heightened believability of fake robocalls.”

This move follows a Notice of Inquiry launched in November 2023, which sought to better understand the impact AI could have when it comes to protecting consumers from "unwanted and illegal telephone calls and text messages under the TCPA". As it turns out, using AI isn't reserved for the bad guys: Rosenworcel explained that AI-powered pattern recognition could also be used to identify AI robocalls.

“Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplify the risks they face in an increasingly digital landscape,” explained Commissioner Anna M. Gomez.

“Now, with this Declaratory Ruling, we will have another tool to go after voice cloning scams and get this junk off the line,” concluded Rosenworcel.


Tags: artificial intelligence, AI, politics, policy, spam, deepfakes, science and society