
States Consider Bills to Fight AI Calls During Elections


In January, more than 20,000 New Hampshire voters received a call from President Joe Biden asking them to skip the state’s presidential primary.

One problem: It wasn’t Biden speaking. Instead, his voice was doctored using artificial intelligence.

Now, New Hampshire legislators are proposing a law that would prohibit deepfake phone calls within 90 days of an election, unless they are accompanied by a disclosure that AI was used, Roll Call reported Wednesday. The legislation has passed the state House and next heads to the Senate.

The Granite State is one of 39 states considering legislation to require transparency on AI-generated deepfake ads or calls, Roll Call reported.

Wisconsin recently enacted similar legislation; failure to comply will result in a $1,000 fine per violation.

In Florida, lawmakers passed legislation that would impose criminal charges if an AI-generated message is not disclosed, Roll Call reported. The bill has yet to be signed into law by Republican Gov. Ron DeSantis.

Arizona is weighing legislation that would require disclaimers within 90 days of an election, with repeated violations rising to a felony charge.

Several bipartisan bills being worked on in Congress would ban the use of AI-generated material targeting a candidate for federal office.

Technology companies have also said they will do their part to fight deceptive uses of AI. In February, a group of companies signed a pact to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Tech executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok signed the accord, announcing a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies — including Elon Musk’s X — also signed the accord.

The symbolic accord outlines methods the companies will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. Companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

Source: Newsmax
