Election 2024: Arizona and Michigan Train Clerks To Report AI Deepfakes To Law Enforcement


by Didi Rankovic, Reclaim The Net:

The AI panic (and specifically, the deepfake panic) is playing a prominent role in this US election campaign, with the states of Arizona and Michigan introducing a scheme to train election clerks to identify content branded as such.

Arizona Secretary of State Adrian Fontes and his Michigan and Minnesota counterparts, Jocelyn Benson and Steve Simon, all three Democrats, are among those pushing an initiative called the Artificial Intelligence Task Force, launched by the NewDEAL Forum.


NewDEAL Forum is a Washington-based NGO whose board is populated by Democrat-associated figures. It says it set out to “defend democracy” by developing tools and methods to help election officials and voters not only identify but also flag “malicious AI-generated activity” such as deepfakes and “misinformation.”

Arizona and Michigan are considered swing states, and there the effort takes the form of tabletop exercises that teach participants how to report flagged content to law enforcement and first responders.

That’s not the only recently launched “project”: the liberal voting rights and media platform Democracy Docket quotes Jocelyn Benson as saying that Michigan now has a law making “knowingly distributing materially-deceptive deep fakes” a felony.

But this applies only if the activity is seen as intending to harm a candidate’s reputation or chances of success, the Michigan secretary of state explained. However, it wasn’t immediately clear how transparent and precise the rules for determining the intent behind a deepfake are.

If applied arbitrarily, such legislation could catch a lot of things in its net – like satire and parody.

And that is not an insignificant distinction when talking about AI, and deepfakes for that matter, since both have been around for a while, the latter notably in the entertainment industry.

Yet, when trying to explain why this focus on finding, flagging, and reporting supposedly harmful AI content to law enforcement is an urgent problem, those promoting the policy say it is “nearly impossible” to distinguish authentic from generated video and audio material, as if this were something new.

Read More @ ReclaimTheNet.org