by Didi Rankovic, Reclaim The Net:
There is a new legislative proposal, the DEEPFAKES Accountability Act, re-introduced by a Democrat, that seeks to criminalize the use of a certain type of generative AI content.
Creators targeted here would be those who fail to label their work as the bill requires, namely as "malicious deepfakes," while all content of this kind would have to be labeled regardless.
But terms like "malicious" and "extremely harmful" are vague enough – not to mention that someone would have to act as the arbiter of "deepfake maliciousness and extreme harmfulness" – that the whole thing could turn into yet another tool of censorship, handy to those who go after memes or parody whose creators happen to forget to label them.
After all, although deepfakes are now vilified as the scourge of the internet, used only by scammers and by those with political deception, sexual abuse, and the like in mind, they have long been used in entertainment and the creative industries in general.
The author of the bill, Congresswoman Yvette Clarke, already tried to get the same proposal through Congress but failed back in 2019. Now, she is speaking about “weaponized deception” and the need to “discern who is intending to harm us.”
Some reports about this proposal note that these days, creating a deepfake does not require much, if any, technical skill and can therefore be done by anyone using an app or a website.
(One wonders: will such apps and websites be the next target in the "war on deepfakes"?)
ABC claims, propping up its case by citing a Berkeley computer science professor, that political groups are already frequently using generative AI to “harm” opponents.
But the example of this practice the outlet gave is clearly a joke: a photo supposedly of Joe Biden in a Republican ad. This interpretation of "harmful" deepfakes makes it clear that meme and parody creators would indeed have plenty to worry about, should legislation like Clarke's become law.