Election misinformation & AI disclaimer wording

Deepfakes & Elections

As generative AI models become more advanced, they also empower those who seek to spread outrageous deepfake content online. Not all deepfakes are harmful: many are created as parody, so outlandish that they are both clearly fake and intended purely for comedic effect, providing a harmless laugh. Others are a different matter. AI has given individuals with malicious intent a powerful tool to easily create and spread harmful deepfake content, often targeting public figures and posing significant risks to their reputations and credibility. This type of content becomes especially dangerous at critical moments, such as elections, when it can sway public opinion and damage the reputations of candidates.

A recent study by Kapwing offers remarkable insight into how easily high-profile individuals can be deepfaked. Kapwing aimed to identify the public figures most frequently targeted by deepfakes in 2024, analyzing data from a popular AI video Discord channel that focused on 500 influential figures in American culture. Unsurprisingly, Donald Trump topped the list as the most deepfaked celebrity by a large margin, with Joe Biden ranking fourth. Interestingly, Kamala Harris had significantly fewer deepfakes compared to both Trump and Biden, which highlights how certain figures are disproportionately targeted.

So why does this matter? Many of these deepfakes are distributed across social media platforms such as Instagram, Facebook, X, and YouTube, which allows them to reach a large audience. While many users can recognize that they are viewing a deepfake, many others cannot, since not all deepfakes are obvious satire. Social media platforms are not required to indicate that a post is AI-generated, which leaves that determination to the viewer.

With the U.S. general election just a couple of weeks away [Editor’s note – this article originally published on Josh Kubicki’s Brainyacts, October 18, 2024. Josh is also a law professor and the authors are his students], it may be worthwhile to reflect on the impact generative AI has had on how we view this year’s major candidates: Donald Trump and Kamala Harris. Deepfakes of both candidates have swept the internet, some positive, some negative.

The following deepfake depicted Donald Trump in knee-deep murky water helping victims of Hurricane Helene:

This AI-generated photo was reposted all over Facebook, Instagram, and other forums, with many users unaware of its inauthenticity. The original poster captioned the photo with “I don’t think [Facebook] wants this picture on [Facebook]. They have been deleting it.” Some users asserted that the photo was real, while others pointed out that they could tell it was AI because the hands are distorted. The post has since been shared over 166,000 times.

On the other side of the political aisle, an almost two-minute-long audio deepfake depicted Kamala Harris calling herself the “ultimate diversity hire.” The AI-generated audio, created by the user “Mr Reagan,” was overlaid on a real campaign video, blurring the line between reality and fiction for many voters.

 

This deepfake was reposted by Elon Musk with the caption “This is amazing [laughing emoji]”. His post has since been shared over 217,200 times.

These fake images were created without the subjects’ consent, but what if the candidates used generative AI to their own benefit? Donald Trump reposted a deepfake photo of himself kneeling in prayer. Because a large portion of his supporters are Evangelicals, this AI-generated photo may elicit a favorable response. Does it matter to these voters whether the photo is real? Or does it only matter that the photo is in line with these voters’ expectations of Donald Trump?

Generative AI has been used in politics worldwide, not just in the U.S. A political candidate in India used generative AI to create a video of his deceased father endorsing him. This use of generative AI was more obvious because the subject of the deepfake had been deceased for several years; deepfakes of living people, however, are harder to identify.

AI Disclaimer Wording

Take a moment to ask yourself: In future elections, would you trust a political candidate who discloses the use of generative AI in their advertisements more, or less, than one who doesn’t? A recent study from the NYU Center on Tech Policy tells us that the answer is less. The following graph illustrates the findings:

 

🅰️ This video has been manipulated by technical means and depicts speech or conduct that did not occur. [Michigan’s required label.] 

🅱️ This video was created in whole or in part with the use of generative artificial intelligence. [Florida’s required label.]

A possible explanation for the discrepancy between the Michigan and Florida values is the wording of the disclaimer. Michigan’s disclaimer uses the word “manipulated” and the phrase “speech or conduct that did not occur,” whereas Florida’s disclaimer uses neutral language to explain that the video was created “in whole or in part with generative artificial intelligence.” Skepticism is normal when new technology comes along, but neutrally worded disclaimers and responsible outlets may be the key to the public becoming more comfortable with generative AI.

Deepfake software created by Kapwing applies a simple watermark stating “AI Generated on Kapwing,” with the aim of encouraging responsible deepfake creation and use. HeyGen has implemented similar safeguards. For example, users must submit a brief video message from the person being deepfaked, in which that person explicitly consents to the use of their image and voice for creating the deepfake material. Without this verification video, users cannot proceed with the deepfake.

HeyGen also offers candidates an efficient way to spread their message. Instead of spending hours or days filming political ads, it quickly generates a realistic image and voice of the candidate, allowing candidates to focus more on policy and public appearances than on ad production.

While bad actors may misuse generative AI for political purposes, this doesn’t mean all generative AI is harmful; when used responsibly, it can offer safe and efficient ways for candidates to engage with voters during an election. As AI continues to prove its potential, it becomes a powerful tool that promotes efficiency and security and alleviates creative fatigue.

Click here for a brief message from Mark Zuckerberg.

Posted in: AI, Legal Education, Legal Profession, Legal Research, Social Media