At the start of the year, concerns about the potential misuse of generative AI in global elections were widespread. Looking back over the past 12 months, however, those fears largely failed to materialize, at least on Meta’s platforms. In a recent blog post, the company says AI-generated content had only a limited impact across Facebook, Instagram, and Threads.
Meta’s Findings Based on Major Elections
The company’s findings are based on content related to major elections in the U.S., Bangladesh, Indonesia, India, Pakistan, France, the U.K., South Africa, Mexico, and Brazil, as well as the EU Parliament elections. According to Meta, while there were instances of confirmed or suspected use of AI-generated content to spread misinformation, the volumes remained low.
Existing Policies Proved Sufficient
Meta notes that its existing policies and processes proved sufficient to reduce the risk posed by generative AI content. The company reports that fact-check ratings on AI-generated content related to elections, politics, and social topics represented less than 1% of all fact-checked misinformation during major election periods.
Imagine AI Image Generator Rejected Over 590,000 Requests
The company also notes that in the month leading up to Election Day, its Imagine AI image generator rejected more than 590,000 requests to create images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, a measure intended to prevent the creation of election-related deepfakes.
Coordinated Networks’ Limited Use of Generative AI
Meta found that coordinated networks of accounts attempting to spread propaganda or disinformation "made only incremental productivity and content-generation gains using generative AI." The company notes that this limited use did not impede its ability to take down these covert influence campaigns.
Focus on Behavioral Analysis, Not Content
Meta emphasizes that it focuses on the behaviors of these accounts, rather than the content they post, regardless of whether or not it was created with AI. This approach allowed the company to effectively detect and disrupt such operations.
20 New Covert Influence Operations Disrupted Worldwide
In addition to its efforts to limit the use of generative AI in elections, Meta reports that it took down around 20 new covert influence operations worldwide to prevent foreign interference. The majority of these networks struggled to build authentic audiences, and some used fake likes and followers to appear more popular than they actually were.
Finger-Pointing at Other Platforms
Meta points out that false videos about the U.S. election linked to Russia-based influence operations were often posted on X and Telegram, underscoring the importance of cooperation between platforms in countering such activity.
Review of Policies and Future Changes
Meta says it will continue to review its policies and will announce any changes in the months ahead. Even if the worst fears went unrealized this year, the potential misuse of generative AI in elections remains a pressing concern; continued innovation and cooperation between platforms offer the best hope of keeping those risks in check.