The use of artificial intelligence (AI) on social media has been flagged as a potential threat capable of swaying voter sentiment in the upcoming 2024 United States presidential election.
Major tech companies and U.S. governmental entities have been actively monitoring the situation surrounding disinformation. On Sept. 7, Microsoft's research unit, the Microsoft Threat Analysis Center (MTAC), published research that observed "China-affiliated actors" leveraging the technology.
The report said these actors utilized AI-generated visual media in what it called a “broad campaign” that had a heavy emphasis on “politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols.”
It said it anticipates that China "will continue to hone this technology over time," though it remains to be seen how the technology will be deployed at scale for such purposes.
On the other hand, AI is also being employed to help detect such disinformation. On Aug. 29, Accrete AI deployed AI software for real-time disinformation threat prediction from social media under a contract with the U.S. Special Operations Command (USSOCOM).
Prashant Bhuyan, the founder and CEO of Accrete, said that deepfakes and other "social media-based applications of AI" pose a serious threat:
“Social media is widely recognized as an unregulated environment where adversaries routinely exploit reasoning vulnerabilities and manipulate behavior through the intentional spread of disinformation.”
During the previous U.S. presidential election in 2020, troll farms reportedly reached 140 million Americans each month, according to an MIT report.
Troll farms are "institutionalized groups" of internet trolls that aim to interfere with political opinions and decision-making.
Already, regulators in the U.S. have been looking at ways to regulate deep fakes ahead of the election.
On Aug. 10, the U.S. Federal Election Commission voted unanimously to advance a petition that would regulate political ads using AI. One of the commission members behind the petition called deepfakes a "significant threat to democracy."
Google announced on Sept. 7 that it will update its political content policy in mid-November 2023 to make AI disclosure mandatory for political campaign ads.
It said the disclosures will be required where there is “synthetic content that inauthentically depicts real or realistic-looking people or events.”