
Political Manipulation with Massive AI Model-driven Misinformation and Microtargeting

Applying generative AI, bad actors could tailor disinformation campaigns to affect election outcomes on a massive scale with relatively little effort.

In today’s digitally connected world, political messaging and misinformation are becoming increasingly sophisticated. Political campaigns and misinformation efforts, particularly those that are well-funded, have significant societal impacts. These campaigns have historically exploited political and ideological views to resonate with people, convince them to act, or even lure them into scams.

Generative AI technologies such as large language models (LLMs) and large image models will likely transform this domain. Generative AI offers tools for creating sophisticated, individualized content at scale, a task that was previously difficult and labor-intensive. With these capabilities, the risk posed by malicious actors can reach new heights.

We have already observed a variety of actors abusing generative AI as part of ongoing fraud campaigns, including generative text used to send messages to scam victims, AI-generated images used to create deceptive social media content, and AI-created “deepfake” video and voice used to aid the social engineering of victims. These same tools have been used as part of political misinformation and deception campaigns on social media.

With elections ongoing worldwide, understanding the effect of new technology on political misinformation is particularly consequential. In this analysis, we explore one of the greatest emerging threats from malicious use of generative AI: tailored misinformation. If someone includes intentional misinformation in a bulk email, people who don’t agree with that misinformation will be turned away from the campaign. But in the method we explored in our research, misinformation is added to an email only when that specific recipient is likely to agree with it. The ability to do this can completely change the scale at which misinformation can propagate.

In the research we document in this report, we aimed to uncover potential methods in which adversaries could apply generative AI tools to make impactful changes in the political sphere. These methods use current generative AI technologies in a way that can be executed at very low cost by a wide range of potential actors who wish to influence politics on a small or large scale.

This effort builds on research we’ve already conducted, in which we developed a tool that can automatically launch an e-commerce scam campaign, using AI-generated text, images, and audio to create diverse and convincing fraudulent webstores. An example of one of these websites is shown in Figure 1; a full description of the research can be found here.
