Disrupting Deceptive AI: OpenAI's Fight Against Covert Influence Operations

6/8/24

Editorial team at Bits with Brains

Over the past three months, OpenAI has identified and disrupted at least five covert influence operations that employed AI to manipulate public opinion. These operations, originating from countries such as Russia, China, Iran, and Israel, exploited OpenAI's models to generate and disseminate misleading content. 


One of the primary uses of AI by threat actors was generating large volumes of coherent text with fewer grammatical errors than the operators would likely have produced on their own. This capability let them produce content at a scale that would otherwise have been impractical. However, the authenticity and effectiveness of these AI-generated texts varied, and they often failed to engage the target audience in any meaningful way.


Threat actors combined AI-generated content with traditional methods of influence. This hybrid approach aimed to create a more credible and persuasive narrative. For example, AI was used to translate articles into multiple languages and generate social media comments, which were then disseminated through established influence networks. Despite these efforts, the operations often lacked the nuanced understanding required to sustain long-term engagement.


AI tools were also employed to simulate engagement, such as generating likes, shares, and comments on social media platforms. While these activities created an illusion of popularity, they rarely translated into genuine interaction or influence. The superficial metrics did not fool more discerning audiences, highlighting a key limitation in the threat actors' strategy.


Nevertheless, the integration of AI tools resulted in significant productivity gains for these covert operations. AI-assisted content generation, translation, and management allowed threat actors to operate more efficiently. These gains were offset, however, by the operators' inability to prompt the models effectively and by the models' limited capacity to understand and manipulate human behavior, both of which are essential to a successful influence operation.


OpenAI's (and others’) models incorporate safety systems designed to prevent the generation of harmful content. These systems were crucial in mitigating the impact of the identified covert operations. By implementing rigorous defensive design principles, OpenAI was able to limit the potential misuse of its technologies.
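To make the idea of defensive design concrete, here is a minimal sketch of how a downstream application might screen model output before publishing it, using OpenAI's publicly documented Moderation API. This illustrates the general principle only; it is not a description of OpenAI's internal safety systems, and the screen_before_publish helper and sample text are assumptions made for the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def screen_before_publish(text: str) -> bool:
    """Return True only if the moderation endpoint does not flag the text.

    A hypothetical helper for this example; a real pipeline would log and
    escalate flagged content rather than silently dropping it.
    """
    result = client.moderations.create(input=text)  # uses the default moderation model
    return not result.results[0].flagged

draft = "Example paragraph produced by a language model for a social post."
print("publish" if screen_before_publish(draft) else "hold for review")
```

Layering this kind of output check on top of the model's built-in safeguards is one simple way an adopting organization can apply defense in depth.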


AI tools played a pivotal role in accelerating investigations into covert influence operations. The ability to quickly analyze large datasets and identify patterns reduced the time required for threat analysis. This expedited response was instrumental in disrupting the operations before they could achieve significant impact.
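As an illustration of the kind of pattern analysis involved, the sketch below flags near-duplicate comments posted from different accounts, one common signature of coordinated inauthentic behavior. It is a simplified example built on scikit-learn with made-up data and an assumed similarity threshold; it is not OpenAI's actual investigative tooling.

```python
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative, invented comments; real investigations run over far larger corpora.
comments = [
    {"account": "user_a", "text": "This policy is a total disaster for ordinary people."},
    {"account": "user_b", "text": "This policy is a complete disaster for ordinary people!"},
    {"account": "user_c", "text": "I visited the new museum exhibit last weekend."},
]

texts = [c["text"] for c in comments]
tfidf = TfidfVectorizer().fit_transform(texts)   # TF-IDF weighted bag-of-words vectors
similarity = cosine_similarity(tfidf)            # pairwise cosine similarity matrix

SIMILARITY_THRESHOLD = 0.75  # assumed cutoff; tuning depends on the corpus

for i, j in combinations(range(len(comments)), 2):
    same_account = comments[i]["account"] == comments[j]["account"]
    if similarity[i, j] >= SIMILARITY_THRESHOLD and not same_account:
        print(
            f'Possible coordination: {comments[i]["account"]} and {comments[j]["account"]} '
            f'posted near-identical text (similarity {similarity[i, j]:.2f})'
        )
```

In practice, checks like this would be combined with other signals, such as posting times and account metadata, before drawing any conclusions.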


Despite the sophisticated techniques employed by threat actors, their AI-generated content, even when distributed widely, often failed to reach or engage substantial audiences. This finding underscores the importance of distribution channels and the inherent difficulty of achieving genuine influence through automated means alone.


Collaboration with industry peers enhanced OpenAI's efforts to disrupt covert influence operations. By sharing information and best practices, organizations can collectively strengthen their defenses against AI-enabled threats. This cooperative approach is essential for addressing the evolving landscape of digital deception.


Despite the advanced tools at their disposal, threat actors were not immune to human errors. Mistakes in strategy, execution, and content generation often undermined their efforts. This persistent human element highlights the limitations of AI and underscores the need for comprehensive strategies that combine technology with human oversight.


The dual role of AI in both enabling and combating misinformation has significant implications for organizations seeking to implement AI responsibly. The insights from OpenAI's experience offer valuable lessons for strengthening AI governance and safeguarding against misuse.


For organizations adopting AI, it is crucial to integrate robust safety measures into their systems. This includes implementing defensive design principles that prevent the generation of harmful content and leveraging AI tools for enhanced threat analysis. By prioritizing safety and security, organizations can mitigate the risks associated with AI misuse.


Collaboration across sectors is also essential for addressing the complex challenges posed by AI-enabled threats. By sharing information, best practices, and resources, organizations can collectively strengthen their defenses. Industry-wide cooperation is key to staying ahead of sophisticated threat actors and maintaining a secure digital environment.


The persistent human element in covert influence operations underscores the importance of combining technology with human oversight. AI can enhance productivity and efficiency, but human judgment appears essential for understanding and influencing human behavior, a capability the threat actors evidently lacked. Organizations must adopt a balanced approach that leverages AI's capabilities while maintaining rigorous human oversight.


As AI’s rapid evolution continues, so do the threats associated with its misuse. Organizations must remain proactive and adaptable, continuously updating their strategies to address emerging risks. This proactive approach is essential for staying ahead of threat actors and ensuring the responsible use of AI technologies.

