OpenAI — Influence and cyber operations

Akilnath Bodipudi
Oct 15, 2024


As of October 2024, the cybersecurity landscape is increasingly shaped by artificial intelligence (AI), used in defensive and offensive operations alike as well as in influence campaigns. OpenAI's latest threat report highlights how state-affiliated actors and criminal organizations exploit AI models, and it summarizes the key trends, challenges, and disruptions observed throughout the year.

AI in Elections
With more than 2 billion voters anticipated to participate in elections across 50 countries this year, concerns regarding the misuse of AI in election-related influence operations have intensified. The report reveals that organizations like OpenAI are actively working to identify and thwart these activities. Despite the potential risks, attempts at AI-driven election manipulation have not gained significant traction or viral reach. Operations noted in Rwanda, the United States, and Europe have struggled to make a substantial impact, suggesting that AI is not yet a major player in election interference. Continuous vigilance is vital to protect democratic processes.

AI in Cyber Operations
Threat actors are increasingly using AI models to enhance various phases of cyber operations. For example, the China-based adversary known as “SweetSpecter” attempted to leverage AI for reconnaissance, malware development, and evasion during phishing attacks targeting OpenAI employees. Although these attacks were ultimately unsuccessful, they illustrate how AI can aid offensive cyber efforts, even if it does not yet provide capabilities beyond what traditional tools can achieve.

Another notable case involves the Iranian group “CyberAv3ngers,” affiliated with the Islamic Revolutionary Guard Corps (IRGC), which employed AI for reconnaissance and scripting in attacks on industrial control systems (ICS) and programmable logic controllers (PLCs) within critical infrastructure. Their attempts to exploit vulnerabilities in water and energy systems highlight the potential risks of AI in the hands of advanced adversaries.

Covert Influence Operations
Several covert influence campaigns utilizing AI-generated content have been disrupted, including those originating from Russia, Iran, and other entities. A prominent example is the Russian network “Stop News,” which targeted audiences in West Africa and the UK with AI-generated articles, images, and comments. Despite its extensive output, the operation failed to achieve significant engagement, underscoring the challenges that threat actors face even when using AI-enhanced content creation.

Another operation, “A2Z,” aimed to praise Azerbaijan while disparaging political opponents in multiple languages across social media. Although these AI-generated comments were sophisticated, they similarly failed to gain meaningful traction, as evidenced by low engagement metrics.

Single-Platform Influence Campaigns
Additional smaller-scale influence efforts have been detected, such as networks generating comments to criticize the Anti-Corruption Foundation in Russia, alongside accounts spamming gambling links through direct messages on X (formerly Twitter). These operations illustrate the diverse ways in which threat actors experiment with AI to further their agendas, whether for political manipulation or financial gain.

The Role of AI in the Information Ecosystem
AI’s role within the broader information ecosystem is becoming increasingly critical, especially as threat actors utilize AI models in intermediary phases of their operations — such as creating personas, generating content, and refining attack strategies. The report emphasizes that while AI enhances operational efficiency, it has yet to produce groundbreaking advancements in malware creation or viral disinformation campaigns. Instead, it contributes to incremental improvements in adversarial tactics, techniques, and procedures (TTPs).

On the defensive front, AI companies are also making strides. New AI-powered tools have enabled investigators to condense complex analytical tasks from days to mere minutes, enhancing their capacity to detect, analyze, and disrupt malicious activities. This capability is becoming crucial as threat actors continue to adapt their use of AI in cyber and influence operations.
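To make that defensive use concrete, below is a minimal sketch of how an investigator-facing tool might use a language model to condense raw indicators into a summary for human review. It assumes the OpenAI Python SDK; the model name, prompt, and sample log entries are illustrative and are not taken from the report.

```python
# Minimal sketch: condensing raw activity logs into an analyst-readable
# summary with a language model. Assumes the OpenAI Python SDK is
# installed and OPENAI_API_KEY is set; the model name, prompt, and
# sample logs below are illustrative, not drawn from the report.
from openai import OpenAI

client = OpenAI()

def summarize_indicators(log_lines: list[str]) -> str:
    """Ask the model to group suspicious log entries and flag likely
    coordinated activity for a human analyst to review."""
    prompt = (
        "You are assisting a threat-intelligence analyst. Group the "
        "following log entries by likely actor or campaign, and note "
        "any signs of coordinated inauthentic behavior:\n\n"
        + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample_logs = [
        "2024-10-01 12:03 account @newsbot42 posted 37 near-identical comments",
        "2024-10-01 12:05 account @newsbot43 posted 35 near-identical comments",
        "2024-10-02 08:14 direct-message burst with gambling links from 12 accounts",
    ]
    print(summarize_indicators(sample_logs))
```

The point of the sketch is the division of labor: the model only compresses the triage step, while detection and disruption decisions stay with the analyst, which mirrors the report's framing of AI as an efficiency gain rather than a new capability.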

Future Outlook
Looking ahead, the report underscores the need for sustained investment in AI-driven defenses, collaboration across industry sectors, and proactive disruption strategies. Although AI’s role in cyber operations and influence campaigns is still evolving and has not drastically transformed the threat landscape, its potential remains substantial. Companies like OpenAI are committed to advancing their detection, investigation, and disruption capabilities to stay ahead of these emerging threats.

In conclusion, AI represents a double-edged sword in the realm of cybersecurity. While it equips defenders with advanced tools, it simultaneously offers adversaries new methods to enhance their operations. This report serves as a timely reminder that navigating these trends requires ongoing vigilance, collaboration, and innovation in both AI and cybersecurity practices.

Written by Akilnath Bodipudi

Cyberpunk who always wanted to explore new horizons in cyberspace. Pen testing my own network systems to detect vulnerabilities.
