On Thursday, OpenAI published its first report detailing how its artificial intelligence tools have been used in covert influence operations.
The report describes the company's efforts to disrupt disinformation campaigns originating from several countries, including Russia, China, Israel, and Iran.
OpenAI Responds to Misuse of AI for Deceptive Content
Operators used the company's generative AI models to produce and distribute misleading content across social media platforms and to translate their messages into multiple languages.
According to the report, none of the campaigns were able to gain traction or reach large audiences.
With the rise of generative AI, researchers and lawmakers have grown increasingly concerned about its potential to accelerate the spread of online disinformation.
OpenAI's 39-page document provides a detailed account of how its software has been used for propaganda, setting the company apart from other artificial intelligence firms, The Guardian reported.
OpenAI said it has taken action against several accounts involved in covert influence operations.
These operations, run by a mix of state and private actors, were identified by company researchers over the past three months, and the accounts behind them were banned.
OpenAI Report Exposes Russian, Chinese Disinformation Operations
OpenAI disclosed in its report that several well-known state-affiliated disinformation actors had been using its tools. These included Doppelganger, a Russian operation first discovered in 2022 whose main goal, according to the Financial Times, is to undermine support for Ukraine, and Spamouflage, a Chinese network that promotes Beijing's interests outside of China.
Both campaigns used OpenAI's models to generate text or comments in various languages before posting them on platforms such as Elon Musk's X.
The report also revealed a previously undisclosed Russian operation, dubbed Bad Grammar, which used OpenAI models to debug code for a Telegram bot and to generate short political comments in Russian and English that were then posted on Telegram.