Microsoft Says Hackers From Russia, China, and Iran Use OpenAI Tools
Microsoft revealed on Wednesday that state-backed hackers from Russia, China, North Korea, and Iran have been using tools provided by its partner OpenAI to enhance their hacking techniques and deceive their targets.
According to Reuters, Microsoft had tracked hacking groups associated with Russia's military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments as they tried to refine their hacking strategies using large language models (LLMs), which are a type of artificial intelligence (AI) algorithm.
This specialized class of AI model analyzes vast amounts of text data to produce humanlike responses. In response to these findings, Microsoft implemented a blanket ban on state-backed hacking groups utilizing its AI products.
"Independent of whether there's any violation of the law or any violation of terms of service, we just don't want those actors that we've identified... we track and know are threat actors of various kinds... to have access to this technology," Microsoft Vice President for Customer Security Tom Burt told Reuters.
Diplomatic officials from Russia, North Korea, and Iran have yet to comment on the allegations. However, China's US embassy spokesperson Liu Pengyu denounced the allegations, telling Reuters they were "groundless smears and accusations against China" and recommended AI technology's "safe, reliable and controllable" deployment to "enhance the common well-being of all mankind."
Abuse of AI Tools by State-Backed Hackers
State-backed hackers being caught using AI tools to boost their spying capabilities is nothing new. Since last year, senior cybersecurity officials in the West have warned that malicious actors were abusing such tools.
However, Bob Rotsted, who leads OpenAI's cybersecurity threat intelligence, told Reuters that this "is one of the first, if not the first, instances of an AI company... discussing publicly how cybersecurity threat actors use AI technologies."
OpenAI and Microsoft characterized the hackers' use of their AI tools as "early-stage" and "incremental." According to Burt, neither company had observed cyber spies make any breakthroughs. The report described how the hacking groups used the LLMs differently.
Microsoft said hackers, allegedly working on behalf of Russia's military spy agency, used the models to research "various satellite and radar technologies that may pertain to conventional military operations in Ukraine."
North Korean hackers, meanwhile, were found generating content "that would likely be for use in spear-phishing campaigns" against regional experts, while Iranian hackers used the models to write persuasive emails, including attempts to lure "prominent feminists" to a fake, booby-trapped website.
Microsoft said Chinese state-backed hackers used LLMs to ask questions on cybersecurity issues, rival intelligence agencies, and "notable individuals."