Over the past year, experts have witnessed an increase in the speed, scale and sophistication of cyberattacks, a phenomenon that has gone hand in hand, coincidentally, with the rapid development and adoption of artificial intelligence. Or not so coincidentally: Microsoft and OpenAI today published joint research revealing that hackers are already using AI models (such as ChatGPT itself) to optimize their cyberattack strategies.
The two companies have detected attempts by groups backed by Russia, North Korea, Iran and China to use tools such as ChatGPT to research targets, improve scripts and help develop scam and social engineering strategies.
“Cybercriminal groups and threat actors backed by nation-states are exploring and testing different AI technologies as they are released, in an attempt to understand […] security controls they may need to bypass.”
The ‘cyber-axis of evil’
The Strontium group, linked to Russian military intelligence (it has been very active during the Russia-Ukraine war), has been detected using language models to try to “understand satellite communication protocols, radar imaging technologies and specific technical parameters, [as well as to perform] basic scripting tasks, including file manipulation or data selection.”
The use that the North Korean hackers of Thallium have made of it is not very different, although they have also been using AI to research publicly disclosed vulnerabilities and to write content for phishing campaigns.
Meanwhile, the Iranians of the Curium group have also been using LLMs to generate code capable of evading detection by anti-malware applications.
Lastly, Charcoal Typhoon (a hacking group linked to the Chinese regime) has also used this type of AI to research platforms and vulnerabilities, as well as to improve translations in its social engineering operations.
Fighting smart cyber attacks
Microsoft and OpenAI have not yet detected “significant attacks” relying on LLMs, but both companies have been shutting down all accounts and assets associated with these hacking groups:
“[We believe it is important to] expose their moves at these early stages and […] share information with the defender community about how we are countering them.”
Likewise, Microsoft warns about possible future malicious applications of AI, such as its use in voice spoofing:
“AI-driven fraud is another critical concern. Speech synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone. Even something as innocuous as your voicemail greeting can be used to obtain a sufficient sample.”
Microsoft is committed to fighting fire with fire, using AI to respond to AI-powered attacks. To that end, it is developing ‘Security Copilot’, a new AI assistant designed for cybersecurity professionals, capable of identifying attacks and of making sense of the enormous amount of data generated every day by cybersecurity tools.
Via | OpenAI & Microsoft
Image | Marcos Merino via AI
In Genbeta | There is a tech profession with a great future: red-team hackers, essential for AI, and one that more and more companies are bringing on board