Thursday 8 August 2024
How Threat Actors are Manipulating Generative AI
Generative AI has enhanced organisations by automating through supportive actions such as improving productivity, writing code and analysing data.
However, threat actors are also making use of Generative AI. By applying it to their malicious attacks, it has become powerful new tool at their disposal. Misuse and exploitation of Generative AI features and functions allows attackers to bypass the security limitations that are meant to enforce ethical and legal use of the tools.
- Prompt injection is used by cyber criminals to trick AI models into outputting unauthorised responses with methods such as reverse psychology, encoded text, or hidden characters.
- Jailbreaking can allow attackers to bypass safety restrictions with crafted prompts that alter the AI’s behaviour.
Threats Leveraging Generative AI
Phishing Attacks
Generative AI allows quick creation of both targeted and generic phishing emails that lack grammatical and spelling errors, which are usually warning indicators for targets. This makes these types of phishing emails harder to detect by users, and automatic email filtering tools struggle too, due to their added complexity. Using AI also speeds up the creation of phishing emails, which allows threat actors to carry out more campaigns. The added complexity and extra speed dramatically increases the chances of the phishing campaign succeeding.
Identity Impersonation
Deepfake technology is used to generate images and audio based off real people. Generative AI has vastly improved the accuracy and speed of deepfakes, and attackers have begun to exploit this in social engineering, scams, and spreading misinformation (such as impersonating political candidates). These attacks still require a large amount of time and effort to set up. As such, they usually impersonate high profile people like celebrities, politicians and company executives, and create assets that attackers can reuse in multiple attacks. In 2019, scammers received €220,000 from an energy company CEO after impersonating the voice of his boss from the parent company on a phone call.
Exploit and Malware Creation
With AI excelling at code creation, Generative AI is perfect for crafting exploits and malware. Hackers can provide jailbroken AI models with prompts or example code, which they can then use and adapt into malware almost instantly. Due to the speed of creating these exploits, the quickly evolving malware could override typical signature-based detection and could even render it obsolete. Attackers are able to feed information they have gathered about a target’s systems into AI along with a malware example to tailor the attack which could enhance the impacts and scale of the attack.
Data Exfiltration
Threat actors may steal the training data that is used to create generative AI models. Proven exploits are available online to execute this attack. Although much of the training data is publicly available online, some of the data used may contain sensitive information or require a payment to view. The biggest threat this poses is when companies are using Generative AI for customer support chatbots that have been trained using real customer interactions. This data could contain customer personal information, which threat actors could sell online or use to craft further attacks.