Natural language processing (NLP), a branch of machine learning, is being used to write spearphishing emails at scale. Crafting convincing spearphishing messages is normally an unappealingly labour-intensive task, but researchers have shown that a language model can outstrip the capabilities of any team of human cybercriminals, making the personalization effort cheap enough to be worthwhile.
In a limited test, Singapore’s Government Technology Agency sent 200 colleagues targeted phishing emails, some hand-crafted and others generated by an AI-as-a-service platform. Both sets of messages contained non-malicious clickbait links; the surprise was that more people clicked the AI-generated links, and by a significant margin.
“But once you put it on AI-as-a-service it costs a couple of cents and it’s really easy to use—just text in, text out. You don’t even have to run code, you just give it a prompt and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing. Suddenly every single email on a mass scale can be personalized for each recipient.”
Eugene Lim, cybersecurity specialist, Singapore Government Technology Agency
The researchers used OpenAI’s GPT-3 platform together with other AI-as-a-service products focused on personality analysis, which predict an individual’s interests and trigger points, to generate phishing emails tailored to their colleagues’ backgrounds and traits.
“Misuse of language models is an industry-wide issue that we take very seriously as part of our commitment to the safe and responsible deployment of AI. We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. We impose technical measures, such as rate limits, to reduce the likelihood and impact of malicious use by API users. Our active monitoring systems and audits are designed to surface potential evidence of misuse at the earliest possible stage, and we are continually working to improve the accuracy and effectiveness of our safety tools.”
OpenAI
The defensive idea is to use deep-learning language models like OpenAI’s GPT-3 to develop a framework that can differentiate AI-generated text from text composed by humans, building mechanisms that flag synthetic media in emails and make it easier to catch possible AI-generated phishing messages.
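In broad strokes, such a detection framework is a binary text classifier. The snippet below is a minimal sketch of that general idea, not the agency’s actual system: the toy messages, labels, and character n-gram features are all illustrative assumptions, and a real detector would be trained on large labeled corpora and richer language-model signals.

```python
# Minimal sketch of an AI-text detector: a binary classifier over text features.
# Toy data only -- illustrative, not a production phishing filter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled samples: 1 = machine-generated, 0 = human-written.
texts = [
    "Dear valued colleague, please review the attached quarterly synergy report.",
    "hey, running late - can you cover the 9am standup for me?",
    "As per your esteemed interests, kindly click the personalized link below.",
    "lunch? the usual place, 12:30",
]
labels = [1, 0, 1, 0]

# Character n-grams capture stylistic regularities without hand-built features.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(texts, labels)

# Score an incoming message: probability that it is machine-generated.
score = detector.predict_proba(
    ["Kindly review this personalized offer, esteemed recipient."]
)[0, 1]
```

A mail gateway could quarantine or annotate messages whose score crosses a tuned threshold, though with data this small the scores themselves are meaningless; the point is only the shape of the pipeline.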