Day 0 has already passed, marking the watershed between the security we knew then and the security of today and beyond.
We have seen shifts in the information and communication technology (ICT) security arena that took decades to transform security practices and controls: from network perimeter defences to identity-based zero-trust strategies that establish both resistance and resilience against threat actors. With the enormous development of GenAI capabilities over the last 24 months, a new ICT security world has dawned, and the engineering of future outcomes is underway, for both the good and the not so good. I am going to build on the blog post “Demystifying AI”, published in mid-February by SEB Security CTO Ulf Larsson, and elaborate on its cybersecurity aspects. Come join me for a moment on this thrilling journey into new territory.
Related blog post: Demystifying AI
The risk outlook
Whenever we find ourselves in new territory, a lot of FUD (fear, uncertainty, and doubt) floats around the digital landscape, and GenAI is no exception to that rule. On the contrary. It is good practice to dissect the FUD and do proper research to gain a clear understanding of both the risks and the benefits of GenAI. Some of the potential risks that we have researched are:
- Automation of attacks and “compliance extortion” – Automation with a set of special-purpose AI agents can potentially equip threat actors with sophisticated capabilities across the whole attack chain. Depending on their motivation, a threat actor can carry out extensive reconnaissance and target a specific victim with well-crafted spear phishing. Another moneymaking activity would be to scan a target’s exposed APIs for back-end vulnerabilities and then send a “consulting” invoice in exchange for not disclosing breaches of the regulations the victim is subject to.
- Malware development – Building on that automation, threat actors can use GenAI to develop malicious software (malware) that evades current detection defences.
- Disinformation campaigns – Threat actors can scale up their disinformation campaigns by leveraging GenAI to achieve widespread manipulation of their victims.
- Deepfakes – Highly realistic images, videos, and audio recordings can be produced as part of campaigns to strengthen social and/or political manipulation. Innovations in this field can be leveraged in automated attacks, in disinformation campaigns, or as an end in themselves.
The research paper “Review of Generative AI Methods in Cybersecurity”, by several security researchers, is a really good source for deeper investigation into the challenges of GenAI, as well as sound defences.
The possibilities outlook
GenAI holds vast potential that can be tapped for pattern recognition and the analysis of extensive data sets. This potential translates into several practical applications within the ICT and cybersecurity context. Some of the studied areas include:
- Threat intelligence – GenAI can assist in handling the overwhelming amounts of data to be sanitised and analysed. By leveraging large language models (LLMs), data can be analysed and the learnings augmented for the benefit of defenders.
- Cyber-attack identification – Pre-trained LLMs can be invoked for anomaly detection and evidence searching based on a thorough understanding of artefacts. Over time, this approach can lead to fewer false positives and a richer set of forensic data to aid security analysts in incident response.
- Code security – Code generation, code reviews, and vulnerability detection can all leverage LLMs.
- Automated remediations – As the pre-trained models mature over time and reach a low level of hallucinations (false positives), a higher degree of automated response to identified anomalies becomes possible.
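To make the cyber-attack identification idea above a little more concrete, here is a minimal, hypothetical sketch of LLM-assisted anomaly triage. Everything in it is illustrative rather than any specific SEB tooling: the `llm_classify` function is a keyword-based stand-in for a real model call, and the confidence threshold shows one simple way to keep false positives down before anything reaches an analyst.

```python
# Hypothetical sketch: LLM-assisted triage of log events.
# llm_classify is a stand-in for a real model call; a production
# system would send the event (plus context) to a pre-trained LLM.

from dataclasses import dataclass


@dataclass
class Finding:
    event: str
    label: str        # "anomalous" or "benign"
    confidence: float  # 0.0 .. 1.0


def llm_classify(event: str) -> Finding:
    """Stand-in classifier: scores an event by suspicious keywords."""
    suspicious = ("failed login", "privilege escalation", "base64")
    hits = sum(k in event.lower() for k in suspicious)
    if hits:
        return Finding(event, "anomalous", min(0.5 + 0.25 * hits, 0.99))
    return Finding(event, "benign", 0.9)


def triage(events: list[str], threshold: float = 0.7) -> list[Finding]:
    """Keep only high-confidence anomalies to reduce false positives."""
    findings = (llm_classify(e) for e in events)
    return [f for f in findings
            if f.label == "anomalous" and f.confidence >= threshold]


logs = [
    "2024-05-01 10:02 failed login for admin from 203.0.113.7",
    "2024-05-01 10:03 user alice read report.pdf",
    "2024-05-01 10:04 failed login and privilege escalation attempt",
]
alerts = triage(logs)  # only the two suspicious events survive triage
```

The design point is the filtering step, not the classifier itself: as models mature and their confidence estimates become trustworthy, the threshold can be lowered or the surviving alerts routed to automated remediation instead of a human queue.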
Societal developments
Areas that investigate GenAI from legal, social, and ethical aspects have existed for a while and are expanding in scope. Large companies like IBM, Google, and Microsoft have implemented responsible-AI programmes and principles. OpenAI has just released a preliminary draft called the “Model Spec”, in which the company outlines guiding principles for responsible AI development. The European Union is due to implement the AI Act, which aims to establish a common regulatory and legal framework for the Union. Canada, the UK, Japan, the USA, Korea, and other countries all have sector-specific regulations either proposed or adopted. The OECD has defined value-based principles for trustworthy AI, along with recommendations for policy makers.
We can expect these developments to increase in breadth and depth over time. One thing is sure: the pace of innovation will continue to reshape the field of ICT and cybersecurity.
SEB position
We regard GenAI as neither doomsday nor nirvana for security practices. Our approach is balanced, and we recognise both sides of the GenAI equation when leading our security resistance and security resilience programmes. The governance and management of AI technologies is an integral part, as is cross-sector cooperation for the further advancement of good security practices. Raised awareness and training on the capabilities and benefits of AI technologies, as well as the associated risks, are paramount; this instils the new area into the security culture that we all live and breathe. The right level of innovative spirit, security awareness, and a learning mindset will guide us on the continued journey, where we will remain a responsible actor and custodian of customer trust.
Author: Predrag Mitrovic, Information Security Officer