When AI moves into the control plane, security must change
Welcome back to the tech blog. This time I will look at how AI, and eventually AGI (Artificial General Intelligence), are changing the foundations of cybersecurity, and what that means for protecting customer data, trust, and resilience.
Much of the current discussion around AI focuses on productivity, automation, and efficiency. Those aspects are important, but they are not the most consequential from a security perspective. What matters more is where AI is being embedded. Increasingly, it is becoming part of the control layers of the platforms we depend on, including identity, access, analytics, decision support, and security tooling itself. This shift fundamentally changes how trust must be established and maintained.
From data protection to control protection
Cybersecurity has traditionally focused on protecting data: where it resides, who can access it, and how it is encrypted. These concerns remain essential, but they are no longer sufficient on their own.
AI systems do more than process information. They infer, optimise, prioritise, and influence behaviour. They determine which signals are considered relevant, which actions are taken, and how systems adapt over time. As AI becomes embedded into security-relevant platforms, the central question shifts from where data is stored to who controls system behaviour.
From a cybersecurity perspective, control equals trust. If control over how systems behave, update, or make decisions is lost or obscured, then a security risk exists, even if no data has been exfiltrated.
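One small illustration of what retaining control can mean in practice is a deployment gate that refuses behaviour-changing updates it cannot attribute to a trusted signer. This is a minimal sketch, not a description of any production pipeline: the key registry is hypothetical, and a real deployment would use asymmetric signatures backed by an HSM or a managed key service rather than a shared-secret HMAC.

```python
import hashlib
import hmac

# Hypothetical registry of keys trusted to sign updates.
# In practice this would live in an HSM or managed key service,
# and verification would use asymmetric signatures.
TRUSTED_SIGNING_KEYS = {
    "model-release-key": b"placeholder-secret",
}

def verify_update(artifact: bytes, signature: bytes, key_id: str) -> bool:
    """Accept an update only if it was signed by a key we control.

    Losing the ability to answer "who produced this behaviour change?"
    is itself a security risk, even if no data was touched.
    """
    key = TRUSTED_SIGNING_KEYS.get(key_id)
    if key is None:
        return False  # unknown signer: control is not established
    expected = hmac.new(key, artifact, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

def apply_update(artifact: bytes, signature: bytes, key_id: str) -> None:
    if not verify_update(artifact, signature, key_id):
        raise PermissionError(
            "Unverified update rejected: control over system "
            "behaviour could not be confirmed"
        )
    # ... proceed to stage, test, and roll out the artifact ...
```

The point is not the mechanism but the posture: a behaviour change whose origin cannot be verified is treated as a security event, regardless of whether any data moved.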
AI, AGI, and the limits of static trust
As AI capabilities advance, several long-standing security assumptions begin to break down. Static trust models, where systems are approved once and assumed to behave predictably, no longer reflect reality.
Advanced AI systems change behaviour through frequent updates, operate across platforms and jurisdictions, influence access decisions and prioritisation, and increasingly act autonomously or semi-autonomously.
As we move towards more general AI capabilities, this trend accelerates. Artificial General Intelligence refers to AI systems that are not limited to narrow tasks, but can reason, learn, and adapt across a wide range of domains. While AGI may still be some distance away, the direction of change already challenges how we think about control, accountability, and assurance models.
In this environment, trust cannot be implicit. It must be continuously established, continuously verified, and continuously monitored.
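To make continuous verification concrete, here is a minimal sketch of a per-request trust evaluation in the spirit of zero-trust designs. The signal names, freshness window, and anomaly threshold are illustrative assumptions, not a description of any production system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RequestContext:
    # Illustrative signals; real deployments would draw these from
    # identity providers, device management, and behavioural analytics.
    identity_verified_at: datetime  # timezone-aware timestamp
    device_compliant: bool
    anomaly_score: float  # 0.0 (normal) .. 1.0 (highly anomalous)

# Hypothetical policy parameters.
MAX_IDENTITY_AGE = timedelta(minutes=15)
ANOMALY_THRESHOLD = 0.7

def is_trusted(ctx: RequestContext) -> bool:
    """Evaluate trust on every request, not once at approval time."""
    now = datetime.now(timezone.utc)
    fresh_identity = now - ctx.identity_verified_at <= MAX_IDENTITY_AGE
    behaving_normally = ctx.anomaly_score < ANOMALY_THRESHOLD
    return fresh_identity and ctx.device_compliant and behaving_normally
```

The specific thresholds matter less than the shape: trust decays with time and context, and is re-established on every interaction rather than granted once.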
Protecting customer data means protecting the whole system
Customer data does not exist in isolation. It flows through identities, platforms, APIs, analytics pipelines, and increasingly through AI-driven components. When AI influences these flows, protecting customer data requires more than access controls and encryption.
This requires strong identity and access assurance; clear accountability for automated decisions; transparency into system behaviour; the ability to intervene, constrain, or disengage when necessary; and resilience when dependencies fail or change.
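As one way to picture several of these properties in code, the sketch below wraps an automated decision in an audit record and a constrained fallback path. The model interface, confidence threshold, and human-review queue are hypothetical placeholders, not any particular system's design.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("decision-audit")

CONFIDENCE_FLOOR = 0.9  # illustrative threshold; below it, a human decides

def decide_with_accountability(model, request: dict) -> dict:
    """Make an automated decision that stays attributable and reversible."""
    score = model.score(request)  # hypothetical model interface
    decision = {
        "request_id": request["id"],
        "score": score,
        "automated": score >= CONFIDENCE_FLOOR,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),
    }
    # Transparency: every automated decision leaves a reviewable trace.
    logger.info(json.dumps(decision))
    if not decision["automated"]:
        # Ability to constrain or disengage: route low-confidence cases
        # to human review rather than letting the model act alone.
        decision["route"] = "human_review_queue"
    return decision
```

The design choice worth noting is that accountability is built into the decision path itself, rather than reconstructed afterwards from whatever logs happen to exist.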
Customers trust us not only to keep their data secure, but to handle it responsibly and predictably. Cybersecurity therefore becomes a question of system integrity, controllability, and resilience, not just confidentiality.
How SEB approaches AI and cybersecurity
At SEB, we approach AI with both ambition and discipline. We recognise its potential to strengthen security, resilience, and customer experience, while also acknowledging the responsibilities that come with increased automation and intelligence.
Our approach is grounded in strong control, continuous verification, and resilience by design. As intelligence moves deeper into platforms, maintaining transparency, accountability, and operational predictability becomes increasingly important. This is not about limiting innovation but about ensuring it reinforces trust.
My concluding thoughts
AI does not reduce our responsibility for cybersecurity. It increases it. As systems become more capable, the consequences of losing control grow more significant. That is why cybersecurity must evolve from static assurance to continuous verification.
I am optimistic that with the right focus on control, transparency, and trust, AI can strengthen security rather than weaken it. However, that outcome is not automatic. It must be deliberately designed and actively maintained.
The real question is not whether AI will change cybersecurity. It already has. The question is whether we are prepared for what that change truly means.
Author
Ulf Larsson
SEB Group Security CTO