Shadow AI

Welcome back to the SEB tech blog! This time, I will explore a growing concern for security, compliance, and governance teams: Shadow AI.
We have long dealt with Shadow IT: tools and systems adopted without official oversight. Now, as advanced technologies become more accessible, we are seeing a new challenge emerge: unapproved or unmanaged use of artificial intelligence across organisations. This new frontier brings greater potential, and greater risk.
What Is Shadow AI?
Shadow AI refers to AI tools, models, or services used inside organisations without proper oversight or approval. It could be as simple as someone uploading internal data into ChatGPT or as advanced as teams building machine learning models using third-party platforms. And it is growing fast.
With tools like GPT-4, Claude, and open-source LLMs more accessible than ever, it takes very little to start building models, but the risks can be significant.
Why It Matters
Shadow AI introduces real risk to the business:
- Data exposure: sensitive data entered into AI tools may be stored or shared in ways you cannot control
- Lack of accountability: if an AI-generated recommendation causes harm, who is responsible?
- No traceability: if a model influences decisions, can we even explain how it got there?
Regulators are taking notice, and the implications are serious.
The AI Act & Compliance Pressure
The EU AI Act is one of the first major efforts to regulate AI based on risk levels. It emphasises:
- Transparency and explainability
- Data governance and traceability
- Registration of high-risk AI systems
Shadow AI stands in direct conflict with these principles, and could expose businesses to non-compliance, fines, or reputational damage.
Why Data Lineage Is Critical
At the heart of trustworthy AI lies data lineage: the ability to track where data came from, how it was used, and by whom. Without it, you risk:
- Legal exposure from using sensitive or restricted data
- Biases or hallucinations that go undetected
- Poor decisions based on flawed or outdated data
Shadow AI lacks this visibility, and that is a major problem.
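To make the idea concrete, here is a minimal sketch of what a lineage record could capture. The LineageRecord class and its field names are illustrative assumptions for this post, not an SEB standard or any particular product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Illustrative lineage entry: one record per use of a dataset by an AI tool."""
    dataset: str   # where the data came from, e.g. "customer_support_tickets"
    used_by: str   # who accessed it: a user or service identity
    purpose: str   # what it was used for
    tool: str      # which AI tool or model consumed it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: recording that internal data was sent to an external LLM
record = LineageRecord(
    dataset="customer_support_tickets",
    used_by="analyst@example.com",
    purpose="draft response generation",
    tool="external-llm-api",
)
print(record)
```

Even a record this simple answers the three questions above: what data, used by whom, and for what. Shadow AI, by definition, produces no such trail.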
The Rise of AI Agents
Making things more complex is the rise of AI agents: tools that can take actions, connect systems, and make decisions on their own.
These agents are powerful, but without oversight they become autonomous risk engines, operating outside governance, on unchecked data, and without human-in-the-loop control.
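One common mitigation is to keep a human in the loop before an agent-proposed action runs. The toy sketch below shows the shape of such a gate; execute_agent_action and the approver callback are invented for illustration, and in practice the approval step might be a review queue or ticketing workflow:

```python
from typing import Callable

def execute_agent_action(action: str, approver: Callable[[str], bool]) -> str:
    """Require explicit human sign-off before an agent-proposed action runs."""
    if not approver(action):
        return f"rejected: {action}"
    return f"executed: {action}"

# Example: the lambdas stand in for a real human decision
print(execute_agent_action("send summary email to client", lambda a: False))
print(execute_agent_action("draft internal report", lambda a: True))
```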
What Can We Do?
To manage Shadow AI without stifling innovation, organisations can:
- Know what is out there: keep track of how and where AI is being used internally
- Set clear guardrails: define what is okay, what is not, and provide secure alternatives
- Support responsible use: educate teams and promote transparency
- Build in visibility: establish processes for tracking data, models, and usage (a minimal sketch of what this could look like follows this list)
- Stay ahead of regulation: start aligning with AI Act principles now, not later
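As a thumbnail of what guardrails and visibility could look like in practice, the sketch below checks an AI tool against an internal allow-list and logs every call, approved or not. The APPROVED_TOOLS set, the tool names, and the log format are assumptions for illustration, not SEB policy:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-usage")

# Hypothetical internal allow-list of vetted AI tools
APPROVED_TOOLS = {"internal-llm", "approved-translation-api"}

def request_ai_tool(tool: str, user: str, contains_sensitive_data: bool) -> bool:
    """Gate an AI call: enforce the allow-list, block sensitive data, log everything."""
    if tool not in APPROVED_TOOLS:
        log.warning("BLOCKED unapproved tool=%s user=%s", tool, user)
        return False
    if contains_sensitive_data:
        log.warning("BLOCKED sensitive data tool=%s user=%s", tool, user)
        return False
    log.info("ALLOWED tool=%s user=%s", tool, user)
    return True

# Example: the unapproved tool is blocked, and the attempt is visible to governance
request_ai_tool("external-chatbot", "analyst@example.com", contains_sensitive_data=False)
request_ai_tool("internal-llm", "analyst@example.com", contains_sensitive_data=False)
```

The point is less the code than the pattern: a single, logged path to AI tools gives you an inventory, an enforcement point, and an audit trail all at once.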
SEB’s Perspective
At SEB, we are excited by the possibilities AI brings, and we are seeing strong interest across the organisation. While we take the risks seriously, we believe that with the right awareness and support, AI can be used in a way that is both innovative and responsible.
Shadow AI is part of the learning curve, and to us, it signals a healthy appetite for experimentation and progress.
Ulf Larsson, SEB Group Security CTO