The AI governance problem just landed on your desk
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, with phased applicability through 2026 and 2027. The first obligations — the ban on unacceptable-risk AI practices — applied from February 2025, and transparency obligations for general-purpose AI models followed in August 2025. Requirements for high-risk AI systems take full effect in August 2026.
If you are a CISO, you are now the de facto owner of AI governance in most organizations. Not because the regulation says so, but because:
- You already manage risk classification frameworks.
- You already run technical assessments of systems before deployment.
- You already have vendor risk management for SaaS providers.
- You already produce evidence for auditors.
These are exactly the operational primitives the AI Act expects organizations to apply to AI systems. AI governance is mostly an extension of existing security and risk practice — provided you do it deliberately rather than by accident.
The risk-tier model
The AI Act classifies AI systems into four tiers, with very different obligations:
- Unacceptable risk — banned outright. Examples: social scoring by public authorities, real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools, AI manipulating behavior through subliminal techniques.
- High risk — significant obligations. Examples: AI in critical infrastructure, education and vocational training, employment and worker management, law enforcement, migration and border control, administration of justice, certain biometric categorization, certain medical devices.
- Limited risk — transparency obligations. Examples: chatbots, emotion recognition systems (where allowed), AI-generated content (deepfakes must be labeled).
- Minimal risk — no specific obligations. Most internal productivity AI usage falls here.
The first practical task for any CISO is to classify the AI systems in use across the organization. Most CISOs are surprised to discover that their organization is using considerably more AI than the C-suite realizes — including embedded AI in SaaS products that may now be in scope.
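That inventory-and-classification step can start as a simple typed register rather than a GRC platform purchase. A minimal sketch in Python — the system names, vendors, and tier assignments below are hypothetical examples, not a classification methodology:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    vendor: str
    use_case: str
    tier: RiskTier
    embedded_in_saas: bool = False  # AI shipped inside a SaaS product you already run

# Hypothetical entries for illustration only
inventory = [
    AISystem("resume-screener", "internal", "employment", RiskTier.HIGH),
    AISystem("support-chatbot", "VendorX", "customer service",
             RiskTier.LIMITED, embedded_in_saas=True),
    AISystem("code-autocomplete", "VendorY", "developer productivity",
             RiskTier.MINIMAL, embedded_in_saas=True),
]

def by_tier(items: list[AISystem]) -> dict[RiskTier, list[str]]:
    """Group system names by risk tier — the view a board or auditor asks for first."""
    out: dict[RiskTier, list[str]] = {}
    for s in items:
        out.setdefault(s.tier, []).append(s.name)
    return out
```

Even this much surfaces the embedded-SaaS AI that tends to be invisible to the C-suite, and it gives every later control a stable anchor (the tier) to hang off.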
High-risk obligations (the part that drives spending)
For high-risk AI systems, the Act imposes a substantial control set:
- Risk management system throughout the AI lifecycle.
- Data governance — training, validation, and testing data must meet quality criteria.
- Technical documentation demonstrating compliance.
- Record-keeping — automatic logging of events.
- Transparency and information to deployers.
- Human oversight — measures to enable effective human supervision.
- Accuracy, robustness, and cybersecurity — appropriate level for intended purpose.
- Conformity assessment — internal control or third-party assessment depending on AI category.
- CE marking for products before placement on the market.
- Post-market monitoring system.
Many of these obligations map onto existing capabilities — risk management, technical documentation, logging, monitoring — extended to AI. The genuinely new obligations are conformity assessment, CE marking (for AI providers), and the AI-specific data governance requirements.
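The record-keeping obligation is a good example of extending an existing capability: structured logging you already run, applied to AI inference events. A minimal sketch — field names are illustrative, not mandated by the Act; hashing content rather than storing it keeps the trail tamper-evident without duplicating sensitive data:

```python
import hashlib
import json
import tempfile
import time
import uuid
from pathlib import Path

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def log_inference(log_path: Path, model_id: str, user: str, purpose: str,
                  prompt: str, completion: str) -> str:
    """Append one AI inference event as a JSON line (append-only record-keeping)."""
    event = {
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "model_id": model_id,
        "user": user,
        "purpose": purpose,
        # Store digests, not content: the log proves what happened
        # without becoming a second copy of the sensitive data.
        "prompt_sha256": sha256(prompt),
        "completion_sha256": sha256(completion),
    }
    with log_path.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return event["event_id"]

# Example usage
log = Path(tempfile.mkdtemp()) / "ai_audit.jsonl"
eid = log_inference(log, "llama3-onprem", "alice", "contract-review",
                    "Summarise clause 4.2", "Clause 4.2 limits liability to ...")
```

The same JSON-lines stream can feed whatever SIEM already collects your other audit evidence.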
ISO 42001 — the management-system answer
ISO/IEC 42001:2023 is the management-system standard for AI. Published in late 2023, it is the AI equivalent of ISO 27001 — a framework for systematically governing AI development and deployment, with certification possible through accredited bodies.
For most security organizations, the strategic question is: do we extend ISO 27001 to cover AI, or do we adopt ISO 42001 as a separate management system?
The answer depends on how AI-intensive the organization is:
- AI is incidental to operations: extend the existing ISO 27001 ISMS to cover AI as a category of information system. Add AI-specific risks to the risk register, AI-specific clauses to relevant policies. No separate management system needed.
- AI is central to product or operations: stand up an ISO 42001 management system in parallel with ISO 27001. They share most foundational elements but have distinct scopes, controls, and audiences. Certification carries credibility with EU regulators and increasingly with enterprise customers.
The on-premise AI pattern
The most operationally relevant CISO decision in 2026 is where AI processes data.
Public cloud AI services — OpenAI, Anthropic, Google, AWS Bedrock — are extraordinarily capable but introduce three issues:
- Data egress. Sensitive content (contracts, policies, customer data, source code) leaves your control boundary. This intersects GDPR (Article 28 data processing), DORA (third-party ICT risk), NIS2 (supply-chain security), and AI Act transparency obligations.
- Provider lock-in and concentration risk. Many regulated entities are now formally tracking concentration risk on AI providers, treating them as critical ICT third parties.
- Auditability. When an external API processes your data, the audit trail is partial and lives at the provider.
The alternative — operationalizing on-premise or air-gapped LLMs through frameworks like Ollama or vLLM — has matured rapidly. Open-weight models (Llama 3.x, Qwen, DeepSeek) have closed much of the capability gap with frontier closed models for most enterprise use cases: document generation, gap analysis, contract review, summarization. For highly capable general reasoning, the closed models still lead — but the gap is narrowing each quarter.
A practical 2026 architecture for many regulated CISOs:
- On-premise AI for any process touching sensitive data — policies, contracts, compliance evidence, customer support content, internal documents.
- Public AI (with vendor agreements and DPIA) for non-sensitive workflows — research, code assistance with appropriate filters, marketing.
- Outright prohibitions on public AI for specific data categories — clearly enumerated in policy and enforced through DLP.
This pattern is increasingly required for entities under DORA and NIS2, and it is favorable under the AI Act because it lets you meet the auditability and human-oversight obligations natively rather than through a provider's attestations.
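As a sketch of the on-premise side of that architecture: Ollama exposes a local REST API, so sensitive prompts can be processed without ever leaving the network boundary. This assumes an Ollama instance on its default port (`11434`) with a `llama3` model already pulled — adjust both to your deployment:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "llama3",
                  host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a non-streaming generate request for a locally hosted model."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def local_generate(prompt: str, **kwargs) -> str:
    """Send a prompt to the local model; data stays inside your control boundary."""
    with urllib.request.urlopen(build_request(prompt, **kwargs)) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance):
# summary = local_generate("Summarise the liability clause in this DPA: ...")
```

Because the endpoint is yours, every call can pass through the audit logging above, and the egress, lock-in, and auditability issues with public APIs simply do not arise for this workload class.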
How to operationalize without slowing down
Common CISO failure modes around AI governance:
- Block everything until policy exists. Slows innovation, drives shadow AI usage.
- Approve everything because the business demands it. Risk accumulates silently and eventually surfaces as a regulatory surprise.
- Build extensive process for low-risk usage. Burns resources on the wrong end of the risk curve.
The pragmatic alternative — proportionate to risk tier:
- For minimal-risk internal usage (writing assistance, basic productivity): publish an acceptable-use policy, train staff, do not gate.
- For limited-risk customer-facing usage (chatbots, content generation): require transparency, basic logging, and a documented model card.
- For high-risk usage as defined by the Act: require full risk-management process, technical documentation, human oversight design, and pre-deployment review.
- For unacceptable-risk usage: prohibit, log, and monitor for shadow attempts.
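That proportionate gating can be encoded directly, so the review burden scales with the risk tier rather than with the volume of requests. A minimal sketch — the control names are illustrative shorthand, not quoted from the Act:

```python
# Controls required before deployment, by risk tier (illustrative names).
REQUIRED_CONTROLS: dict[str, list[str]] = {
    "minimal": ["acceptable-use-policy", "staff-training"],
    "limited": ["transparency-notice", "basic-logging", "model-card"],
    "high": ["risk-management", "technical-documentation",
             "human-oversight-design", "pre-deployment-review"],
    "unacceptable": [],  # prohibited outright, no control set applies
}

def deployment_decision(tier: str, controls_in_place: set[str]) -> str:
    """Approve, block, or prohibit an AI use case based on its tier and evidence."""
    if tier == "unacceptable":
        return "prohibit"
    missing = [c for c in REQUIRED_CONTROLS[tier] if c not in controls_in_place]
    if missing:
        return f"blocked: missing {missing}"
    return "approve"
```

The point of the sketch is the shape: minimal-risk requests pass with two standing controls, while high-risk requests cannot pass until the full evidence set exists — the gate enforces proportionality by construction.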
What an inspector or board will ask
The questions both supervisors and audit committees are asking in 2026:
- Do you have an inventory of AI systems in use, classified by AI Act risk tier?
- Do you have an AI governance policy, board-approved within the last 12 months?
- Have you assessed third-party AI providers as ICT third parties (DORA / NIS2)?
- Where do high-risk AI systems process personal data, and how does that comply with GDPR?
- What is your incident response procedure for AI-specific failures (hallucination causing harm, model drift, prompt injection)?
- How do you ensure human oversight where required?
- What is your training programme for staff using AI?
If you can confidently answer all seven, you are in good standing for both the August 2026 high-risk obligations and the broader regulatory direction of travel.
Closing thought
The EU AI Act is not the existential compliance burden it is sometimes presented as. For minimal- and limited-risk AI — which is the vast majority of organizational AI usage — the obligations are modest and largely covered by existing security practice. For high-risk AI, the obligations are real but operationally tractable when treated as an extension of an existing ISMS rather than a separate compliance project.
The CISOs who are struggling with AI governance in 2026 are those who never started. The ones who are succeeding started with two simple steps in 2024: an AI inventory, and a written acceptable-use policy. Everything else has been incremental from there.
