March 18, 2026
10 min read

EU AI Act for Security Teams: AI Governance Without Slowing Down

The AI Act imposes obligations on the deployers and providers of high-risk AI systems — and security teams are increasingly the function that has to operationalize them. A CISO playbook for AI governance that does not throttle product velocity.

GRCEye Team

The AI governance problem just landed on your desk

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 with phased applicability through 2026 and 2027. The first waves of obligations already apply: the prohibitions on unacceptable-risk AI practices since February 2025, and the rules for general-purpose AI models since August 2025. The core requirements for high-risk AI systems take effect from August 2026.

If you are a CISO, you are now the de facto owner of AI governance in most organizations. Not because the regulation says so, but because:

  • You already manage risk classification frameworks.
  • You already run technical assessments of systems before deployment.
  • You already have vendor risk management for SaaS providers.
  • You already produce evidence for auditors.

These are exactly the operational primitives the AI Act expects organizations to apply to AI systems. AI governance is mostly an extension of existing security and risk practice — provided you do it deliberately rather than by accident.

The risk-tier model

The AI Act classifies AI systems into four tiers, with very different obligations:

  • Unacceptable risk — banned outright. Examples: social scoring by public authorities, real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools, AI manipulating behavior through subliminal techniques.
  • High risk — significant obligations. Examples: AI in critical infrastructure, education and vocational training, employment and worker management, law enforcement, migration and border control, administration of justice, certain biometric categorization, certain medical devices.
  • Limited risk — transparency obligations. Examples: chatbots, emotion recognition systems (where allowed), AI-generated content (deepfakes must be labeled).
  • Minimal risk — no specific obligations. Most internal productivity AI usage falls here.

The first practical task for any CISO is to classify the AI systems in use across the organization. Most CISOs are surprised to discover that their organization is using considerably more AI than the C-suite realizes — including embedded AI in SaaS products that may now be in scope.
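
In practice the inventory does not need a dedicated tool to get started. Below is a minimal Python sketch of what a classified AI register can look like; the field names, example systems, and vendor labels are illustrative, not prescribed by the Act.

  from dataclasses import dataclass
  from enum import Enum

  class RiskTier(Enum):
      UNACCEPTABLE = "unacceptable"  # prohibited practices
      HIGH = "high"                  # Annex III use cases, regulated products
      LIMITED = "limited"            # transparency obligations
      MINIMAL = "minimal"            # no specific obligations

  @dataclass
  class AISystemRecord:
      name: str
      owner: str                      # accountable business owner
      vendor: str                     # "internal" for in-house systems
      purpose: str                    # intended purpose drives classification
      processes_personal_data: bool
      risk_tier: RiskTier
      embedded_in_saas: bool = False  # AI features inside an existing SaaS product

  # Hypothetical entries for illustration only
  inventory = [
      AISystemRecord("cv-screening-assistant", "HR", "vendor-x",
                     "shortlisting job applicants", True, RiskTier.HIGH),
      AISystemRecord("support-chatbot", "Customer Ops", "internal",
                     "answering customer FAQs", False, RiskTier.LIMITED),
      AISystemRecord("meeting-summarizer", "IT", "vendor-y",
                     "summarizing internal meetings", False, RiskTier.MINIMAL,
                     embedded_in_saas=True),
  ]

  high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]

Even a register this simple answers the first question a supervisor will ask, and it makes the embedded-SaaS blind spot visible.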

High-risk obligations (the part that drives spending)

For high-risk AI systems, the Act imposes a substantial control set:

  • Risk management system throughout the AI lifecycle.
  • Data governance — training, validation, and testing data must meet quality criteria.
  • Technical documentation demonstrating compliance.
  • Record-keeping — automatic logging of events.
  • Transparency and information to deployers.
  • Human oversight — measures to enable effective human supervision.
  • Accuracy, robustness, and cybersecurity — appropriate level for intended purpose.
  • Conformity assessment — internal control or third-party assessment depending on AI category.
  • CE marking for products before placement on the market.
  • Post-market monitoring system.

Many of these obligations map onto existing capabilities — risk management, technical documentation, logging, monitoring — extended to AI. The genuinely new obligations are conformity assessment, CE marking (for AI providers), and the AI-specific data governance requirements.
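
Record-keeping is the obligation most security teams can prototype fastest, because it is ordinary structured logging pointed at AI events. A minimal sketch, assuming a JSON-lines audit log; the field names are illustrative rather than mandated.

  import json
  import logging
  import uuid
  from datetime import datetime, timezone

  logger = logging.getLogger("ai_audit")
  logging.basicConfig(level=logging.INFO)

  def log_ai_event(system_id: str, user: str, action: str,
                   model_version: str, outcome: str) -> None:
      """Append a structured, timestamped record of an AI system event."""
      event = {
          "event_id": str(uuid.uuid4()),
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "system_id": system_id,          # matches the AI inventory entry
          "model_version": model_version,  # supports post-market monitoring
          "user": user,                    # supports human-oversight review
          "action": action,
          "outcome": outcome,
      }
      logger.info(json.dumps(event))

  # Example: recording an automated screening decision for later review
  log_ai_event("cv-screening-assistant", "recruiter@example.com",
               "candidate_scored", "v2.3.1", "flagged_for_human_review")

Shipping these records to the same SIEM that already holds your security telemetry keeps the evidence trail in one place.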

ISO 42001 — the management-system answer

ISO/IEC 42001:2023 is the management-system standard for AI. Published in late 2023, it is the AI equivalent of ISO 27001 — a framework for systematically governing AI development and deployment, with certification possible through accredited bodies.

For most security organizations, the strategic question is: do we extend ISO 27001 to cover AI, or do we adopt ISO 42001 as a separate management system?

The answer depends on how AI-intensive the organization is:

  • AI is incidental to operations: extend the existing ISO 27001 ISMS to cover AI as a category of information system. Add AI-specific risks to the risk register and AI-specific clauses to relevant policies. No separate management system needed.
  • AI is central to product or operations: stand up an ISO 42001 management system in parallel with ISO 27001. They share most foundational elements but have distinct scopes, controls, and audiences. Certification carries credibility with EU regulators and increasingly with enterprise customers.

The on-premise AI pattern

The most operationally relevant CISO decision in 2026 is where AI processes data.

Public cloud AI services — OpenAI, Anthropic, Google, AWS Bedrock — are extraordinarily capable but introduce three issues:

  1. Data egress. Sensitive content (contracts, policies, customer data, source code) leaves your control boundary. This intersects GDPR (Article 28 data processing), DORA (third-party ICT risk), NIS2 (supply-chain security), and AI Act transparency obligations.
  2. Provider lock-in and concentration risk. Many regulated entities are now formally tracking concentration risk on AI providers, treating them as critical ICT third parties.
  3. Auditability. When an external API processes your data, the audit trail is partial and lives at the provider.

The alternative — operationalizing on-premise or air-gapped LLMs through frameworks like Ollama or vLLM — has matured rapidly. Open-weight models (Llama 3.x, Qwen, DeepSeek) have closed much of the capability gap with frontier closed models for most enterprise use cases: document generation, gap analysis, contract review, summarization. For highly capable general reasoning, the closed models still lead — but the gap is narrowing each quarter.
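
To illustrate how low the barrier has become, the sketch below sends a document to a locally running Ollama instance for summarization. It assumes Ollama is installed on the host and a model such as llama3 has already been pulled; the content never leaves your control boundary.

  import requests

  OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

  def summarize_locally(document: str, model: str = "llama3") -> str:
      """Summarize a sensitive document with a self-hosted model."""
      resp = requests.post(OLLAMA_URL, json={
          "model": model,
          "prompt": f"Summarize the following internal document:\n\n{document}",
          "stream": False,
      }, timeout=120)
      resp.raise_for_status()
      return resp.json()["response"]

  # Example usage with a placeholder contract excerpt
  print(summarize_locally("This Data Processing Agreement is made between ..."))

The same pattern works against a vLLM deployment, which exposes an OpenAI-compatible HTTP endpoint.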

A practical 2026 architecture for many regulated CISOs:

  • On-premise AI for any process touching sensitive data — policies, contracts, compliance evidence, customer support content, internal documents.
  • Public AI (with vendor agreements and DPIA) for non-sensitive workflows — research, code assistance with appropriate filters, marketing.
  • Outright prohibitions on public AI for specific data categories — clearly enumerated in policy and enforced through DLP.

This pattern is increasingly expected of entities under DORA and NIS2, and it works in your favor under the AI Act because it lets you meet the auditability and human-oversight obligations natively.
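
The split is easy to state in policy and just as easy to encode. Below is a sketch of the routing decision with illustrative data-classification labels; in production the same logic usually lives in an AI gateway or DLP policy rather than in application code.

  from enum import Enum

  class DataClass(Enum):
      PUBLIC = 1
      INTERNAL = 2
      CONFIDENTIAL = 3  # contracts, policies, compliance evidence
      RESTRICTED = 4    # customer personal data, source code

  # Illustrative policy: permitted AI destination per data classification
  ROUTING_POLICY = {
      DataClass.PUBLIC: "public_api",        # vendor agreement and DPIA in place
      DataClass.INTERNAL: "public_api",
      DataClass.CONFIDENTIAL: "on_premise",  # self-hosted models only
      DataClass.RESTRICTED: "prohibited",    # enumerated in policy, enforced via DLP
  }

  def route_ai_request(data_class: DataClass) -> str:
      destination = ROUTING_POLICY[data_class]
      if destination == "prohibited":
          raise PermissionError(f"AI processing not permitted for {data_class.name} data")
      return destination

  print(route_ai_request(DataClass.CONFIDENTIAL))  # -> on_premise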

How to operationalize without slowing down

Common CISO failure modes around AI governance:

  • Block everything until policy exists. Slows innovation, drives shadow AI usage.
  • Approve everything because the business demands it. Risk accumulates silently until a regulator or auditor surfaces it.
  • Build extensive process for low-risk usage. Burns resources on the wrong end of the risk curve.

The pragmatic alternative — proportionate to risk tier:

  • For minimal-risk internal usage (writing assistance, basic productivity): publish an acceptable-use policy, train staff, do not gate.
  • For limited-risk customer-facing usage (chatbots, content generation): require transparency, basic logging, and a documented model card.
  • For high-risk usage as defined by the Act: require full risk-management process, technical documentation, human oversight design, and pre-deployment review.
  • For unacceptable-risk usage: prohibit, log, and monitor for shadow attempts.
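
The same proportionality can be encoded as a simple gating table so that review effort tracks risk rather than volume. The control names below are illustrative shorthand for the processes described above, not a canonical list.

  # Illustrative mapping from AI Act risk tier to required pre-deployment controls
  REQUIRED_CONTROLS = {
      "minimal": ["acceptable_use_policy", "staff_training"],
      "limited": ["acceptable_use_policy", "staff_training",
                  "transparency_notice", "basic_logging", "model_card"],
      "high": ["risk_management_process", "technical_documentation",
               "human_oversight_design", "pre_deployment_review",
               "event_logging", "post_market_monitoring"],
      "unacceptable": None,  # prohibited: block, log, and monitor for shadow use
  }

  def deployment_gate(risk_tier: str, completed_controls: set[str]) -> bool:
      """Return True if the system may proceed to deployment."""
      required = REQUIRED_CONTROLS[risk_tier]
      if required is None:
          return False
      missing = set(required) - completed_controls
      if missing:
          print(f"Blocked: missing controls {sorted(missing)}")
          return False
      return True

  deployment_gate("limited", {"acceptable_use_policy", "staff_training",
                              "transparency_notice", "basic_logging", "model_card"})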

What an inspector or board will ask

The questions both supervisors and audit committees are asking in 2026:

  1. Do you have an inventory of AI systems in use, classified by AI Act risk tier?
  2. Do you have an AI governance policy, board-approved within the last 12 months?
  3. Have you assessed third-party AI providers as ICT third parties (DORA / NIS2)?
  4. Where do high-risk AI systems process personal data, and how does that comply with GDPR?
  5. What is your incident response procedure for AI-specific failures (hallucination causing harm, model drift, prompt injection)?
  6. How do you ensure human oversight where required?
  7. What is your training programme for staff using AI?

If you can confidently answer all seven, you are in good standing for both the August 2026 high-risk obligations and the broader regulatory direction of travel.

Closing thought

The EU AI Act is not the existential compliance burden it is sometimes presented as. For minimal- and limited-risk AI — which is the vast majority of organizational AI usage — the obligations are modest and largely covered by existing security practice. For high-risk AI, the obligations are real but operationally tractable when treated as an extension of an existing ISMS rather than a separate compliance project.

The CISOs who are struggling with AI governance in 2026 are those who never started. The ones who are succeeding started with two simple steps in 2024: an AI inventory, and a written acceptable-use policy. Everything else has been incremental from there.

Frequently asked questions

When does the EU AI Act apply?

The Act entered into force on 1 August 2024. Prohibitions on unacceptable-risk AI practices apply from 2 February 2025. Obligations for general-purpose AI models apply from 2 August 2025. Most other obligations, including those for high-risk AI systems, apply from 2 August 2026. A small set of high-risk system obligations (those embedded in regulated products) apply from 2 August 2027.

What is ISO 42001?

ISO/IEC 42001:2023 is the international management-system standard for artificial intelligence. It provides a framework for systematically governing AI development and deployment, analogous to ISO 27001 for information security. Certification is available through accredited bodies and is increasingly recognized by EU regulators and enterprise customers.

Does the AI Act apply to companies outside the EU?

Yes, in two main cases: (1) when an AI system is placed on the EU market, regardless of where the provider or deployer is established; and (2) when the output produced by an AI system is used in the EU, even if both provider and deployer sit outside it. The territorial reach is similar in spirit to GDPR: what matters is whether the system or its output reaches the EU, not where your organization operates.

Is on-premise AI required for compliance?

No, but it is increasingly preferred for sensitive use cases. Public AI services can be used compliantly with appropriate vendor agreements, DPIAs, and contractual safeguards. However, for entities under DORA, NIS2, or GDPR processing sensitive data, on-premise or self-hosted AI simplifies third-party risk, data egress, and auditability obligations significantly.

Who is responsible for AI governance — security, legal, or product?

Optimally a cross-functional structure: security leads risk classification and technical controls; legal leads regulatory interpretation and contracts; product leads use-case decisions and human oversight design; data privacy leads DPIAs and data governance. In practice the CISO often becomes the orchestrating function because the operational primitives (risk frameworks, vendor management, audit evidence) sit there.

Ready to operationalize this?

GRCEye gives security teams a single platform for risk, compliance, audit, vendor risk, and policy — with AI that runs on your own infrastructure.