AI Governance and the EU AI Act
Practical governance steps for teams shipping AI products in Europe.
Why governance matters before launch, not after
AI governance is not a legal exercise; it is an operational one. The teams that get burned are the ones that treat compliance as a document produced after launch rather than as a set of decisions that shape the product architecture from day one. The EU AI Act formalizes this: by 2 August 2026, most covered systems face binding obligations that depend on choices made at design time, not retrofitted later.
The Act classifies AI systems by risk — unacceptable, high, limited, and minimal. Most B2B AI products land in limited or high risk. The difference between those two categories is expensive. High-risk classification pulls in conformity assessments, documentation obligations, post-market monitoring, and human-oversight requirements. Limited-risk systems get transparency obligations only. Knowing which category applies to your product is the first governance decision you make.
Even if you ship outside the EU, the Act is worth treating as the baseline. Most enterprise buyers procuring AI in 2026 will ask for EU AI Act alignment in their vendor questionnaires. "We're not in scope" is not a competitive answer.
EU AI Act timeline
- 1 August 2024: the Act enters into force.
- 2 February 2025: bans on unacceptable-risk practices apply, along with AI-literacy obligations.
- 2 August 2025: obligations for general-purpose AI models and the penalty regime apply.
- 2 August 2026: most remaining obligations apply, including those for high-risk systems listed under Annex III.
- 2 August 2027: obligations apply to high-risk AI embedded in products already covered by EU product legislation (Annex I).
Operational checklist
- Assign one accountable AI owner and one legal owner.
- Inventory AI use-cases by risk and business impact.
- Document training data origin, model version, and deployment scope.
- Establish human-override rules for high-impact decisions.
- Create audit logs for prompts, outputs, and user actions (a minimal sketch follows this list).
- Define incident response for harmful outputs and policy breaches.
- Review every deployment against legal and internal policy checklists.
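The audit-log item is the one teams most often under-specify. Below is a minimal sketch of an append-only log record in Python; the schema and field names are illustrative assumptions, since the Act requires logging for high-risk systems but does not prescribe a format.

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_interaction(store, *, system_id: str, user_id: str,
                       prompt: str, output: str, action_taken: str,
                       model_version: str) -> str:
    """Append one immutable audit record and return its ID.

    `store` is anything with an append() method: a list in tests,
    or a thin wrapper over an append-only table in production.
    """
    record_id = str(uuid.uuid4())
    record = {
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI use-case produced this
        "model_version": model_version,  # ties the output to an exact model
        "user_id": user_id,
        "prompt": prompt,
        "output": output,
        "action_taken": action_taken,    # e.g. "accepted", "overridden"
    }
    store.append(json.dumps(record))     # append-only; never edit in place
    return record_id
```

Keeping the log append-only matters more than the exact fields: records that can be edited after the fact are worth little in an audit.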
Understanding the risk classification
The EU AI Act sorts systems into four tiers, and you need to know which one applies before you do anything else. A sketch of how to record that classification follows the tier summaries below.
Unacceptable risk
Banned outright. Includes social scoring, manipulative systems that exploit users' vulnerabilities, and biometric categorization that infers sensitive characteristics.
High risk
Allowed with strict obligations. Covers AI in hiring, credit scoring, critical infrastructure, medical devices, education, and law enforcement. Triggers conformity assessments and human-oversight requirements.
Limited risk
Transparency obligations only. Chatbots must disclose that users are interacting with AI, and AI-generated or manipulated content, including deepfakes, must be labeled as such.
Minimal risk
No specific obligations. Covers most common AI use-cases: spam filters, recommendation engines, most productivity tools.
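One way to keep the triage honest is to encode it in the inventory itself, so every system carries its tier and the reason for it. The Python sketch below is one plausible record shape; the tier names mirror the Act, while the field names, owners, and example entries are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment, oversight, monitoring
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystemEntry:
    name: str
    business_owner: str   # accountable owner on the business side
    legal_owner: str      # accountable owner on the legal/compliance side
    tier: RiskTier
    rationale: str        # why this tier: cite the use-case, not the tech

# Hypothetical entries; the system names and owners are illustrative.
inventory = [
    AISystemEntry("resume-screener", "head-of-talent", "employment-counsel",
                  RiskTier.HIGH, "hiring decisions are a listed high-risk use"),
    AISystemEntry("support-chatbot", "support-lead", "product-counsel",
                  RiskTier.LIMITED, "user-facing chatbot, disclosure required"),
]
```

The rationale field earns its place: classification follows from what the system is used for, not from the underlying model, and writing the reason down is what makes the inventory defensible later.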
What to do this quarter
If you haven't started, these are the three moves worth making before the August 2026 deadline.
- Run a use-case inventory: list every AI system in production, including third-party models embedded in SaaS tools. Classify each one against the four-tier risk structure.
- Assign owners: each AI use-case gets one accountable owner on the business side and one on the legal or compliance side. Unassigned systems are the ones that create incidents.
- Document upstream: for any system using a foundation model, capture the model version, vendor, data-handling terms, and deployment scope (a record sketch follows this list). This is the documentation regulators and enterprise buyers will ask for.
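For the third item, a small structured record per upstream model is usually enough to answer both a regulator and a procurement questionnaire. The sketch below is illustrative, not a prescribed format; the vendor name and version string are invented.

```python
from dataclasses import dataclass

@dataclass
class UpstreamModelRecord:
    """Provenance facts regulators and enterprise buyers ask for."""
    system_id: str             # internal use-case the model serves
    vendor: str                # foundation-model provider
    model_version: str         # pin the exact version, never "latest"
    data_handling_terms: str   # link to or summary of the vendor's terms
    deployment_scope: str      # where outputs are used, and by whom
    training_data_origin: str  # vendor-disclosed summary, or "undisclosed"

# Hypothetical example; the vendor and version string are invented.
record = UpstreamModelRecord(
    system_id="support-chatbot",
    vendor="ExampleAI",
    model_version="example-large-2026-01",
    data_handling_terms="DPA v3: no training on customer data",
    deployment_scope="EU customer support, internal agents only",
    training_data_origin="undisclosed",
)
```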
FAQ
Does the EU AI Act apply to non-EU companies?
Yes, if your AI system is placed on the EU market, used by people in the EU, or produces outputs used in the EU. This mirrors the GDPR extraterritorial pattern. Most global SaaS vendors are in scope.
What's the penalty for non-compliance?
Fines scale with the violation. For prohibited AI practices, up to €35 million or 7% of global annual turnover. For other violations, up to €15 million or 3%. These are enforcement ceilings, not starting points — but they set the tone.
Do I need a Chief AI Officer?
No, the Act does not mandate the title. What you do need is one named accountable person per AI use-case. That can be a product lead, a legal counsel, or an existing risk officer; the title matters less than the accountability being unambiguous.