On August 2, 2026, enforcement of the EU AI Act begins for high-risk AI system operators. Fines under the Act reach up to 7% of global annual turnover for the most serious violations — for a company generating $100 million in revenue, that is a $7 million exposure at the top tier. For a company at $1 billion, it is $70 million.
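The exposure math scales linearly with revenue. A minimal sketch of the worst-case calculation (illustrative only — the Act's actual fine tiers depend on the nature of the violation and include fixed-euro floors):

```python
def max_fine_exposure(annual_revenue: int, rate_pct: int = 7) -> int:
    """Worst-case fine as a percentage of global annual turnover.

    Integer arithmetic avoids float rounding on large turnover figures.
    """
    return annual_revenue * rate_pct // 100

# $100M revenue -> $7M exposure; $1B revenue -> $70M
assert max_fine_exposure(100_000_000) == 7_000_000
assert max_fine_exposure(1_000_000_000) == 70_000_000
```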
Finance teams at companies deploying AI in regulated use cases have approximately six months to get their documentation in order. The majority of them have not started. A 2025 Gartner survey found that 68% of organizations deploying AI in Europe had no formal AI governance documentation in place, and 41% had not yet assessed which of their AI systems qualified as high-risk under the Act.
This is a practical guide for the finance team — not the legal team, not the AI team — on what the EU AI Act requires, what documentation you need to produce, and why continuous AI vendor monitoring is not optional in a compliant architecture.
What the EU AI Act Actually Regulates
The EU AI Act is a risk-based regulation. It does not prohibit AI outright. It establishes four risk tiers and applies different compliance requirements to each. Unacceptable-risk systems — such as real-time facial recognition in public spaces for law enforcement, with narrow exceptions — are banned. High-risk systems — AI used in employment decisions, credit scoring, insurance underwriting, healthcare, and critical infrastructure — face the heaviest compliance burden. Limited-risk systems face transparency requirements. Minimal-risk systems face no specific requirements.
For finance teams, the critical question is whether your AI applications fall into the high-risk category. The Act defines high-risk AI systems in Annex III. The categories most relevant to finance include: AI used in credit and insurance decisions affecting individuals, AI used in employment screening and HR decisions, and AI used in critical digital infrastructure where failure could have significant economic consequences.
If your company uses AI to assist in credit decisions, to screen job applications, to perform document analysis for regulated transactions, or to automate any process where the output affects a person's access to financial services, you are almost certainly operating high-risk AI systems under the Act's definition.
The Documentation Requirements
For high-risk AI systems, the EU AI Act requires operators to maintain technical documentation that includes: a description of the AI system and its intended purpose, the design choices and logic underlying the system, the data used for training and validation, the performance metrics and limitations, the human oversight mechanisms in place, and the cybersecurity measures applied.
Most companies building on foundation model APIs — OpenAI, Anthropic, Google — are not training their own models. They are operators of AI systems built on third-party foundation models. This creates a specific documentation challenge: you need to document not just your own system, but the characteristics of the foundation model you depend on. You need to document the model version, its known limitations, its performance characteristics on your specific use case, and any relevant policies governing its use.
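One way to keep that per-dependency documentation current is to capture it as a structured record your tooling can diff over time. A minimal sketch — the field names and vendor are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class FoundationModelRecord:
    """Documentation entry for one third-party foundation-model dependency."""
    provider: str                  # the API vendor you depend on
    model_version: str             # exact version string you deploy against
    intended_purpose: str          # your use case, in plain language
    known_limitations: list[str]   # provider-documented limits relevant to you
    performance_notes: str         # eval results on *your* task, not benchmarks
    governing_policies: list[str]  # ToS / usage-policy versions relied on
    last_reviewed: date            # documentation must track current state

record = FoundationModelRecord(
    provider="ExampleAI",          # hypothetical vendor
    model_version="example-model-2025-01",
    intended_purpose="Document analysis for loan-application triage",
    known_limitations=["May misread figures in scanned tables"],
    performance_notes="94% field-extraction accuracy on internal eval set",
    governing_policies=["Usage Policy v3.2", "DPA 2025-06"],
    last_reviewed=date(2026, 2, 1),
)
assert asdict(record)["model_version"] == "example-model-2025-01"
```

When the vendor deprecates the documented version, the diff between the old and new record is itself the update the Act expects you to make.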
Critically, this documentation must reflect the current state of the system. If the model version changes — because the vendor deprecates the version you documented and you migrate to a successor — the documentation must be updated. If the vendor's policies change in ways that affect how the model is used — as Anthropic's did in December 2025 — the documentation must be updated.
The compliance burden is not a one-time documentation exercise. It is a continuous documentation practice that must track the ongoing state of every AI vendor you depend on.
The Change Audit Trail Requirement
Article 12 of the EU AI Act requires high-risk AI systems to automatically record events — logs that enable post-hoc reconstruction of the system's operation — and operators to retain those logs. For finance teams, this requirement has a direct implication for AI vendor management: you need a documented record of every significant change to the AI systems you operate — including changes made by your vendors, not just changes you make yourself.
Consider what this means in practice. If OpenAI releases a new model checkpoint that you deploy to production in February, you need a record of that deployment with the old and new model versions, the date of change, and the evaluation results that supported the decision to migrate. If Anthropic updates its data retention policy in a way that affects how you process customer data through its API, you need a record of that policy change and your organization's response to it.
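In code, a vendor-change audit entry needs little more than the before/after state, a timestamp, and the evidence behind your response. A sketch of one possible schema (illustrative, not an Article 12 template; vendor and IDs are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class VendorChangeEvent:
    vendor: str
    change_type: str        # "model_version" | "policy" | "pricing" | "incident"
    old_value: str
    new_value: str
    detected_at: datetime   # when the change was observed
    response: str           # what your organization did, and why
    evidence: list[str]     # links to eval results, tickets, sign-offs

event = VendorChangeEvent(
    vendor="ExampleAI",                      # hypothetical vendor
    change_type="model_version",
    old_value="example-model-2025-01",
    new_value="example-model-2026-02",
    detected_at=datetime(2026, 2, 10, tzinfo=timezone.utc),
    response="Migrated after regression evals passed; risk-team sign-off",
    evidence=["eval-report-041", "change-ticket-1287"],
)
assert event.change_type == "model_version"
```

An append-only log of records like this, one per vendor change, is the reconstruction trail a regulator would ask for.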
Most companies do not have this audit trail. They know what their applications do today. They do not have a timestamped record of every AI vendor change that affected their production systems over the past 12 months, classified by type and severity, with documentation of their response.
The Vendor Risk Assessment Requirement
The EU AI Act requires operators of high-risk AI systems to conduct ongoing conformity assessments. For companies using foundation model APIs, this includes assessing the risks posed by the foundation model provider — their reliability, their policy stability, and their governance practices.
This is not a static assessment. The KPMG EU AI Act compliance guide notes that conformity assessments must be updated whenever there is a substantial modification to the AI system. A model version change is a strong candidate for substantial modification. A significant policy change by the provider may qualify. A material change in the provider's performance characteristics likely qualifies.
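Operationally, that means mapping each detected vendor change to a reassessment decision. A simplified sketch — the classification rules here are illustrative, not legal advice:

```python
# Change types that, per the discussion above, plausibly count as
# "substantial modifications" and so trigger a conformity reassessment.
ALWAYS_REASSESS = {"model_version", "performance_regression"}
REVIEW_REQUIRED = {"policy", "terms_of_service"}

def reassessment_action(change_type: str) -> str:
    """Decide the conformity-assessment follow-up for one vendor change."""
    if change_type in ALWAYS_REASSESS:
        return "update_conformity_assessment"
    if change_type in REVIEW_REQUIRED:
        return "legal_review_then_decide"
    return "log_only"

assert reassessment_action("model_version") == "update_conformity_assessment"
assert reassessment_action("policy") == "legal_review_then_decide"
assert reassessment_action("pricing") == "log_only"
```

The point of encoding the rule is consistency: the same change type always produces the same documented response, across every vendor you monitor.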
For companies with multiple AI vendor dependencies — using OpenAI for one application, Google for another, Anthropic for a third — the assessment burden multiplies. You need current risk assessments for each vendor, updated whenever those vendors make material changes.
What Mardii Provides for Compliance
Mardii's monitoring infrastructure was designed to support exactly the documentation and audit trail requirements the EU AI Act imposes.
Every change Mardii detects — pricing updates, model deprecations, ToS amendments, API policy changes, operational incidents — is timestamped, classified by severity, and stored with a full record of what changed and when. This creates the continuous vendor change audit trail that Article 12 requires. You do not need to build it yourself.
The AVRS scores Mardii computes provide a structured, quantitative basis for the vendor risk assessments that conformity assessments require. When your compliance team asks "how did we assess the risk of our OpenAI dependency in Q4 2025?" — you have a documented, methodologically consistent answer: the AVRS score on the assessment date, the five component scores, and the specific incidents that affected the score.
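To answer that question reproducibly, each assessment needs to be stored as an immutable snapshot. A hypothetical shape for such a record — Mardii's actual AVRS schema and component names may differ:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AVRSSnapshot:
    vendor: str
    assessed_on: date
    overall_score: float            # composite vendor-risk score
    components: dict[str, float]    # the five component scores
    incidents: tuple[str, ...]      # incident IDs that affected the score

snapshot = AVRSSnapshot(
    vendor="ExampleAI",             # hypothetical vendor
    assessed_on=date(2025, 12, 31),
    overall_score=72.5,
    components={
        # Hypothetical component names for illustration only.
        "pricing_stability": 80.0,
        "policy_stability": 65.0,
        "model_lifecycle": 70.0,
        "operational_reliability": 75.0,
        "governance": 72.5,
    },
    incidents=("inc-2025-118", "inc-2025-142"),
)
assert len(snapshot.components) == 5
```

Because the snapshot is frozen and dated, the Q4 2025 answer your compliance team gives in 2027 is the same one it would have given in 2025.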
And because Mardii sends alerts within minutes of detecting changes, your team's response to each vendor change is timestamped against a detected event — which is the kind of evidence that demonstrates active governance to a regulator.
Where to Start
If your finance team is looking at August 2026 and trying to figure out where to begin, the answer is inventory. Before you can document your AI systems, assess your vendors, or build an audit trail, you need to know exactly which AI systems you are operating and which vendors those systems depend on.
Start with a complete inventory of every AI API integration your organization uses — not just the ones the AI team is aware of, but the ones embedded in procurement software, HR tools, customer service platforms, and financial analysis applications that your vendors have quietly added AI features to. The EU AI Act applies to operators of high-risk AI systems regardless of whether those systems were built in-house or purchased as a feature inside a third-party product.
Once you have the inventory, set up continuous monitoring for every vendor it contains. The six months between now and August 2026 will see additional AI vendor changes — model retirements, policy updates, pricing adjustments — and each one is a compliance event for teams that need to maintain current documentation.
Mardii monitors every major AI provider with the granularity that EU AI Act compliance requires: timestamped change detection, severity classification, and full audit trail. Start free at mardii.com.