How to turn AI governance into a single control fabric

13 October 2025 Consultancy.eu

As European banks move to comply with the EU AI Act, many are grappling with growing regulatory overlap. Jakob Tjurlik and Iris Wuisman of ACE + Company explain how building a unified governance framework can not only streamline AI compliance, but also turn regulatory complexity into a source of competitive advantage.

Banks today are navigating an increasingly complex web of digital regulations, spanning data protection, operational resilience, and now, AI-specific risk management. While regulatory overlap is nothing new, the arrival of the AI Act brings a long-standing challenge into sharper focus: What new governance elements must be added on top of existing frameworks? And just as crucially, which obligations require entirely new controls or adjustments to current ones?

For European financial institutions, answering these questions is now central to aligning compliance with evolving technological realities.

Multiple frameworks, one problem

The problem isn’t that regulations are unclear, but that each comes with its own workstreams, owners, and audits, even where requirements overlap. Few areas make this clearer than AI governance, where the AI Act brings new obligations that intersect directly with existing GDPR and DORA controls.

AI governance faces unclear standards and looming deadlines

The AI Act categorises AI systems by risk, with high-risk financial applications facing strict obligations around risk assessment, data quality, and traceability. Depending on the institution’s role in relation to the AI system (for example, provider or deployer), a particular set of obligations applies. These requirements go beyond traditional software governance, demanding transparency, data lineage, and continuous monitoring.

The challenge becomes more acute when considering that the AI Act’s detailed implementation guidance remains in development. While the legislation entered into force in August 2024, the harmonised standards that will provide concrete compliance pathways are still being finalised by the European standardisation organisations and are now not expected until 2026.

This timing gap creates a strategic dilemma: move too quickly and risk misaligned controls; wait too long and risk missing the August 2026 enforcement deadline for high-risk AI systems.

AI Act, GDPR, and DORA demand the same foundations

What many compliance teams encounter is that the AI Act does not exist in isolation but rather intersects significantly with existing regulatory frameworks already consuming substantial institutional resources. Banks have spent considerable effort mapping GDPR requirements for data protection and are simultaneously implementing DORA’s ICT resilience controls, creating a layering of overlapping obligations.

Many of the same or similar procedural and strategic requirements now appear across all three regimes.

Logging, monitoring, and incident reporting:
Across the AI Act, GDPR, and DORA, banks are required to monitor system behaviour, retain traceable logs, and notify stakeholders when serious issues occur. While the definitions vary – technical incidents under DORA, personal data breaches under GDPR, and AI malfunctions under the AI Act – the underlying building blocks often overlap.

Yet many firms still maintain separate owners, systems, and audit trails for each regime.

Data governance and quality:
The AI Act requires high-quality datasets to minimise risks of discriminatory outcomes, GDPR enforces accurate and up-to-date personal data, and DORA mandates data integrity to ensure operational resilience. All three regimes demand a robust data-governance framework. Addressing these requirements in silos not only duplicates effort but also risks inconsistent standards across the organisation.

Authors Jakob Tjurlik and Iris Wuisman are consultants at ACE + Company

Control silos stall innovation

The fragmented approach can lead to ‘compliance silos’, where isolated ownership structures and parallel mitigation actions may result in duplication of controls, overstretching of resources, and – most critically – delay or stalling of AI pilots that could deliver real value.

The uncertainty surrounding the final details of the AI Act adds to the challenge. Only 11% of European banks feel prepared for the AI Act, while 70% admit they are only partially ready. In June, banks were among the signatories of a cross-industry call to delay the AI Act, citing fragmented standards and unclear requirements.

And it is not just AI: overlapping EU regulations impose an estimated €150 billion of annual compliance costs across industries, according to data from the European Commission.

With harmonised standards still in development, locking in controls too early may lead to misalignment with future requirements. On the other hand, waiting for complete clarity can push delivery timelines dangerously close to regulatory deadlines.

Compliance teams find themselves squeezed between premature implementation and regulatory inertia, all while juggling requirements that could be far more efficient if tackled in a unified way.

So, how can banks move quickly and confidently in this overlapping, uncertain regulatory environment? More specifically: How do you design AI governance today, when the rules are still evolving, without creating yet another silo alongside GDPR, DORA, and the rest? How do you avoid duplicating control efforts and instead build one scalable framework that serves all regimes?

Integrated governance fabric for AI and beyond

The answer lies in adopting an approach that treats regulatory convergence as an opportunity rather than a burden, with technology as an enabler. Treating the AI Act as a test case for broader governance improvement lets banks solve two problems at once: meeting imminent AI obligations, and reducing the duplication already introduced by the likes of GDPR and DORA.

Here’s a four-step approach to building that governance fabric, starting with the AI Act but designed to scale across frameworks:

Step 1: Position yourself against the AI Act
Start by mapping your current and planned AI applications against the AI Act’s risk categories (limited, high-risk, prohibited), and clarify your organisation’s role, as each carries different obligations. Link the applicable articles to affected business functions and processes. Design lightweight classification processes that enable rapid assessment of new AI initiatives.

Identify which use cases require immediate attention vs. those that can evolve with emerging standards.
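As a minimal sketch of what such a lightweight classification process could look like, the record below captures a use case’s AI Act risk tier, the institution’s role, and a simple triage rule. The use-case names, role values, and triage logic are illustrative assumptions, not prescriptions from the Act:

```python
# Hypothetical sketch of a lightweight AI Act classification record.
# Tier names follow the AI Act's risk categories; everything else
# (use cases, roles, the triage rule) is an illustrative assumption.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

@dataclass
class AIUseCase:
    name: str
    tier: RiskTier
    role: Role
    business_function: str

def needs_immediate_attention(uc: AIUseCase) -> bool:
    """Triage rule: prohibited and high-risk systems come first."""
    return uc.tier in (RiskTier.PROHIBITED, RiskTier.HIGH)

portfolio = [
    AIUseCase("credit-scoring-model", RiskTier.HIGH, Role.DEPLOYER, "Lending"),
    AIUseCase("internal-doc-search", RiskTier.MINIMAL, Role.DEPLOYER, "Operations"),
]

urgent = [uc.name for uc in portfolio if needs_immediate_attention(uc)]
print(urgent)  # ['credit-scoring-model']
```

Keeping the record this small is the point: new AI initiatives can be triaged in minutes, and the same structure later links each use case to the applicable articles and controls.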

Step 2: Assess gaps and overlaps with existing controls
Evaluate which AI Act requirements are already covered by existing controls and where clear gaps remain. Pay special attention to implementation overlaps, ownership conflicts, and areas of regulatory convergence like data classification, incident response, and third-party risk. This step helps determine where new controls are genuinely needed, and where existing ones can be re-used or extended.
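A gap analysis of this kind can be expressed as a simple set difference between AI Act requirements and what the existing control inventory can be extended to cover. All IDs and mappings below are invented for illustration; the article references are assumptions to be validated against the actual texts:

```python
# Hypothetical sketch: gap analysis between AI Act requirements and an
# existing control inventory. Control IDs and mappings are invented.
existing_controls = {
    "LOG-01": {"covers": {"DORA:Art17", "GDPR:Art33"}},  # incident logging
    "DQ-02":  {"covers": {"GDPR:Art5"}},                 # data quality
}

ai_act_requirements = {
    "AIA:Art12": "Record-keeping / logging",
    "AIA:Art10": "Data and data governance",
    "AIA:Art13": "Transparency to deployers",
}

# Assumed outcome of the overlap assessment: which AI Act articles an
# existing control could be re-used or extended to cover.
extendable = {"LOG-01": {"AIA:Art12"}, "DQ-02": {"AIA:Art10"}}

covered = set().union(*extendable.values())
gaps = sorted(set(ai_act_requirements) - covered)
print(gaps)  # ['AIA:Art13'] -> the only place a genuinely new control is needed
```

The output makes the step’s conclusion concrete: most obligations land on existing controls, and only the residual set justifies new build effort.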

Step 3: Draft a provisional AI control set
Develop targeted controls only where real gaps exist. These AI-specific measures (such as dataset bias testing, retraining frequency, or explainability thresholds) should be treated as “version 0.9” controls, designed for flexibility as harmonised standards evolve. When the CEN-CENELEC guidance is finalised, only minor updates to these baseline controls will be required.

Step 4: Rationalise controls and monitor
Merge overlapping controls into a single, traceable framework that maintains regulatory lineage and connects requirements to specific evidence locations.

For instance, the logging requirements from the earlier “Logging, monitoring, and incident reporting” example can be consolidated into a single “Event logging” control that satisfies all three regimes.
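One way to represent such a consolidated control with its regulatory lineage and evidence locations is sketched below. The article references point at the real logging and notification provisions, but the control ID, owner, and evidence paths are invented for illustration:

```python
# Hypothetical sketch: a consolidated "Event logging" control carrying
# regulatory lineage and evidence locations. IDs, owner, and paths are
# invented; the article references are the relevant real provisions.
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    name: str
    owner: str
    regulatory_refs: dict = field(default_factory=dict)  # regime -> articles
    evidence: list = field(default_factory=list)         # where auditors look

event_logging = Control(
    control_id="CTL-LOG-001",
    name="Event logging",
    owner="Head of IT Risk",
    regulatory_refs={
        "AI Act": ["Art. 12 (record-keeping)"],
        "GDPR":   ["Art. 33 (breach notification)"],
        "DORA":   ["Art. 17 (ICT incident management)"],
    },
    evidence=["siem/exports/", "incident-register.xlsx"],
)

# One control, one owner, one audit trail -- three regimes traced.
print(sorted(event_logging.regulatory_refs))  # ['AI Act', 'DORA', 'GDPR']
```

The lineage dictionary is what keeps the merge auditable: when a regulator asks how GDPR Art. 33 is met, the answer points to one control and one evidence location rather than three parallel trails.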

To cover the overlapping data-governance regimes, a potential “Data governance and integrity” control could be formulated in the same way. These are starting points: the same ontology of controls and evidence can – and should – be expanded to cover frameworks like NIS2 or ISO 27001, creating a foundation for managing evolving digital risk and compliance without reinventing your controls every time a new regulation arrives.

Turn regulatory complexity into competitive advantage

This integrated approach transforms regulatory compliance from a cost burden into a competitive advantage, enabling faster AI deployment, more efficient resource allocation, and stronger risk management capabilities. By rationalising controls first, and extending governance to connect roles, processes, and evidence, banks address the AI Act without creating yet another compliance silo.

The AI Act is just the latest trigger. Banks that build a unified governance fabric today will be equipped to adapt faster, smarter, and with less duplication when the next wave of regulation arrives. Ready to move beyond fragmented compliance? Contact our experts to discuss your specific AI governance challenges or arrange a RegAI demonstration to see harmonised control rationalisation in action.
