Strategic Insight

Why UK Boards Can't Wait for AI Legislation to Start Governing AI Risk

30 March 2026 · 9 min read


Whilst the UK government continues to refine its approach to AI regulation—with a comprehensive AI Bill unlikely before late 2026—British businesses are facing an urgent governance reality. <cite index="1-21">92% of UK boards now receive regular briefings on AI governance and ethics, compared to just 28% in 2022</cite>, yet <cite index="14-18">only 28% of organizations said the CEO takes direct responsibility for AI governance oversight, while just 17% report that their board does</cite>.

This disconnect reveals a critical gap between awareness and accountability at the highest levels of UK enterprises. <cite index="11-13">Adoption of AI is currently still modest</cite> according to the government's latest research, yet <cite index="20-6,20-7,20-8">39% of UK businesses are already using AI in some way. Another 31% are seriously considering it. That puts total interest – usage or intent – at nearly 70%</cite>.

For UK business leaders, the question is no longer whether to engage with AI governance, but how quickly they can establish frameworks that protect the organisation whilst enabling competitive advantage.

The Legislative Vacuum Creating Governance Urgency

<cite index="21-7,21-10">As most jurisdictions are grappling with the question of whether to regulate AI, an anticipated UK AI bill did not materialise during 2025. Whether a dedicated AI bill will appear in 2026 remains uncertain and currently seems unlikely</cite>. This regulatory uncertainty is creating a governance imperative for forward-thinking boards.

<cite index="24-9,24-10,24-11">The UK government has indicated that a comprehensive AI Bill could be introduced in 2026, drawing on lessons from the EU's AI Act and insights from international AI summits held in South Korea (2024) and France (2025). Anticipated priorities include establishing accountability mechanisms for general-purpose and foundation models, improving coordination among sectoral regulators, enhancing consumer redress and liability frameworks, and introducing more rigorous testing requirements for high-risk and frontier AI systems. Until such legislation materializes, the UK will continue operating as a principles-first, law-later jurisdiction</cite>.

<cite index="1-4">In 2025, British organisations face a complex regulatory landscape that balances innovation with accountability, requiring careful navigation of guidance from the Centre for Data Ethics and Innovation (CDEI), the Information Commissioner's Office (ICO), and emerging AI safety frameworks</cite>.

This principles-based approach creates both opportunity and risk. Organisations that establish robust governance now will be better positioned when formal legislation arrives, whilst those that delay face the prospect of reactive compliance under regulatory pressure.

The Hidden Scale of AI Adoption in UK Enterprises

Recent government research acknowledges its own blind spot: <cite index="11-7">a limitation of this approach is that the survey will not provide insights into shadow AI adoption</cite>, suggesting the true scale of AI use across UK organisations may be significantly higher than reported figures.

<cite index="3-11">Shadow AI was involved in one in five breaches, adding USD $670,000 to average breach costs</cite>. This statistic from global data underscores the risk exposure that UK boards face when AI deployment occurs without proper oversight.

The sector-by-sector analysis shows significant variation in adoption maturity. <cite index="20-30,20-31">Topping the list is IT & Telecoms, with a huge 93% of businesses either fully embracing or selectively using AI. Unsurprising, perhaps, this is a sector where digital adoption is part of everyday operations, and teams often have the skills to build or integrate AI tools internally</cite>.

However, adoption patterns reveal a concerning trend: <cite index="20-13,20-14,20-15">Only 28% said they're fully embracing AI across their organisation. For most, the approach is more selective: 40% told us they're adopting AI in specific areas, while continuing to evaluate where else it could add value. Another 20% are in the early stages of adoption, using AI minimally so far</cite>.

Board Accountability Gap: Awareness Without Ownership

<cite index="14-19">This data indicates a governance gap at the highest leadership levels, which correlates with slower value creation from GenAI programs</cite>. The disparity between board briefings and CEO accountability suggests that whilst boards are becoming educated about AI risks, they're not yet taking the structured oversight role that effective governance demands.

<cite index="12-21">Enterprises where senior leadership actively shapes AI governance achieve significantly greater business value than those delegating the work to technical teams alone</cite>. This finding aligns with broader governance principles: technology initiatives succeed when they're driven by business strategy rather than technical capability alone.

<cite index="1-16,1-17">Board-level oversight of AI systems is now expected, with senior executives accountable for ethical AI deployment. This has driven significant changes in corporate governance structures, with many FTSE companies establishing dedicated AI ethics committees</cite>.

The accountability gap is particularly concerning given the rapid pace of AI development. <cite index="12-17">Agentic AI usage is poised to rise sharply in the next two years, but oversight is lagging: Only one in five companies has a mature model for governance of autonomous AI agents</cite>.

Five Pillars of Effective AI Governance for UK Boards

<cite index="23-16,23-17,23-18">Good AI governance does not require a 100-page policy document. It requires clarity, ownership and proportionate controls. First, AI should appear explicitly on the organisation's risk register</cite>.

Successful AI governance frameworks typically encompass five interconnected elements:

1. Risk-Based Classification and Controls <cite index="23-31,23-32,23-33,23-34">Third, AI use cases should be tiered by risk. Not all AI applications carry the same exposure. Internal productivity tools pose different risks to automated decision-making systems or public-facing AI services. A tiered approach allows governance controls to scale appropriately, rather than applying blanket restrictions that slow innovation unnecessarily</cite>.

2. Data Governance Integration <cite index="23-22,23-23,23-24">AI governance is inseparable from data governance. Many AI-related risks stem not from the technology itself, but from weak data foundations. Poor access controls, unclear data ownership, fragmented systems and legacy permissions can all be amplified when AI tools are layered on top</cite>.

3. Transparency and Explainability Requirements <cite index="1-7,1-8">Transparency has become the watchword of UK AI governance, with 2025 seeing significant evolution in expectations around algorithmic explainability. The concept extends far beyond technical documentation, encompassing stakeholder communication, external auditing, and public accountability mechanisms</cite>.

4. Human Oversight and Accountability Structures <cite index="23-37,23-38">Explainability and accountability are particularly important in regulated environments. If AI influences decisions affecting customers, citizens or employees, there must be clarity on how those decisions are reviewed and, where necessary, challenged</cite>.

5. Continuous Monitoring and Audit Capabilities <cite index="23-39,23-40">Boards should also expect evidence of monitoring, not just policy. Written guidance on acceptable AI use is important, but without logging, review processes and escalation pathways, it provides limited protection</cite>.
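
To make pillars one, two, four and five concrete, the sketch below shows one way a tiered AI use-case register might be represented. It is a minimal illustration under assumed conventions: the tier names, fields, control descriptions and the example entry are hypothetical, not a prescribed format.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class RiskTier(Enum):
        # Illustrative tiers; an organisation would define its own thresholds.
        INTERNAL_PRODUCTIVITY = 1   # e.g. drafting assistants used by staff
        DECISION_SUPPORT = 2        # informs, but does not make, decisions
        AUTOMATED_DECISION = 3      # decisions affecting customers or employees
        PUBLIC_FACING = 4           # externally exposed AI services

    @dataclass
    class AIUseCase:
        name: str
        owner: str                  # named accountable individual (pillar 4)
        tier: RiskTier
        data_sources: list[str]     # ties the entry to data governance (pillar 2)
        monitoring_log: str         # where usage logging is kept (pillar 5)
        last_reviewed: date

    def controls_required(use_case: AIUseCase) -> list[str]:
        # Controls scale with tier rather than applying blanket restrictions (pillar 1).
        controls = ["acceptable-use policy", "usage logging"]
        if use_case.tier.value >= RiskTier.DECISION_SUPPORT.value:
            controls += ["bias and accuracy testing", "documented explainability notes (pillar 3)"]
        if use_case.tier.value >= RiskTier.AUTOMATED_DECISION.value:
            controls += ["mandatory human review", "customer appeal route (pillar 4)"]
        if use_case.tier is RiskTier.PUBLIC_FACING:
            controls += ["external audit", "incident escalation pathway"]
        return controls

    # Hypothetical register entry, for illustration only.
    claims_triage = AIUseCase(
        name="Claims triage assistant",
        owner="Head of Claims Operations",
        tier=RiskTier.AUTOMATED_DECISION,
        data_sources=["claims CRM", "policy documents"],
        monitoring_log="governance/ai-usage-logs",
        last_reviewed=date(2026, 3, 1),
    )
    print(controls_required(claims_triage))

The format matters far less than the discipline: every AI use case carries a named owner, a risk tier, its data dependencies and a monitoring location that can be evidenced to the board.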

The Cost of Delayed Governance

The financial implications of poor AI governance are becoming increasingly apparent. A <cite index="1-21">£2.3 million average financial penalty for data protection violations involving AI systems, significantly higher than the £890,000 average for non-AI violations</cite>, demonstrates that regulators are treating AI-related breaches more seriously.

<cite index="11-21,11-22">When asked to rate the significance of each barrier they faced, the barrier seen as most significant was ethical concerns, with 8 in 10 citing this (80%). The next most significant barriers were high costs (76%) and the regulation being unclear or uncertain (72%)</cite>.

<cite index="18-5,18-6">Another notable spending trend is the allocation toward AI governance, explainability, and risk management. Companies are recognizing that alongside building AI capabilities, they must invest in making AI responsible and compliant</cite>.

<cite index="3-19,3-20">USD $492 million in 2026 and surpass USD $1 billion by 2030. Those that delay building AI governance will face increasingly costly remediation under regulatory pressure</cite>.

Preparing for the UK AI Bill: Strategic Positioning

<cite index="24-21,24-22,24-23,24-24,24-25">Under the UK's principles-based model, organizations are expected to take a proactive and structured approach to responsible AI governance. This includes mapping how the five core principles, safety, transparency, fairness, accountability, and contestability - apply across the AI lifecycle from design to deployment. Companies should monitor sector-specific guidance issued by regulators such as the ICO, Ofcom, and CMA, and implement governance frameworks aligned with international standards like ISO/IEC 42001 or the NIST AI Risk Management Framework. As the forthcoming AI Bill moves toward formal legislation, organizations should prepare for future compliance obligations and actively participate in public consultations shaping rules for general-purpose and frontier AI systems. Adopting these steps can demonstrate readiness to regulators and investors while reducing compliance risk ahead of statutory change</cite>.

The current regulatory environment provides a window of opportunity for organisations to establish governance frameworks voluntarily, rather than reactively. <cite index="27-2,27-3">Recent public reporting suggests that the government may delay AI regulation while it prepares a more comprehensive, government-backed AI bill, likely to address issues including safety, copyright, transparency, and broader governance. The decision to delay may push such a comprehensive bill into the next parliamentary session, possibly not until 2026 or later</cite>.

This delay creates strategic advantage for early movers. Organisations that establish robust governance frameworks now will be positioned as responsible AI leaders when formal regulation arrives, potentially influencing how sector-specific guidance develops.

International Context: Learning from Global AI Governance

The UK's approach sits within a rapidly evolving international landscape. <cite index="8-4,8-5,8-6">The European Union's AI Act, adopted in 2024 and being phased in through 2026, represents the world's first comprehensive AI regulation. It introduces a risk-based classification system where AI applications are categorized as minimal, limited, high, or unacceptable risk. High-risk AI systems (those affecting health, safety, fundamental rights, employment, law enforcement, or critical infrastructure) face strict obligations, and non-compliance can result in fines up to €35 million or 7% of global annual turnover, whichever is higher</cite>.

<cite index="28-2,28-3">Operating across multiple jurisdictions presents specific challenges: divergent standards (the EU's mandatory high-risk obligations contrast with the UK's lighter principles); data transfers (AI systems often depend on cross-border flows of personal data, requiring compliance with transfer regimes); and supplier contracts (vendors may need to warrant compliance with the strictest applicable regime to cover multiple markets). For general counsel and compliance officers, these are not theoretical issues; they directly impact procurement, due diligence, and risk management</cite>.

For UK businesses operating internationally, establishing governance frameworks that can adapt to multiple regulatory regimes provides competitive advantage and reduces compliance complexity.

Moving from Principles to Practice

<cite index="8-7">The question for enterprises in 2025 isn't whether to implement AI governance, but how quickly they can establish effective frameworks that balance innovation with responsibility</cite>.

<cite index="3-17,3-18">The critical insight is that governance is not a separate workstream competing with feature delivery, it's integral to building AI that is fit for enterprise use. Organisations that embed governance early will move faster, not slower, because they avoid the regulatory rework and trust failures that derail ungoverned deployments</cite>.

Successful implementation requires a structured approach that moves beyond policy documents to operational reality. <cite index="15-1,15-2,15-3">This governance gap is not unique to AI platforms. As explored later in this report, enterprises are finding that legacy governance frameworks are cracking under pressure in the AI era. While platform guardrails are emerging, a full resolution requires rethinking governance models, operating structures and data practices</cite>.
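
One lightweight way to make governance integral to delivery rather than a parallel workstream is to express the minimum evidence as a gate in the release process. The sketch below is a hypothetical illustration: the evidence item names and the gate logic are assumptions, and a real gate would reference the organisation's own register and tooling.

    # Hypothetical release gate: block an AI deployment unless basic governance
    # evidence is recorded. Evidence item names are illustrative assumptions.
    REQUIRED_EVIDENCE = [
        "risk_register_entry",
        "named_owner",
        "data_protection_review",
        "monitoring_log_location",
        "human_review_route",
    ]

    def release_gate(evidence: dict[str, str]) -> tuple[bool, list[str]]:
        # Returns (approved, missing_items) for a proposed AI deployment.
        missing = [item for item in REQUIRED_EVIDENCE if not evidence.get(item)]
        return (not missing, missing)

    approved, missing = release_gate({
        "risk_register_entry": "REG-042",
        "named_owner": "Head of Claims Operations",
        "monitoring_log_location": "governance/ai-usage-logs",
    })
    print(approved, missing)  # False ['data_protection_review', 'human_review_route']

Checks of this kind are deliberately unglamorous; their value is that governance evidence is produced as a by-product of shipping, rather than reconstructed afterwards under regulatory pressure.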

Security Implications for Your Organisation

The governance gap at board level creates three immediate security implications for UK enterprises:

Shadow AI Proliferation: Without clear governance frameworks, AI adoption will continue organically across departments, creating the kind of ungoverned exposure that adds an average of USD $670,000 to breach costs where shadow AI is involved

Regulatory Exposure: As the ICO and other sector regulators develop AI-specific enforcement approaches, organisations without documented governance frameworks face higher penalty risks, with AI-related violations attracting roughly 2.6 times the average fine of non-AI data protection breaches

Competitive Disadvantage: Whilst nearly 70% of UK businesses are using or seriously considering AI, only 28% report direct CEO accountability for AI governance, creating competitive advantage for organisations that establish mature governance frameworks now

A comprehensive AI governance assessment should evaluate your organisation's current AI adoption, governance maturity, and regulatory exposure across all five pillars outlined above.

Contact our AI governance specialists to understand how these regulatory developments affect your organisation's risk profile and competitive positioning.

Mohammad Ali Khan
Director, Pacific Technology Group · LinkedIn ↗
