Home Cybersecurity Disaster Recovery Identity Security AI Governance Sectors IT Services About Insights Contact
AI & Technology

AI Agent Hacked McKinsey's Internal Chatbot in Two Hours

10 March 2026 · 3 min read

← All insights

Security researchers from CodeWall successfully compromised McKinsey's internal AI chatbot, Lilli, within two hours of gaining access. The attack exposed fundamental security flaws that demonstrate how even premier global consultancies fail to secure their AI systems properly.

The Attack Vector

The researchers exploited Lilli through prompt poisoning techniques, tricking the AI into executing SQL injection attacks against McKinsey's backend databases. By crafting specific prompts, they bypassed the chatbot's safety guardrails and gained unauthorised access to sensitive internal data including client information and proprietary methodologies.

The vulnerability stemmed from inadequate input sanitisation between the AI's natural language processing and database queries. When Lilli translated user requests into database commands, it failed to properly validate or escape the input, creating a direct pathway for SQL injection.

Why This Matters for UK Businesses

McKinsey's failure illuminates a critical blind spot affecting UK organisations deploying AI systems. Most businesses implementing ChatGPT wrappers, custom chatbots, or AI-powered customer service tools are making identical mistakes.

The rush to deploy AI capabilities often bypasses fundamental security practices. Development teams treat AI interfaces as trusted internal systems rather than potential attack vectors. They focus on user experience and AI training whilst neglecting input validation, database security, and privilege separation.

For UK businesses handling personal data under GDPR, such vulnerabilities create immediate compliance risks. The ICO has already signalled that AI systems must meet the same data protection standards as traditional applications. A successful attack exposing customer data could trigger maximum fines of £17.5 million or 4% of turnover.

The Technical Reality

AI chatbots represent a new class of security challenge. Unlike traditional web applications with predictable input patterns, AI systems process natural language that can contain embedded commands, hidden instructions, or malicious payloads disguised as innocent questions.

Standard web application firewalls and input validation rules prove inadequate against sophisticated prompt injection attacks. The AI's ability to interpret context and generate dynamic responses creates unpredictable code paths that traditional security testing misses.

Most concerningly, AI systems often run with elevated database privileges to access diverse information sources. When compromised, they provide attackers with broader access than typical web application vulnerabilities.

Immediate Board Actions

Boards should immediately audit any AI systems already deployed within their organisations. This includes customer-facing chatbots, internal AI assistants, and any applications integrating with ChatGPT, Claude, or similar services.

The audit must specifically examine database connectivity, user privilege levels, and input sanitisation processes. Many UK businesses have implemented AI solutions without updating their security frameworks to address prompt injection risks.

For organisations planning AI deployments, establish AI security requirements before development begins. This includes implementing proper input validation, database query parameterisation, principle of least privilege for AI system access, and regular penetration testing specifically designed for AI attack vectors.

The McKinsey incident proves that AI security cannot be an afterthought. When global consulting firms with unlimited resources fail to secure their AI systems properly, UK businesses must recognise they face the same fundamental challenges and take proactive measures accordingly.

Mohammad Ali Khan
Director, Pacific Technology Group · LinkedIn ↗

Related Reading

SQL Server Zero-Days Hand Attackers Database Kingdom Keys — Microsoft's SQL Server CVE-2026-21262 vulnerability allows attackers to bypass authentication and gain sysadmin privileg

Microsoft Just Made Passkeys Mandatory. Here Is What That Means. — Microsoft is auto-enabling passkeys across Entra ID tenants. UK businesses must prepare for mandatory passwordless authe

Strengthen your organisation's security posture

Take the PTG Cyber Assessment Speak With Our Advisory Team

Ready to strengthen your cyber resilience?

Talk to our team about protecting your organisation against evolving threats.

Get in Touch