The $25 Million Wake-Up Call
In January 2024, a finance employee in Hong Kong joined what he believed was a routine video conference with his CFO and several colleagues. The discussion was professional, the faces familiar, the voices exactly right. By the end of the call, he had authorized wire transfers totaling $25 million to criminals who had used deepfake technology to impersonate every person on that call.
I've spent over a decade advising businesses on risk management, and nothing has shifted the threat calculus faster than AI. That Hong Kong incident wasn't an isolated case—it was a preview of where we are now. Deepfake-related losses exceeded $1 billion in 2025 alone, according to recent research. And here's what keeps me up at night: most businesses still carry cyber insurance policies written before this threat existed.
If you're reading this because you're worried about protecting your company from AI-driven cyber threats, you're asking the right question. Let me walk you through exactly what's changed, what your current coverage probably doesn't include, and the specific steps you need to take before your next renewal.
The AI Threat Landscape Has Fundamentally Changed
Traditional cyber threats—data breaches, ransomware, DDoS attacks—haven't gone away. But AI has supercharged them while creating entirely new attack categories that didn't exist three years ago.
Deepfake Fraud: From Celebrities to Your CFO
The progression was predictable in hindsight. First, attackers targeted politicians. Then celebrities. Now they've moved downstream to business executives—where the real money is. Deepfake fraud cases surged 1,740% in North America between 2022 and 2023, and the first half of 2025 alone saw $410 million in documented losses.
Arup, the engineering firm behind that Hong Kong video call, wasn't unique. Fraudsters have attempted to impersonate Ferrari's CEO using AI-cloned voice calls that perfectly replicated his southern Italian accent. Similar attempts have targeted executives at WPP and dozens of other companies. These aren't theoretical risks anymore. The Financial Services Information Sharing and Analysis Center has characterized this as "a fundamental shift from disrupting democratic processes to directly attacking business operations."
What makes this particularly dangerous: humans correctly identify high-quality deepfakes only about 24.5% of the time. That's worse than a coin flip. Relying on our eyes and ears alone is not a defense.
AI-Powered Phishing: The End of the Grammar Test
Remember when you could spot phishing emails by their broken English and weird formatting? That era is over.
Generative AI now produces phishing emails that are grammatically perfect, contextually relevant, and personalized using data scraped from LinkedIn, company websites, and previous breach databases. According to threat intelligence reports, 67.4% of all phishing attacks in 2024 utilized some form of AI. The Anti-Phishing Working Group tracked over 1 million phishing attacks in just the first quarter of 2025—the highest quarterly total since late 2023.
Business Email Compromise (BEC), where attackers impersonate executives or vendors to authorize fraudulent payments, has reached crisis levels. The FBI recorded $2.77 billion in BEC losses in 2024. By mid-2024, an estimated 40% of BEC phishing emails were AI-generated. The average BEC wire transfer request hit $24,586 at the start of 2025.
Here's what's changed operationally: spammers now save 95% on campaign costs using large language models. The barrier to entry for sophisticated attacks has collapsed. A convincing spear-phishing campaign that would have cost thousands to execute a few years ago can now be generated for $50.
Adversarial Attacks on Your AI Systems
If you've deployed AI tools in your business—chatbots, fraud detection, decision-making systems—you've created new attack surfaces. Data poisoning, where malicious actors inject harmful data into AI training datasets, can corrupt your models and lead to systematically wrong decisions. Adversarial attacks manipulate input data to deceive your AI systems into incorrect outputs.
A 2025 IBM study found that 13% of surveyed companies had experienced a breach of an AI model or application. More alarming: 97% of those breached companies did not have access controls in place for their AI systems. Only 20% of companies are confident in their ability to secure their generative AI models against cyber risks.
Why Your Current Cyber Policy Probably Isn't Enough
Here's the uncomfortable reality: most cyber insurance policies were designed for a pre-AI threat environment. They cover traditional data breaches and network intrusions reasonably well. But many exclude or narrowly define losses involving AI systems.
Common Coverage Gaps
Deepfake and social engineering losses: Many policies have sublimits or exclusions for "voluntary parting" of funds—meaning if an employee was tricked into authorizing a payment (even by a deepfake), coverage may be limited or denied. Some insurers are just now beginning to add explicit deepfake coverage. Coalition, for example, started offering deepfake-related incident coverage in December 2025.
AI system failures: Your policy likely doesn't cover failures or errors in AI-generated content or decision-making tools. If your AI chatbot gives incorrect information that leads to a lawsuit (as happened to Air Canada), or if your AI system makes a biased decision that triggers regulatory action, you may be uninsured.
Model manipulation and data poisoning: Coverage for unauthorized access, manipulation, or poisoning of machine learning models is rarely explicit in standard policies. If attackers corrupt your AI training data and you make business decisions based on compromised models, the resulting losses may fall outside your coverage.
AI-specific liability: As state legislatures pass new AI liability laws—and they're doing so rapidly—you may face novel legal exposures for algorithmic bias, automated decision-making errors, or AI-generated content that harms third parties. These emerging liability categories often aren't contemplated in policies written even two years ago.
The Underwriting Shift You Need to Know About
Insurers have noticed the AI threat explosion. Their response has been twofold: tightening underwriting requirements and shifting more risk back to policyholders.
Several trends are reshaping the market:
Evidence over attestation: Carriers have moved away from accepting self-reported security practices. You'll need to prove your controls work as described, with documentation and potentially third-party verification. One cyber insurance expert put it bluntly: "Cyber and AI insurance are increasingly conditional products, and without proof, the policy may not respond or pay out as expected when it matters most."
AI governance requirements: Some insurers are beginning to evaluate whether companies have basic AI governance policies, data handling protocols when AI is used, and employee training around AI misuse and social engineering. This isn't yet standardized across the industry, but it's clearly directional.
New exclusions: Nation-state attacks, systemic supply-chain events, and certain AI-related incidents are increasingly excluded or carved out. The 2024 CrowdStrike outage—which wasn't even a malicious attack but still caused massive business interruption—highlighted how single points of failure can create correlated losses that insurers want to exclude.
Higher deductibles and sublimits: Even where coverage exists for social engineering or AI-related incidents, expect higher retention levels and lower sublimits compared to traditional cyber coverage.
The Nine Security Controls Insurers Now Expect
If you want to obtain or renew cyber coverage at reasonable terms—or get claims paid when incidents occur—you need to demonstrate specific controls. Here's what underwriters are scrutinizing:
1. Multi-Factor Authentication Everywhere
MFA is no longer optional; it's table stakes. The Change Healthcare breach in 2024, which affected millions of records and spawned dozens of lawsuits, occurred because the company wasn't using multi-factor authentication. Insurers have taken notice. Expect MFA requirements for all critical systems, email, VPN access, and administrative accounts.
2. Endpoint Detection and Response (EDR)
Traditional antivirus isn't sufficient. Approximately 65% of insurers now expect organizations to have EDR solutions that can detect and respond to threats in real time. EDR significantly reduces breach impact and aligns with compliance standards like CIS Controls.
3. Offline or Air-Gapped Backups
Ransomware still accounts for the lion's share of claims involving recovery expense losses, roughly 81%. A third of insurers now require offline or air-gapped backups that malware can't encrypt. Without copies of your data that attackers can't reach, you risk paying ransom to get it back, which is expensive for insurers and increasingly may not be covered.
4. Documented Incident Response Plan
You need a written plan that outlines how your team will detect, respond to, and recover from an attack. This isn't just a compliance checkbox—organizations that extensively utilize AI and automation in their security operations have detected and contained breaches nearly 100 days faster, resulting in average cost reductions of $2.2 million per breach.
5. Employee Security Awareness Training
More than half of company leaders say their employees haven't had any training on identifying or addressing deepfake attacks. That's a problem when 60% of recipients fall for AI-generated phishing emails. Regular training, including simulated phishing exercises at least quarterly, is becoming a standard requirement.
6. Email Authentication Protocols
SPF, DKIM, and DMARC aren't just technical acronyms—they're essential defenses against email spoofing. With AI making phishing emails more convincing, these protocols help prevent attackers from impersonating your domain. Implement them fully, not just in monitoring mode.
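In practice, your DMARC policy is just a DNS TXT record published at `_dmarc.yourdomain.com`, and it's worth knowing how to read one. The sketch below parses a record into its tags so you can check whether you're enforcing (`p=reject` or `p=quarantine`) or still stuck in monitoring-only mode (`p=none`); the example record and domain are made up for illustration.

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Illustrative record for a hypothetical domain.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)

print(policy["p"])    # "reject": receiving servers refuse mail that fails SPF/DKIM alignment
# If this printed "none", the domain is only collecting reports,
# not actually blocking spoofed mail -- the gap the section warns about.
```

A `p=none` policy still generates useful aggregate reports (the `rua` address), but it does nothing to stop an attacker from sending mail as your domain.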
7. Privileged Access Management
Limit who has administrative access to critical systems and implement the principle of least privilege. Monitor privileged accounts for unusual activity. In a zero-trust world, nobody gets a free pass—not even insiders at your own firm.
8. Third-Party Risk Management
Vendor Email Compromise (VEC) attacks rose 66% in the first half of 2024, with attackers exploiting supply chain relationships. Insurers want to see robust third-party risk management including strong contractual language, cybersecurity certifications from vendors, and requirements for vendors to carry their own cyber insurance.
9. Payment Verification Protocols
Given the rise of deepfake-enabled payment fraud, you need documented procedures for verifying any request to change payment details or authorize large transfers. Out-of-band verification—using a separate communication channel to confirm requests—should be standard practice for any significant financial transaction.
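The logic of such a protocol is simple enough to express directly. The sketch below gates a transfer on both dual authorization and, above a threshold, out-of-band callback confirmation. Every name, threshold, and field here is an illustrative assumption, not a reference to any real payment system; the point is that no single channel, and no single person, can release funds.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)
    callback_verified: bool = False  # confirmed over a separate, pre-established channel

OOB_THRESHOLD = 10_000   # hypothetical: amounts above this need out-of-band callback
REQUIRED_APPROVERS = 2   # dual authorization for every transfer

def may_execute(req: TransferRequest) -> bool:
    """Allow a transfer only with dual approval plus out-of-band verification."""
    if len(req.approvals) < REQUIRED_APPROVERS:
        return False
    if req.amount > OOB_THRESHOLD and not req.callback_verified:
        return False
    return True

req = TransferRequest(amount=250_000, beneficiary="New Vendor Ltd")
req.approvals.update({"controller", "treasurer"})
print(may_execute(req))   # False: two approvals, but no callback confirmation yet

req.callback_verified = True
print(may_execute(req))   # True: both conditions satisfied
```

Notice that a deepfake video call can, at most, produce one approval through one channel; the protocol fails closed until the independent callback happens.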
What Your Policy Should Cover in 2026
When you review your cyber insurance at renewal, here's what to specifically discuss with your broker:
First-Party Coverage Essentials
Business interruption: Ensure coverage extends to AI system failures, not just traditional network outages. Clarify how waiting periods and indemnity periods apply to different incident types.
Data restoration: Confirm coverage for restoring data compromised by AI-related attacks, including model retraining costs if your AI systems are poisoned.
Ransomware and extortion: Understand any sublimits, whether ransom payments are covered (some policies now exclude them), and what response costs are included.
Crisis management: Deepfake incidents often require specialized response including forensic analysis, takedown services, and crisis communications. Some policies now include these services explicitly for AI-related incidents.
Third-Party Liability Considerations
Regulatory defense and fines: With California's amended Consumer Privacy Act requiring annual cybersecurity audits effective January 2026, and states increasingly passing AI-specific liability laws, ensure your policy covers regulatory investigations and, where insurable, resulting fines.
Privacy class actions: Data privacy class actions continue to surge following high-profile incidents. Confirm your policy covers defense and indemnity for statutory damages under privacy laws.
AI decision liability: If you use AI for decisions that affect customers—credit decisions, hiring, pricing—you may face liability for biased or incorrect outputs. Ask whether errors and omissions coverage extends to AI-generated decisions.
Media and reputational harm: Deepfake incidents can cause significant reputational damage even if no data is stolen. Some newer policies include coverage for reputational harm arising from AI-related incidents.
Specific Language to Request
Work with your broker to address these questions:
Does the policy explicitly cover social engineering losses, including deepfake impersonation? What are the sublimits?
Are "voluntary parting" exclusions modified to account for sophisticated impersonation techniques?
Does coverage extend to failures of AI systems you deploy, including chatbots and automated decision tools?
Is your preferred legal counsel (ideally one with AI incident experience) listed as an approved provider?
How are nation-state attacks and systemic events defined, and what exclusions apply?
Are there coverage gaps between your cyber policy and your D&O, E&O, or general liability policies for AI-related incidents?
Building an AI-Resilient Defense Strategy
Insurance is a backstop, not a substitute for security. Here's how to build practical defenses against AI-enabled threats:
Technical Controls for AI Threats
Deploy AI-powered email security: Traditional secure email gateways can't keep pace with AI-generated phishing. Modern solutions use natural language processing and behavioral analysis to detect anomalies in tone, content, and sender patterns. Organizations report that AI-enhanced filters catch targeted attacks that bypass ordinary gateways.
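Full NLP-based detection is vendor territory, but one behavioral check is simple enough to illustrate the idea: flag messages whose display name matches a known executive while the sending address sits outside your corporate domain, a classic BEC tell. This is a deliberately minimal sketch; the names, domains, and lookalike address below are all invented.

```python
# Hypothetical reference data an organization would maintain.
EXECUTIVES = {"jane doe", "john smith"}   # display names attackers like to impersonate
CORPORATE_DOMAINS = {"example.com"}

def looks_like_exec_impersonation(display_name: str, sender_address: str) -> bool:
    """Flag a display-name/domain mismatch for a known executive name."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return display_name.lower() in EXECUTIVES and domain not in CORPORATE_DOMAINS

print(looks_like_exec_impersonation("Jane Doe", "jane.doe@example.com"))  # False: legitimate
print(looks_like_exec_impersonation("Jane Doe", "jane.doe@examp1e.co"))   # True: lookalike domain
```

Commercial tools layer tone analysis, sender-history baselines, and language models on top of heuristics like this one, but the mismatch check alone catches a surprising share of impersonation attempts.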
Implement deepfake detection: For high-risk scenarios—video calls involving financial decisions, executive communications—consider deepfake detection tools. The technology is improving rapidly, though attackers are also adapting.
Establish verification protocols for AI-enabled channels: Any communication channel that could be compromised by AI impersonation needs a verification layer. For financial transactions, this means out-of-band confirmation through a separate, pre-established channel.
Secure your AI systems: If you deploy AI tools, implement access controls, monitor for unusual behavior, and maintain audit trails. Version control your models and training data so you can detect and roll back compromises.
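One lightweight way to implement the audit-trail and rollback point: keep a SHA-256 manifest of your model and training-data artifacts, and re-verify it before every training run or deployment. The sketch below assumes artifacts live as files on disk; the file names are illustrative, and the tampering step is simulated just to show a mismatch being caught.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(paths: list[str]) -> dict[str, str]:
    """Record a SHA-256 fingerprint for each artifact (weights, training data)."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def verify(manifest: dict[str, str]) -> list[str]:
    """Return the artifacts whose current contents no longer match the manifest."""
    return [p for p, digest in manifest.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != digest]

# Illustrative usage: snapshot at training time, re-check before deployment.
Path("training_data.csv").write_text("id,label\n1,ok\n")
manifest = build_manifest(["training_data.csv"])
Path("manifest.json").write_text(json.dumps(manifest))   # store alongside version control

Path("training_data.csv").write_text("id,label\n1,poisoned\n")  # simulated tampering
print(verify(manifest))   # ['training_data.csv']: mismatch detected before retraining
```

Paired with versioned copies of the data, a failed verification tells you both that something changed and which known-good snapshot to roll back to.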
Human Layer Defenses
Train for AI-specific threats: Generic security awareness isn't enough. Employees need specific training on deepfake recognition, AI-powered phishing tactics, and verification procedures. Simulations should include realistic scenarios—voice calls from impersonated executives, convincing emails requesting urgent action.
Establish challenge protocols: Create a culture where employees feel empowered to verify unusual requests, even from apparent executives. The Ferrari attack was stopped because an executive asked a personal question only the real CEO could answer. Build similar verification mechanisms into your processes.
Update payment authorization procedures: No single person should be able to authorize significant payments based solely on email or video call instructions. Require dual authorization, established channels, and callback verification for any payment changes.
Governance and Preparation
Document your AI governance: Even if insurers aren't yet requiring formal AI governance policies, having them demonstrates maturity and may improve your terms. Include AI usage policies, data handling procedures, and oversight mechanisms.
Conduct tabletop exercises: Run through AI-specific incident scenarios with your leadership and response teams. What happens if a deepfake of your CEO authorizes a fraudulent transfer? How do you respond to an AI-generated disinformation campaign about your company?
Build relationships with specialists: Identify legal counsel, forensics firms, and crisis communications specialists with AI incident experience before you need them. Some insurers now provide access to panels of approved vendors—ensure they have relevant capabilities.
What's Coming: Regulatory and Market Shifts
The cyber insurance market for AI risks is still maturing. Here's what to watch:
State AI liability laws: Multiple states are considering or have passed laws creating new liability pathways for AI-related harms. New York, Virginia, Alaska, and others have bills addressing synthetic media, AI companion applications, and automated decision-making liability. These create exposures that didn't exist when your current policy was written.
Increasing CISO personal liability: Some carriers now offer standalone policies covering chief information security officer personal liability. Recent regulatory enforcement has put security leaders in the crosshairs, making this coverage increasingly relevant.
Integration of security and insurance: Multiple carriers have started offering cybersecurity tools directly, positioning cyber insurance as a backstop integrated with active protection. This trend toward "active insurance" will continue, potentially with premium discounts for using carrier-provided security tools.
Alternative risk transfer: As AI creates potential for correlated losses that challenge traditional insurance models, expect growth in alternative mechanisms including cyber catastrophe bonds and industry mutual arrangements.
Your Action Plan for the Next 90 Days
Don't wait until your renewal to address AI-related cyber risks. Here's what to do now:
Week 1-2: Audit your current cyber policy for AI-related exclusions and sublimits. Request your policy's definitions section and exclusions schedule if you don't have them easily accessible.
Week 3-4: Assess your current controls against the nine requirements insurers expect. Identify gaps, particularly around MFA, EDR, backup isolation, and employee training.
Week 5-6: Review your payment authorization procedures specifically for deepfake vulnerability. Implement out-of-band verification requirements for any transaction above a defined threshold.
Week 7-8: Conduct at least one tabletop exercise involving an AI-specific scenario—deepfake executive impersonation is a good starting point. Document lessons learned.
Week 9-12: Engage your broker (or find one with specific cyber and AI expertise) to discuss your renewal strategy. Come prepared with documentation of your security controls and specific questions about AI coverage.
The AI threat landscape will continue evolving faster than insurance products can adapt. The businesses that will navigate this environment successfully are those treating cyber insurance as one component of a comprehensive risk strategy—not as a substitute for strong security and governance.
Your next renewal isn't just a transaction. It's an opportunity to ensure your coverage matches the threats you actually face today. Don't let it pass without asking the hard questions.