AI Cybersecurity Risks: Deepfakes, Speed, & Threat Evolution

Learn how AI is accelerating cyberthreats and how your organization can better prepare.

Deepfake fraud may be the most visible artificial intelligence (AI)-related cyber risk today, but it’s only the first clear sign of a broader shift. It gets attention because it’s easy to picture. A fake executive call. A rushed payment request. A trusted voice that’s not real. For boards and executive teams, that makes the risk easier to grasp than other AI-related threats, but it also points to a larger trend. If AI can make impersonation more convincing, it can also generate faster, cheaper attacks that are harder to dismiss as niche or highly specialized.

Some leaders still think of AI as a chatbot. Today, however, it’s much more than that. AI can help attackers move through research, targeting, and execution at much greater speed. The technology enables more convincing social engineering, processes larger volumes of information, and helps attackers find weak spots that might otherwise stay buried.1 The real issue is not whether AI can produce better text or mimic a voice, but how it’s lowering the time, effort, and technical lift required to conduct more serious attacks.

With more powerful agentic AI models entering the market, AI cybersecurity risks are accelerating faster than many organizations expect. In April 2026, Anthropic described Claude Mythos Preview as a “watershed moment for cybersecurity” and said the examples it disclosed reflected a substantial leap in next-generation model capability.2 At the same time, the company launched Project Glasswing on the premise that defenders need a head start before similar capabilities spread more broadly with less stringent safeguards. Leadership teams should recognize that the gap between current threats and near-future threats is shrinking.

For years, many organizations treated the highest-end cyberthreat as a nation-state problem. Over time, that concern widened to include a broader pool of technically capable attackers who did not have nation-state funding, infrastructure, or persistence but still had the skills to do real damage. AI is now pushing this shift further. It’s widening access to attack methods that previously required more time, expertise, and resources. The threat landscape is becoming more accessible.

Far from a distant theory, public reporting shows that AI is already being used for more than experimentation. Recent examples3 point to AI-supported campaigns involving reconnaissance, exploit development, credential harvesting, and social engineering, all coordinated by an autonomous agent. Whether an organization is focused on fraud, resilience, or broader cyber risk, the takeaway is the same: assumptions built around a slower, more manual threat model no longer hold.

What AI Is Changing in Practical Terms

  • Trust: Deepfake-enabled fraud shows how quickly traditional assumptions about identity can break down. In a high-stakes moment, a familiar voice or realistic video call may now be enough to trigger action if verification controls are weak. That makes the situation not just a cyber issue, but a business process and governance issue.
  • Speed and scale: AI helps attackers work faster across multiple steps of an attack chain. Once access is gained, the technology gives attackers faster research, targeting, and movement. For leaders, the implication is straightforward: attacks can now unfold faster than most organizations have historically been prepared to respond.
  • Depth: AI helps attackers sift through large amounts of information more efficiently. That increases the odds of finding hidden secrets, overlooked file shares, weak access controls, or forgotten assumptions inside the environment. In many cases, the most serious weakness is not the obvious one, but the one that sits quietly in the background for years.

Taken together, these changes point to a broader shift in attacker economics. AI is already improving phishing and making impersonation more realistic. What matters more now is whether organizations are prepared for how these same tools are making advanced attacks easier to execute in the real world.

Three Questions Leaders Should Ask Now

  1. Does our current testing reflect how modern attacks unfold? Periodic testing is still valuable, but if AI is helping attackers move faster, search with more depth, and adapt more easily, organizations should ask whether point-in-time testing still matches the way risk can change between assessments.
  2. Are we prepared for trust-based attacks, not just technical ones? Deepfake-enabled fraud shows that some of the most immediate risks sit at the intersection of security, operations, and decision making. If a convincing voice, video call, or executive request can prompt action, verification controls matter as much as technical controls.
  3. Are we testing for what is buried, not just what is obvious? Some of the most serious exposures are not flashy. They are the long-forgotten credentials, stale permissions, broken access paths, and quiet assumptions that remain in place until someone finds them.

What Buried Credentials Reveal About Modern Reconnaissance Risk

In one continuous testing engagement, Forvis Mazars identified a 16-year-old plaintext password hidden in a seven-level file share. The credential was tied to privileged access that could have created broad exposure if it had been found by an attacker.

What makes this notable is that it sat exposed for years without being flagged by routine testing. In a traditional, point-in-time penetration test, this kind of buried credential is easy to miss when discovery depends on a human manually sifting through file shares. More autonomous, continuous techniques can reduce that manual effort while reviewing far more content at scale.
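To make the discovery gap concrete, the sketch below shows one way automated tooling can surface credential-like strings buried deep in a file share, the kind of find described above. This is an illustrative assumption, not a Forvis Mazars tool: the `scan_share` function, the regex patterns, and the depth calculation are all hypothetical, and production scanners use far richer rule sets and entropy checks.

```python
import os
import re

# Simple patterns that often indicate plaintext credentials in files.
# Illustrative only; real tooling uses much broader detection rules.
CREDENTIAL_PATTERNS = [
    re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"(api[_-]?key|secret)\s*[:=]\s*\S+", re.IGNORECASE),
]

def scan_share(root):
    """Walk every file under `root`, however deeply nested, and
    return (path, depth, matched line) for credential-like text."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        # Depth 1 means a file sitting directly under the share root.
        depth = os.path.relpath(dirpath, root).count(os.sep) + 1
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    for line in f:
                        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
                            findings.append((path, depth, line.strip()))
            except OSError:
                continue  # skip unreadable files rather than failing
    return findings
```

The point of the sketch is economics, not sophistication: a walk like this reads every file at every depth on every run, so a credential seven levels down is no harder to reach than one at the top, which is exactly the advantage automated and AI-assisted reconnaissance has over manual sampling.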

Hidden exposure can sit in an environment for years without drawing attention. Modern attackers are becoming better equipped to search for exactly this kind of weakness.

Why This Matters for Testing

Most organizations do not need more fear. They need a realistic view of how attacks are changing and whether their defenses are sufficient.

Deepfake fraud is drawing attention first because it’s easy to explain in the boardroom. However, the bigger issue is what it signals about the next stage of the threat landscape.

The latest model disclosures ground that point in reality rather than science fiction. When frontier models begin changing what top-tier defenders and researchers can do, they also raise the expectations placed on security leaders. That is a direct signal for boards and executive teams, even if they do not follow AI news closely.

That is where testing becomes more valuable, not less. The organizations that benefit most will be the ones that pressure-test critical assumptions before the next wave of capability becomes more widely available. This is exactly where penetration testing as a service, realistic social engineering simulation, and broader adversarial validation help create a practical head start.

How Forvis Mazars Can Help

Forvis Mazars helps organizations test for this changing reality in practical terms. The goal is not simply to prove that a weakness exists, but to help organizations test in ways that better reflect the current threat environment, prioritize what matters most, and reduce the kinds of exposure that could remain buried until they are found the hard way.

That matters not only for today’s threats, but also for the direction of the industry. As attacker capability becomes faster, more automated, and more accessible, organizations need strategies to help them get ahead of likely attack paths before those methods become more common.

Penetration Testing as a Service

Penetration testing as a service is designed for environments where risk does not stand still. Rather than relying on an annual point-in-time snapshot, this approach supports ongoing adversarial testing that can help uncover hidden weaknesses, validate exposure as conditions change, and identify issues that may be missed in a limited assessment cycle. It’s especially relevant for organizations that want a realistic view of how modern attackers may work through access paths, stale credentials, and overlooked internal exposure over time. It can help create a more useful testing rhythm for a future state in which new agentic model capabilities will compress the time between exposure and impact.

AI-Deepfake Voice Cloning Simulation

AI-deepfake voice cloning simulation helps organizations prepare for the growing risk of executive impersonation and AI-enabled social engineering. This offering from the IT Risk & Compliance team at Forvis Mazars is aimed at testing how well people and processes hold up when the deception is more realistic than a standard vishing exercise. It brings a practical lens to approval workflows, verification practices, and high-trust communication channels that may now be more vulnerable than many organizations realize.

These capabilities complement the broader penetration testing services from Forvis Mazars, which include web application testing, cloud environment testing, social engineering, and physical security assessments.

The threat landscape is becoming more sophisticated and easier to exploit. AI has not introduced a new category of cyber risk. It has accelerated and widened the risks organizations already face.

Deepfake fraud is one visible example. Leaders should consider whether their current testing cadence and scope reflect the modern attack landscape, and whether previous testing results will remain useful as those capabilities continue to advance.

For more information, reach out to a professional at Forvis Mazars.

  • 1“Anthropic’s Claude Mythos Finds Thousands of Zero-Day Flaws Across Major Systems,” thehackernews.com, April 8, 2026.
  • 2“What is Mythos AI and why could it be a threat to global cybersecurity?,” theguardian.com, April 22, 2026.
  • 3“Disrupting the first reported AI-orchestrated cyber espionage campaign,” anthropic.com, November 13, 2025.
