Healthcare organizations are connecting artificial intelligence (AI) tools across systems and data pipelines at an accelerating pace. According to research from Snowflake, an AI data cloud company, 85% of healthcare leaders say improving data sharing and interoperability is a higher priority today than it was two years ago, and 77% of organizations have invested or plan to invest in generative or agentic AI.1 The findings also show that top priorities include administrative workflow automation (60%), clinical documentation (50%), and revenue cycle operations (47%).2
That momentum reflects real operational need. Financial pressure is intensifying across the continuum of care due to reimbursement uncertainty, payor behavior, workforce shortages, and regulatory change. As noted in the “Mindsets 2026 Healthcare Executive Leadership Report” from Forvis Mazars, healthcare executives entering 2026 are cautiously optimistic but more guarded than in 2025, and cost cutting alone is no longer sufficient. AI interoperability in healthcare—the ability of different AI software tools to connect, share data, and act in concert—is positioned as a core enabler of efficiency, scale, and resilience.
Yet AI-powered workflows also create new cybersecurity vulnerabilities that can cascade across an organization. The governance structures needed to manage those risks are struggling to keep pace with adoption, whether that adoption is authorized or unauthorized, i.e., “shadow AI.”
In “AI Governance in Healthcare: What Leaders Should Ask,” Forvis Mazars explored the governance questions healthcare leaders should be raising as AI moves from pilots to enterprise use. This article highlights the specific cybersecurity risks that emerge when AI software tools become interoperable and what organizations should do to help address them.
How Connected AI Tools Expand the Attack Surface
AI interoperability typically involves linking tools through model context protocol (MCP) servers, application programming interfaces (APIs), or command line interface (CLI) access. This architecture, along with the emerging practice of harness engineering, demands organizational awareness of AI multi-agent orchestration and introduces several risk categories healthcare leaders should understand. Connected AI tools can:
- Cascade across systems. A single compromised dependency, such as an API or plugin, can give attackers lateral access to broader, more sensitive environments (a brief sketch after this list illustrates one containment approach).
- Propagate through supply chains. A vulnerability in one shared AI platform or data set can spread across every organization that relies on it.
- Introduce backdoors through third-party extensions. Plugin and open-source dependencies in vendor products expand functionality but often lack the security visibility and vetting rigor needed to assess whether they are ready for production use.
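To make the cascade risk concrete, here is a minimal sketch, in Python, of the kind of least-privilege credential scoping that limits how far a compromised connector can reach. The system names, scope strings, and registry design are illustrative assumptions, not a reference implementation or a real product’s API.

```python
# Minimal sketch: scope each AI tool connector to its own least-privilege
# credential so a compromised plugin cannot reach unrelated systems.
# All names ("ehr-api", "notes:read", etc.) are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class ConnectorCredential:
    connector: str          # e.g., "clinical-docs-plugin"
    target_system: str      # e.g., "ehr-api"
    scopes: frozenset       # explicit, enumerated permissions


REGISTRY: dict[str, ConnectorCredential] = {}


def register_connector(connector: str, target: str, scopes: set[str]) -> None:
    """Issue a credential bound to one connector and one target system."""
    REGISTRY[connector] = ConnectorCredential(connector, target, frozenset(scopes))


def authorize(connector: str, target: str, action: str) -> bool:
    """Deny by default: the connector must hold an exact scope for this call."""
    cred = REGISTRY.get(connector)
    return bool(cred) and cred.target_system == target and action in cred.scopes


# A documentation plugin gets read-only access to one system only.
register_connector("clinical-docs-plugin", "ehr-api", {"notes:read"})

assert authorize("clinical-docs-plugin", "ehr-api", "notes:read")
assert not authorize("clinical-docs-plugin", "billing-api", "claims:write")  # no lateral reach
```

The design choice that matters is deny-by-default: a connector with no registered credential, or the wrong target, gets nothing, so compromising one plugin does not automatically open adjacent systems.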
Data Privacy & Leakage Risks Multiply
When AI tools share data, the potential for sensitive information exposure increases significantly. Here’s how risks can multiply.
- Leak data through prompt manipulation. An attacker who gains access to a large language model (LLM) integrated with other applications can expose sensitive data across connected systems through adversarial prompting techniques.
- Create blind spots through shadow AI. Unauthorized AI tools connecting to corporate data sources can introduce data leakage that security teams may have difficulty monitoring.
- Pollute downstream models. Malicious data injected into one pipeline can compromise every data lake or data store an AI model draws on when responding to queries.
- Increase regulatory risks. In healthcare settings, entering information into AI systems creates heightened regulatory risk: including protected health information (PHI) may constitute impermissible processing at ingestion, and a reportable breach if the data is exposed through a cyber incident or model leakage (see the redaction sketch after this list).
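As one narrow illustration of reducing PHI exposure at ingestion, the hedged Python sketch below redacts a few obvious identifier patterns before a prompt leaves the organization’s boundary. The patterns and placeholder format are assumptions for illustration only; real PHI de-identification is far broader than any regex list.

```python
# Minimal sketch: redact obvious PHI patterns before text reaches an LLM
# or a shared pipeline. The patterns below are illustrative and far from
# exhaustive; production PHI detection requires much more than regexes.

import re

REDACTION_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}


def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace matched patterns with placeholders; return hits for audit logging."""
    hits = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, hits


prompt = "Summarize the visit for MRN: 00123456, callback 555-867-5309."
clean, found = redact_phi(prompt)
print(clean)   # Summarize the visit for [MRN REDACTED], callback [PHONE REDACTED].
print(found)   # ['PHONE', 'MRN']
```

Logging which pattern classes fired, rather than the matched values themselves, gives security teams a monitoring signal without re-exposing the PHI.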
How Autonomous AI Raises the Stakes of Errors
Production systems increasingly involve agentic AI, where agents can act autonomously, change permissions, or process transactions. While generative AI produces text or files, agentic AI takes action: it can automate billing, trigger prior authorizations, and generate clinical documentation that feeds downstream decisions, all without human intervention unless review is deliberately designed in.
This shift changes the consequence of an AI error from “a clinician reads something wrong” to “an automated system executes something wrong, at speed and scale, potentially without a human reviewing it.”
Here are examples.
- “Rogue” agents. Without proper governance and identity access control, an AI agent can expand its reach by copying entitlements, creating accounts, and acquiring access to systems it was not intended to use.
- Unchecked automation. Connected systems may execute high-stakes operations based on hallucinated or false inputs from another AI tool, which could lead to significant operational or financial damage.
- Inactive agents. AI agents or bots that retain system access after they are no longer in use can still be hijacked by attackers.
- Identity challenges. Legacy security tools may be ill-equipped to track AI agents as distinct identities, particularly ones whose tactics and behavior change over time. Attackers can also forge or spoof agent identities to bypass defenses.
When an AI system moves from suggesting to acting, the consequence of a hallucination shifts from static misinformation to unintended operational outcomes.
Complexity Compounds Risk
The “black-box” nature of advanced AI models generally makes it difficult to understand how they arrive at decisions. That opacity is magnified when multiple models are connected. Attackers can craft inputs that trick AI-based security tools into misclassifying malicious activity as benign. Interconnected systems also may produce high volumes of false alerts, overwhelming security teams and causing them to miss genuine threats.
These cybersecurity risks do not exist in isolation; they tie back to the foundational triad of confidentiality, integrity, and availability of information, and they intersect with a broader set of structural challenges. Heavy-handed security can stifle innovation and system effectiveness: evaluating every prompt, user input, and response that passes through a model can be expensive and time-consuming.
The freedom to innovate and move faster is thus pitted against the security of the system. Striking that balance is often difficult, but it is a requirement organizations must meet through proper governance and security control design.
In practice, AI models do not actually know anything. They use confident language even when hallucinating incorrect information, producing risky outputs that sound authoritative. Hallucinations are a statistical property of how language models work, not an engineering flaw to be patched.3 Organizations exploring AI usage should prioritize end-user education and upskilling so that users understand how AI systems function. The upside is that hallucinations can decrease and become less disruptive as model capabilities grow and organizations build better infrastructure for models to inform their responses through retrieval-augmented generation (RAG), context engineering, or emerging methods. Because of this, evaluating models for hallucination likelihood and designing retrieval and memory architecture both become paramount in building effective, safe AI systems.
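To show why retrieval architecture matters, the sketch below illustrates the basic RAG pattern: retrieve relevant passages, then constrain the model to answer only from them, which reduces (but does not eliminate) hallucination. The keyword-overlap retriever is a deliberate simplification of embedding-based search, and `call_llm` is a hypothetical stand-in, not a real API.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation (RAG):
# ground the model's answer in retrieved source passages instead of relying on
# parametric memory alone. Retrieval here is naive keyword overlap for brevity;
# real systems use embeddings and vector search.


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared-word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the context is insufficient, "
        "say so rather than guessing.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )


docs = [
    "Prior authorization requests are reviewed within 72 hours.",
    "Claims denials can be appealed within 180 days of notice.",
]
print(build_grounded_prompt("How long does prior authorization review take?", docs))
# The grounded prompt would then be sent to the model, e.g., call_llm(prompt).
```

The instruction to admit insufficient context is as important as the retrieval itself; without it, the model falls back on confident-sounding parametric guesses.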
Meanwhile, the governance gap persists. Organizations are measuring the upside of digital transformation and AI, with efficiency, productivity, and growth topping the agenda for both healthcare and broader C-suite leaders. However, few are systematically tracking downside risks, such as hallucination exposure, accountability gaps, or auditability.
Instead, leaders report that barriers to realizing AI’s value are shifting away from technology limitations and toward challenges in data quality, governance, talent readiness, and compliance frameworks.
- In the “Mindsets 2026 Healthcare Executive Leadership Report,” healthcare executives cite operational efficiency and AI as top strategic priorities for the next three to five years, but progress toward AI data governance and capability maturity remains mixed.
- Across the “C-Suite Barometer: Executive Leadership Insights in the US,” 64% of U.S. executives expect generative AI to have a major impact, and 88% report using AI for internal processes, yet data quality, overall data strategy, and compliance with data protection laws remain the biggest priorities for data management and governance investment.
- “C-Suite Barometer: Executive Leadership Insights in the US” findings show that regulatory compliance, speed and complexity of implementation, and confidence in return on investment are the main barriers to achieving digital transformation objectives.
Organizations are measuring AI’s upside but not systematically measuring its downside risk. That asymmetry is where the real danger lies.
Actions to Consider
As leaders embark upon AI interoperability in healthcare, it’s important to be aware of the cascading risks and how to help address them. Here are some critical steps to consider.
- Establish an AI Steering Committee. Guide your organization through decisions concerning AI risk, responsible use, and security.
- Adopt a defense-in-depth posture for AI. Require verification of identity and behavior at every connection point between systems rather than relying on a single checkpoint.
- Deploy AI security posture management. Use specialized tools to gain visibility into model behavior, training data integrity, and API connections.
- Apply rigorous input validation. Enforce strict sanitization to filter malicious data, particularly in RAG systems.
- Strengthen API security. Enforce strong authentication, rate limiting, and continuous monitoring on all API endpoints connecting AI systems (see the first sketch after this list).
- Retain human-in-the-loop oversight. Maintain human review for high-risk actions so automated systems cannot execute consequential decisions unchecked (see the second sketch after this list).
- Inventory and govern AI agent identities. Track agentic AI permissions, model details, and usage metrics, and decommission and revoke credentials for inactive agents and bots.
- Perform a privacy risk assessment. Identify and evaluate risks to the confidentiality, integrity, and permitted use of PHI across people, processes, systems, and vendors.
- Measure downside risk, not just upside. Build governance metrics around hallucination exposure, data provenance, auditability, and accountability.
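For the API security item, a token bucket is one common rate-limiting primitive; the hedged sketch below shows only its core logic. The capacity and refill figures are placeholders, and production deployments typically enforce this at an API gateway or shared store rather than in process memory.

```python
# Minimal sketch of token-bucket rate limiting for an AI-facing API endpoint.
# Bucket size and refill rate are illustrative assumptions.

import time


class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)      # start full: allows an initial burst
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should return HTTP 429 (Too Many Requests)


bucket = TokenBucket(capacity=5, refill_per_sec=1.0)  # 5 burst, 1 request/sec sustained
results = [bucket.allow() for _ in range(7)]
print(results)  # first 5 True; the rest False until tokens refill
```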
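As a concrete anchor for the human-in-the-loop item, here is a minimal Python sketch of gating high-risk agent actions behind a review queue. The action names, risk tiers, and queue mechanism are hypothetical; the point is the deny-by-default routing of consequential actions to a human before anything irreversible happens.

```python
# Minimal sketch of a human-in-the-loop gate: high-risk agent actions are
# queued for human approval instead of executing automatically. Action names
# and risk tiers are illustrative assumptions.

from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"submit_claim", "change_permissions", "send_prior_auth"}


@dataclass
class AgentAction:
    agent_id: str
    action: str
    payload: dict


pending_review: list[AgentAction] = []


def execute(action: AgentAction) -> str:
    if action.action in HIGH_RISK_ACTIONS:
        pending_review.append(action)  # hold for a human reviewer
        return f"QUEUED for review: {action.action} by {action.agent_id}"
    return f"EXECUTED: {action.action} by {action.agent_id}"  # low-risk path


print(execute(AgentAction("doc-agent-01", "draft_note", {"visit": "A12"})))
print(execute(AgentAction("billing-agent-02", "submit_claim", {"claim": "C9"})))
# A reviewer workflow would then drain `pending_review`, approving or
# rejecting each queued action before it executes.
```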
How Forvis Mazars Can Help
Forvis Mazars helps healthcare organizations take practical, structured first steps toward effective AI governance. We begin with focused discovery and use-case identification to clarify where AI can deliver meaningful value, which use cases introduce elevated risk, what data each use case depends on, and where governance capabilities may need to mature. From there, we help develop a strategy road map that sequences governance alongside AI adoption, not after it.
That road map can address priority use cases, risk tiers, policy considerations, architectural implications, control expectations, and assurance activities. Establishing this go-forward strategy before implementation can help reduce friction, align stakeholders, and support more confident decision making as AI initiatives scale.
Professionals at Forvis Mazars bring a risk- and technology-aware perspective that can help healthcare organizations move forward purposefully and strategically. Our team can help provide a clear path to capturing AI’s potential while managing the associated complexity and risk.
To discuss how Forvis Mazars can help your organization with AI interoperability in healthcare, connect with us today.
Related Reading
- AI Governance in Healthcare: What Leaders Should Ask
- Building Responsible AI: Readiness, Governance, & Practical Steps
- When Agentic AI Browsers Outrun Governance
- Disruption-Focused Cyberattacks & Operational Resilience
- Cybersecurity in 2026: Responsible AI Defense
- Mindsets 2026 Healthcare Executive Leadership Report
- C-Suite Barometer: Executive Leadership Insights in the US