
How the Rise of Artificial Intelligence Affects Patient Data

Safeguard patient data at your healthcare organization as AI technology becomes more prevalent.

Healthcare organizations are increasingly integrating artificial intelligence (AI) tools and applications into clinical, operational, and administrative workflows. From data management and revenue cycle operations to patient care and engagement, AI is transforming the business of healthcare. This shift also heightens the need for robust data privacy governance and obligates organizations to understand the evolving regulatory landscape, strengthen governance, and develop strategic approaches for responsible AI adoption.

Understanding the AI Regulatory Landscape

AI and data privacy regulations that may impact healthcare organizations are both complex and quickly evolving. In the U.S., there is still no all-encompassing federal law governing AI. Instead, healthcare organizations must navigate a patchwork of agency guidance, executive orders, state-level laws, and existing regulations such as HIPAA and the Health Information Technology for Economic and Clinical Health Act (HITECH).

In the past two years, federal agencies have introduced significant initiatives aimed at helping ensure that AI technologies in healthcare are deployed safely, equitably, and transparently.

In addition to federal actions such as Executive Order 14179: Removing Barriers to American Leadership in AI (2025), the National AI Initiative Act (2020), and the NIST AI Risk Management Framework (AI RMF), state governments have been proactive in regulating AI in healthcare, with more than 250 AI-related bills introduced across 46 states this past year.1

  • The Colorado AI Act, set to take effect in June 2026 (originally February 2026), imposes governance and disclosure requirements on entities deploying high-risk AI systems, particularly those involved in consequential decisions affecting healthcare services.
  • Other states like Utah, New York, and Nevada have enacted laws regulating mental health chatbots, requiring clear disclosures and protocols for detecting suicidal ideation.
  • California, Texas, and Illinois have passed laws focusing on transparency, human oversight in AI decisions, and restrictions on AI use in clinical care.

These state-level regulations indicate a growing emphasis on ethical AI deployment and patient protection in healthcare, yet the patchwork of state regulations creates compliance complexity for healthcare organizations operating across multiple jurisdictions. To stay ahead, organizations should build cross-functional teams to monitor legislation, engage in advocacy, and adopt robust risk management frameworks.

Strengthening Governance

As healthcare organizations adopt AI systems, they face increasing cybersecurity, privacy, and ethical vulnerabilities that demand robust governance. Cybersecurity threats are particularly pressing, with AI-powered attacks such as ransomware and phishing exploiting system weaknesses and outdated infrastructure. Privacy risks are equally significant: AI often requires access to large volumes of sensitive patient data, heightening the risk of unauthorized exposure of electronic protected health information (ePHI). The use of third-party AI tools adds another challenge, since vendors may not always uphold the same privacy and security standards.

To help reduce these risks, healthcare organizations need detailed governance strategies. The first step is establishing a multidisciplinary AI governance team that brings together stakeholders from clinical, legal, compliance, data science, and ethics domains. This team should oversee the entire AI lifecycle from pre-deployment validation to post-deployment monitoring to help keep systems safe, compliant, and effective. Vendor due diligence is also necessary. Contracts must explicitly address data usage, breach response protocols, and audit rights to help safeguard against third-party vulnerabilities. In addition, continuous auditing and compliance monitoring should be built into governance frameworks, helping organizations to detect and respond to privacy incidents or performance issues in real time.

Finally, governance must prioritize patient outcomes. AI tools should deliver measurable benefits to care delivery without compromising safety, privacy, or trust. By including these principles in their governance, healthcare organizations can responsibly leverage AI’s potential while protecting the patients they serve.

Developing Strategic Approaches for Responsible AI Adoption

From streamlining administrative tasks to improving diagnostic accuracy, AI brings potential benefits to healthcare organizations. Yet these advantages come with serious risks to patient privacy and data security. Vulnerabilities in AI systems can lead to data breaches, improper de-identification of PHI, or misuse of noncompliant third-party tools such as chatbots and transcription services. Using PHI for AI model training beyond the scope of treatment without explicit patient consent also raises the risk of HIPAA violations.

To address these challenges, compliance teams at healthcare organizations can navigate the intersection of AI innovation and HIPAA regulations by updating policies and implementing AI-specific safeguards, including:

  • Regular audits of AI systems to pinpoint vulnerabilities and help ensure ongoing compliance.
  • Rigorous vendor vetting to confirm that third-party AI tools meet HIPAA standards, with contracts explicitly covering data use, retention, breach protocols, and audit rights.
  • Staff education and training to help prevent misuse and build awareness of AI’s limitations.
  • Technical safeguards such as encryption, firewalls, and multi-factor authentication to help protect ePHI.
  • Strict de-identification protocols and patient consent processes when data is used beyond direct care.
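To make the de-identification point above concrete, the sketch below shows a minimal, illustrative redaction pass over clinical free text. The regex patterns, category labels, and sample note are all hypothetical; a real de-identification program must address every identifier category under the HIPAA Safe Harbor standard (or use expert determination) and would rely on vetted tooling rather than a few handwritten patterns.

```python
import re

# Hypothetical patterns for a few HIPAA Safe Harbor identifier categories.
# This is a sketch only: Safe Harbor covers 18 categories (names, geographic
# subdivisions, dates, record numbers, etc.), far more than shown here.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed category tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Fabricated example note -- no real patient data.
note = "Pt called 555-867-5309 on 03/14/2025; SSN 123-45-6789 on file."
print(redact(note))
# → Pt called [PHONE] on [DATE]; SSN [SSN] on file.
```

A redaction step like this would typically run before data leaves a controlled environment, paired with audit logging so the organization can demonstrate when and how identifiers were removed.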

Ultimately, protecting patient data during AI integration requires a multilayered approach that brings together technical, ethical, and regulatory best practices. By adopting these safeguards, healthcare organizations not only meet compliance obligations but also strengthen trust and uphold ethical standards in AI-enabled care environments.

How Forvis Mazars Can Help

Healthcare organizations can leverage the transformative potential of AI while safeguarding patient data and maintaining regulatory compliance. At Forvis Mazars, we emphasize regulatory excellence and strategic agility, aligning with these frameworks to support healthcare clients. For more information, please contact a professional at Forvis Mazars today.

  • 1“Manatt Health: Health AI Policy Tracker,” manatt.com, October 30, 2025.

