Certifying AI With HITRUST’s Common Security Framework

Learn how a HITRUST framework can benefit your organization.

Effective Artificial Intelligence (AI) governance calls for thoughtful collaboration across disparate functions, including, but not limited to, compliance, IT, data, Model Risk Management (MRM), and cybersecurity. An overall governance framework relies on processes, standards, and guardrails that cover the lifecycle of an AI system: use case definition, data gathering, modeling and learning, deployment, business use, and monitoring. Throughout the AI lifecycle, this framework should address risks related to algorithmic bias, model transparency, data privacy, cybersecurity, and changing regulations, among others. When developing internal processes, understanding the key pillars of an AI governance framework is essential for organizations to maintain regulatory alignment and operational integrity.

Current compliance frameworks do not adequately address the new threats introduced by AI. HITRUST is among the first certifiable frameworks to incorporate AI risk management and security for AI systems. By evaluating whether an AI model is rule-based, predictive, or generative, HITRUST’s Common Security Framework (CSF) helps organizations select the most appropriate controls to protect the information the model uses.

As AI continues to evolve, organizations must consider whether covered and protected information will be used to train or tune a model, or to enhance its outputs through retrieval-augmented generation (RAG). HITRUST evaluates this risk and incorporates specific requirements to help protect this sensitive information. Organizations can now leverage a best-in-industry framework to assess their AI environments without having to develop their own set of controls or requirements. A HITRUST e1 assessment provides an accessible yet detailed approach encompassing 44 essential security requirements. These controls can be customized to incorporate AI-specific elements, as outlined below:

  • e1 plus AI Risk Management: 94 total requirements, including the 44 essential security requirements
  • e1 plus Security for AI Systems: 71 to 88 total requirements, including the 44 essential security requirements
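
To make the RAG data-protection concern above concrete, here is a minimal, illustrative sketch of one common safeguard: scrubbing obvious identifiers from documents before they are embedded and indexed for retrieval. The regex patterns and sample text are hypothetical assumptions for illustration only; HITRUST does not prescribe this code, and a production pipeline would use a vetted PII-detection tool scoped to the organization’s own data-classification rules.

    import re

    # Hypothetical patterns (SSNs, emails, US phone numbers) for illustration;
    # a real pipeline would rely on a vetted PII-detection service.
    REDACTION_PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with labeled placeholders before indexing."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label}]", text)
        return text

    doc = "Contact Jane at jane.doe@example.com or 555-867-5309. SSN 123-45-6789."
    print(redact(doc))
    # -> Contact Jane at [REDACTED-EMAIL] or [REDACTED-PHONE]. SSN [REDACTED-SSN].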

Forvis Mazars has found that most organizations will benefit from the AI Risk Management factor within HITRUST, as many have outsourced the development of their Large Language Models (LLMs). The AI Risk Management factor is based on the NIST AI Risk Management Framework (AI RMF) and ISO/IEC 23894 and addresses the following:

  • Detailed Documentation & Alignment: The organization maintains detailed records on both external and internal factors influencing AI, including legal, ethical, societal, and technological aspects, and aligns AI system development with organizational goals.
  • Structured Risk Management Framework: The organization implements a robust risk management framework that includes risk identification, analysis, and assessment, while helping to ensure accountability, stakeholder engagement, and regular impact evaluations.
  • Leadership Commitment & Communication: Senior management demonstrates strong commitment through formal policies, resource allocation, and clear communication of responsibilities, integrating risk management into the organizational culture.
  • Adaptability & Continuous Improvement: The organization continuously adapts its risk management practices to internal and external changes, evaluates emerging risks, and updates processes based on lifecycle stages and the trustworthiness of data.
  • Formalized Risk Treatment & Integration: The organization uses both qualitative and quantitative methods to prioritize and treat AI risks (a brief scoring sketch follows this list), integrates treatment plans into broader management processes, and helps to ensure ongoing performance tracking and improvement.
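
As a loose illustration of quantitative prioritization, one common approach scores each AI risk by likelihood and impact and ranks the results. The risks, scales, and scores below are invented for illustration; neither HITRUST nor the NIST AI RMF mandates any particular scoring model.

    # Hypothetical risk register: (risk description, likelihood 1-5, impact 1-5).
    register = [
        ("Training data contains unredacted PHI", 4, 5),
        ("Model drift degrades output quality", 3, 3),
        ("Prompt injection leaks system context", 2, 4),
    ]

    # Score = likelihood x impact; treat the highest-scoring risks first.
    for risk, likelihood, impact in sorted(register, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{likelihood * impact:>2}  {risk}")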

Security for AI Systems focuses on the requirements and controls organizations need to put in place if they are developing LLMs themselves. It addresses the following:

  • Governance, Accountability, & Documentation: The organization explicitly includes AI systems within its policies across areas such as security, data governance, and risk management; formally defines roles and responsibilities; and assigns human accountability for AI outputs and decisions. The organization also maintains detailed documentation on AI system components, lifecycle, and required resources.
  • Change Management, Security, & Monitoring: AI models and related assets are versioned, tracked, and subjected to documented change control processes. The organization conducts regular AI-specific security assessments, such as red teaming and penetration testing; monitors for adversarial inputs; and verifies the integrity of AI components using cryptographic methods (see the first sketch following this list).
  • Access Control & Operational Safeguards: Access to AI systems, APIs, engineering environments, and tools is tightly controlled using least privilege principles and multifactor authentication. The organization enforces rate limiting (see the second sketch following this list), encrypts communication channels, and reduces output specificity to mitigate potential attacks.
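
As a brief illustration of the cryptographic integrity control above, the sketch below pins a SHA-256 digest for a model artifact and checks it before deployment. The file name and digest are hypothetical placeholders; in practice, the expected hash would come from a signed manifest or a model registry entry.

    import hashlib
    from pathlib import Path

    def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the file through SHA-256 so large model weights need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_artifact(path: Path, expected_hex: str) -> bool:
        """Compare the computed digest against the value pinned at release time."""
        return sha256_digest(path) == expected_hex

    # Hypothetical usage: refuse to load weights whose digest has drifted.
    # if not verify_artifact(Path("model-v3.bin"), EXPECTED_DIGEST):
    #     raise RuntimeError("Model artifact failed integrity check")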
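
Similarly, the rate limiting mentioned in the last bullet can be as simple as a per-client token bucket in front of the model’s API. This is a minimal, single-process sketch for illustration only; names such as client-123 are hypothetical, and a production service would use a shared store and return an HTTP 429 response rather than raising an error.

    import time
    from collections import defaultdict

    class TokenBucket:
        """Minimal per-client token bucket; illustrative, not production-ready."""

        def __init__(self, rate_per_sec: float, burst: int):
            self.rate = rate_per_sec
            self.burst = float(burst)
            self.tokens = defaultdict(lambda: float(burst))  # each client starts full
            self.stamp = defaultdict(time.monotonic)         # last refill time per client

        def allow(self, client_id: str) -> bool:
            now = time.monotonic()
            elapsed = now - self.stamp[client_id]
            self.stamp[client_id] = now
            # Refill proportionally to elapsed time, capped at the burst size.
            self.tokens[client_id] = min(self.burst, self.tokens[client_id] + elapsed * self.rate)
            if self.tokens[client_id] >= 1.0:
                self.tokens[client_id] -= 1.0
                return True
            return False

    limiter = TokenBucket(rate_per_sec=2.0, burst=5)
    if not limiter.allow("client-123"):  # gate each inference call
        raise RuntimeError("Rate limit exceeded")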

As outlined above, AI governance is essential to help ensure ethical, transparent, and compliant AI practices. HITRUST’s established, threat-adaptive approach reduces the complexity of managing AI controls across multiple frameworks by consolidating a tailored set of measurable standards into a single framework that organizations can use to benchmark their AI risk management efforts. Organizations seeking a detailed solution to help assess AI should consider HITRUST for their strategic compliance initiatives.

For more information, reach out to a professional at Forvis Mazars today.
