Webinar

AI: How to manage cybersecurity and data governance risks

Since OpenAI publicly released its ChatGPT tool in the fall of 2022, artificial intelligence (AI)—and, specifically, generative AI—has proliferated. With ever-increasing speed and complexity, new AI systems, vendors, platforms and capabilities have burst onto the scene.

Drawn by AI’s promise to reduce inefficiencies, perfect operations and improve customer-facing products and services, organizations are employing the technology in bolder and more creative ways each day.

By 2026, more than 80% of organizations will have used generative AI-enabled systems—a dramatic increase from less than 5% in 2023 (“Real Power of Generative AI: Bringing Knowledge to All,” Gartner, 17 Oct. 2023).

The benefits of AI

AI can accelerate decision-making processes, increase employee productivity, interact with customers, collect and organize vast amounts of data, make accurate predictions/forecasts and more.

Generative AI has come a long way in its comparatively short existence: it can already produce text, images, videos, presentations and even music in mere seconds. With all this creative ability, the question is not SHOULD your organization use AI, but WHEN and HOW.

Through these capabilities, generative AI can help organizations achieve their top objectives—reducing costs and increasing revenue in their core business, creating new businesses and/or sources of revenue, and increasing the value of offerings by integrating AI-based features or insights—in essence, creating more value without extraordinary effort.

And as with most technological innovations, this new creative power comes with new challenges and risks. Even if your organization is not implementing bleeding-edge AI tools and techniques, such capabilities are likely already in use by the software and service vendors that your organization relies on to accomplish its goals.

The challenges and risks of AI

AI has introduced new risks and compliance challenges—and elevated existing ones—many of which are specific to AI platforms. Organizations need to be proactive in considering how to leverage AI in a timely, efficient and responsible manner. AI risk considerations should be more than afterthoughts; organizations should appropriately adapt practices and approaches from existing control frameworks.

At the forefront of these risk considerations are two main themes: cybersecurity and data governance. As you consider the relevant risks facing your organization, ask these questions:

  • How will we ensure safety, security and compliance when utilizing AI platforms?
  • How might real-world examples play out?
  • How are these risks any different with AI-enabled systems?
  • How do we tailor our approach to address these specific risks, especially when we may not fully understand the risks’ impacts and likelihoods?
  • How do we create an effective set of internal controls around using these platforms?

At the most fundamental level, these questions will illuminate the need to update your organization’s approach to cybersecurity and data governance. But where do you start?

The governance of AI

Your organization should pursue risk management by employing controls that are appropriate and adequate given its level of AI adoption and its risk tolerance.

To do that effectively, you should first establish an AI governance framework. An ad hoc approach may work for the first few AI use cases, but it won’t scale as your organization’s use of AI grows.

A robust approach to AI governance incorporates people, processes and technology—for each possible AI use case or scenario—in accordance with your organization’s risk tolerance. A proper approach to AI governance seeks to ensure transparency and risk management (including cybersecurity, privacy, compliance and operational risks), fairness and inclusiveness (by avoiding detrimental bias and discrimination) and accountability (especially for the safety of humans).

This approach should include a few key activities:

  • Adopt a team approach to assign roles and responsibilities for AI governance. This is not a one-person show. Bring together the right stakeholders from across the organization: technology, legal, compliance, data governance and cybersecurity. Build a diverse initial team, then be flexible and adjust as needed.
  • Develop a clear decision-making process for AI implementation and use. Some organizations provide broad, overarching guidelines and then encourage their employees to be entrepreneurial in their AI usage while adhering to those guidelines. Other organizations take a strict approach, enabling only approved employees to utilize approved AI tools that have been rigorously vetted and monitored.
  • Deploy a defined AI framework. Whether you use the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) AI risk management guidelines or HITRUST, it’s critical to apply the AI governance approach consistently.

The HITRUST approach to risk management in AI

HITRUST has developed an approach to AI and data governance grounded in its existing focus on cybersecurity and regulatory compliance.

HITRUST’s AI assurance program delivers practical and scalable assurance for AI risk and security management. It helps organizations that use AI to shape their AI risk management efforts, understand best practices in AI security and AI governance, evaluate their AI control environments through self-assessments and validated assessments, and achieve an AI security certification that can be shared with internal and external stakeholders. The HITRUST AI assurance program is built around five components: industry collaboration, harmonization of AI resources, shared AI responsibilities, AI security certification and AI risk management reporting.

HITRUST is actively engaging with AI industry leaders and external assessors (professional services firms like Baker Tilly) to identify new AI risks and threats, refine the safeguards needed to manage identified risks and continually improve AI assurance offerings.

HITRUST is incorporating over four dozen AI-centric authoritative sources—i.e., externally developed, information-protection-focused frameworks, standards, guidelines, regulations or laws—into one harmonized framework (the HITRUST CSF). These sources include ISO standards, NIST special publications, the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR) and even state-level laws like the California Consumer Privacy Act (CCPA). Incorporating all of these sources into one harmonized omni-framework allows organizations to focus on implementation instead of monitoring updates and combing through the fine print themselves.

Coming in Q4 of 2024, HITRUST will leverage its proven shared responsibility model and inheritance program to:

  • Provide a relevant and actionable framework for assessing and managing AI shared responsibilities
  • Align control performance expectations between AI service providers and their user organizations
  • Enable inheritance of results of AI-centric control testing from AI service providers to their customers

Also coming in Q4 of 2024, HITRUST will offer an AI security certification to enable organizations to demonstrate the strength of their AI security program. The entire HITRUST assessment and certification portfolio is expanding to support the addition of AI controls and the issuance of an accompanying AI security certification to proactively address questions and concerns over AI security.

Coming in Q3 of 2024, HITRUST’s AI risk management insights report will provide key inputs to conversations about an organization’s AI risk management program. Adding the new “AI risk management” authoritative source into a HITRUST CSF assessment will add roughly 50 new AI risk management-focused requirements and include scorecards, assessment results and narratives evaluating the organization’s AI risk program against both of the following:

  • NIST AI Risk Management Framework (AI RMF) 1.0
  • ISO/IEC 23894:2023, Information technology – Artificial intelligence – Guidance on risk management

The steps for your organization to govern AI

In the ever-evolving world of AI, your organization has many elements to consider. And the considerations of tomorrow will likely differ substantially from those of today. Where should your organization start?

Baker Tilly recommends the following initial steps to establish AI governance:

  • Establish a cross-disciplinary AI working group
  • Identify current AI use cases at a minimum, along with likely future use cases
  • Develop AI principles specific to your organization
  • Draft an AI strategy and goals
  • Assess AI risks and identify improvements to practices
  • Create an AI roadmap to implement and improve AI practices based on those risks
  • Educate your stakeholders on AI principles, strategy and risks
  • Report to the board and leadership

As organizations integrate AI, it becomes imperative to navigate this complex landscape with a comprehensive governance program. Baker Tilly offers a full scope of AI services, from strategy and governance to design and implementation, to help your organization navigate AI complexity and embrace a proactive approach to risk management so that you can harness the transformative power of AI.

Mike Cullen
Principal