
Health AI at a crossroads: Governance and transparency in 2026

Overview


Artificial intelligence (AI) is already transforming clinical and operational workflows, but many health systems are still developing the governance and legal architecture required to scale it responsibly. As federal and state scrutiny intensifies and AI increasingly influences care delivery, margin, and compliance, executive teams are looking to move beyond experimentation toward structured oversight. McDermott Will & Schulte examines the evolving risk landscape, outlines key regulatory and contracting considerations, and highlights three critical questions leadership teams should ask as they operationalize AI with accountability and confidence.

In Depth


Artificial intelligence (AI) has moved from emerging concept to operational reality in healthcare and is embedded in daily workflows across the enterprise. Clinical documentation tools draft notes in real time. Revenue cycle platforms automate claims review. Predictive models identify high-risk patients before deterioration. Patient engagement platforms triage inbound messages.

For health system leadership, the question is no longer whether to adopt AI, but how to operationalize it responsibly, strategically, and at scale. The organizations that lead in 2026 will be the ones building the governance and legal architecture required to steer AI adoption.

AI throughout the enterprise

AI in healthcare extends far beyond generative chatbots. It includes machine-based systems that analyze data and generate predictions, recommendations, or decisions that influence clinical and operational workflows. Often, these capabilities are embedded within electronic health record (EHR) platforms or layered into the vendor products health systems rely on every day.

That reality can create a governance challenge if AI is already functioning across the enterprise without a centralized strategy. Departments may pilot tools independently, and vendors may introduce AI functionality through routine product updates. Clinical leaders also may test solutions that carry regulatory implications.

For executive teams, visibility is the starting point. Without a clear understanding of where AI is being used, how it functions, and what oversight applies, responsible scaling can become significantly more difficult.

An operational risk landscape

Even as AI has proliferated, scrutiny of it has intensified. Health systems face a multidimensional risk environment that spans clinical performance, regulatory exposure, privacy, intellectual property, and reputational impact.

Inaccurate outputs or model drift can affect patient care, and biased algorithms can produce unequal outcomes. The use of protected health information (PHI) in model training raises Health Insurance Portability and Accountability Act of 1996 (HIPAA) considerations, and AI deployment can introduce intellectual property questions, including ownership of AI-generated outputs and potential liability tied to training data sources. Even vendor marketing claims may create enforcement exposure if overstated.

Regulators are responding accordingly. Federal agencies continue to apply existing frameworks, including medical device oversight and consumer protection law, to AI-enabled tools. State attorneys general are examining whether AI systems are marketed and deployed responsibly, particularly in healthcare settings, where patient trust is foundational. As a result of this scrutiny, AI has shifted from an innovation discussion to a governance priority for boards and executive teams.

Navigating a fragmented regulatory environment

There is no single federal AI statute governing healthcare. Instead, organizations navigate layered oversight from myriad intersecting authorities: HIPAA, US Food and Drug Administration software guidance, Office of the National Coordinator for Health Information Technology transparency requirements for predictive decision support, Federal Trade Commission enforcement authority, Centers for Medicare & Medicaid Services oversight in value-based programs, and a growing body of state legislation.

States are especially active. Legislatures have introduced hundreds of AI-related bills, many focused specifically on healthcare applications. Emerging laws increasingly address patient disclosure when AI is used in care, safeguards against algorithmic discrimination, transparency expectations, and documentation requirements for higher-risk systems. Accordingly, multi-state health systems should structure their compliance strategies and governance frameworks to adapt as requirements evolve.

Moving from policy to enterprise governance

Many organizations began their AI journey with an acceptable use policy focused on generative tools. That is an important first step, but it is not always sufficient for enterprise deployment.

A mature AI governance framework addresses how AI is identified, evaluated, approved, monitored, and documented across the organization. It establishes a standardized intake process for new use cases and defines decision rights at each stage of deployment. It also assigns accountability for supervision, validation, and incident response, and sets risk tiers that determine required levels of human oversight and documentation. Under an effective governance framework, performance monitoring and bias evaluation are ongoing operational obligations rather than one-time reviews.

To facilitate ongoing accountability, many leading health systems are formalizing cross-functional AI committees that bring together legal, compliance, information technology, privacy, clinical leadership, and operations. These groups centralize decision making and ensure innovation proceeds within defined guardrails.

Governance should also extend to contracting, since AI risk frequently enters through third-party relationships. Vendor agreements should address whether health system data may be used for model training, who owns AI-generated outputs, what human review standards apply, how regulatory change risk is allocated, and what transparency and audit rights the organization retains.

AI initiatives also raise important data governance questions. Many health systems operate on data architectures designed for EHR adoption, not AI-enabled decision support. Clear data lineage, disciplined PHI controls, and transparency regarding how vendors use and retain system data should be strategic prerequisites. Strong data governance fosters ongoing innovation by ensuring AI tools are built on reliable foundations.

Across health systems nationally, one consistent theme has emerged: organizations that proactively align governance, contracting, data oversight, and operational accountability are better positioned to scale AI safely than those that react after deployment.

Three questions every executive team should ask

As planning cycles accelerate, leadership teams should be prepared to answer three foundational questions:

  • Do we have enterprise-wide visibility into AI use, including tools embedded within broader technology platforms?
  • Are we prepared for expanding disclosure and transparency requirements, including potential patient notification obligations?
  • Do our vendor contracts reflect the realities of AI deployment, including data rights, output ownership, indemnification, and audit provisions?

The next phase of AI adoption will be defined by execution rather than experimentation. Health systems that translate governance principles into structured implementation plans, defined accountability, and disciplined oversight will be positioned to move faster and with greater confidence than those improvising in real time.

Where health system leaders are focusing next

These governance and execution questions are increasingly central to board and executive discussions across the United States. Join leading health system executives, investors, and industry advisers in Nashville, Tennessee, on May 20 – 21 at McDermott HealthEx 2026. The program will explore how AI, deal activity, and enterprise strategy are reshaping the provider landscape. May 21 will feature a full day of programming specifically designed for health system leadership.

All health system members of The Health Management Academy are eligible for complimentary access to the event. Industry members may also qualify for complimentary or discounted registration. For more information, please contact us directly.