Lessons from HIMSS: AI will not fix healthcare; informed leadership might

Published on: April 10, 2026

By Gil Bashe, Chair of Global Health and Purpose at FINN Partners and a Health Tech World Correspondent. Bashe is the author of Healing the Sick Care System: Why People Matter.

At the HIMSS Global Health Conference, the conversation around artificial intelligence felt different this year. Not quieter, but more grounded.

The industry is moving beyond fascination with what AI could do and confronting what it is already doing inside systems where adoption has outpaced governance, leaving leaders to catch up in real time.

AI Has Already Entered the System, Quietly and Unevenly

During the “View from the Top: Recognizing the ‘Value Proposition’ Criteria While Selecting AI Applications” session, Hal Wolf, President and CEO of the Healthcare Information and Management Systems Society (HIMSS), made clear that the question is no longer whether AI belongs in healthcare.

That debate is behind us. What matters now is how it is governed responsibly, a shift that signals the move from experimentation to accountability.

What we are witnessing is not the introduction of a tool. It is the redistribution of decision-making across clinical, operational and administrative environments.

Those decisions must increasingly be measured not by potential, but by performance.

Healthcare has historically favoured deliberate change. Pilot programmes and controlled rollouts have defined how innovation enters the system. AI is not waiting for that cadence.

In discussion, Isaac Kohane, MD, PhD, Chair of the Department of Biomedical Informatics at Harvard Medical School, emphasised that AI is already embedded in clinical reasoning in ways not always visible to leadership.

These tools are becoming part of how decisions are formed rather than remaining external aids.

Systems are not integrating AI through structured adoption. They are absorbing it through necessity and through physicians’ independent use.

As a result, use is advancing faster than understanding. AI is influencing care, yet many organisations cannot clearly measure its impact on outcomes, risk, or cost.

Without that measurement discipline, adoption expands while accountability lags.

When Adoption Outpaces Governance

There is a persistent belief that AI will correct inefficiencies in healthcare. It will not.

During the exchange, Ran Balicer, MD, Chief Innovation Officer at Clalit Health Services, emphasised that data-driven systems reflect the structures and incentives in which they operate.

If those structures are fragmented, the outputs will be as well. AI does not resolve misalignment. It accelerates it.

This is where outcomes must take precedence over intent.

If AI cannot demonstrate measurable improvement in diagnosis, treatment or patient experience, it is simply increasing the speed of existing inefficiencies.

The lesson is not to slow innovation, but to anchor it in evidence.

What became most visible at HIMSS is that the challenge is no longer technological readiness. It is leadership readiness.

Organisations are investing heavily in AI, yet governance, measurement, and accountability often trail deployment.

The contrast between industry behaviour and regulatory direction is becoming increasingly difficult to ignore.

The Food and Drug Administration (FDA) is not approaching AI as a pilot exercise.

It is treating it as a lifecycle responsibility. Its guidance for AI-enabled technologies makes clear that safety and effectiveness must be continuously demonstrated, not assumed at approval.

This includes predefined change controls, real-world performance monitoring, and lifecycle oversight.

More than 1,200 AI-enabled medical devices have already been authorised in the United States, many of which are embedded in diagnostics and imaging systems.

AI is not emerging. It is operational.

Across Europe, regulatory leadership is even more explicit.

The European Union, through its AI Act, classifies healthcare AI systems as high risk, requiring strict standards for transparency, safety, data governance, and human oversight.

These requirements sit alongside the Medical Device Regulation and the GDPR, creating a comprehensive set of expectations for how AI must perform in real-world care.

At the same time, the European Medicines Agency and the FDA have aligned on principles for good AI practice across the lifecycle of medicines, reinforcing a shared global direction.

Regulators are defining AI as a managed system requiring continuous oversight and measurable performance.

Many health organisations are still treating it as a deployed tool. That is the leadership gap.

The Leadership Gap Is Now the Risk

AI is already influencing diagnostics, clinical decisions, and workflows. Yet many organisations cannot answer fundamental questions with precision.

Where is AI improving patient outcomes? Where is it introducing risk? Where is it reducing cost rather than adding complexity?

These are no longer strategic questions. They are operational expectations.

For AI companies, the implications are clear. Innovation alone is insufficient.

As Dr. Kohane has long emphasised, the challenge is not simply building advanced models, but ensuring they are clinically meaningful, interpretable, and integrated into care delivery.

Healthcare does not need more demonstrations. It needs solutions that function reliably within clinical workflow.

Those solutions must prove themselves.

Improvement in patient outcomes must be measurable. Reduction in clinical error must be demonstrated.

Evidence must extend beyond controlled environments into real-world performance. Accuracy is no longer enough. Impact is the standard.

Hospital systems carry a parallel responsibility.

They are no longer evaluating AI remotely. They are living with it.

As Dr. Balicer has emphasised, innovation in healthcare must be paired with accountability, particularly when it affects patient outcomes at scale.

Governance must be structured, continuous, and empowered to intervene when performance falls short.

Economic discipline must also be part of the equation. Healthcare does not lack investment. It lacks organisational efficacy.

AI must reduce administrative burden, streamline workflows and create measurable savings that can be reinvested into patient care.

Without that, it risks becoming another layer of cost in an already strained system.

For biopharma, AI presents both opportunity and obligation. The potential to accelerate discovery and generate insight is significant, but so is the responsibility to ensure those insights are grounded in evidence.

Speed without validation will not build confidence among regulators, providers, or patients. Biopharma has an opportunity to lead in defining how AI-driven insights are validated and applied, not as a communications exercise, but as a scientific standard.

Through all of this, one truth remains constant. Healthcare is not a technology system. It is a human system supported by technology.

AI can enhance efficiency and inform decisions, but it cannot replace clinical judgment, empathy, or trust.

Those remain the responsibility of people.

Leadership must operate at the intersection of inspiration and execution. Vision sets direction, but operational excellence determines whether that direction holds.

AI does not need more momentum. It needs precision. It needs governance that matches its speed, evidence that proves its value, and accountability that ensures it serves patients rather than systems.

The organisations that succeed will not be those that adopt AI fastest. They will be those that manage it best.

They will align innovation with regulatory expectations, demonstrate measurable improvement in patient outcomes, and deliver economic value in a system that demands it.

That is how confidence is earned, not as aspiration, but as performance.
