
In this article, Tina Joros, JD, chair, and Stephen Speicher, MD, vice chair of the EHR Association AI Task Force, explore the importance of a risk-based approach to AI and machine learning in healthcare.
The regulatory landscape for artificial intelligence (AI) and machine learning (ML) continues to evolve. As the United States awaits the release of the AI Action Plan called for in the January 2025 Executive Order, Removing Barriers to American Leadership in Artificial Intelligence, and as federal agencies review their existing AI policies, we have seen an influx of proposed laws at the state level addressing concerns about the use of AI in healthcare technology. This state activity risks regulatory fragmentation, making it more complex for electronic health record (EHR) vendors, other health IT developers, and the healthcare providers using these systems to build, support, and adopt AI tools that help advance patient care.
The current AI landscape
The increased focus on AI among U.S. policymakers is highly relevant as health IT developers continue to build software capabilities that comply with the transparency requirements governing Decision Support Interventions (DSI) involving AI and ML, which are part of the federal EHR Certification Program. Many are also piloting or deploying generative AI solutions that may help resolve clinical and administrative challenges in the healthcare industry.
For example, a recent survey of the nearly 30 EHR companies that make up the Association – serving most U.S. hospitals and post-acute, specialty-specific, and ambulatory healthcare providers using EHRs and other health IT – found that asynchronous chatbots were among the top three use cases, for both clinical and non-clinical scenarios, that respondents already support or plan to support in 2025. Ambient listening and transcription topped the list for administrative use cases.
When EHR Association member companies assessed AI requests from the healthcare providers using their software, generative AI models for clinical decision support, clinical prior authorisation, asynchronous chatbots, and patient education ranked among the top five clinical requests.
Ambient listening and transcription, prior authorisation, patient communications, and benefits verification were the top non-clinical use case requests, followed by scheduling chatbots, predictive analytics, clinical documentation integrity, claim denial management, and appointment reminders.
Risk-based regulatory model
The developer community has consistently advocated for a federal framework for AI regulations to ease compliance burdens and establish uniform reporting and transparency standards. However, whether at the state or federal level, several core issues should be addressed in any AI/ML legislation or regulatory requirements. Topping that list is the need to focus on technologies with direct implications for high-risk clinical workflows, coupled with recognition that not all AI use cases have a direct or consequential impact on patient care and, ultimately, patient safety. This is fundamentally an argument for risk-based regulatory requirements.
Currently, many standard definitions of “high-risk AI” included in laws and regulations encompass any technology that could impact the provision of care. This includes clinical use cases (e.g., clinical decision support, medication decision support, clinical prior authorisation, clinical interventions, and predictive analytics), administrative workflows (e.g., scheduling and staffing), and supply chain, coding, and billing solutions. However, these use cases do not pose the same risk of adverse events to the patient.
A better approach would be more granular: differentiating between high- and low-risk workflows and leveraging existing frameworks that stratify risk based on probability of occurrence, severity, and positive impact or benefit. This would ease the reporting burden across the many technologies incorporated into an EHR that may be used at the point of care.
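To make the idea concrete, the sketch below shows one way a developer might stratify AI use cases with a probability-times-severity matrix, in the spirit of the risk frameworks referenced above. The scales, tier cut-offs, and example scores are illustrative assumptions, not drawn from any statute or from the Association's survey.

```python
from dataclasses import dataclass
from enum import IntEnum

# Illustrative only: the scales, tiers, and example scores are assumptions,
# not taken from any statute or from the EHR Association's framework.

class Likelihood(IntEnum):
    RARE = 1
    OCCASIONAL = 2
    FREQUENT = 3

class Severity(IntEnum):
    NEGLIGIBLE = 1   # e.g., a mistimed appointment reminder
    MODERATE = 2     # e.g., a delayed prior-authorisation decision
    CRITICAL = 3     # e.g., an unsafe medication recommendation

@dataclass
class AIUseCase:
    name: str
    likelihood: Likelihood  # chance an error reaches the patient
    severity: Severity      # worst credible harm if it does

def risk_tier(uc: AIUseCase) -> str:
    """Map likelihood x severity onto hypothetical regulatory tiers."""
    score = uc.likelihood * uc.severity
    if uc.severity == Severity.CRITICAL or score >= 6:
        return "high"    # human-in-the-loop plus full transparency reporting
    if score >= 4:
        return "medium"  # lighter-weight transparency obligations
    return "low"         # basic documentation only

for uc in [
    AIUseCase("medication decision support", Likelihood.OCCASIONAL, Severity.CRITICAL),
    AIUseCase("scheduling chatbot", Likelihood.FREQUENT, Severity.NEGLIGIBLE),
    AIUseCase("claim denial management", Likelihood.OCCASIONAL, Severity.MODERATE),
]:
    print(f"{uc.name}: {risk_tier(uc)} risk")
```

Under a scheme like this, a scheduling chatbot and a medication advisor land in different tiers and carry different reporting obligations, which is exactly the granularity the risk-based model calls for.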
In addition, for true high-risk use cases, “human-in-the-loop” or “human override” safeguards should be built into the implementation and use of these tools, along with other reasonable transparency requirements. End users should also be required to adopt workflows that prioritise human-in-the-loop principles when using AI tools in patient care.
Ensuring that the human using the AI tool is appropriately trained to intervene is also key to making the human override principle add value and mitigate risk.
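As a hypothetical illustration of how a human override safeguard and this training requirement could fit together in software, the sketch below gates high-risk recommendations behind sign-off from a reviewer trained on the tool. All names and fields are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical human-override gate; the names and fields are invented.

@dataclass
class Recommendation:
    text: str
    risk_tier: str          # "low", "medium", or "high" (see earlier sketch)

@dataclass
class ClinicianReview:
    reviewer_id: str
    trained_on_tool: bool   # reviewer completed tool-specific training
    approved: bool

def may_apply(rec: Recommendation, review: Optional[ClinicianReview]) -> bool:
    """Return True if the recommendation may be written to the chart."""
    if rec.risk_tier != "high":
        return True   # low/medium risk: no mandatory sign-off in this sketch
    if review is None:
        return False  # high risk: a human must be in the loop
    # An approval only counts if the reviewer is trained to intervene.
    return review.trained_on_tool and review.approved

rec = Recommendation("Adjust anticoagulant dose", risk_tier="high")
print(may_apply(rec, None))                                   # False: blocked
print(may_apply(rec, ClinicianReview("dr-123", True, True)))  # True: approved
```

Note that training status is part of the gate: an approval from a reviewer who has not been trained on the tool does not count, reflecting the point above.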
Other core issues
Along with taking a risk-based approach and keeping humans in the intervention loop, AI regulations should prioritise outcomes and risk mitigation over prescribing technical specifications for how AI tools must be built. The healthcare sector, with its emphasis on patient safety, requires distinct considerations compared to consumer technologies in other sectors, and those differences should be reflected in regulations.
As such, regulators should focus on addressing their primary concerns and the most significant risks to patients. That means allowing developers to enhance their existing software development lifecycles with appropriate safeguards rather than mandating specific steps and stages for development, training, and implementation.
Regarding liability, the recommended approach is to apply existing medical malpractice frameworks to cases that may involve AI technologies. EHR developers have a responsibility to follow safety best practices when developing and deploying AI tools, including transparently sharing the information critical to their informed use. However, they cannot be held responsible for harm caused by the inappropriate use of an AI tool for a particular patient. Clinicians are best positioned to evaluate the appropriateness of an AI-enabled tool for a specific patient and to obtain informed consent when required. They also have an obligation to review all AI-sourced recommendations before taking action that will affect a patient.
Additionally, to avoid widening the nation’s already significant digital divide, requirements stemming from public policy should be manageable and applicable to both large health systems and small independent clinics. Doing so ensures equitable access to AI tools, regardless of an organisation’s size.
Regulations and guidance for the adoption, use, and post-implementation monitoring of AI tools must also be reasonable and considerate of diverse care settings and capabilities. This will maximise the opportunity for widespread adoption and effective use of AI technologies.
The importance of ongoing monitoring
Ongoing post-deployment monitoring of AI models is essential to ensure quality and mitigate the risk of model drift, particularly given the nature of generative AI. The constantly evolving health information and data landscape heightens the risk that models will become outdated over time. As such, the most impactful rules will require transparency regarding EHR developers’ plans to keep post-launch information current and ensure visibility into when a model was last updated. Transparency in developers’ update practices is key to maintaining trust and reliability in AI tools, ultimately improving patient care.
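One common way to operationalise drift monitoring, offered here as an illustrative sketch rather than a required method, is to compare the distribution of a model's outputs today against a baseline captured at deployment using the population stability index (PSI). The categories, data, and alert threshold below are assumptions.

```python
import math
from collections import Counter

# Illustrative drift check using the population stability index (PSI).
# The categories, data, and alert threshold are assumptions for the sketch.

def psi(baseline: list, current: list, eps: float = 1e-6) -> float:
    """PSI over categorical outputs; higher values indicate more drift."""
    b_counts, c_counts = Counter(baseline), Counter(current)
    total = 0.0
    for cat in set(baseline) | set(current):
        b = max(b_counts[cat] / len(baseline), eps)
        c = max(c_counts[cat] / len(current), eps)
        total += (c - b) * math.log(c / b)
    return total

# Distribution of a hypothetical triage model's outputs at launch vs. now.
at_launch = ["routine"] * 80 + ["urgent"] * 15 + ["emergent"] * 5
this_month = ["routine"] * 55 + ["urgent"] * 33 + ["emergent"] * 12

score = psi(at_launch, this_month)
# A common rule of thumb: PSI above 0.2 warrants investigation.
status = "investigate" if score > 0.2 else "stable"
print(f"PSI = {score:.3f} -> {status}")
```

In practice, the choice of drift metric, the cadence for refreshing the baseline, and the alert thresholds would all be part of the post-launch transparency information developers share with end users.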
This approach to AI regulations will help maintain the overall quality of AI models and provide end-users with the necessary information to determine whether a tool is appropriate for a particular patient. It also allows developers some flexibility to deploy low-risk AI capabilities without additional burden.
As the regulatory landscape in the United States continues to change, we hope to see a reasonable AI development and deployment framework that balances risk with the real-world benefits of using these new tools for patient care.
About the Authors
Tina Joros, JD, of Veradigm, is Chair of the EHR Association AI Task Force, and Stephen Speicher, MD, of Flatiron Health, is Vice Chair.