
At last week’s America’s Physician Groups Spring Conference in San Diego, California, our team heard firsthand how physicians are leading efforts to integrate Artificial Intelligence (AI) applications in ambulatory and inpatient settings at major healthcare systems across the nation. Physician and IT leaders described in detail their organizations’ efforts to identify safe, cost-effective, desirable ways to leverage AI to enhance the efficiency and quality of patient care and reduce physicians’ administrative workload. Here, we highlight key approaches that have generated early success for various health systems and physician groups, as well as key pitfalls that organizations looking to adopt these technologies should account for in their planning.

In describing early successes, physician and IT leaders focused on two key goals:

Streamlining Patient Care Navigation and Improving Patient Experience

For organizations that field millions of calls and website visits every year, care navigation is critical even with sophisticated customer relationship management and electronic medical record systems. Organizations that participate in value-based and risk-based payor arrangements must deliver high-quality care while keeping costs under control, which demands heightened attention to ensuring that patients receive timely and appropriate care from the right provider in the right setting. AI has the potential to support patients clinically through symptom checkers and outreach efforts, and to streamline their administrative experience through virtual registration and pre-appointment screening. On both fronts, AI must be trained to accurately interpret patient requests and information in context and to ensure patients receive appropriate instructions for self-care and follow-up, ideally bringing patients closer to a “one touch” encounter and improving the patient experience.

Reducing Provider Burnout

When patients do not receive appropriate care or direction, they often message their care team, adding to already burdened inboxes. Physician burnout is now unfortunately prevalent and multi-factorial, and many organizations report that documentation and inbox burden are significant sources of frustration among physicians. AI is a potential tool to reduce the time providers spend on simple, repetitive, high-volume tasks such as basic patient messaging, and to support high-complexity tasks such as interpreting complex imaging and providing decision support for critical care patients.

Many providers reported using ambient documentation tools to prepare draft encounter notes during patient visits. We also heard of significant successes with automated clinical lab result reporting, which reduces the clerical burden on clinicians while communicating clinical interpretations and next steps to patients. Finally, we heard from value-based care experts about ways to identify the highest-impact risk drivers and use real-time or near real-time data to predict patient outcomes and accurately risk-stratify patients for targeted population health and chronic disease management efforts. In the short term, these AI initiatives can improve the care team experience by reducing the stress associated with these tasks. In the long term, providers who feel they have adequate resources and support are more likely to stay with their organization, potentially improving provider retention.

In describing their approach to evaluating potential AI solutions, physician and IT leaders focused on two key concerns:

Legal and Ethical Considerations for Patient Confidentiality and Patient Safety

In every discussion involving AI, physicians and health systems stressed their efforts to ensure that, before adopting any AI solution, the tool can produce accurate results in real-world use while controlling for potential bias and risks to patient confidentiality. Several health systems described a methodical, evidence-based strategy to identify, test, and validate tools for safety, reliability, and regulatory compliance. Leaders emphasized the need to confirm, before widespread adoption, that an AI application’s real-world results are consistent with its performance in the testing environment, and to ensure that applications are continuously evaluated once deployed in actual clinical settings. Leaders also spoke about the need for providers to collaborate with legal counsel to ensure patients have provided the necessary consent, as AI can implicate myriad laws, including HIPAA and other state and federal privacy laws.

Ensuring Safe Use of AI Through Structured AI Governance and Vendor Management

Multiple organizations discussed the need to identify key leaders responsible for setting enterprise goals for AI adoption and establishing clear expectations for AI governance and operations. In addition to selecting high-impact areas where AI can be an effective tool, an organization must determine how best to choose, validate, and optimize AI tools within its unique environment. Implementing a governance program may include forming committees and task forces and adopting policies and procedures. The development of an AI governance program should involve the decision-makers and stakeholders who have a role in AI procurement and deployment, which could include legal, compliance, clinical operations, finance, IT, procurement/supply chain, and other groups who can help establish a defined risk management framework and control for ethical and legal risks. Robust internal governance can also support, where appropriate, in-house development of targeted solutions that meet specific organizational needs, especially those involving sensitive patient data or trade secrets. In addition to internal governance and operations, speakers emphasized the importance of strategic partnerships, data curation, and smart vendor contracting for AI solutions, including, where possible, retaining full control of the organization’s data.

Conclusion

The journey of integrating AI into health care is fraught with challenges, yet the potential benefits for patient care, system efficiency, and clinician well-being are immense. By adopting structured governance models, focusing on patient safety and equity, and managing AI technologies responsibly across their lifecycle, physicians at leading health systems are pioneering approaches that address patient and provider needs with efficient and effective tools.