As soon as ChatGPT came on the scene in November 2022, the healthcare community started buzzing about potential medical applications. At the same time, the risks of generative artificial intelligence (AI) in healthcare became an immediate concern given the high stakes of medical processes and decision-making. The conversation has since turned to responsible deployment: balancing the opportunity to leverage this powerful new technology with the need for careful, ethical governance as generative AI applications are developed.

Generative AI in Healthcare

As a family physician and informaticist working to build cutting-edge decision support tools for EBSCO’s Clinical Decisions suite, I have been deeply involved in efforts to apply generative AI in the context of our curated, evidence-based clinical content, a product we call Dyna AI. Here’s what I’ve learned.

1) Generative AI is not going to replace clinical judgement. 

When I recently mentioned my work on generative AI to a physician colleague, his response was, “Oh, so you’re working to replace us?” My answer was a clear “No.” No amount of information synthesis, timely presentation of data, or workflow tooling can replicate the clinical judgement we develop through years of training. However powerful the technology behind that information, nothing can substitute for applying it in the context of the patient in front of us, whether we see them virtually, in person, or asynchronously.

2) In healthcare, principles come before new technology.

I am fortunate to work for an organization with a history and culture of robust editorial independence. Our expert editorial team undergoes extensive training as part of our rigorous process for producing content, so it was natural that, when it came to generative AI, our organization started by building its principles first. When our teams work with generative AI, we refer to those principles daily and strive to implement them in practice. This principles-first approach is the only way our team feels comfortable exploring generative AI applications, and it should be the standard in healthcare.

3) We can make technology work for us.

I feel fortunate to have been involved in our generative AI efforts from the beginning. Close collaboration among our clinical, technology, and product teams allows us to build a new experience that is truly designed by clinicians, for clinicians. That is a refreshing change from much of the technology in clinical workflows today, and I am excited to be part of it.

4) Clinician engagement is critical to developing AI-based tools.

The deep involvement of clinicians in the application of generative AI has been critical to our team’s success. When people with clinical experience actively advise technologists about the needs of clinical practice, we end up with better tools and put our ethics and values into practice from the start as this technology evolves.

5) AI can better serve our patients too.

Ultimately, technology that supports clinicians in providing better care also improves patient wellbeing, whether through improved clinical decision support, improved clinician quality of life, or direct patient access to generative AI-supported information. Our primary principle when it comes to generative AI is that patient safety comes first. I like to think of it this way: the responsible, thoughtful application of new technology improves patient safety by enabling better care.
