In a press release distributed today, EBSCO shares its approach to implementing generative artificial intelligence (AI) within the company’s point-of-care resources, including DynaMedex and Dynamic Health, with the goal of delivering fast access to evidence-based clinical resources at the bedside.
When considering the use of AI in a clinical environment, especially in the context of clinical diagnosis and treatment, a measured, evidence-based approach is key. As a first step in this exploration of AI, the Clinical Decisions editorial team, led by Dr. Peter Oettgen, Editor-in-Chief, DynaMed, Diane Hanson, Editor-in-Chief, Dynamic Health, and Dr. Katherine Eisenberg, Sr. Medical Director, Clinical Decisions, developed principles for the responsible use of AI, centered around quality, security and patient privacy, transparency, governance, and equity. Below are the key principles that outline the team’s editorial approach:
We prioritize maintaining our users’ confidence in our information as an authoritative, evidence-based, clinical expert-validated source. We will take a judicious approach to any implementation of AI-based tools, particularly considering the experimental nature of applying generative AI to clinical diagnosis and treatment. Any potential use of generative AI will be subject to ongoing review for bias, quality, safety, ethics, regulatory considerations, and scientific rigor. With appropriate supervision and safeguards in place, we will responsibly explore both the potentially significant benefits and the limitations of these tools through collaborative efforts among clinicians, technologists, subject matter experts, editors, and other stakeholders.
The following principles guide our approach to using generative AI in a responsible, ethical, and safe manner:
1. Quality: Patient safety is our top priority. Our approach to quality ensures access to trusted, evidence-based content, developed by our clinical experts following our rigorous editorial process. We limit the use of generative AI tools for user-facing applications to information found in our curated content.
2. Security and patient privacy: Data are protected using best practices in data security, in accordance with HIPAA standards. Our systems are designed and monitored according to established safety principles in AI.
3. Transparency: Uses of generative AI-driven technology in our products are clearly labeled to support informed decision-making for our stakeholders. Clinical information is presented with evidence sources.
4. Governance: Clinical experts oversee development and validation of clinical applications of generative AI-based technologies and conduct continuous monitoring for quality and usability.
5. Equity: We are committed to promoting health equity by integrating measures that identify and mitigate both algorithmic and societal bias in generative AI-driven applications, from inception through deployment, with ongoing monitoring.