Responsible Research, Responsible AI
Globally respected research practices—such as citation analysis, triangulation, reproducibility, systematic review, and ethical study design—are essential for building trust in research. Librarians have long championed these practices. At EBSCO, our goal is to maintain that trust within the research community and ensure AI technologies enhance the research process, whether for personal or academic purposes.
Informed by guidance from our customers, partners, and regulatory bodies, EBSCO has developed and adheres to the following AI Tenets.
Quality
EBSCO ensures the accuracy of its AI by grounding it in authoritative data through Retrieval-Augmented Generation (RAG), knowledge graphs, and rigorous vetting by librarians and Subject Matter Experts (SMEs). We do not train models on full-text content without the creators’ consent, as doing so would not align with responsible AI practices.
Research has demonstrated that when a Large Language Model (LLM) is connected to linked data in the form of a knowledge graph, accuracy increases by 54%, reducing the likelihood of hallucinations in AI responses. EBSCO’s Unified Subject Index (USI) connects all scholarly controlled vocabularies in a linked data knowledge graph, while the EBSCO Scholarly Graph (ESG) links over 100 million scholarly articles with their metadata, citation metrics, and author and institution profiles. With billions of authoritative content artifacts, our AI activities are firmly grounded in evidence-based scholarly data to reduce inaccuracies and enhance reliability.
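To illustrate the idea of grounding an LLM in a linked-data knowledge graph, the sketch below shows a minimal RAG flow: facts are retrieved from a graph and prepended to the prompt so the model answers from evidence rather than memory. The graph contents, function names, and prompt format here are illustrative assumptions, not EBSCO's actual USI or ESG structures.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG) grounded in a
# knowledge graph. The graph, subjects, and prompt wording are invented
# for illustration; they do not reflect EBSCO's real data structures.

# Toy linked-data graph: subject -> list of (relation, object) triples.
KNOWLEDGE_GRAPH = {
    "photosynthesis": [
        ("broader", "plant physiology"),
        ("related", "chlorophyll"),
    ],
    "chlorophyll": [
        ("related", "light absorption"),
    ],
}

def retrieve_facts(query: str, graph: dict) -> list[str]:
    """Collect triples whose subject appears in the query (the grounding step)."""
    facts = []
    for subject, triples in graph.items():
        if subject in query.lower():
            for relation, obj in triples:
                facts.append(f"{subject} --{relation}--> {obj}")
    return facts

def build_grounded_prompt(query: str, graph: dict) -> str:
    """Prepend retrieved facts so the LLM is constrained to cited evidence."""
    facts = retrieve_facts(query, graph)
    context = "\n".join(facts) if facts else "(no grounding facts found)"
    return (
        "Answer using ONLY the facts below; say 'unknown' otherwise.\n"
        f"Facts:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt("How does photosynthesis work?", KNOWLEDGE_GRAPH)
```

In a production system the dictionary lookup would be replaced by a graph query (e.g., against a linked-data store), and the resulting prompt would be sent to the LLM; the hallucination-reducing effect comes from instructing the model to answer only from the retrieved facts.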
Transparency
Transparency is crucial for informed decision-making, and EBSCO is committed to providing clear labeling and explainable AI features. Our approach to AI transparency includes explaining:
- The origin, provenance, and composition of the data.
- How source data is used and weighted in the AI model.
- The vetting process of the AI’s grounding data, including contributions from librarians and Subject Matter Experts (SMEs).
- The structure of the prompts sent to the AI model and the ranking of relevancy (without disclosing specific prompts or algorithms).
- How we manage the environmental impact of our AI.
At EBSCO, explainable AI means providing transparency into how our AI features function within our products. We also prioritize using transparent AI models wherever possible and actively monitor Stanford’s Foundation Model Transparency Index when selecting models.
Information Literacy
EBSCO partners with librarians to enhance AI and information literacy. As a subset of information literacy, AI literacy resources help librarians guide researchers on responsible AI use, including detecting synthetic content, assessing AI outputs, and understanding acceptable AI practices in research.
Researchers may need to consider the following questions:
- What constitutes appropriate AI use in research?
- How can you ensure that AI-generated content is not plagiarized?
- How do you verify the accuracy of AI-generated text or images?
- How should AI be cited, and what AI-generated content can be cited?
- How can unethical or inaccurate AI output be reported or corrected?
- What AI standards and regulations should researchers be aware of?
- How should researchers involve the IRB when using AI?
- What tools are most suitable for AI in research?
- How can you ensure the AI you used, or its output, is ethical and based on authoritative sources?
Information literacy equips researchers to answer these questions. EBSCO aims to support librarians in these important conversations and help educate researchers on information literacy techniques.
Equity
Equitable AI depends on grounding in diverse, ethically sourced data and ensuring equal access to content, regardless of research experience, language, or expertise. Many Large Language Models (LLMs) possess general knowledge but struggle with detailed, domain-specific questions, often leading to inaccuracies. To support precise research queries, LLMs require domain-specific data and expert vetting.
Additionally, LLMs must understand culturally and linguistically diverse data to ensure inclusivity. EBSCO draws on resources such as the uniquely diverse content in its databases, as well as the Unified Subject Index (USI), which includes over 280 languages and dialects from more than 100 controlled vocabularies, to add more equitable information to AI responses. These resources ground AI responses but are not used to train models, in keeping with our responsible AI practices.
User-First
Our AI features prioritize the user experience, undergoing thorough testing and vetting by users to ensure they are effective and responsibly support the research process. While following trends might be tempting, EBSCO is dedicated to using AI thoughtfully and responsibly, focusing on features that genuinely enhance the research journey and uphold academic integrity.
Our AI features:
- Are vetted by EBSCO librarians, users, and customers.
- Support one or more aspects of the research process.
- Are evaluated for suitability; if a more effective non-AI method exists, AI is not used.
This approach helps keep costs low, quality high, and development focused on features that significantly impact the research experience.
Data Integrity
EBSCO ensures that our AI features comply with data policies, protecting privacy, copyright, and user data. We partner with publishers to maintain transparency in content usage, and our updated terms of use clarify that publisher content cannot be used in customers’ own AI applications, as this would infringe on creators’ and publishers’ intellectual property and copyright.
EBSCO is certified compliant with ISO/IEC 27001, 27017, 27018, and 27701 standards for information security and privacy. To protect privacy and security, we do not share customer or user data with AI models. We also closely monitor and adapt to evolving AI regulations, including the NATO and EU Responsible AI guidelines, the EU AI Act, and other national standards.
Stay Informed
Contact us to learn more about AI at EBSCO, sign up for our AI beta programs, or collaborate with us on research and development initiatives.