Clinician-centered framework will be featured in an upcoming issue of the Open Access Journal of the American Medical Informatics Association (JAMIA)
Elsevier, a global leader in medical information and data analytics, unveiled a groundbreaking evaluation framework for assessing the performance and safety of generative AI-powered clinical reference tools. This innovative approach has been developed for all Elsevier Health generative AI solutions, including ClinicalKey AI, Elsevier’s advanced clinical decision support platform, and sets a new standard for responsible AI integration in healthcare. It will be featured in a future issue of the Open Access Journal of the American Medical Informatics Association (JAMIA).
The framework, designed with input from clinical subject matter experts across multiple specialties, evaluates AI-generated responses along five critical dimensions: query comprehension, response helpfulness, correctness, completeness, and potential for clinical harm. It serves as a comprehensive assessment to ensure that AI-powered tools not only provide accurate and relevant information but also align with the practical and current needs of healthcare professionals at the point of care.
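The article does not publish Elsevier's actual scoring schema, so purely as an illustration, the five dimensions could be captured as a structured rating record along the lines of the hypothetical Python sketch below. All field names and value scales here are assumptions, not the published rubric.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the article lists five evaluation dimensions
# but does not disclose Elsevier's actual rubric, field names, or rating scales.
@dataclass
class ResponseRating:
    query_id: str
    comprehended_query: bool   # did the tool correctly interpret the clinical question?
    helpful: bool              # was the response useful at the point of care?
    correctness: str           # e.g. "completely correct", "partially correct", "incorrect"
    complete: bool             # did it cover the clinically relevant points?
    potential_harm: bool       # could the response plausibly contribute to patient harm?

# Example of one clinician's rating for a single query-response pair.
rating = ResponseRating(
    query_id="Q001",
    comprehended_query=True,
    helpful=True,
    correctness="completely correct",
    complete=True,
    potential_harm=False,
)
```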
Omry Bigger, President of Clinical Solutions at Elsevier, said: “This evaluation framework not only supports innovation and advancements to improve patient care but adds an extra layer of review and assessment to ensure physicians are armed with the most accurate information possible. It’s a critical step in the implementation of responsible AI for healthcare providers and patients.”
In a recent evaluation study of ClinicalKey AI, Elsevier worked with a panel of 41 board-certified physicians and clinical pharmacists to rigorously test responses generated by the tool for a diverse set of clinical queries. The panel evaluated 426 query-response pairs, and the results demonstrated impressive performance, with 94.4% of responses rated as helpful, 95.5% assessed as completely correct, and just 0.47% flagged for potential improvements.
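The study reports proportions rather than raw tallies; as a rough back-calculation (the counts below are assumptions inferred by rounding, not figures from the article), those percentages correspond to roughly 402 helpful responses, 407 completely correct responses, and about 2 flagged responses out of the 426 pairs. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope check of the reported proportions against the 426
# query-response pairs. The raw counts are assumptions inferred by rounding;
# the article itself reports only the percentages.
TOTAL_PAIRS = 426

assumed_counts = {
    "rated helpful": 402,        # 402 / 426 ≈ 94.4%
    "completely correct": 407,   # 407 / 426 ≈ 95.5%
    "flagged": 2,                # 2 / 426 ≈ 0.47%
}

for label, count in assumed_counts.items():
    print(f"{label}: {count}/{TOTAL_PAIRS} = {count / TOTAL_PAIRS:.2%}")
```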
Leah Livingston, Director of Generative AI Evaluation for Health Markets at Elsevier, said: “These results reflect not just strong performance, but the real value of bringing clinicians into the evaluation process. By designing an evaluation framework around what matters most to physicians—accuracy, relevance, and clinical safety—we’re helping ensure that AI tools truly add value to care delivery. This approach supports clinicians in quickly accessing the right information, ultimately reducing cognitive burden.”
Elsevier continues to implement AI responsibly across its portfolio of AI solutions and is also engaged in broader industry initiatives. As a proud partner of the Coalition for Health AI, the company is actively contributing to industry-wide standards for responsible AI deployment in healthcare settings.
The release of the evaluation framework represents a significant step forward in the responsible integration of AI technologies in healthcare, paving the way for more efficient, accurate, and patient-centered clinical decision-making.
Source – PR Newswire