The adoption of Artificial Intelligence in drug discovery and development has the potential to transform drug safety processes, but healthcare’s strict regulatory environment demands more than generic governance frameworks. In this two-part blog, Marie Flanagan, Regulatory and AI Governance Lead at IQVIA Safety Technologies, explores how IQVIA has integrated AI governance into pharmacovigilance systems and offers lessons for pharmaceutical companies navigating this complex landscape.
Part one focuses on the challenges that prevent generic AI governance models from being effective in healthcare, introduces the IQVIA Vigilance Platform with AI Assistant, explains the benefits of a layered governance approach, and highlights key technical considerations for implementation.
Generic AI governance solutions are not sufficient for the healthcare industry. Drug safety systems have very specific requirements, and a one-size-fits-all approach would expose organizations to regulatory and patient safety risks. The global regulatory environment is increasingly complex, with frameworks ranging from the EU AI Act to FDA regulations and additional rules in jurisdictions worldwide. Pharmaceutical companies must satisfy these overlapping regulations while continuing to meet established drug safety standards.
Effective AI governance requires combining business, technical, and compliance perspectives from the start. Drug safety involves a wide variety of data types, including structured forms, unstructured emails, call center transcripts, and literature reports, creating challenges that generic AI cannot address. IQVIA leverages more than a decade of experience in automation and AI to support drug development and healthcare delivery systems.
IQVIA’s Vigilance Platform with AI Assistant is designed with governance and cross-functional expertise at its core. The system follows OECD AI principles and adapts to evolving regulatory requirements for high-risk drug and device use. Marie Flanagan emphasizes that the platform maintains patient safety as a central concern, applying risk-based assessments and transparency controls to balance human oversight with AI automation while dynamically updating processes in response to new data.
The Vigilance Platform AI Assistant reviews both structured and unstructured source documents, identifies safety events, and extracts individual case safety report (ICSR) data entities for global pharmacovigilance databases. It does not involve training or fine-tuning a large language model, and no customer data is used to train any AI model. The extracted data initiates and guides case processing workflows, replacing manual review and data entry tasks, thereby improving efficiency and accuracy.
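To make that pipeline concrete, here is a minimal Python sketch of prompt-based extraction with no model training or fine-tuning. The `call_llm` client, prompt wording, and field names are illustrative assumptions, not details of the IQVIA implementation.

```python
# Minimal sketch of prompt-based entity extraction with no model training or
# fine-tuning. `call_llm` is a hypothetical client callable that returns the
# model's text response; prompt wording and field names are illustrative.
import json

EXTRACTION_PROMPT = """\
You assist with pharmacovigilance intake. From the source document below,
identify any adverse event and return JSON with the keys: patient_initials,
suspect_product, adverse_event, event_onset_date, reporter_type.
Use null for anything the document does not state.

Source document:
{document}
"""

def extract_case_fields(document_text: str, call_llm) -> dict:
    """Send the intake document to the model with a fixed prompt and parse the
    structured fields it returns; no model weights are ever updated."""
    response = call_llm(EXTRACTION_PROMPT.format(document=document_text))
    return json.loads(response)
```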
IQVIA employs a multi-layered governance approach that integrates healthcare-specific controls on top of foundational OECD principles. The framework centers on risk assessment, transparency, and balancing human oversight with AI autonomy. Guiding principles include human oversight, validity and robustness, transparency, data privacy, fairness, and accountability. Trust is reinforced through respect, auditability, and adherence to current regulations.
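As a conceptual illustration only, a risk-based layer of this kind can be reduced to a simple decision rule that maps a proposed AI use case to a required level of human oversight; the categories and rules below are assumptions for illustration, not IQVIA's governance framework.

```python
# Conceptual sketch only: one way a risk-based rule could map an AI use case
# to a required level of human oversight. The categories and rules are
# illustrative assumptions, not IQVIA's governance framework.
def required_oversight(impacts_patient_safety: bool, fully_automated: bool) -> str:
    if impacts_patient_safety and fully_automated:
        return "not permitted: a human must confirm safety-relevant outputs"
    if impacts_patient_safety:
        return "human-in-the-loop review of every AI-proposed output"
    return "periodic human sampling backed by performance monitoring"
```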
Domain-specific expertise is critical for implementing AI governance effectively in pharmacovigilance. This includes aligning AI outputs with regulatory content requirements, integrating outputs into case processing workflows, and educating teams on operational procedures. The platform’s security measures prevent customer data from entering commercial AI models, addressing the top concern of pharmaceutical organizations regarding sensitive patient information. Real-time performance monitoring enables dynamic adjustments to AI operations based on document types, treatment categories, and evolving regulations.
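The sketch below suggests how per-document-type controls might be tightened as monitoring data accumulates; the document types, threshold values, and error-rate limit are assumptions for illustration, not IQVIA configuration.

```python
# Illustrative sketch: per-document-type review settings that an operations
# team could tighten as monitoring data accumulates. Document types, threshold
# values, and the error-rate limit are assumptions, not IQVIA configuration.
REVIEW_POLICY = {
    "structured_intake_form": {"auto_accept_threshold": 0.85},
    "call_center_transcript": {"auto_accept_threshold": 0.95},
    "literature_report":      {"auto_accept_threshold": 0.90},
}

def adjust_threshold(policy: dict, doc_type: str, recent_error_rate: float,
                     limit: float = 0.02, step: float = 0.02) -> dict:
    """Raise the auto-accept threshold for a document type when its monitored
    error rate drifts above the agreed limit, routing more cases to humans."""
    if recent_error_rate > limit:
        current = policy[doc_type]["auto_accept_threshold"]
        policy[doc_type]["auto_accept_threshold"] = min(1.0, current + step)
    return policy
```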
Technical implementation emphasizes security and regulatory compliance from the outset. The IQVIA AI Assistant runs in a secure environment where customer data is fully protected, preventing inadvertent use in AI training. Prompt engineering allows customization without fine-tuning models, preserving privacy while maintaining functionality. Confidence scoring at the individual data field level guides human review precisely, while adjustable thresholds allow organizations to control the balance between automation and human oversight. Comprehensive audit trails ensure full traceability for regulatory review, continuous improvement, and validation.
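A minimal sketch of how field-level confidence scores, adjustable thresholds, and an audit trail can work together is shown below; it assumes each extracted field carries a confidence score, and the default threshold is illustrative rather than an IQVIA setting.

```python
# Minimal sketch of field-level confidence routing with an audit trail,
# assuming every extracted field carries a model confidence score. The
# threshold default is illustrative, not an IQVIA setting.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float  # 0.0 to 1.0, reported for this individual field

def route_fields(fields, threshold: float = 0.85):
    """Auto-accept high-confidence fields, flag the rest for human review,
    and record every decision so it can be traced during regulatory review."""
    needs_review, audit_trail = [], []
    for f in fields:
        accepted = f.confidence >= threshold
        if not accepted:
            needs_review.append(f)
        audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "field": f.name,
            "value": f.value,
            "confidence": f.confidence,
            "threshold": threshold,
            "decision": "auto-accepted" if accepted else "routed to human review",
        })
    return needs_review, audit_trail
```

Raising the threshold sends more fields to human reviewers; lowering it increases automation, which is the balance the platform lets organizations control.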
IQVIA demonstrates that AI can be applied safely and effectively in drug safety systems by combining deep domain expertise, multi-layered governance, secure technical implementation, and ongoing monitoring. These strategies ensure AI supports regulatory compliance, accelerates workflows, and ultimately enhances patient safety in a complex and evolving healthcare environment.

