Good AI Practice in Drug Development: FDA & EMA Publish Guiding Principles
Regulator: FDA (US Food and Drug Administration), EMA (European Medicines Agency)
Region: United States, European Union
Area: Drug Development / Evidence Generation
Published: January 2026
Key Takeaway:
The FDA and EMA have jointly released a set of 10 guiding principles for Good AI Practice (GAIP) in drug development. While not formal legislation, the principles set out a clear regulatory direction of travel: AI can support evidence generation across the product lifecycle, but only where it is risk-based, well governed, transparent, and inspection-ready.
In short, innovation is welcome – but only where it reinforces patient safety, data integrity, and regulatory confidence.
What’s Changed?
For the first time, the FDA and EMA have aligned on a shared baseline expectation for how AI technologies should be developed, deployed, and managed when used to generate evidence across:
- Nonclinical development
- Clinical trials
- Post-marketing activities (including pharmacovigilance)
- Manufacturing and quality oversight
Together, the principles provide a shared FDA–EMA baseline for what “good” looks like when AI supports evidence generation throughout the product lifecycle.
Why This Matters
AI is already being used – both formally and informally – across drug development and safety activities, from trial design and patient selection to literature screening and signal detection.
This guidance makes one thing clear: regulators will assess AI on control, not capability.
The practical outcome is simple: AI will increasingly be treated like any other regulated system – meaning sponsors will be expected to demonstrate traceability, governance, ongoing performance monitoring, and clear accountability.
Sponsors should be prepared to demonstrate:
- Why an AI tool is being used (its context of use)
- The level of risk it introduces to regulatory decision-making or patient safety
- How its outputs are validated, monitored, and governed over time
AI that cannot be clearly explained, justified, or defended is unlikely to be viewed favourably – regardless of technical sophistication.
What Reviewers Will Still Expect
The guidance is explicit that AI does not lower regulatory standards.
Key expectations include:
- Risk-based validation and oversight, proportionate to the model’s context of use and potential impact
- Defined scope and boundaries for each AI application (what it supports – and what it does not)
- Robust data governance, including traceable provenance, processing steps, and decision records in line with GxP
- Lifecycle management, with ongoing monitoring, periodic re-evaluation, and controls for data or model drift
- Clear, accessible documentation, written in plain language and suitable for inspection
This effectively rules out any “set-and-forget” approach to AI in regulated environments.
What You Should Consider Doing Now
If you’re already using AI or actively evaluating it, you should consider:
- Mapping all AI tools to a clearly defined context of use
- Performing risk assessments focused on patient safety and regulatory impact
- Reviewing data governance frameworks, including audit trails and change control
- Ensuring performance validation is fit-for-purpose and documented
- Establishing ongoing monitoring and re-evaluation processes
- Confirming human oversight responsibilities are clearly defined and operationalised
Addressing these points early will significantly reduce friction during regulatory interactions and inspections.
PharSafer Perspective
This joint FDA–EMA publication is a welcome and overdue step. It reinforces that AI has a role in modern drug development – but not at the expense of scientific rigour, governance, or patient safety.
The differentiator will not be whether AI is used, but whether its use is defensible, controlled, and inspection-ready.
How PharSafer Can Support
PharSafer supports sponsors in applying new regulatory expectations in a practical, proportionate, and compliance-focused way, including:
- Gap assessments against emerging AI and automation expectations
- Risk-based governance and validation frameworks
- Inspection-ready documentation and evidence strategies
- Pharmacovigilance process optimisation and oversight
- Automation-led solutions designed to strengthen, not bypass, compliance
PharSafer is committed to supporting our clients during this transition.
If you have any questions or require further guidance, please contact us directly.
For regular updates, follow PharSafer and SaPhar on LinkedIn.
Contact Us
For any questions or to further discuss how PharSafer® can assist with your needs, do not hesitate to contact us or Book a Meeting.
