In January 2025, the U.S. Food and Drug Administration (FDA) published a draft Guidance for Industry and Other Interested Parties entitled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products”.
Public comments on this draft document can be submitted until 7 April 2025 to ensure they are considered in the final development of the Guidance.
This Guidance provides recommendations to sponsors and other interested parties on the use of artificial intelligence (AI) to produce information or data intended to support regulatory decisions regarding the safety, effectiveness, or quality of drugs or of combination products that include a drug.
The recommendations may also be relevant across all medical products, including medical devices intended to be used with drugs.
A key element of this guidance is the introduction of a risk-based approach to establish and assess the credibility of AI models for a specific context of use (COU).
The COU defines the specific role and scope of the AI model used to address a question of interest.
The Guidance highlights the importance of clearly defining the context of use and of collecting credibility evidence for each AI model, as this provides the basis for evaluating the AI model outputs.
A Risk-Based Credibility Assessment Framework
The risk-based credibility assessment framework described in the Guidance comprises a seven-step process to establish and assess the credibility of an AI model output for a specific COU:
- Step 1: Define the question of interest that will be addressed by the AI model.
- Step 2: Define the COU for the AI model.
- Step 3: Assess the AI model risk.
- Step 4: Develop a plan to establish the credibility of AI model output within the COU.
- Step 5: Execute the plan.
- Step 6: Document the results of the credibility assessment plan and discuss deviations from the plan.
- Step 7: Determine the adequacy of the AI model for the COU.
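As an illustrative sketch only (this structure is not part of the Guidance, and all names are hypothetical), the seven-step process above can be modeled as an ordered checklist that a sponsor's documentation tooling might track per AI model and COU:

```python
# Hypothetical sketch: the Guidance's seven-step credibility assessment
# represented as a sequential checklist. Step wording follows the list above;
# class and function names are illustrative assumptions, not FDA terminology.
from dataclasses import dataclass, field

STEPS = [
    "Define the question of interest",
    "Define the COU for the AI model",
    "Assess the AI model risk",
    "Develop a credibility assessment plan",
    "Execute the plan",
    "Document results and deviations from the plan",
    "Determine adequacy of the AI model for the COU",
]


@dataclass
class CredibilityAssessment:
    """Tracks completion of each step for one AI model / COU pairing."""
    model_name: str
    completed: set = field(default_factory=set)

    def complete(self, step_index: int) -> None:
        # Mark a step (0-based index into STEPS) as completed.
        if not 0 <= step_index < len(STEPS):
            raise ValueError(f"unknown step index: {step_index}")
        self.completed.add(step_index)

    def next_step(self):
        # Steps are sequential: return the first step not yet completed,
        # or None once all seven steps are done.
        for i, name in enumerate(STEPS):
            if i not in self.completed:
                return name
        return None


assessment = CredibilityAssessment("example-ai-model")
assessment.complete(0)  # question of interest defined
assessment.complete(1)  # COU defined
print(assessment.next_step())  # → Assess the AI model risk
```

Such a structure would simply make the sequential nature of the framework explicit: each step builds on the previous one, and the adequacy determination (Step 7) comes only after the plan has been executed and documented.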
Special Consideration: Life Cycle Maintenance of the Credibility of AI Model Outputs in Certain Contexts of Use
The Guidance underlines the importance of ongoing monitoring and maintenance of AI models to ensure that they remain suitable for their contexts of use over the life cycle of the medical product. This includes regular monitoring of model performance and documentation of any changes that could affect model outputs.
SOURCE: