FDA recently issued two draft guidance documents addressing: (1) the use of artificial intelligence (AI) to produce information supporting a regulatory decision about a drug or biological product's safety, effectiveness, or quality; and (2) the development and marketing of safe and effective AI-enabled devices. Each draft guidance document reflects FDA's current thinking on the challenges unique to AI applications and sets out concrete expectations for sponsors in their pre- and post-market use of AI and in their interactions with the agency.

1. Drug & Biological Products Guidance: Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products

FDA has seen growing use of AI to produce data and information supporting regulatory filings regarding the safety, effectiveness, and quality of drugs. Such uses include predictive modeling of clinical pharmacokinetics, development of clinical trial endpoints, assessment of outcomes, and selection of manufacturing conditions, among others. But these uses present unique challenges: bias and reliability problems due to variability in the quality, size, and representativeness of training datasets; the black-box nature of AI models in their development and decision-making; the difficulty of ascertaining the accuracy of a model's output; and the dangers of data drift and of a model's performance changing over time or across environments. Any of these factors, in FDA's view, could negatively affect the reliability and relevance of the data sponsors provide to FDA.

To mitigate these challenges, FDA proposes a risk-based credibility assessment framework, including a seven-step process for establishing and assessing an AI model's credibility for a specific use. Steps 1 and 2 of this framework involve defining the specific role and scope of the AI model used to address a specific question of interest. In Step 3, the sponsor assesses the model risk (i.e., a combination of model influence and decision consequence). Steps 4 to 6 require developing a "credibility assessment plan" to establish the credibility of AI model outputs, executing the plan, and documenting the results in a "credibility assessment report." Finally, in Step 7, the sponsor determines whether the AI model is adequate for its specific use. The guidance describes specific outcomes if either the sponsor or FDA determines that the model's credibility is not sufficiently established for the model risk. Separately, wary of data drift and other post-approval changes in an AI model, FDA emphasizes the importance of maintaining the credibility of AI model outputs throughout a product's lifecycle.

To illustrate how this framework can be applied, the guidance provides hypothetical examples in (a) clinical development and (b) commercial manufacturing. The clinical development hypothetical involves an AI model that divides patients between inpatient and outpatient cohorts based on their adverse reaction risk. The commercial manufacturing example involves an AI-based visual analysis system that automates the assessment of a drug vial’s fill volume, where the volume is a critical quality attribute for the vial’s release.

The guidance encourages sponsors using this framework to engage early with FDA and envisions interactive feedback, particularly regarding the assessment of the AI model risk (Step 3) and the adequacy of the sponsor's credibility assessment plan (Step 4). To facilitate collaboration between the agency and sponsors, the guidance identifies various FDA programs (e.g., C3TI, CID, ETP, CATT) with which sponsors may engage, depending on how the sponsor is using its AI model.

FDA has invited public comment on this draft guidance by April 7, 2025.

2. Medical Device Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations

In its second draft guidance document, FDA explains what documentation and information should be included in marketing submissions for devices with AI-enabled device software functions. This guidance fits within FDA’s Total Product Life Cycle (“TPLC”) approach to reviewing and monitoring medical devices and includes FDA’s current thinking on strategies to address transparency and bias in the TPLC of AI-enabled devices.

The guidance explains how sponsors should describe the AI aspects of their devices in marketing submissions and identifies what sponsors of AI-enabled devices must disclose in the devices' labels. The recommended disclosures for marketing submissions and labels will require sponsors to provide detailed information about their devices' uses, inputs, outputs, architecture, development, performance, installation, and maintenance, among other things.

FDA likewise provides recommendations and rationales for, among other things, risk assessment, data management, and performance validation. To help FDA understand whether sponsors have identified and can control risks (e.g., misunderstood, misused, or unavailable information), FDA recommends that sponsors undertake a comprehensive risk assessment and submit a "risk management file composed of a risk management plan, a risk assessment, and a risk management report." FDA also seeks a clear explanation of data management to understand how an AI-enabled device was developed and validated. For submissions, the agency encourages sponsors to provide information on data collection, the independence of development and test data, reference standards, and representativeness. Further, FDA requests validation testing to characterize the model's performance with respect to its intended use. Recognizing that an AI model can change over time and consequently present a risk to patients, FDA also recommends proactive device performance monitoring and the inclusion of a "performance monitoring plan" in certain premarket submissions.

FDA's guidance emphasizes the importance of transparency for AI-enabled devices given that they are "heavily data driven and incorporate algorithms exhibiting a degree of opacity." To increase transparency, FDA's guidance calls on sponsors to provide "public submission summaries" describing the device and the information supporting regulatory decision-making. A sponsor's submission should identify that AI is used in the device and describe how it is used, the class of model and its limitations, the development and validation datasets, the statistical confidence level of predictions, and how the model will be updated and maintained over time.

FDA has invited public comment on this draft guidance by April 7, 2025, and will host a webinar on February 18, 2025, to discuss this draft guidance on medical devices.

*          *          *

FDA's guidance documents provide sponsors with direct insight into FDA's priorities and the agency's specific expectations and concerns in the rapidly evolving AI space. Sponsors intending to implement AI solutions to produce safety, effectiveness, or quality information for their drug or biological products are encouraged to think early about the parameters of FDA's credibility assessment framework when designing and training their AI models. Sponsors seeking approval for medical devices containing AI components should carefully evaluate FDA's recommendations and confirm that they can adequately address the agency's concerns and requirements over a product's expected lifetime. Within either framework, Sterne Kessler recommends early engagement between the company's AI experts and in-house counsel to ensure a sponsor is meeting all of FDA's requirements to move a product to market.

© 2025 Sterne, Kessler, Goldstein & Fox PLLC