
Testimony before the FDA’s Center for Devices and Radiological Health Digital Health Advisory Committee Meeting

Robert Steinbrook, M.D.
Public Citizen’s Health Research Group
November 21, 2024

I am Robert Steinbrook, Director of Public Citizen’s Health Research Group. We have no financial conflicts of interest.

Public Citizen welcomes the FDA’s comprehensive consideration of generative artificial intelligence (GenAI)-enabled devices. Public Citizen urges that disclosure and transparency to patients and health care professionals be required when GenAI is used in health care settings. We also urge that databases used for training GenAI devices reflect the patient population that they are intended to serve, to prevent discrimination and to reduce bias.  For the protection of consumers and patients, enhanced scrutiny of health-related GenAI devices is essential. We are particularly concerned about the influences of big money and greed in our health care system. When companies cut corners in rapidly developing and implementing GenAI devices, patients are at risk for harm. A Public Citizen report, “Promise and Peril: Artificial Intelligence in Health Care,” released on November 21, 2024, discusses in detail why significant additional oversight and regulation are needed.[1]

These comments address two topics: (1) The need for additional regulatory oversight that is specific to GenAI-enabled devices to provide reasonable assurance of their safety and effectiveness, including new requirements for post-market monitoring of device safety and performance, and (2) The critical importance of detailed and transparent information for the individuals and medical professionals who use GenAI-enabled devices.

Special scrutiny for GenAI-enabled devices, including consumer health-related tools and applications, requires either presumptively designating these devices as Class III devices requiring pre-market FDA approval for safety and efficacy, including compliance with the Department of Health and Human Services standards for trustworthy artificial intelligence, or establishing a new and more stringent pre-market approval system for GenAI-enabled devices that are not designated as Class III. In the executive summary for this meeting, the FDA states that it “may require special controls unique to GenAI-enabled devices when needed to provide reasonable assurance of the safety and effectiveness of the device,” including “certain Class II devices.”[2] We urge the agency to require such controls and to include among them robust post-market monitoring of device safety and performance, requirements to notify users of GenAI-enabled devices when device safety or performance is not as intended, and procedures to promptly remove devices with newly recognized safety and performance concerns from the market.

The FDA should establish appropriate thresholds for suspending companies found to have repeatedly violated the agency’s rules and requirements and for referring wrongdoing to the Department of Justice for legal action. Companies found to have knowingly concealed harms, or substantial potential harms, should face criminal prosecution, as should the top-level corporate officers responsible.

The FDA has made constructive suggestions for providing transparent information about GenAI-enabled devices. For individuals, the critical point is that a GenAI-enabled device must not be a proprietary black box. To the maximum extent possible, GenAI-enabled devices should be explained in easy-to-understand terms, including their design and the extent of their autonomy. Complete information should be provided about the safety features that prevent hallucinations and other potentially dangerous erroneous or false content, the anticipated frequency of such hallucinations or false content, how people interact with the device, and the safety and other controls available to the user.

Although GenAI-enabled devices have promise to improve health, their risks are not fully known. There is justified concern that, without robust oversight (that is, oversight more stringent and demanding than current requirements), the risks of harm will substantially increase. We urge the FDA to move forward with strong and comprehensive regulatory requirements for GenAI-enabled devices.

I thank Eagan Kemp at Public Citizen for his help in preparing these comments.

Thank you for the opportunity to comment.


References

[1] Kemp E. Public Citizen. Promise and Peril: Artificial Intelligence in Health Care. November 21, 2024. https://www.citizen.org/article/promise-and-peril-artificial-intelligence-in-health-care/. Accessed November 21, 2024.

[2] Executive Summary for the Digital Health Advisory Committee Meeting: Total Product Lifecycle Considerations for Generative AI-Enabled Devices. November 20-21, 2024. https://www.fda.gov/media/182871/download. Accessed November 19, 2024.