By Mark Swanson, QRx Partners
The latest ‘fad’ seems to be the push to apply artificial intelligence to medical devices and procedures. There is a danger, though, for entrepreneurs as they jump on this exciting bandwagon. While the technology draws positive attention from clinicians and venture capital investors eager to explore this cutting edge, it draws concern from regulators because of unknown outcomes and a potential lack of risk controls. One clear problem is messaging: whether the device is actually using what constitutes machine learning or true “Artificial Intelligence” (AI).
This is a question your development team should answer before talking to the regulators: “What is the level of autonomy for this device/software?” In other words, is the software really artificial intelligence (making decisions), or is it augmenting human intelligence (providing information to a clinician so the clinician can make the decision)? To elaborate:
Is the software providing information to a clinician who takes action (relies on the human to make the decision), or
Is the software going to provide treatment with some control by the clinician (software takes action, but a human can override it), or
Is the software simply taking action (no ability for the human to prevent the action or make adjustments)?
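The three questions above amount to a simple decision tree. A minimal sketch in Python of how a team might label its own product during this exercise (the level names and the `classify` helper are illustrative, not regulatory categories):

```python
from enum import Enum


class AutonomyLevel(Enum):
    """Hypothetical labels mirroring the three questions above."""
    INFORM = "software informs; clinician decides and acts"
    ACT_WITH_OVERRIDE = "software acts; clinician can override"
    FULLY_AUTONOMOUS = "software acts; no clinician override"


def classify(software_acts: bool, clinician_can_override: bool) -> AutonomyLevel:
    """Map the two yes/no questions to an autonomy level."""
    if not software_acts:
        # Software only provides information; the human makes the decision.
        return AutonomyLevel.INFORM
    if clinician_can_override:
        # Software takes action, but a human can step in.
        return AutonomyLevel.ACT_WITH_OVERRIDE
    # Software takes action with no human in the loop.
    return AutonomyLevel.FULLY_AUTONOMOUS


# Example: decision-support software that only surfaces information
print(classify(software_acts=False, clinician_can_override=True).name)
```

Walking through the tree this way forces the team to commit to one answer per question, which is exactly the clarity regulators will look for.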
These distinctions make a big difference in the safety profile of the device, which is not only a critical point for regulators but also a defining aspect of your risk management approach. If you need help sorting this out, contact QRx Partners for your regulatory or quality system needs at Contact@QRxPartners.com or 833-779-7278.