BOSTON—Artificial intelligence is often seen as a “black box,” a perception that only grows more unnerving as the technology becomes increasingly embedded in the healthcare industry.
It’s crucial, then, that AI developers—and especially those in medtech—take care to build trust with their users and be transparent not only about how their algorithms work but also about their intended results and how, exactly, they’ll use patient data.
That was a key point of discussion during a panel titled “Artificial Intelligence in Medical Devices: Post Pandemic Implications” at the AdvaMed MedTech Conference in Boston on Monday.
As Yuri Maricich, M.D., chief medical officer and head of development at Pear Therapeutics, pointed out, a core part of creating that transparency with both doctors and patients lies in the way AI-based technologies are labeled and marketed.
“One of the key things that we can all do is try to standardize in as many areas as possible,” he suggested, including in educating patients about how new technologies will use their data.
“Typically, when we make a pure hardware [product] or a pill, patients aren’t expecting to give their information back to the manufacturer. With connected devices, we’re asking them to give their information back to the manufacturer, so we have this almost sacred duty to protect that data if we’re going to maintain trust,” Maricich said. “If we violate that trust, it’s going to make it so much harder for all of us to actually bring technologies that are really effective.”
Cassie Scherer, Medtronic’s senior director of digital health policy and regulatory strategy, suggested that more flexible labeling policies from the FDA could help AI developers do a better job of reaching users where they are.
“Some patients might actually read the label … but there’s also a lot who just want to know, ‘What do I do with this? How do I use it and feel better?’” she said. “And it’s the same thing for physicians, where the amount of information that’s given depends on the technology, depends on how they’re using it in the clinic.”
One possible solution is electronic labeling, Scherer said, which can do a better job than physical labels of keeping up with any updates or modifications to an AI algorithm over time. Electronic labels can also be adapted to a wider range of patient needs, offering different languages, alternative text options for visually impaired users and even the potential to add video content to a label.
The FDA is on board. “Especially with software, we have an opportunity to think beyond the label,” Brendan O’Leary, acting director of the agency’s digital health center of excellence, said during the panel. “Patients don’t read labeling for the most part—and why should they?—but they do onboard these products. So how do you have an onboarding experience that answers that fundamental patient question: ‘How do I know this will work for me?’”
With that key question in mind, he said, digital technologies represent “a real opportunity to move past some of the traditional frameworks and into models that can work better for patients.”