Harnessing the power of artificial intelligence for healthcare

A brave new world of AI-infused medical devices awaits – but what and how to regulate?

As artificial intelligence and machine learning become more of a reality in the healthcare sector, discussion is growing about the benefits and challenges of medical devices that will use AI.

Laura Weinrib, partner at Blake Cassels & Graydon LLP, says issues around privacy and ethics related to AI in healthcare are hot topics. There are also questions about how medical devices that use artificial intelligence should be approved and regulated. The concern, she says, is “whether our current pathways for approving medical devices are sufficient for regulating AI in medical and adaptive devices,” especially as they become “smarter” as more data gets inputted.

Ian Trimble, a Toronto-based associate at Stikeman Elliott LLP, says AI’s potential is “massive,” and “we’re just starting to see how it might be used in medical devices to assist patients with treatment or diagnosis.” He adds that it “depends on how far we will go with AI” whether the current regulatory system built for devices can handle any potential concerns.


The good news, Weinrib says, is that for at least the past five years, Health Canada has taken the initiative to adapt its regulatory approach to support the development of digital health technologies, including AI and cybersecurity software in medical devices and telemedicine.

In April 2018, Health Canada announced the establishment of the Digital Health Review Division within the Therapeutic Products Directorate’s Medical Devices Bureau. The division is intended to allow for a more targeted review of rapidly changing and innovating digital health technologies, including AI technologies.

Even software can be classified as a medical device, a concept that has been around for a long time, Weinrib says. Such software is regulated as other medical devices are, but with additional guidance and policy considerations – looking at questions such as, “How significant is the information provided by the software to making a healthcare decision? Are we looking at a critical health situation or something less serious?”

A simple device might be one with software that measures the heartbeat in a patient and notifies a healthcare provider if it goes outside specific parameters. However, Weinrib says, a similar device incorporating AI could factor in data from all patients using that device. As more patients use it, the product learns and adjusts to help the healthcare practitioner determine how concerned they should be for that patient and what to do. “As it learns, theoretically, it provides a better, more accurate product.”
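
To make that distinction concrete, here is a minimal sketch in Python – a hypothetical illustration, not the design of any real device – contrasting a fixed-threshold monitor with an adaptive one whose alert bounds are recomputed from accumulated readings. All names, thresholds, and the three-standard-deviation rule are invented for this example.

```python
# Hypothetical sketch (not any real device's firmware): a classic
# fixed-threshold heart-rate monitor versus a toy "adaptive" one whose
# alert bounds shift as readings accumulate.

from statistics import mean, stdev


def fixed_threshold_alert(bpm: float, low: float = 50.0, high: float = 110.0) -> bool:
    """Classic rule-based device: alert whenever the reading leaves
    a hard-coded range chosen at design time."""
    return bpm < low or bpm > high


class AdaptiveMonitor:
    """Toy adaptive monitor: alert bounds are re-derived as the mean
    plus or minus three standard deviations of all readings seen so
    far, so the product's behaviour changes as data accumulates."""

    def __init__(self) -> None:
        self.readings: list[float] = []

    def observe(self, bpm: float) -> bool:
        alert = self.is_anomalous(bpm)
        self.readings.append(bpm)  # every reading reshapes future thresholds
        return alert

    def is_anomalous(self, bpm: float) -> bool:
        if len(self.readings) < 30:  # too little data: fall back to the fixed rule
            return fixed_threshold_alert(bpm)
        mu, sigma = mean(self.readings), stdev(self.readings)
        return abs(bpm - mu) > 3 * sigma


if __name__ == "__main__":
    monitor = AdaptiveMonitor()
    for bpm in [72, 75, 70, 68, 74, 130]:  # the last reading is an outlier
        print(bpm, "alert!" if monitor.observe(bpm) else "ok")
```

The regulatory difficulty Weinrib describes is visible in the second version: the behaviour a reviewer approves at submission time is not the behaviour the device will exhibit after thousands of readings have reshaped its thresholds.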

Health Canada has yet to determine how it will regulate these products, says Weinrib, and, to date, there are no adaptive medical devices with AI on the market.

There are also potential practical legal issues, such as liability related to medical devices and artificial intelligence. Edona Vila, a product liability partner at Borden Ladner Gervais LLP, says AI’s adaptive and learning abilities pose certain challenges.

While the question of “suing an algorithm” is still unresolved, Vila says there are at least “three buckets” of risk and liability dimensions that need to be considered when it comes to emerging products powered by AI or the internet of things: regulatory risk, litigation risk, and contractual risk.

From a regulatory perspective, Vila says we don’t have “internet of things” legislation yet, although there is AI legislation that might come into play within the next year or two.

The internet of things (IoT) describes physical objects with sensors, processing ability, software, and other technologies that connect and exchange data with other devices and systems over the internet or other communications networks. The devices do not need to be connected to the public internet; they only need to be connected to a network and be individually addressable.
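
As a rough illustration of that definition, the sketch below shows an IoT device as essentially a small program with its own identity and network address that exchanges readings with another system. The device ID, gateway address, and JSON payload shape are all assumptions made for illustration – this is not any real product’s protocol.

```python
# Hypothetical sketch of the IoT pattern described above: a sensor node
# that is individually addressable (its own device ID on a network) and
# exchanges readings with a gateway over a private network -- no public
# internet required. The gateway URL and payload format are invented.

import json
import urllib.request

DEVICE_ID = "pump-sensor-0042"  # each node is individually addressable
GATEWAY_URL = "http://192.168.1.10:8080/readings"  # private-network gateway (assumed)


def read_sensor() -> float:
    """Stand-in for real hardware: returns a fake pressure reading."""
    return 101.3


def publish_reading() -> None:
    """Send one reading to the gateway as a JSON document over HTTP."""
    payload = json.dumps({"device": DEVICE_ID, "pressure_kpa": read_sensor()}).encode()
    request = urllib.request.Request(
        GATEWAY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()  # gateway acknowledgement, ignored here


if __name__ == "__main__":
    publish_reading()
```

Every layer in that exchange – the sensor code, the network, the gateway – is a place where the kind of vulnerability Vila describes could be introduced.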

Vila says such devices could contain a vulnerability – for example, a sensor flaw that exposes the entire system to a malfunction or breach – leading to economic loss, property damage, or personal injury across the whole supply chain, including the organization behind the system and its users, depending on the type of product. Still, she says, the current laws could likely deal with some litigation concerns.

For example, there is the Canada Consumer Product Safety Act if it’s a consumer product. If it’s a medical device powered with AI or IoT, Vila says, “You’d want to look at the Food and Drug[s] Act and the medical device regulations under that.” Then there are the criminal and human rights codes, where “you’d be considering how technology is deployed.”


As for contractual obligations, Vila says it’s up to the parties involved to come up with the terms of agreements covering obligations and responsibilities, and AI or IoT features would “definitely be part of those discussions” covered through license agreements, subscription agreements, terms of service agreements, and service contracts.

“Key clauses that help with mitigating and transferring risk will be important to the parties,” she says, “and there may be clauses that have specific warranties and representations made [concerning] that particular product.” These will be essential clauses in determining liability for any failure that has caused property loss, damage, or personal injury. These may also call for a party to indemnify and hold harmless another party, and the language of the contracts is essential in those cases.

As the law and litigation develop in this space, Vila notes, “We’ll also see development in clauses around governing jurisdictions and dispute resolution that will impact how some of those disputes may be resolved.”

While litigation in this space is developing, there isn’t yet a large body of jurisprudence, Vila says. However, there has been some movement in the United States and Europe through individual and class actions. She notes that the common denominator in these claims appears to be allegations that the connected device “essentially contained a vulnerability that exposed the particular system to potential hacking, leading to possible or actual losses for the users.”

In healthcare, there may be allegations of a device causing personal injury. “It could be an implant in the human body critical to that person’s health, and there could even be allegations of that connected device running ineffectively and impacting a person’s health. For example, the person with the implant may be driving and then suddenly veer off because their connected device stopped working.”

Privacy concerns are also top of mind when it comes to how AI will be used. Ellie Marshall, with the privacy and healthcare compliance practices at Blakes, points out that it isn’t just about individual privacy concerns – it’s about how sensitive personal information is used, even if anonymized or de-identified.

“There is this concern about inputting personal information into a system that’s automated, where there’s no human in the loop, where it’s just an algorithm making a decision,” she says, “and then learning [that] the decision is not understandable, or there is no right to access or correct the information used to make that decision.”

And if it is in a healthcare context, this could have consequences – “whether you are diagnosed with a disease, whether you’re eligible for secondary insurance, or whether you’re suitable for a particular treatment regime,” she says. “These are hugely impactful decisions, so it’s imperative that there is accountability and transparency on how all this personal data is being used.”

Marshall says privacy rights are often conceptualized as an individual’s right to protect their integrity and autonomy, and the consequences of using personal data at an aggregated level aren’t always considered. With AI, if data is used to make certain diagnostic decisions, especially without a human in the loop, that can directly impact an individual or group of individuals. If data is biased because it doesn’t cover all groups, Marshall says, it becomes a question of how the private sector benefits from using that data and those algorithms to commercialize information.

“These are questions that go well beyond what Canadian data protection law is meant to do and start to get into the core of human rights and consumer protection law.”

Trimble agrees, saying that the existing system is probably suited for regulating AI now, given where we are at this stage. But when we get further down the road, and AI starts doing more of what healthcare professionals do, Trimble says, “It will be very interesting [to see] how the regulatory system develops” to encompass the technology.

“There’s just so much potential and so many things we haven’t contemplated,” he says. And perhaps, very quickly, our laws may become “out of date and ill-suited to address AI.”

And as regulators look at the future of AI, Weinrib says any new regulatory framework needs to be “future-proofed” so it can adapt to the use of technology and AI that we haven’t even contemplated yet.