AI tools in healthcare sector rife with legal and ethical risk that must be mitigated

Though specific legislation is still under development, 'AI is not a lawless space,' says Alexis Kerr

This article was created in partnership with Norton Rose Fulbright Canada.

As AI becomes an increasingly common tool for diagnostic and treatment recommendations, healthcare organizations need to guard against the significant risks that come with it – and the time to develop an AI governance framework is now.  

“Even though AI-specific legislation is still under development in Canada, AI is not a lawless space,” says Alexis Kerr, partner at Norton Rose Fulbright Canada LLP. “There are many legal and ethical issues these organizations need to work through in order to protect themselves and protect their patients, even if that framework may need to be adjusted in future to account for new legislative obligations.” 

Regulatory framework around data privacy and security 

When it comes to the regulatory framework around data privacy and security in the health sector, both the European Union’s General Data Protection Regulation (GDPR) and Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) apply to healthcare providers using AI for diagnostics and treatments involving personal data collection, use, or disclosure. Both laws mandate that organizations, including healthcare providers, must have legal authorization, often through informed consent, to handle personal information.  

However, the GDPR explicitly addresses automated decision-making and its effects on individuals through Article 22, which grants individuals the right not to be subject to decisions based solely on automated processing if such decisions significantly impact them.  

“Exceptions to this right include individual consent, contractual necessity, or specific laws, provided there are safeguards for the individual’s rights and interests,” Kerr, a healthcare and information governance, data privacy, and cybersecurity lawyer based in Vancouver, British Columbia, adds. “Additionally, Article 22 restricts automated decisions based on sensitive data, such as health information, unless authorized under specific GDPR provisions.” 

Though PIPEDA lacks a similar provision on automated decision-making, Quebec’s Law 25 requires transparency and allows individuals to correct inaccurate personal information used in automated decisions. Organizations should also pay close attention to provincial and territorial health information management rules and corresponding health privacy legislation as they apply to the use of AI. 

Mitigating risks associated with AI use in healthcare 

There are many legal risks to consider when using AI in healthcare, including the potential for breach of human rights law, breach of privacy, intellectual property infringement, and misappropriation of personality, among others. The primary consideration, however, should be the clinical risk to the patient.  

One of the biggest risks arises from the presence of bias in the AI system being used for diagnostic or treatment purposes. While bias can rise to the level of discrimination based on characteristics that are protected by existing human rights law, Kerr says it also refers to a situation where a generative AI system has been trained on data derived from a patient population into which an individual patient may not fit. If an AI system is trained on data relating only to Caucasian males between the ages of 19 and 45, for example, it may not be suitable to diagnose or suggest treatment to a woman of colour in her 70s. And if that bias results in clinical harm to the patient through misdiagnosis or inappropriate treatment recommendations, it can create liability in tort on the part of the healthcare provider. 

“The key to managing these risks is to develop and implement an AI governance framework which includes the completion of AI impact assessments for individual AI systems prior to their implementation, and on an ongoing basis thereafter,” Kerr says. “A good governance framework will set the parameters for an organization’s use of AI technology, either as a developer or as a user of an AI system, based on the full scope of relevant legal and ethical considerations, as well as the organization’s tolerance for risk.”    

Some in the healthcare sector are leading the pack with proactive mitigation, including Canada Health Infoway, an organization that assists government and healthcare organizations with the digitization of health. It developed a toolkit for healthcare organizations that are implementing AI solutions, with the goal of identifying the issues and risks that accompany AI technology and providing guidance on responsible governance and solution implementation. Similarly, Health Canada has published a set of 10 guiding principles for the use of machine learning in medical devices. 

“We’re also seeing some healthcare organizations actively working on their AI governance frameworks, introducing AI policies, and rolling out AI solutions on a cautious basis, often starting with solutions that do not involve personal information or that are focused on administrative efficiencies such as scheduling,” Kerr says. 

“It’s important to note that all of this is not to discourage AI innovation in healthcare, but rather to encourage responsible innovation that recognizes and addresses the corresponding risks,” adds Kerr. 

Emerging trends and anticipated changes 

Amid the rapid advancement of AI integration in the sector, legislative frameworks are also building up around it at an accelerated pace. In Canada, proposed Bill C-27, the Digital Charter Implementation Act, 2022, is currently the subject of a clause-by-clause review by the Standing Committee on Industry and Technology. If passed, it will bring into force three new statutes, one of which is the Artificial Intelligence and Data Act (AIDA), which would regulate, among other things, the use of so-called high-impact systems.  

“As you might imagine, virtually all AI systems that will be used for diagnostic or treatment purposes are likely to fall under this designation,” Kerr says. “One of the most interesting, and potentially frightening, features of AIDA is the possibility of steep penalties and criminal sanctions for non-compliance.” 

Similar frameworks are also falling into place elsewhere in the world. For example, the EU recently passed the Artificial Intelligence Act (AI Act), which creates rules based on the risk levels associated with various AI systems and solutions. It will likely enter into full force over the next 36 months, Kerr predicts. 

Ultimately, a thoughtful approach and measured adoption of AI-based tools in healthcare settings are prudent as the full power of the technology – and the regulatory framework that surrounds its use – continues to evolve. 

“Layered on top of the legal risks are the more existential, ethical considerations that should be top of mind with AI,” Kerr says. “A good rule to live by here is that just because you can, doesn’t mean you should.”