Facing the brave new world of artificial intelligence and the challenges to health privacy

As AI revolutionizes the health sector, lawyers say laws surrounding privacy will need updating

Privacy related to healthcare and the life sciences has long been a concern, given the sensitive nature of the information involved, from unauthorized access to a medical file to obtaining informed consent when collecting data. However, lawyers in this sector say that the advent of artificial intelligence promises not only to revolutionize the way the health industry operates but also to transform the laws and regulations surrounding it.

Health care and life sciences are among the most vulnerable sectors regarding privacy issues, says Hélène Deschamps Marquis, AI practice lead and national co-leader of privacy & cybersecurity at Borden Ladner Gervais LLP. She notes that the Canadian Centre for Cyber Security has ranked healthcare infrastructure among the most targeted by ransomware hackers.

She attributes this partly to the fact that there has “not been a lot of investment upgrading in privacy and cyber security infrastructure, the same way that there has been in the financial sector, for example.” As a result, the systems in place are “less equipped to fight attacks, so they are more vulnerable.”

She points to a 2019 hacking incident at LifeLabs, which compromised the health data of millions of Canadians. Privacy commissioners in British Columbia and Ontario said in a joint report that LifeLabs “failed to take reasonable steps” to protect clients’ data while collecting more personal health information than was “reasonably necessary.” Deschamps Marquis adds that much of what her team deals with involves providing clients with a better understanding of the privacy landscape and how to create a policy framework to address any associated risks.

Melanie Szweras, principal at Smart & Biggar LP, agrees that information related to someone’s health is considered “highly sensitive,” with the potential that some of it could be improperly disclosed or misused. For example, Szweras notes that genetic information could be used to deny insurance or refuse to pay out claims, or health information could be disclosed in ways that affect someone’s employment. Says Szweras: “You may not want people to know certain things and make decisions that adversely affect you based on that.”

There is also the issue of anonymizing data collected and used in the health and life sciences sector. “The question is whether you’ve anonymized it sufficiently or simply de-identified the information,” Szweras says. Data is often collected with identifiable information, which is then stripped out before the data is used. “But is there some way that a malicious actor who gets hold of this data can put it back together? That is one of the things people are grappling with when it comes to health care data.”

She adds: “The key issue becomes the balance between the duty to protect personal health information and the ability to manage information in the healthcare system,” especially when third-party providers are developing innovative technologies to improve the healthcare process. “You want to encourage that, but you want to protect personal information.”

Deschamps Marquis says that a patchwork of provincial and federal laws governs privacy issues applicable to the health and life sciences sector. “There is general agreement that many of these laws are outdated,” she says. Incidents such as the one at LifeLabs have prompted provinces to upgrade their privacy laws. Quebec has become a leader in updating statutes that protect personal health information within public and private healthcare systems.

At the federal level, Bill C-27, which would have reformed Canada’s federal private sector privacy law by replacing the 25-year-old Personal Information Protection and Electronic Documents Act with the Consumer Privacy Protection Act and enacting the Artificial Intelligence and Data Act (AIDA), died on the order paper when the previous session of Parliament was prorogued earlier this year. (AIDA would have introduced a framework for regulating AI systems used during commercial activities in Canada.)

Szweras says there are current laws governing the collection, use, and disclosure of certain information, with consent being the “cornerstone” of these laws. “There are gaps to be filled, but we are not without regulation.”

It’s unclear when all or parts of this bill may be brought back. However, rookie Liberal MP Evan Solomon, appointed this spring as the country’s first minister of AI and digital innovation, said in June that the government would first examine the legislative framework for privacy and data protection. It will also explore ways to harness the economic benefits of AI in areas such as healthcare.

Indeed, AI will likely disrupt the health and life sciences industry as it is increasingly used for clinical and non-clinical applications. In a recent note, Miller Thomson LLP partner Kathryn Frelick states that pharmaceutical companies have been utilizing AI for drug development and discovery, “helping to reduce costs at all stages of development.” Devices with machine learning are also changing how physicians analyze digital imaging and disease surveillance.

Still, health industry organizations “must understand the impact of generative and traditional AI and the changing regulatory landscape before implementing these tools in their work streams,” Frelick says. There may be ethical concerns and biases within an AI system’s algorithms and training data, so caution is needed to “ensure these embedded biases are not perpetuated into the AI outputs.”

While AI technologies are increasingly reliable, the potential for incorrect or fabricated responses (so-called “hallucination”) still exists, as does the use of malicious “deep fake” impersonations, phishing, and other cybercrimes.

Says Frelick: “Where organizations are dealing with sensitive personal health information, there must be comprehensive systems in place to protect patients’ data from malicious activities. Additionally, organizations need to be aware of ethical concerns and biases that may exist within an AI system’s algorithms and training data.”

Patrick Roszell, principal at Smart & Biggar, whose practice involves technical and contractual aspects related to AI, notes that there are also legal questions about the ownership of data (patient data and derivative data) and intellectual property when informed consent is required, as well as how organizations obtain such permission. 

“In many circumstances, data passes through a number of hands between its generation or acquisition, and then its ultimate use,” says Roszell. “Along the way, there could be many parties who assert various rights or seek to impose restrictions on how the data is used.

“You can see situations where processors of data, technology vendors, or whoever may be involved in the gathering or transformation of data take positions that they have some proprietary interest in the data. And so, when you then turn around and try to use that data for a product such as an AI product, it can be quite complicated to unwind it all.” 

One example, he says, is that with older technologies, it is generally easier to distinguish a tool for manipulating data from the data itself. With AI, in a sense, the “tool and the data are more closely coupled,” so there is more complexity regarding control over IP and confidential data and how that can intertwine with machine learning tools.

“I think organizations are going to need to become more sophisticated about how they understand the importance of these rights, and how they contract around them.” 

Additionally, one part of the organization may understand the technical aspects but not the privacy considerations, and vice versa. Says Roszell: “Technically-minded lawyers can do a lot to bring valuable guidance in connecting those dots.”

While monitoring the ethical and privacy issues related to artificial intelligence, Minister Solomon stated in June that Ottawa will shift its approach from a policy of “over-indexing on warnings and regulation” of AI to focusing on the potential benefits of AI technology for Canada’s economy. That doesn’t mean regulation won’t exist, he said, but it will “have to be assembled in steps.”

Before it died on prorogation, Bill C-27 had been criticized by companies, which argued it would stifle innovation, and by civil rights groups, which said it was vague and that its enforcement measures lacked teeth.

As an example of supporting AI innovation in the healthcare sector, Solomon announced an investment in late June in HealthSpark, an initiative of the Vector Institute aimed at accelerating the development of AI technology. The money will help support startups and “scaleups” with training, mentorship, and access to networks and AI engineering as they develop AI solutions “to tackle some of our more pressing healthcare challenges.”

Miller Thomson’s Frelick says those in the health industry should prepare for the next generation of AI and take proactive steps to mitigate risk. 

“Multidisciplinary teams established to identify and manage risks associated with generative AI are a fundamental part of such an endeavour,” she says, as is the education of boards of directors, employees and contractors.

“Privacy impact and security assessments, and maintaining human oversight and monitoring of AI systems are also essential aspects of a successful transition.”