Know the risks of AI and automated decision-making

Kirsten Thompson and Jaime Cardy of Dentons Canada LLP explain the perils of using personal data

Businesses are rapidly adopting artificial intelligence (“AI”) and automated decision-making systems (“ADS”) to increase the efficiency of a wide variety of processes. Equally rapidly, laws, guidelines and standards regulating the use of such technologies are being implemented, both within Canada and globally. Businesses adopting these technologies, and the vendors of such technologies, are often unaware of the regulatory landscape, their compliance obligations and how to mitigate the legal risks these technologies pose.

What are AI and ADS? How are they used?

AI can generally be described as a computer system that is developed and trained by humans to perform functions that would typically require human intelligence, while an ADS relies on AI and computer algorithms to either assist or replace human judgment in decision-making processes. Most people will already have encountered several instances of AI or ADS today, though they may not have been aware of it. Recommendation engines on streaming services and apps (for movies, music, search terms, etc.) use AI, as does the image search function on a mobile phone. Similarly, anyone who has recently applied for a job or a financial product or service has likely been the subject of ADS at some point in the process.

Examples of how these technologies have been used by organizations include:

  • Attracting and screening job candidates,
  • Evaluating employee performance,
  • Career pathing,
  • Mitigating the risk of fraud,
  • Deciding eligibility for a loan, benefit, or enrollment in a program,
  • Offering recommendations based on interests or behaviour,
  • Adjudicating simple disputes, and
  • Escalating more complex matters for human review.

While ADS can increase speed, minimize costs, and decrease bias, their use is associated with certain risks. Where, for example, an ADS makes decisions that impact individuals, it is clear there needs to be some accountability for that decision-making process. However, who or what is held to account, and how, is less clear. While the approaches to regulating AI and ADS vary across jurisdictions, in Canada these issues have primarily been addressed through privacy law reform.

Canadian approaches to regulating AI and ADS

Current federal approach

The Personal Information Protection and Electronic Documents Act (“PIPEDA”) regulates the handling of personal information by private sector organizations in Canada. PIPEDA does not expressly address AI/ADS, but several of its requirements would apply to an organization’s use of such technologies where personal information is involved. These include accountability and transparency obligations concerning the use of ADS. Additionally, section 5(3) of PIPEDA would limit an organization’s ability to use personal information in ADS and AI to “purposes that a reasonable person would consider are appropriate in the circumstances.”

Proposed federal approach

In June 2022, the federal government tabled Bill C-27, the Digital Charter Implementation Act, 2022 (“Bill C-27”), with the aim of overhauling Canada’s private sector privacy regime. Among other things, Bill C-27 proposes the following new statutes:

  1. The Consumer Privacy Protection Act (“CPPA”), which would reform PIPEDA, and
  2. The Artificial Intelligence and Data Act (“AIDA”), which would govern international and interprovincial trade and commerce in AI systems (whether or not they use personal information).

Automated decision systems: The CPPA would define “automated decision system” as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique.” This definition is remarkably broad: it is not limited by the impact of a decision, the class of individuals who may be affected by a decision, or the classes of organizations that may use the system. It also captures systems that assist human decision-makers as well as those that replace them entirely, effectively removing a loophole in the AI legislation of other jurisdictions that exempts “human in the loop” systems.
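
To see just how sweeping this definition is, consider a minimal, hypothetical sketch in Python (the function, rule and threshold below are invented for illustration, not anything prescribed by the CPPA): even a few lines of fixed business rules that merely flag applications for a human reviewer would appear to fall within it, because they assist human judgment through a rules-based system.

```python
# Hypothetical illustration only: even a trivial rules-based screener like this
# would appear to meet the CPPA's proposed definition of an "automated decision
# system", because it assists a human decision-maker using a rules-based system.

def screen_loan_application(income: float, existing_debt: float) -> str:
    """Flag an application for a human reviewer using fixed business rules."""
    debt_to_income = existing_debt / income if income > 0 else float("inf")
    if debt_to_income > 0.4:
        return "refer"  # escalate to a human decision-maker
    return "recommend-approve"  # still "assists" human judgment

print(screen_loan_application(income=80_000, existing_debt=40_000))  # refer
```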

The CPPA would explicitly impose both transparency and explainability obligations on organizations using personal information in ADS. For instance, organizations would be required to make readily available a plain language account of their use of any ADS to make “predictions, recommendations, or decisions” about individuals that could have a “significant impact” on them. The phrase “predictions, recommendations, or decisions” is broad and could include everything from playlist recommendations through to loan adjudication. The scope of this phrase is limited by the qualifying language “significant impact,” which is intended to weed out trivial or inconsequential examples. As currently drafted, it is unclear whether all targeted ads would be caught, or only some, on the basis of their content (e.g., ads for mortgage interest rates, as opposed to ads for sneakers).

The CPPA would also create a right for individuals to request an explanation of any “prediction, recommendation or decision” made about them by those systems. The explanation would need to include a description of the personal information that was used, the source of that information, and the reasons or principal factors that led to the outcome. Many organizations will find it difficult to provide this information because they do not know the answers and never asked these questions of their vendor. Vendors themselves are, in many cases, also unlikely to be able to provide complete answers, as many current AI applications are pre-trained and built on open-source software, and developers themselves may have limited insight into how the AI functions.
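
One practical way to prepare for this explanation right is to capture the required elements at the moment a decision is made. The following is a minimal, hypothetical sketch in Python (the record structure and field names are illustrative assumptions, not anything prescribed by Bill C-27) of a decision record that stores the personal information used, its source, and the principal factors behind the outcome:

```python
# Hypothetical sketch: log, at decision time, the elements a CPPA-style
# explanation would require. Field names are illustrative, not statutory.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    outcome: str                        # the prediction, recommendation or decision
    personal_info_used: dict[str, str]  # data element -> value or value category
    info_sources: dict[str, str]        # data element -> where it came from
    principal_factors: list[str]        # plain-language reasons for the outcome
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = DecisionRecord(
    subject_id="applicant-1042",
    outcome="refer to human reviewer",
    personal_info_used={"income": "self-reported", "existing_debt": "bureau-reported"},
    info_sources={"income": "application form", "existing_debt": "credit bureau"},
    principal_factors=["debt-to-income ratio above 40% threshold"],
)
```

An organization that retains records like this for each automated outcome would be far better placed to respond to an individual’s request than one that must reconstruct the decision after the fact.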

Artificial intelligence: AIDA would define “artificial intelligence system” as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.” Again, this definition is quite broad. It is not limited to data consisting of personal information, as defined in the CPPA, and it includes systems that work entirely on their own or with additional human input. It is also unclear what “data related to human activities” encompasses: is climate change data caught by AIDA? What about vehicle traffic patterns?

Under AIDA, there would be new requirements regarding the design, development, and use of AI systems, as well as new obligations for persons responsible for AI systems and for persons that process or make available anonymized data for use in AI systems. Companies should be aware that if they participate in any part of the AI ecosystem, they may well have obligations under AIDA. AIDA would also prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or individual interests. Notably, the majority of the obligations under AIDA would apply only to “high-impact systems.” That term is to be defined in regulations, such that the true scope of AIDA remains unknown.

Provincial approach

At the provincial level, Quebec’s Bill 64, An Act to modernize legislative provisions as regards the protection of personal information, provides that as of September 22, 2023, private sector enterprises (and public bodies) in the province will have certain obligations regarding the use of personal information in ADS. Bill 64 will impose obligations on enterprises that use personal information to render decisions based exclusively on automated processing, a scope narrower than that proposed under Bill C-27. Such obligations will include informing an individual of the personal information used to make a decision about them, and of the reasons and principal factors or parameters that led to that decision. This notice must be provided when or before the enterprise informs the person of the decision. Individuals will also have the right to seek correction of the personal information used in automated decisions, which recognizes the importance of ensuring the accuracy of ADS inputs when such systems are used to make decisions that affect individuals. Similar obligations are imposed on public bodies.

Privacy challenges associated with use of AI and ADS

Perhaps the biggest challenge that arises with the use of AI and ADS technologies is that such systems are notoriously difficult to explain in plain language. If an organization is unable to meet its duties of explainability and transparency by articulating, for example, how its ADS arrived at a particular decision about an individual, then it risks non-compliance with PIPEDA, with the incoming obligations under Bill 64, and with the proposed obligations in Bill C-27.

Other issues can arise relating to consent. For example, if there is a power imbalance between the organization using ADS technology, and the individual whose personal information is involved in the decision-making process, the individual may be viewed as being coerced into providing consent, such that their consent would be invalid. This could arise in an employer-employee dynamic, or where an individual is applying for some kind of benefit.

Organizations will also need to be mindful of the fact that Canadian privacy laws allow individuals to withdraw their consent at any time, subject to certain restrictions. Therefore, if an individual decides that they do not want their personal information used by an organization’s ADS, the organization must be able to make those decisions using other means.

Additionally, given that AI and ADS technologies are created and trained by humans, there is always a risk of bias creeping into a system. AIDA would impose requirements relating to the identification, assessment and mitigation of the risks of harm caused by bias in “high-impact” AI systems; however, those obligations are not universally applicable to all AI systems. Whether an organization needs to comply with these bias review and mitigation measures will depend on an assessment of whether it operates a “high-impact” system, which cannot occur until that term is defined by regulation.

AIDA would also impose obligations and requirements on different actors in the AI environment: a person who creates a high-impact system would have some obligations, for example, while a person who manages the operation of a high-impact system would have others. In practice, this means that an organization will not be able to pass off all of the responsibility and accountability for an AI system onto the system’s developer; each party will have its respective obligations. In addition, persons that process or make available anonymized data to such actors will also have obligations (and associated risk), which may come as a surprise to many businesses.

Penalties and enforcement

While the Office of the Privacy Commissioner of Canada is currently restricted to making non-binding recommendations under PIPEDA, Bill C-27 would significantly expand the Commissioner’s enforcement powers. For example, the CPPA would provide for monetary penalties as high as the greater of $10,000,000 or 3% of the organization’s gross global revenue in its previous financial year, which would be available if an organization is found to have failed to provide a plain language explanation of its policies and practices, including a general account of its use of ADS. The same penalties may be available if the Commissioner finds that an organization has failed to obtain valid consent from an individual regarding the organization’s collection, use, or disclosure of their personal information, or if an organization has failed to respect an individual’s withdrawal of consent.
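
To illustrate the arithmetic of that “greater of” ceiling, here is a minimal, hypothetical sketch in Python (the function is an illustration of the formula only, not a statutory calculator):

```python
# Hypothetical illustration of the CPPA's "greater of" penalty ceiling.
def cppa_penalty_ceiling(gross_global_revenue: float) -> float:
    """Maximum penalty: the greater of $10,000,000 or 3% of gross global revenue."""
    return max(10_000_000, 0.03 * gross_global_revenue)

# At $200M in prior-year revenue, 3% is only $6M, so the $10M figure governs;
# at $500M, 3% is $15M and becomes the ceiling.
print(cppa_penalty_ceiling(200_000_000))  # 10000000
print(cppa_penalty_ceiling(500_000_000))  # 15000000.0
```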

AIDA would contain similar penalty and enforcement provisions; however, the details are to be set out in the yet-undrafted regulations.

In Quebec, Bill 64 provides for similarly large administrative monetary penalties against enterprises that fail to inform a person concerned by a decision based exclusively on automated processing, or that fail to give such a person the opportunity to submit observations regarding the decision.

Reap the benefits of ADS while mitigating the risks

The following are 10 action items that organizations can implement to mitigate the risk involved in their use of AI and ADS now, and when (or if) additional legislative requirements come into force:

  1. Companies carrying on business in Quebec should identify any ADS that have no human input in the automated processing of personal information to make decisions about individuals. Other organizations should identify any ADS that uses personal information to make decisions about individuals, regardless of whether the system makes those decisions autonomously or with human input. Vendors of software and technology should be asked whether their products/services use a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other similar technique.
     
  2. Companies should develop an understanding of their use of ADS and be in a position to provide plain language descriptions of all ADS processes identified in item (1), covering the use of ADS, the logic involved, the personal information used, and the decisions being made.
     
  3. Companies that have not already done so as part of their current PIPEDA compliance should consider implementing a mechanism to allow for the correction and updating of personal information used in ADS processing.
     
  4. Companies should explore mechanisms for manually making decisions that are typically rendered by ADS, in the event that an individual does not consent to, or withdraws their consent for, the use of their personal information in automated processing.
     
  5. If carrying on business in Quebec, companies should consider maintaining a “human in the loop” to obviate the duties regarding automated processing of personal information under Bill 64 (if a decision is merely assisted by automated processing, then the requirements in Bill 64 will not apply).
     
  6. Companies should conduct technical due diligence when procuring AI and ADS technology to determine whether the system was developed using appropriate standards or codes, understand how the system was trained and how it works, and learn whether the developer conducted any reviews to assess the system’s outputs for bias or other errors. In the same vein, companies should have contractual clauses with AI and ADS vendors to assign obligations relating to indemnification and liability.
     
  7. Companies acquiring other businesses should closely examine the software, hardware, systems and processes they are acquiring or will be responsible for after closing. The due diligence process should include a robust technical review with lawyers experienced in the area. Representations and warranties, as well as indemnification or limitation of liability clauses, should be adjusted appropriately to reflect the risks of AI or ADS.
     
  8. Companies should insist on audit rights or, at least, the right to inspect audit reports relating to the performance of AI and ADS processes, or to the AI or ADS itself.
     
  9. Companies should establish appropriate retention periods for any personal information that is used in automated processing. If the personal information is being anonymized, companies should ensure the anonymization process is documented and tied to recognized standards or practices. Consent for anonymization (which is considered a use) should be obtained.
     
  10. Companies should ensure that policies, processes, procedures and practices (and accountability structures) across the organization related to the mapping, measuring and managing of AI risks are in place, transparent, and implemented effectively.

***

Kirsten Thompson is a partner and the national lead of Dentons’ Privacy and Cybersecurity group. She has both an advisory and advocacy practice, and provides privacy, data security and data management advice to clients in a wide variety of industries.

Kirsten’s practice has a particular concentration in data-driven industries and disruptive technologies, and she is a leading practitioner in areas such as Fintech (including blockchain and "smart contracts"), digital identity, Open Data/Open Banking, vehicle telematics and connected infrastructure, Big Data/data analytics applications and enterprise data strategy. She also helps clients prepare for and manage information crises, such as data breaches, investigations and class actions, and has advised financial institutions, insurers, health care providers and providers of critical infrastructure on cybersecurity preparedness and response planning. She has been lead Canadian counsel on some of the largest North American data breaches and has been selected as preferred cybersecurity counsel by a number of Canada’s leading financial institutions and insurance providers.

Jaime Cardy is a senior associate in the Privacy and Cybersecurity group in Dentons’ Toronto office. She has particular expertise in providing risk management and compliance advice under various legislative privacy regimes, including in both the public and healthcare sectors.

Before joining Dentons, Jaime was an adjudicator at the Office of the Information and Privacy Commissioner of Ontario, where she conducted hearings and issued binding orders deciding appeals and complaints under Ontario’s provincial, municipal, and health sector privacy legislation. She has extensive experience interpreting and advising on both privacy and access to information issues.