In November, Ontario introduced “Bill 149, Working for Workers Four Act, 2023”. While artificial intelligence (“AI”) features minimally within the bill’s text, the government’s goals in relation to the technology are lofty: “ensur[ing] that when artificial intelligence is used in the hiring process… it’s done…in a transparent manner”; “ensur[ing] workers aren’t excluded from the job market because of technological biases and that their privacy rights are protected”; and “[h]elp[ing] workers make informed decisions in their career search.”
It is undeniable that AI has the potential to upend hiring practices. Concerns about bias, privacy, and consent are legitimate and substantial. Clearly, legislation and government regulation are required. What is less clear is whether Bill 149 meaningfully addresses such issues.
What does the Bill say and purportedly do?
Bill 149 is the fourth component in a series of Ontario legislative reform initiatives ostensibly aimed at modernizing worker protections. Relevant to this article, the bill proposes to add a Part III.1 to the Employment Standards Act, 2000, SO 2000, c 41. Part III.1 will regulate job postings generally and, specifically, will obligate employers to disclose in public advertisements any use of AI to screen, assess, or select candidates.
The bill also obligates employers to “retain or arrange for some other person to retain copies of every publicly advertised job posting within the meaning of Part III.1 and any associated application form for three years after access to the posting by the general public is removed.”
“[P]ublicly advertised job posting” is to be defined by the regulations and the Bill provides for regulatory exceptions to the AI disclosure rule.
Where are the potential gaps?
While recognizing that associated regulations may close gaps, the legislation as drafted does not appear to protect workers in any meaningful way. There is no "opt-out" right for applicants concerned about privacy or screening bias. The Bill also appears silent on employer and software-company reporting obligations, and on any specific oversight mechanisms designed to identify and address algorithmic hiring biases.
Such omissions are substantive. Although addressing privacy concerns (e.g., controlling ownership of applicant information) is of critical importance, this article will focus on the gaps in protection regarding bias. If left unaddressed, biased algorithms may "encode and amplify social ills as devastating as racism, sexism, and economic inequality" in hiring practices. Failure to ensure visibility of, and redress for, such issues is a substantive problem with the bill as drafted.
An analogous New York City law obliges employers to arrange and publicly post "bias audits" of AI hiring tools by an "independent auditor", and to provide candidates with ten business days' notice of AI use as well as the "job qualifications and characteristics that such automated employment decision tool will use in the assessment." Other United States jurisdictions have introduced similar rules. Even a brief comparative analysis of Bill 149 and the New York City law makes clear that the Ontario legislation would provide only a byte-sized solution to the terabyte-sized problem of addressing AI use in hiring decisions.
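To make the "bias audit" concept concrete: the New York City law requires auditors to calculate impact ratios, comparing each demographic group's selection rate against that of the most-selected group. The sketch below illustrates that calculation in minimal form, using invented data; a real audit applies the methodology prescribed by the law's implementing rules to historical candidate outcomes.

```python
# Minimal sketch of the impact-ratio calculation at the heart of a
# "bias audit" of an AI hiring tool. The data below are hypothetical,
# for illustration only.

from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: list of (group, was_selected) pairs.
    Returns {group: impact_ratio}, where each group's selection rate
    is divided by the highest group selection rate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

# Hypothetical screening outcomes produced by an AI tool.
data = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)
for group, ratio in sorted(impact_ratios(data).items()):
    # Under the common "four-fifths" rule of thumb, a ratio below
    # 0.8 signals possible adverse impact warranting review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this invented example, group_b is selected at half the rate of group_a, so its impact ratio of 0.50 falls below the four-fifths threshold. The point for present purposes is that even this simple arithmetic presupposes collected outcome data, which Bill 149 does not require anyone to gather.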
That is not to suggest that the New York City law is without flaws. Despite theoretically providing oversight, the law has been criticized for "a lack of standardisation [sic] and little information about what a desirable audit might look like". However, flawed rules that begin with the aim of providing both transparency and accountability may provide a superior starting point for future lawmakers. By placing the starting line as it has, Ontario risks playing perpetual catch-up in the race to legislate AI. As noted in commentary on the province's earlier regulation of algorithms in the workplace: "Ontario's workers, like any other workers…deserve more than transparency – they need agency, particularly through collective oversight and action."
Even if lawmakers re-draft Bill 149 to address the above, further reform will be required to ensure substantive legal remedies for workers. The British Columbia Law Institute recently noted that “[a]s usage of artificial intelligence expands, there will be more cases of unintended discriminatory effects that do not fit readily into the human rights framework.” Identifying biased algorithms is not the only hurdle and mitigating against the risks posed by such technology “will require a team effort from scholars across many disciplines.”
Is collective data important?
Privacy-compliant collection of AI hiring program inputs and outputs could be an important first step in providing consequential worker protections. In 2016, ProPublica published a revelatory article on bias in the assessment algorithm used by judges in sentencing. In revealing such bias, the journalists involved analyzed thousands of data points. When Amazon scrapped its internal hiring tool due to bias, it had known about the issue for years and been unable to solve the problem. In concluding the algorithm was biased, the company may have repeatedly manipulated variables and reviewed the resulting output changes. Put simply, extensive data sets are the mechanism through which algorithmic problems are often discovered. While Bill 149 provides that employers must keep records of certain materials, it does not obligate any form of systematic review or mass data collection or retention.
What does the public want?
Although Ontario does not appear to have pointedly surveyed the public on the use of AI in hiring decisions, the government's consultation results on the creation of a "Trustworthy AI Framework" in relation to the province's own use of AI are germane. Ontarians wanted "[d]isclosure and transparency" – but also "a recognition of human rights values and principles, and commitment to address systemic bias in AI." Stemming from the consultation responses, the Ontario government committed that:
- AI would not be used in secret;
- risk-based rules would be put in place to ensure safe, equitable, and secure AI use; and
- AI use would protect and reflect the "diverse communities" that make up the province.
The government will hopefully pursue these commitments in all AI-based legislation, not just laws targeting internal AI use.
Bill 149 currently addresses the first of these commitments, providing transparency where previously there was none. In achieving even this small step forward, the Ontario government has begun to fill the "regulatory and legal vacuum" surrounding AI. But much more work is required to ensure all three commitments are reflected in new laws and to protect workers from the tangible risks associated with AI in hiring.
Julia Lawn performs diverse and essential roles within the firm. Julia joined the firm in 1999, became a partner in 2006, and has performed in a management role since 2014. While overseeing all aspects of the firm's operations over the past many years, Julia also conducts research and performs complex legal analysis on significant cases for NST's clients, including work leading to recent successes and submissions at all levels of court.
Allyse Cruise is an associate at Nathanson, Schachter & Thompson LLP. She joined the firm this year after working in government. Prior to law, Allyse worked as a degreed engineer, primarily in the mining industry.
For further discussion on emerging U.S. laws, see: Cathy O'Neil, Holli Sargeant & Jacob Appel, "Explainable Fairness in Regulatory Algorithmic Auditing" (October 10, 2023) at p 14, available at SSRN: https://ssrn.com/abstract=4598305.
See, for example, Jennifer Quaid's article ("The risk of waiting to regulate AI is greater than the risk of acting too quickly", Policy Options, Sept 27, 2023) on Bill C-27 and "the impossible burden of identifying the best way of doing things from the outset."
 See, for comparison, a collection of bias audits resulting from New York City’s law: https://github.com/aclu-national/tracking-ll144-bias-audits.