Artificial intelligence: how new tech is creating privacy concerns and modernizing IP law in Canada

Dentons' Panagiota Dafniotis & Chantal Bernier on AI-based problems – and solutions – for businesses

There is no single agreed definition of AI. The Organisation for Economic Co-operation and Development (OECD) defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions […] designed to operate with varying levels of autonomy,” while the World Intellectual Property Organization (WIPO) defines AI as including “machines and systems that can carry out tasks considered to require human intelligence, with limited or no human intervention … [and] techniques and applications programmed to perform individual tasks.”

Regardless of how we define AI, one thing is clear: AI is automating tasks, augmenting human decisions and shaping go-to-market strategies for new products. New business models are emerging in every industry, from financial services to healthcare and beyond. AI has become a mission-critical resource for businesses as they address and anticipate customer needs and use AI to inform decisions that affect their operations and services.

An interesting trend has emerged: companies are integrating business expertise with advanced AI to solve complex business challenges for clients. Team silos have given way to strong collaboration. Good examples can be found in the financial services industry, where data scientists have worked with equity traders and bankers to develop AI-based electronic trading platforms that deliver improved trading results and insights, as well as money management platforms for clients.

1. AI and IP

One of the most important questions in the field of AI is whether and how intellectual property (IP) can protect an AI and its by-products.

Copyright

For modern AI to be implemented properly and to succeed at scale, researchers and engineers need access to large datasets. Since AI systems use data to create value, the question of ownership and the right to use such data becomes key. Claims to data ownership are often grounded in copyright, even though “there can be no copyright in information” (Nautical Data International, Inc. v. C-Map USA Inc., 2013 FCA 63 at paragraph 11). In Canada, there is no stand-alone ownership right in data (Commissioner of Competition v. Toronto Real Estate Board, 2017 FCA 236), but the selection or arrangement of data may cross the threshold of creativity required to attract IP protection. The potential for these rights to exist in the absence of a text and data mining exception in Canada, as well as the legal risks that stem from data ownership and data use for AI systems, cannot be overlooked.

Moving from the question of ownership to the question of authorship, AI applications are increasingly capable of generating literary and artistic works. This capacity raises major policy questions for the copyright system. The Canadian Government has been actively pursuing initiatives to determine whether and what copyright policy measures should be taken to ensure Canada's copyright framework continues to achieve its underlying policy objectives and related priorities in the face of the challenges brought about by AI.

Patents

As in other countries, Canada is seeing an increase in the number of patent filings directed at AI-based inventions. Examination of AI-based inventions in Canada involves determining which elements of a claimed invention are essential to solving a technical problem, such as a problem related to computer operation. From a Canadian perspective, a description of the technical problem solved by or through an AI-based invention is important to improve the odds that the invention will avoid a subject-matter eligibility rejection. The description can characterize the technical problem as a “computer problem” (for example, the use of a neural network for faster classification of images compared to non-AI implementations) and include technical detail regarding the hardware and software used to solve that problem, helping to establish the essential nature of the hardware and software.

Computer programs and mathematical methods are generally excluded from patentability in Canada. However, the Federal Court of Canada decision in Choueifaty v. Canada (Attorney General), 2020 FC 837, has paved the way to patenting software inventions, even when implemented using generic computer components, and confirmed that where an algorithm improves the functioning of the computer, it may be considered patentable.

Trade secrets

In combination with copyright or patent protection, the algorithm of an AI system can be protected as a trade secret, in the same way that traditional software and related inventions are often protected. In Canada, there is no statute specifically governing trade secrets or confidential information. Trade secret law is instead based on common law – or, in the case of Quebec, civil law – principles enforced in the courts through claims such as breach of contract or breach of confidence. There are also relevant provisions in Canada’s Criminal Code. Companies doing business in Canada must therefore take all reasonable measures to maintain the secrecy of their commercial information in order to benefit from trade secret protection. Trade secret protection is well suited to AI-related information such as algorithms or neural network design, training data, the output of an AI system, source code, and the way a business uses AI to implement machine learning.

While trade secrets are increasingly important for AI companies, they come with one major drawback: protection is only afforded to the extent the intellectual property can be kept secret. Given the employee turnover rate we have recently observed at technology companies and the practical need to share technology widely with employees and partners in the ordinary course of business, strong NDAs and employment agreements are needed to ensure departing employees or business partners are legally required to keep trade secrets secret.

Given the complexity that distinguishes an AI system from conventional software, the traditional protections we have relied on for software are not as straightforward to apply to AI systems. To protect the IP (and data) generated by their AI systems, companies will want to ensure that their IP strategy routinely considers other forms of IP protection, includes tight confidentiality agreements with employees and partners, limits disclosure of sensitive materials, and maintains physical access controls in their facilities where necessary to protect their investment in AI.

Trademarks

AI’s increasing prominence, and the correspondingly diminished role of human beings in the product search, suggestion and purchasing process, mean that some of the historic concepts and principles of trademark law will have to be interpreted differently. Questions of confusion, imperfect recollection, the average consumer and trademark infringement change significantly, as does the resulting liability, including in comparative advertising where the “influencer” is an AI application.

2. AI and privacy law

Currently, none of the four privacy laws in Canada expressly addresses artificial intelligence, whether the federal Personal Information Protection and Electronic Documents Act (PIPEDA), the Alberta Personal Information Protection Act (Alberta PIPA), the British Columbia Personal Information Protection Act (BC PIPA) or the Québec Act respecting the protection of personal information in the private sector (Québec Act). The Office of the Privacy Commissioner of Canada (OPC), however, applies PIPEDA to artificial intelligence and, coming into force in September 2023, amendments to the Québec Act will regulate the use of AI.

The application of PIPEDA to artificial intelligence

The OPC reads into the Fair Information Principles enshrined in Schedule 1 of PIPEDA a specific application to the use of AI, as described in its proposal, A Regulatory Framework for AI: Recommendations for PIPEDA Reform:

  • Limitation of use (Principle 4.5): Privacy law requires that personal information be used only for the purposes for which it was collected. In relation to AI, this principle mandates that the processing of personal information through AI remain consistent with the original purposes to which the individual consented or that new consent be obtained.
  • Consent (Principle 4.3): Profiles obtained through the application of artificial intelligence generally constitute sensitive personal information because of the breadth of information they rest upon. It follows that the use of personal information through AI for new purposes requires express consent (opt-in).
  • Safeguards (Principle 4.7): The Safeguards Principle requires that personal information be “protected by security safeguards appropriate to the sensitivity of the information.” This means that the personal information resulting from the application of AI must be safeguarded at a high level.
  • Accountability (Principle 4.1) and Openness (Principle 4.8): The algorithmic use of personal information must be transparent and explainable.

The OPC also addresses algorithmic bias as a privacy risk and has inquired about organizations’ measures to avoid it. Organizations should be ready to explain how they ensure that their application of AI to personal information does not lead to biased, discriminatory outcomes.

The new Québec privacy law

Bill 64 amends the Québec Act by introducing provisions that regulate AI. Under the new section 12.1 of the Québec Act:   

  • organizations will be required to inform individuals when a decision concerning them is based exclusively on automated processing, no later than at the time the individual is informed of the decision itself;
  • individuals will have the right to:
    • access the personal information used to render the decision, as well as the reasons and the principal factors and parameters that led to the decision;
    • obtain correction of the personal information used to render the decision; and
    • make representations on the use of that personal information and obtain a review of the decision.

Failure to meet these obligations can lead to an administrative monetary penalty imposed by the Commission d’accès à l’information (CAI) under new subsection 90.1(5).

Further developments to come 

If the expected amendments to PIPEDA follow those proposed in 2020 through the former Bill C-11, the Digital Charter Implementation Act, 2020, federal privacy legislation will also extend to AI. Bill C-11 introduced a requirement for organizations, upon request by individuals, to provide an explanation of the prediction, recommendation or decision resulting from an automated decision-making system and of how the personal information was obtained for that purpose.

Ontario’s White Paper on Modernizing Privacy in Ontario, which consults on the possibility of adopting a provincial privacy act for the private sector, envisages the regulation of the use of AI. In addition to granting individuals the right to receive an explanation of the outcome of an automated decision, the White Paper proposes to address the risks of profiling and surveillance.

In British Columbia, the Report of the Special Committee to Review the Personal Information Protection Act recommends that the Government of British Columbia “Undertake a public consultation to study the long-term socioeconomic impacts of artificial intelligence”.

Conclusion

AI challenges established legal concepts and principles. As a consequence, we should expect legislative reforms and court decisions that will create new legal requirements around the use of AI and its products.

***

Chantal Bernier is a member of Dentons’ Canadian Privacy and Cybersecurity, Government Affairs and Public Policy practice groups. Chantal advises leading-edge national and international companies as they expand into Canada and Europe, enter the e-commerce space, adopt data analytics and roll out data-based market initiatives. Her clients include ad tech companies, financial institutions, biotech companies, data analytics firms and government institutions. During her nearly six years at the helm of the Office of the Privacy Commissioner of Canada (OPC), Chantal led national and international privacy investigations in the public and private sectors, as well as privacy audits, privacy impact assessment reviews, technological analysis, and privacy policy development and research.

***

Panagiota Dafniotis is a partner in our Corporate group and the National Lead of the Intellectual Property group. She is an IP lawyer who works with a wide range of companies, from emerging growth technology companies to Fortune 500 companies, in diverse industries, on the development and implementation of integrated IP strategies covering the IP lifecycle. She advises in all areas of IP law and related risk management, governance and strategy, including patents, trademarks, copyright, data management and analytics, open source, licensing, marketing, IP in M&A, commercialization, co-development and partnerships, as well as IP disputes. Panagiota is also a member of Dentons' Global Venture Technology and Emerging Growth Companies practice, the Dentons Global AI project team and Nextlaw In-House Solutions, General Counsel Leaders Team.