Deal-maker or deal-breaker: the legal ins and outs of using AI in M&A

Imran Ahmad, Roxanne Caron, and Suzie Suliman on why due diligence is key in AI-related acquisitions

As companies look to streamline operations, demand for digital transformation is growing, with many turning to artificial intelligence (AI) technologies for solutions, and acquisitions of AI-focused entities are becoming increasingly common. While AI has been a hot topic for many years, its popularity has grown exponentially in recent years, owing partly to the introduction of generative AI platforms like ChatGPT. The global AI market is expected to surpass $680 billion in 2023 and is projected to reach $1.2 trillion by 2026.[1]

There are many things to consider when creating AI or integrating it into a company’s activities, including pitfalls to avoid ahead of a potential share or asset purchase. In this article, we take a deep dive into what companies need to consider before they buy AI.

Increasing presence of AI in transactional landscape

The rise of AI allows systems to complete tasks that previously required human intelligence. These advances have created opportunities for greater efficiency and accuracy in both work and personal life. As AI has become more advanced, we have seen it transform a wide range of industries. For example, it has been used in healthcare to improve medical diagnoses and at-home health services, in the financial industry for better credit scoring and fraud detection, in agriculture for precision harvesting, and in manufacturing to improve production and quality control.

Companies continue to struggle to find people with the knowledge and skills to test and implement AI throughout their organizations. AI talent remains scarce in a highly competitive market, even though many companies have raised their budgets and, in some cases, doubled the average salary on offer.

Given this talent shortage and the growing demand for, and capabilities of, AI technology, acquisitions of AI-focused entities have also increased. Despite recent economic trends, technology is also one of the sectors expected to be least affected by inflation.[2] In fact, technology (and emerging technologies in particular) is expected to see the highest growth in inbound cross-border M&A in 2023 compared to 2022. As Canada is one of the leaders in artificial intelligence, with over 850 AI-related start-ups,[3] we can expect to see an increase in acquisitions of Canadian AI-related entities as well.

Issues specific to AI

Deals involving AI raise unique issues for consideration during the due diligence process. Understanding these challenges is important for companies to ensure that the AI technology holds genuine value and would not raise red flags during the course of a transaction.

  • Source code: Developing AI is resource-intensive. To speed the creation of AI algorithms, developers often build on pre-existing code, adding layers as needed. Companies need to know the provenance of this code and train employees to use code that neither infringes another entity’s intellectual property nor is subject to open-source licence terms that could impede the commercialization of an AI product or service.

  • Biases in data: Data used to train an AI algorithm may contain biases. If the training data reflects imbalances in race, gender, disability, socioeconomic status or other characteristics, the resulting technology may be error-prone or create other issues. For example, if an AI system is used to make decisions about individuals, it risks producing discriminatory outcomes under the Canadian Human Rights Act. When conducting due diligence on AI technology, steps should be taken to ensure that the company has appropriate procedures and policies in place to identify and reduce the likelihood of bias in its training processes.
     
  • Liability: Depending on the activities it performs, AI can expose a company to varying degrees of liability. For example, an AI technology that facilitates diagnosis based on MRI scans could have significant consequences if its outputs lack precision and consistency, opening the company to potentially costly lawsuits. When providing a product or service that integrates AI, companies should understand the legal ramifications of deploying the technology and avoid unlimited liability clauses in their contracts.
     
  • Visibility into how outputs are produced: It can be difficult to understand the reasoning behind the outcomes or decisions of an AI system; such systems are often referred to as a “black box.” As a buyer, it is important to seek assurances regarding the reliability and accuracy of an AI system. Due diligence should establish how the system was trained, what research went into developing it, who developed it (and their qualifications), what testing was conducted, and whether there are any known errors or bugs.
     
  • AI-specific laws: Evidently, there are risks associated with the use of AI, yet many jurisdictions have only recently started to consider enacting laws or regulations relating to it. For example, the European Union has introduced a draft “AI Act,” the United Kingdom has released a proposal for regulating AI, the United States has published a Blueprint for an AI Bill of Rights, and Canada recently tabled the “Artificial Intelligence and Data Act” as part of Bill C-27. AI adoption is growing far faster than these laws are coming into effect. Many agreements relating to AI technology (e.g., licence agreements) are governed by the laws of jurisdictions with little to no AI regulation, introducing an additional layer of uncertainty in a transaction: how these agreements will be interpreted, and what their impact will be, remains unclear. The governing law provisions of material contracts relating to AI should be factored into the overall risk analysis when commercializing AI.
     
  • Intellectual property: Another significant consideration is what data was used to train the AI system and whether the company had appropriate rights to use it. When a third party’s data is used to train an AI system, the result can be improvements to the system and new IP rights. It is not uncommon for entities to train their AI systems on publicly available data; however, even publicly available data may carry restrictions on commercial use. Without appropriate contractual measures in place, the company’s rights in the AI system may be in jeopardy, posing significant risks to the business and the value of the transaction. Companies should carefully consider the intellectual property rights of all parties involved in the development of AI systems.
     
  • Privacy: As with most technology, the use of AI presents data privacy issues. Where an AI system is trained on human data or accepts personal information as input, it is important to ensure that the company complied with all applicable privacy laws when creating the system and in the conduct of its business. Acquirers should have visibility into the origin of the datasets used to train the target’s AI systems and the process used to anonymize any personal information as necessary. Particular care should also be taken around privacy policies and procedures.
     
  • Cybersecurity: The increased adoption of technology in daily life has been accompanied by a rise in cyberattacks, which can interrupt business operations and have significant economic impacts. When a cyberattack compromises an AI system, however, the consequences can be catastrophic, going well beyond the unavailability of the system. For example, if an AI system used in healthcare is tampered with, it could produce improper outcomes and serious risks to an individual’s health; if the code behind an AI system is affected, errors can be introduced into the algorithm that are difficult to remedy. Tech-focused companies, in particular, should ensure that they have provided proper training to all employees (especially those involved in the development of AI), and that industry-standard policies, practices and tools for quickly detecting and preventing cyberattacks are in place.
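Some of the technical points above, open-source code provenance in particular, can be screened programmatically in a first pass before counsel is engaged. As an illustrative sketch only (the copyleft keyword list is an assumption for demonstration, and this is no substitute for a proper licence audit or legal review), the following Python inventories the licences declared by a codebase’s installed dependencies and flags those whose terms may warrant closer scrutiny:

```python
# Illustrative sketch: inventory the licences declared by installed Python
# dependencies and flag those mentioning common copyleft keywords.
# The keyword list is a simplified assumption, not legal guidance.
from importlib import metadata

COPYLEFT_KEYWORDS = ("GPL", "AGPL", "LGPL", "MPL", "EPL")

def license_inventory():
    """Return {package name: declared licence string} for installed packages."""
    inventory = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        # Fall back to the first trove classifier if no License field is set.
        lic = dist.metadata.get("License") or dist.metadata.get("Classifier", "")
        inventory[name] = lic or "UNKNOWN"
    return inventory

def flag_copyleft(inventory):
    """Return the subset of packages whose licence mentions a copyleft keyword."""
    return {pkg: lic for pkg, lic in inventory.items()
            if any(kw in lic.upper() for kw in COPYLEFT_KEYWORDS)}

if __name__ == "__main__":
    for pkg, lic in sorted(flag_copyleft(license_inventory()).items()):
        print(f"REVIEW: {pkg}: {lic}")
```

A flagged package is only a prompt for review: whether a given licence actually impedes commercialization depends on how the code is linked, distributed, and licensed onward.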

Due diligence checklist for AI

Potential acquirers should understand the issues discussed and use the due diligence process to confirm the extent to which the AI technology considered in the transaction is free of these issues. Relevant questions to ask include:

  • Is there any AI-specific legislation in the jurisdiction(s) where the technology would be deployed?
  • What intellectual property rights are derived from the AI technology?
  • Is there any open-source code integrated into the AI technology?
  • Does the AI technology necessitate the use of third-party intellectual property rights?
  • Where does the data used to train the AI come from?
  • How was the AI technology developed?
  • What policies and processes are in place within the target company relating to privacy, cybersecurity, ethical development, etc.?
  • Does the target company have agreements with unlimited liability (or high liability) clauses in relation to the use of its AI product or service?
  • Has the target company taken appropriate measures to preserve its intellectual property rights (for example, when it licenses its AI system for others to use)?
  • Does the AI system use personal information in its training?
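The checklist’s questions about training data and ethical development can be supported with simple quantitative screens during diligence. As an illustrative sketch only (the group labels, sample decisions and any acceptable threshold are assumptions, not a legal standard under the Canadian Human Rights Act or otherwise), the following Python computes per-group selection rates for an AI system’s decisions and the gap between the highest and lowest rates, one common first signal of the data biases discussed earlier:

```python
# Illustrative sketch: per-group selection rates and the disparity gap,
# a simple screening signal for biased decision outputs.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, approved: bool). Return rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparity_gap(rates):
    """Largest difference in selection rates across groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical decisions: group "A" is approved twice as often as group "B".
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                 # ≈ {'A': 0.67, 'B': 0.33}
print(disparity_gap(rates))  # ≈ 0.33
```

A large gap does not by itself establish discrimination, and a small one does not rule it out; the screen simply identifies systems whose training data and decision logic deserve closer examination.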


Overall, it is essential in any transaction involving AI to retain legal counsel with specific experience and expertise in the AI space. AI-related acquisitions present unique challenges, and as AI continues to grow rapidly, legal issues are expected to grow too.

***

Imran Ahmad, partner, is the Canadian head of Norton Rose Fulbright’s technology group and the Canadian co-head of the information governance, privacy and cybersecurity practice. Imran advises clients across all industries on a wide array of complex technology-related matters, including outsourcing, cloud computing, SaaS, strategic alliances, technology development, system procurement and implementation, technology licensing and transfer, distribution, open source software, and electronic commerce. As part of his cybersecurity practice, Imran works closely with clients to develop and implement practical strategies related to cyber threats and data breaches. He advises on legal risk assessments, compliance, due diligence and risk allocation advice, security, and data breach incident preparedness and response. In addition, Imran has acted as "breach counsel" on some of the most complex cross-border and domestic cybersecurity incidents. He has extensive experience in managing complex security investigations and cross-border breaches. In his privacy law practice, he advises clients on compliance with all Canadian federal and provincial privacy and data management laws, with a particular focus on cross-border data transfer issues and enterprise-wide governance programs related to privacy.

***

Roxanne Caron is a privacy and technology lawyer. Her practice focuses on information governance, data protection, cybersecurity and commercial intellectual property.

She advises clients on compliance with all Canadian federal and provincial privacy, data protection and data management laws, cybersecurity breaches and reporting. She also regularly assists in corporate mergers and acquisitions, with respect to intellectual property, IT, privacy and various technology aspects.

Before joining Norton Rose Fulbright, Roxanne worked at the Centre of Genomics and Policy and interned at the Centre for Applied Ethics of the McGill University Health Centre. Her research focused on the use of AI in biotechnologies, the design of regulatory frameworks for biobanking, and the ethics of doping in sports.

***

Suzie Suliman is a corporate and intellectual property lawyer. She focuses on technology, privacy/cybersecurity and intellectual property matters. Her privacy/cybersecurity practice focuses on assisting clients in preparing for and responding to data and privacy breaches. Suzie also assists clients in transactions or agreements relating to technology and intellectual property.

Suzie obtained a bachelor's degree in electrical and biomedical engineering before pursuing her law degree. During her engineering studies, Suzie designed and implemented a seizure detection alert monitor utilizing machine-learning software, and took a particular interest in artificial intelligence. While in law school, Suzie's area of concentration was intellectual property, information and technology law.


[1] According to the Artificial Intelligence and Data Act (AIDA) companion document: The Artificial Intelligence and Data Act (AIDA) – Companion document (canada.ca)

[2] Norton Rose Fulbright: Global M&A trends and risks

[3] According to the Artificial Intelligence and Data Act (AIDA) companion document: The Artificial Intelligence and Data Act (AIDA) – Companion document (canada.ca)