'More an art than a science'

AI is shaking up established processes in procurement and contracting

This article was created in partnership with Norton Rose Fulbright Canada 

At this stage in the game, there’s no understating AI’s clout. As Domenic Presta, partner at Norton Rose Fulbright Canada LLP, puts it succinctly: “It’s a fantastic, massively advantageous, and life-changing innovation.”

But as a long-time practitioner in the space, Presta urges clients to proceed with caution.

“With any new technology, there are a lot of considerations we haven’t even turned our minds to — and potential implications that have never arisen before.”

In the context of procuring and contracting for AI technologies, these possibilities are currently at the fore. Clients are unsure whether a new approach is needed when onboarding new tech solutions, and answering that question requires multidisciplinary insights.

Not all AI is identical; there are many types of solutions for different purposes in various industries with different requirements, and some clients operate locally while others work much more globally. 

The bottom line, according to Imran Ahmad, senior partner and Canadian Head of Technology and Canadian Co-Head of Cybersecurity and Data Privacy at Norton Rose Fulbright Canada, is this: “Procurement is now more an art than a science.” 

“Currently there’s a checklist — five things to be mindful of, a framework that can apply to every tool — but it doesn’t work that way anymore,” Ahmad explains. “It calls for a very bespoke approach right now.” 


Identifying a valid use case

A common misstep is businesses procuring an expensive AI solution simply because AI is popular and everybody else is doing it. Identifying a valid use case is critical, says Manpreet Singh, of counsel at Norton Rose Fulbright Canada, because it informs the relevant approach and framework for contractual considerations.

The purchaser needs to understand how to implement the technology and what the desired outcome is before they get into next steps, such as due diligence on potential vendors.  

“Housekeeping considerations such as who’s going to own it internally must be in place first and foremost,” Singh says. 

A network monitoring system that “doesn’t see the light of day” — has no access to meaningful data and doesn’t generate output — is a completely different scenario from engaging with a vendor for a customer-facing system.  

A good example is a tool that provides automated decision making. There must be controls in place in terms of privacy, ensuring there’s no bias, and establishing visibility around how the AI works at an underlying level. Are you able to explain its outputs; justify why a decision was made? These types of factors increase complexity in the contracting process quite significantly.   

In this context, audit rights can be important — a contractual safeguard that ensures the system is operating fairly and in accordance with legal and ethical expectations, especially in areas of a business such as hiring or financial services.  

“Businesses must determine what nuances are associated with the AI system they’re looking at,” Presta says. “Considerations around intellectual property, privacy and confidentiality, and security come into play, and must be borne in mind going forward.”

AI-specific considerations when negotiating provisions 

When evaluating any AI solution, there are essentially three parts to it: the data that goes in, the algorithm, and the output. The first consideration, then, is data ownership. Is it the business’ or somebody else’s? If the former, will it remain the business’ if it’s fed into somebody else’s AI tool?  

“You can’t just pull it out like in a database, so you must get it right,” warns Ahmad, and Presta agrees, noting that it’s difficult to negotiate any other outcome than the data you input becoming part of the algorithm. 

“That's the value of an AI system,” he says. “The consumption of data to refine its decision-making ability.” 

Presta ties it back again to the individual use case. Does it matter, in the context of what the business is looking to get from the tool, whether the training data comprises publicly available or proprietary information? Will the business get a licence to own the output that’s generated? Does the vendor have the necessary consents with respect to the data being used to train the solution?

Even once ownership is established, there are still copyright concerns: it's not settled law in terms of who owns what, Singh notes. She urges businesses to consider what they're providing and what they're receiving.

While there is a plethora of AI-specific considerations when negotiating confidentiality, privacy, security, and intellectual property provisions, Singh sees these falling into the bucket of due diligence. She recommends going back to the drawing board, quite literally.  

Before engaging with the vendor, understand how the AI model was assembled and what data was used. Is it open-source software, is it completely proprietary, or are they licensing something from third parties?

“All of that can muddy the customer's ability to own the outputs,” she explains. “AI open-source software comes with its own licensing restrictions. Sometimes they’re fairly permissive, requiring an attribution, but other times it can require full disclosure and that’s where you want to be a bit more concerned.”  

Another AI-specific risk to address is the phenomenon of hallucinations. As Ahmad notes, these tools can generate convincing but entirely false information when faced with gaps in training data or ambiguous prompts. 

Businesses should assess the level of factual reliability in outputs and avoid over-reliance in sensitive or high-risk contexts. Disclosure and disclaimer clauses may be appropriate where AI interacts with external users. 

Future-proofing AI contracts 

Ongoing contract maintenance and governance is also a nuanced endeavour. In existing contracts for more traditional technologies, the vast majority include language that states if regulations change, the parties have the right to come back to the table and mutually renegotiate certain terms and conditions. When dealing with AI, that wording is even more crucial given the lack of settled law in Canada.  

While there are some provincial requirements, there’s a lack of legislation at the federal level that addresses AI — at least to a degree that would make you feel comfortable, Singh notes. Because it's a continually changing legislative environment, you want to reserve the right to amend. The vendor should make a covenant that they will modify the AI model as needed or customize the services in a way that complies with future legislation.

This is where purchasers should be vigilant about with whom they partner. Hiring a cool new vendor with the most innovative product is all well and good, but if they don’t have the financial wherewithal, there’s the risk they won’t be able to afford to comply with the changes needed to be on the right side of any new law.

“Give yourself an exit opportunity,” Presta advises. “Reserve the right of termination for convenience and have a transition plan in place so you’re not left holding the proverbial bag.”  

From an operational perspective, businesses need robust internal governance and the ability to review all policies to update and accommodate changes in legislation as well.  

While many countries don’t have legislation across the board, there are best practices that businesses should be benchmarking against, Presta says. He points to the EU as a jurisdiction with robust laws that contemplate AI and can be used as a benchmark.

“Whether they're complying with them does give some level of comfort in terms of what they’ve done; they haven’t created an AI tool in the Wild West,” he says. “Even before you get to the contracting process, part of the procurement process should include a diligence review of their frameworks or overall compliance strategy.” 

Additionally, cybersecurity remains a critical consideration in AI contracts. Contracts must include robust cybersecurity obligations that define how vendors will protect data, notify of breaches, and cooperate with customers during incidents. These provisions are essential to help businesses meet their own legal obligations and maintain trust in the AI system. 

Context is everything 

When navigating AI procurement and contracting, one principle stands above all: context is everything. A rigid checklist approach no longer suffices in a world where use cases vary widely, risk profiles shift, and the regulatory horizon remains unsettled. 

“Checklists won’t help you if you don’t appreciate the nuances of the use case you’re dealing with,” Presta sums up, emphasizing that each scenario demands a tailored legal strategy, especially as AI’s impact reaches deep into operations and lives. 

“If you can maintain a posture of not inhibiting innovation by being too conservative but bearing in mind the potential risks by following best practices, established frameworks... that would probably stand you in good stead.” 

For Ahmad, the right approach is fostering strategic agility. He recommends combing through all potential considerations and deciding on a case-by-case basis which to put more emphasis on. For example, the liability piece may be more heavily weighted in one circumstance than another.

“Having that awareness that this is something that’s evolving and that you need to have a bespoke approach on your procurement strategy for AI is critical,” he says. “AI contracting isn’t a fixed formula — it’s an iterative, thoughtful process that balances opportunity and accountability.” 
