Generative AI is about to reshape the entire legal practice, says Microsoft’s Jonathan Leibtag

The senior corporate counsel says artificial intelligence heralds a new era of efficiencies
Generative AI technologies are poised to make profound and far-reaching strides in transforming legal work – that's according to Jonathan Leibtag, senior corporate counsel at Microsoft.

Speaking to Lexpert, Leibtag says the game-changing aspect lies in automating routine tasks, such as legal research, document drafting, and contract review, heralding a new era where mundane processes are streamlined and accelerated.

"Generative AI is on the cusp of reshaping the entire legal practice," Leibtag says. "The uniqueness of generative AI lies in its capacity to go beyond mere predictive analysis. It has the power to generate and create entirely new content based on training data."

This is a paradigm shift from traditional AI, offering not just insights but tangible, usable content that significantly augments the efficiency of legal professionals. However, beneath the surface of this technological marvel lies a crucial question: does the integration of generative AI translate to time savings for lawyers, or does it fundamentally alter the essence of their role?

“I think it's important to note the intentionality of Microsoft's suite of AI products as co-pilots,” adds Leibtag. “It's very intentional in the name because the technology is meant to augment, complement and support a lawyer's legal work – not replace the role of a lawyer. I think that these technologies are actually going to enhance the value of lawyers because with more routine tasks being handled by technology, lawyers can now put more of a focus on complex and strategic work and client engagement.”

Leibtag also anticipates a significant evolution in the criteria distinguishing great lawyers from good ones. He says a lawyer's judgment and creativity will emerge as differentiating factors. While technology can synthesize vast amounts of data and research, applying these outputs relies heavily on a lawyer's innate ability to make sound and creative decisions.

“Legal practitioners should maintain a ‘human in the loop’ approach to review, to edit, and to customize, as quality assurance for AI-generated content. This human element is super important.

“For law firms, something that they should think about before leveraging AI is how they can ensure that the data used to train their AI systems is diverse and representative. Law firms can actually conduct their own regular impact assessments to identify and address potential sources of bias – as well as establish policies and procedures to promote the responsible use of AI.”

In alignment with this ethos, Microsoft has made a “Responsible AI Impact Assessment” template publicly available, providing a valuable tool for law firms to navigate this landscape and assist with compliance in the AI space.

“It's important for law firms to stay informed about the latest developments in AI, as it seems like they're changing constantly,” says Leibtag, “and to ensure that they comply with the different legal regimes and standards across jurisdictions that may regulate the technology.”

Leibtag says the most crucial strategic partnerships foster trust in technology. Regardless of the power and novelty of AI, its potential can only be fully realized when widely adopted. Companies at the forefront of AI development, like Microsoft, can play a pivotal role in reducing barriers to adoption by building trust and understanding of the technology.

For example, Leibtag cites Microsoft's partnership with Anthropic, Google and OpenAI to form the Frontier Model Forum. This industry body, comprising influential players in the AI landscape, focuses on ensuring the safe and responsible development of frontier AI models.

“These partnerships are so important in shaping the AI landscape at this point in the game,” adds Leibtag. “There needs to be this ongoing and proactive engagement and relationship building amongst privacy leaders, law firms, government organizations, civil society and academia to work together to ensure AI innovations benefit everyone. [They must be] centred on protecting privacy and other fundamental human rights. That's really the foundation that needs to be built now – it really needs to come together collectively to set this framework of AI for it to be meaningfully deployed.”

In addition, Leibtag emphasizes the necessity for thoughtful and forward-thinking standards, indicating that the impact of AI will be felt globally. In this context, Leibtag envisions the creation of standards that both benefit the industry and protect fundamental human rights.

And, segueing into complex AI transactions, Leibtag paints a picture of negotiations as metaphorical tugs-of-war around risk allocation. In any deal or transaction, whether commercial or M&A, the core of the negotiation revolves around understanding and mitigating risk. In the context of transformative technologies like AI, where skepticism and awareness around adoption are magnified, managing risk perception becomes even more critical.

“Negotiations are always a tug of war around risk allocation,” says Leibtag. “Aside from the exchange of goods or services, the crux of any negotiation is understanding the risk each party is willing to take. Ultimately, that risk allocation is reflected in the financial bargain.”

On the question of risk allocation in negotiations, Leibtag returns to the significance of demonstrating tangible commitments – and for AI, this means sharing the responsibility of its use with customers.

“If I'm negotiating a deal around the licensing of our technology, the complexities lie in managing the customer's perception of risk,” he tells Lexpert. “[It’s important to] build trust and lay down a foundation to underpin that trust.”

For example, Leibtag explains that a key risk customers perceive around AI stems from third-party claims of IP infringement, specifically copyright: what happens if the output of the AI infringes on a third party's intellectual property?

“It’s a very reasonable concern,” says Leibtag. “It's a very fair risk factor that companies need to think about as something they need to mitigate. When you’re facing such concerns, it’s a matter of addressing them head-on. At Microsoft, for example, the company made the decision to stand behind the technology by agreeing that if a third party sues our commercial customers for copyright infringement for using our copilots, [Microsoft’s AI suite of products], or the outputs they generate, Microsoft will defend that customer and pay the amount of any adverse judgment or settlement as a result.

“You have to demonstrate skin in the game – a willingness to share and fairly allocate risk. For AI in particular, a commitment of a provider of the technology to share the responsibility of its use with its customers will go a long way to closing off on many issues.”