Transformative power of AI and how lawyers can prepare for it

The increasing adoption and popularity of ChatGPT and other AI tools in white-collar work, and in our personal lives, raises a number of legal issues businesses should be aware of. Here, Fadi Amine and David Krebs, both partners at Miller Thomson, one of the country’s leading business law firms, walk us through what lawyers need to know.


Greg Hudson  00:00:08 

Hello! Welcome to Lexpert TV. I'm Greg Hudson. The increasing adoption and popularity of ChatGPT and other AI tools in white-collar work and in our personal lives raises a number of legal issues that businesses should be aware of. To walk us through some of the challenges and opportunities of generative AI, today we're speaking with Fadi Amine and David Krebs, both partners at Miller Thomson, one of Canada's leading business law firms. Fadi, David, thank you so much for joining us today. 

Fadi Amine  00:00:40 

Thank you for having us. Looking forward to this. 

David Krebs  00:00:43 

Thank you, Greg. 

Greg Hudson  00:00:45 

Well, let's start with the good news. Fadi, what are some of the potential benefits of AI for lawyers?  

Fadi Amine  00:00:51 

It's a very interesting question. I'm by no means an expert on AI, and I don't actually think there are very many lawyers who are experts on AI; it's so brand new. But I have been playing with the technology since it came out, and so far, the first way I think AI could potentially benefit lawyers is idea scaffolding. That's a term that's being thrown around, and I've experienced the benefit of it, in the sense that you can go from having an idea to structuring a potential output very quickly. Something that would maybe take you an hour, an hour and a half, or two to get to the point where you can reliably look at it and say, okay, this could lead to something with extra work, you can now get done within maybe ten minutes. A few prompts, and you have something good and workable to go forward on. It's by no means a final output, very far from it, but it's a good generator of idea scaffolding. From that, it could lead to a more efficient, quicker workflow: less jumping through hoops and finding someone to get an initial document, output, or plan of argument out. It can generate some ideas for you, and you work from that. Ultimately, it should lead to some cost savings, which would benefit not just the legal profession, obviously, but the clients we serve. I don't know if the technology is there yet; it's going to take some work before we get to see that. But right now, there are a lot of lawyers having fun and playing with this technology, seeing where it goes. 

Greg Hudson  00:02:44 

David, do you have any thoughts about this?  

David Krebs  00:02:47 

Yeah, I don't have a lot more to add to what Fadi said. But I would say, personally, I'm a technology and privacy lawyer, but oddly enough I'm not an early adopter of technologies. So I'm going to wait and see what the benefits are for our practice at Miller Thomson and for my own personal practice. I do see some efficiencies, and what I'm excited to see is how it can be integrated with other technology that already exists. But again, while I love advising on it, I'm not an early adopter myself, so I'm going to sit back and see what my clients do. 

Greg Hudson  00:03:24 

Now, either of you can answer this. What are the top legal issues and risks around generative AI? 

Fadi Amine  00:03:31 

I'll lead off on that. There are, I think, three top-of-mind issues for me. The first is quality of outputs. We've all heard the news about hallucinations; technology that hallucinates is pretty amusing when you read about it, but I expect that is going to get better, in the sense that I'm sure there are a lot of bright minds working on it, and eventually we'll be able to rely on the quality of output a lot more. In any event, if this technology is going to survive, it's going to need to get there. It's not there yet. There's also the issue of security of inputs. A big part of a lawyer's job is confidentiality of client information and solicitor-client privilege, so you need to be very careful about what you put in there, how it's going to be used, and whether it would be available to someone else, especially if you're working with draft documents that evolve over time. I don't know what the ultimate solution to that is; I think my friend David has put a lot more thought into it. The last issue, which I think the regulatory authorities have not really grappled with, and I'm talking about the authorities regulating the legal profession, is who can use legal AI. In most provinces, the legal profession is regulated, and not just anybody can go out there and give legal opinions. We're now learning of this new self-described job called prompt engineering. It's very amusing, but basically it's people who are experts in prompting outputs from AI through the use of words and sometimes special characters. So should prompt engineering, in relation to legal AI, be restricted to lawyers, or can anybody get into a system and start generating legal advice or legal output? I don't think the regulatory authorities or the governments have gotten there yet, but to me this is something that's going to need to be legislated, so that it protects the profession. 

Greg Hudson  00:05:51 

David, do you have any thoughts about that? 

David Krebs  00:05:54 

Yeah, just to supplement what Fadi was saying, not so much about the legal profession itself, but about some of the main risks. Obviously, I'm biased, being in the privacy and cyber field, but I do see data privacy and cybersecurity as top risks. AI has gained so much traction, and everybody's talking about it now, more and more over the past few years, but especially in 2022 and 2023. It relies on massive amounts of data, and questions arise around how it is getting that data. That's definitely one concern. The top issues there: on the data privacy side, is personal information being used without the knowledge of impacted individuals? The other is copyrighted material, so on the IP side, are there infringement issues? That's on the input side, on the training side of the models. The other big risks, and I don't want to overplay them, but they are risks being addressed currently in the EU with its draft regulatory framework, which is making its way through the European process at quite a good clip, are the human rights-based issues. Biased inputs mean biased outputs. Are we exacerbating existing inequalities by having trained these models to then make decisions about people in a way that is not fair? There are concerns about this being used in healthcare and actually leading to worse outcomes for already underprivileged demographics, again based on biased input data. And then the sky's the limit in terms of legal risks. You can also look at potential democratic risks, if we can't rely on the outputs, or if you actually don't know what is factual. Especially if you're getting outputs from professionals, from legal professionals, for example, that the public then doesn't trust because they're thinking, okay, did this lawyer generate this output using AI, and how do we actually look behind the scenes of that algorithm? That's when you start getting into the more society-based risks. But purely legal, top of mind: human rights, privacy, and IP and copyright. I think those are three big ones right now. 

Greg Hudson  00:08:35 

Fadi, David, what legal and regulatory frameworks need to be put in place in Canada? 

David Krebs  00:08:42 

Yeah, I can take a quick stab at that. Generally speaking, in Canada right now, AIDA, as it's called, the Artificial Intelligence and Data Act, is part of Bill C-27, a broader piece of legislation that was introduced to modernize Canada's privacy laws. Now, I'm not going out on a limb to say that AIDA has not been widely embraced as a complete code, something Canadians can look at and know how artificial intelligence will be regulated. I think right now there's still some debate as to whether Canada should follow a more sectoral approach, so regulate specific sectors, or go horizontal. I can tell you that in Europe, much as they do with data privacy, they take a more horizontal approach to regulating AI. I'd be interested to know what Fadi has to say about this issue as well, but I guess the concern is: do we already have a lot of the tools in place to regulate it, or do we need a complete framework? I think many people are saying AI is specific enough, has a wide enough effect, is new enough, and is big enough that you need a specific regulatory framework that goes across all uses of AI. But there's some debate on how to approach that. 

Fadi Amine  00:10:20 

Yes. I'm not yet sold on the idea of having a separate regulatory framework for AI. There may be some need for legislation on specific issues, safety, efficacy of output, that sort of thing. But Canada does have a plethora of laws and regulations, across provinces and at the federal level, that could and do regulate technology risk, or any sort of information provided by companies, marketers, and so on. I'll give you an example. We have data privacy laws, and I don't see why they can't be applied presently; they may need some tweaking, but the laws are there. We have consumer protection laws that regulate, for example, false and misleading representations in products or advertising. We have the Competition Act. So if AI were to lead to, let's say, false and misleading advertising for a product, well, we have laws on the books that will answer that. I've already said what I had to say regarding the regulatory framework for the profession, so I do think it will eventually need to be addressed how legal AI is used by lawyers, and by whom. 

Greg Hudson  00:11:53 

What can businesses and individuals do to protect their IP and their privacy against generative AI? David? 

David Krebs  00:12:01 

Thank you. That's an interesting question. We've talked about what government can do, what legislation can do to protect, and as we heard, there's some debate around how to do that. Maybe certain applications should be banned outright; I know that's what the EU is looking at for certain surveillance technologies, emotion-recognition technologies, and that sort of thing. So we'll see about that. But as for what businesses and individuals can do, I guess there's an easy answer for individuals: don't put so much information about yourself out on the open internet. Now, that's a cop-out, I think, because we all know we're an information-sharing economy; we share data with our friends and our families, and there is going to be information out there. But one thing individuals can do is choose their platforms carefully: maybe have a read of the terms and conditions and the privacy policies of the platforms you choose. For businesses, it's important to acknowledge that you have certain crown jewels, so protect your important information as best you can. The other side of that is when you're using these tools, and I think Fadi mentioned this early on in our talk, we have to be very mindful of the fact that putting confidential information into a generative AI tool is potentially a disclosure of that information, so we should avoid doing that at all costs, while obviously wanting to leverage some of the technology and some of the cost savings these tools can provide. Just being cognizant of the fact that when you're using these tools, you are in a sense disclosing that information to the world, I think that's an important step. 

Greg Hudson  00:13:58 

Fadi, do you have any insights to add? 

Fadi Amine  00:14:01 

I'll just add that from a business perspective, if you're dealing with service providers, for example lawyers, you know that they may have this tool, generative AI, in their toolbox. Sophisticated clients may be thinking about either restricting its use or telling their lawyers or financial planners, or whoever, that they don't want them using these tools, or that they want them using specific tools. You can put that into service contracts, terms and conditions. So that is one way for companies to protect their privacy. For individuals, I think that's a larger societal debate, and this is not a legal opinion, but I just think we have some privacy and confidentiality expectations that were developed back in the paper economy, when everything was put on paper, printed, placed in a folder somewhere, and could be kept in a bank vault. Maybe our privacy expectations for individuals should evolve with the times, and maybe we should expect less privacy; I don't know. But in any event, I can tell you that some of the more sophisticated clients I deal with are actually thinking of integrating terms and conditions into their service contracts that limit the use of these tools, or, if not limiting their use, I'll give you an example: some clients may want a confirmation if a legal output was generated with the use of AI; they want to know. So that may be something you'll have to disclose when you're giving legal output in the future. 

Greg Hudson  00:15:45 

Fadi, finally, how will lawyers cope with the challenges of AI? 

Fadi Amine  00:15:50 

I don't know specifically; I just know it's going to have a big impact, and we'll have to adapt. I can give you an example. When I started law school back in 2001, they were teaching us how to do legal research by going through books and looking through indexes, something that would take two days. That's in 2001. By 2002, the search platforms had come in: CanLII, Lexis, and all that. Within a year, the books became obsolete. So I'm expecting that legal AI is going to have a similar effect on the legal profession. I think many lawyers are taking, for the time being, a wait-and-see approach, and this is a prudent approach. There was a lot of hype at the beginning, but we are starting to see more and more stories in the industry where legal AI has not had the outcome that was expected. There was a New York Times article a couple of weeks ago about someone who actually filed a brief in court that turned out to cite entirely invented cases. Ultimately, though, I expect the kinks will be worked out and lawyers will integrate legal AI within their workflows. For example, search engines will probably have a legal AI component to them. For contract work, I'm expecting Word and Outlook to eventually have some legal AI; I know they already have the beginnings of it. Probably more advanced translation services too. It's going to impact the entire industry. It will have an impact on staffing and junior lawyers, and there may need to be a rethink of how the industry integrates and advances younger lawyers and promotes their experience and learning, when some of the work they do currently may eventually be done by AI, better and quicker. I don't know what the answer to that is. Presently, more experienced lawyers can look at the output and get an instinct that, okay, there's something wrong there, I need to go dig deeper, I've never heard of that case. But someone who is just starting out, if they rely on it blindly, may lead their clients astray, or may end up with results like the ones in that New York Times article from a couple of weeks ago. So, basically, I don't know specifically, but it's going to lead to many challenges that we'll need to work through as an industry. 

David Krebs  00:18:16 

From my perspective, and from my early experience with it, using it in my own practice or toying around with it, I've come to the initial conclusion that it will most likely help experts become more efficient in their field, from a legal perspective, as opposed to allowing someone like me to all of a sudden become an expert in, say, real estate law. If I'm right about that, I think that's going to be a huge benefit to clients and to the legal profession, really, if you can get quicker, more efficient, and better outcomes from the people who are already quite knowledgeable in a specific area. So it's just an extra tool in the toolbox, a very powerful one, just like email was, or Google searches, or even Wikipedia: it can help point you in the right direction. But I don't see it, right now anyway, as something that allows professionals who are not experts in an area to generate outputs similar to expert outputs. That's my take on it currently. 

Greg Hudson  00:19:41 

So there's definitely a lot to think about. David, Fadi, thank you so much for joining us today. 

Fadi Amine  00:19:48 

Thank you for having us. 

David Krebs  00:19:50 

Thank you.  

Greg Hudson  00:19:51 

For Lexpert TV, I'm Greg Hudson. Have a great day.