What can the Legal Industry Reasonably Expect out of ChatGPT?

During the recent Legalweek, panelists debated what impact, if any, ChatGPT and generative AI could have on the legal industry, but urged caution nonetheless.

It’s safe to say that ChatGPT and generative artificial intelligence (AI) as a whole have captured the imaginations of those in professional services like few technologies have before. The idea behind ChatGPT seems simple: You ask it a question, in plain language, and it provides a straightforward answer to your prompt.

However, the reality is far from simple. Layers of algorithms create content (hence the “generative” part of the name) by continually predicting the next word, and extremely large data sets are needed to make those predictions accurate. Further, the newly released GPT-4 is multimodal, meaning it can accept both text and image inputs, which ratchets up the complexity even further.
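
For readers curious what “continually predicting the next word” actually looks like, the toy sketch below shows the basic mechanic. It is a simple word-pair frequency model written in Python, not the neural network behind ChatGPT, and the sample text, function names and output length are purely illustrative assumptions: the program looks at the last word it produced, picks the word that most often followed it in its sample, and repeats.

```python
# Toy illustration of "predict the next word, then repeat."
# A simple bigram (word-pair) frequency model, not the neural network behind ChatGPT.
from collections import defaultdict, Counter

# Illustrative sample text; real systems learn from vastly larger data sets.
sample_text = (
    "the court granted the motion and the court denied the appeal "
    "and the motion was granted in part"
)

# Count which word tends to follow each word in the sample.
next_word_counts = defaultdict(Counter)
words = sample_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def generate(start_word: str, length: int = 8) -> str:
    """Repeatedly append the most frequent next word, starting from start_word."""
    output = [start_word]
    for _ in range(length):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break  # no known continuation in the tiny sample
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("the"))  # fluent-looking but mechanical: "the court granted the court granted ..."
```

The point is the loop, not the code: at each step the system chooses a statistically plausible continuation, which is why, as the panelists discuss below, it can produce fluent text with no guarantee that the text is true.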

As a result, even among the most optimistic technologists, some generative AI risks can’t be ignored. And as a key panel at the recent Legalweek, “Reshaping the Legal Profession: Thriving in the Age of Generative AI & ChatGPT,” explored, the heavily hyped technology may be less of a do-it-all tool and more of a “moderately bright, but very lazy first-year associate.”

What generative AI is & what it isn’t

That comparison came courtesy of panelist Aaron Crews, currently Chief Product & Innovation Officer at alternative legal service provider UnitedLex and formerly Chief Data Analytics Officer at law firm Littler Mendelson. Crews noted that while many legaltech types, himself included, have high hopes for generative AI’s use in law, at its core the technology isn’t that revolutionary.

“Generative AI is fancy marketing-speak for a machine that anticipates where you want to go next,” he said, adding that while there may be high expectations of a tool named artificial intelligence, in reality “it’s not intelligent.”

Indeed, generative AI is bounded by the data that is put into the system. That means ChatGPT, developed by OpenAI and currently the most famous generative AI platform, draws on a vast amount of data to make its predictions, but that training data is only current as of 2021, meaning the tool cannot account for newer events.

The tool also suffers from “hallucinations,” meaning that it sometimes “predicts” facts that have no basis in reality. In one notable case, recounted by panelist Foster Sayers, General Counsel & Chief Evangelist at software company Pramata, a Michigan judge asked ChatGPT why a certain court decision was decided the way it was and found that ChatGPT completely made up precedential cases, something the judge caught easily, since he had decided the case himself.

With the recent release of GPT-4, OpenAI claims a factual accuracy rate between 70% and 80%, depending on the subject matter. But that 20% to 30% shortfall is “significant” in law, explained another panelist, Ilona Logvinova, Associate General Counsel at McKinsey & Company. Technologists in law often encounter risk-averse attorneys and clients, for whom one bad experience can close the door on all future technological advancements. And although some companies, and even some law firms, are hiring for a new role known as a prompt engineer, whose job is to craft more precise questions that steer generative AI platforms toward a desired outcome, it’s impossible to create a foolproof system.

“Prompt engineers are getting more popular, but they’re also learning on the spot,” Logvinova noted.

So where’s the use?

That’s not to say that generative AI will fall by the wayside, however. The panel identified a few potential use cases for generative AI in professional services as it now stands: document analysis, review and drafting; research and knowledge management; contract analysis and drafting; and chatbots and assistants. However, the technology is moving quickly, and so too are its potential applications, panelists added.

One panelist, Danielle Benecke, Founder of Baker McKenzie Machine Learning at law firm Baker McKenzie, noted that “firms and other enterprises have been sitting on this unstructured data forever,” which generative AI can help unlock.

However, while many common generative AI use cases focus on the wide data sets the tool already has, it’s more interesting to start with the enterprise’s own data and run AI against it, Benecke explained. For example, a firm could run generative AI against the documents in its M&A deal room and theoretically create a due diligence checklist based on the contracts the firm already has in place.

Pramata’s Sayers did question how much better generative AI is at producing new documents and contracts than simply using regular templates. While generative AI may produce bespoke work product, legal documents often have to be worded in a very specific way that’s tough to predict, he said. Contract experts such as TermScout’s Evan Harris have noted the same, finding that while generative AI can create a passable first draft contract, the outputs still require a good deal of editing and governance.

With these limitations in mind, Logvinova added that “it’s safer and less risky” to use generative AI for internal purposes rather than for client-facing content or communications. Crews agreed, saying that he “absolutely would not” use ChatGPT for client work as the technology currently stands, but that it may be helpful in speeding up the data creation and ingestion process.

No matter the use case, however, all panelists agreed it’s paramount to avoid the temptation of adopting generative AI just out of curiosity. Given the risks, and with the technology in its early stages, any use should be underpinned by the firm’s overall data strategy and pursued with a specific goal in mind.

Benecke said her exploration of generative AI primarily focuses on holistic applications across the firm rather than one-off use cases. Any time the firm adopts an AI tool, she said, it’s with the specific goal to “supercharge the firm’s most valuable pre-existing service lines,” directly tying the AI use with a firm strategic initiative.

Still, a number of unknowns remain about generative AI’s use in professional services, and firms are still working out the balance between the technology’s risks and its potential for innovation.

“Be forward-leaning, but be smart about your governance,” Benecke warned. “You don’t want to be that cautionary tale.”

This article was originally published on the Thomson Reuters Institute and is featured on Insight with permission.
