The truth about ethical AI for legal teams: A guide

As legal teams become more strategic, generative artificial intelligence (AI) technologies can enhance their value. However, the implementation of ethical AI is an emerging concern, not just for legal counsel but for all departments.

Organisations are increasingly turning to their legal teams for valued strategic advice, which means the remit of the in-house lawyer has widened vastly in both scope and workload. AI can both streamline existing work and make legal counsel better advisors than ever before, with access to sophisticated insights and predictive analytics.

But to reap these benefits, you have to successfully integrate AI into your organisation. And as trusted advisors, in-house counsel will likely be the ones guiding the strategy for AI implementation.

To safeguard the integrity of your organisation, clients and customers, you’ll need to create a set of ethical AI and data principles. From this foundational document, everything else will flow.

Firstly, it will help you choose the right technology for your organisation’s needs. Then, you’ll use it to inform your processes. And lastly, it will influence how you train your team.

By building a collaborative and ethical AI culture, you give your organisation the greatest potential to benefit from the technology. Because you’ve built something that can last – and that will stand up to risk.

Let’s break down how to create a streamlined and responsible AI culture within your organisation.

How will in-house counsel benefit from AI?

Organisations are increasingly turning to in-house counsel for their strategic input. According to Forbes, 80% of Chief Legal Officers now report directly to CEOs. These days, it’s not just about assessing the legality of a decision. Instead, it’s about advising on all kinds of factors, from risk assessment to revenue drivers.

This strategic work comes on top of an already jam-packed working day for in-house professionals. AI has the power to hugely reduce workloads while increasing efficiency. For example, the latest in legal AI technology has produced a task-oriented AI assistant, CoCounsel Core, that can take on your routine tasks for you.

You can routinely turn to the tool for lightning-fast research. Use it, for example, for background information, or to keep up to date on legislation and precedents. You can also have it draw up summaries. But you can go one step further.

The technology pulls information from huge swathes of data and can make highly informed predictions. In the corporate context, using this insight will make for more educated strategic decisions for your organisation.

What are the AI use cases for legal teams?

Legal AI can have a huge impact on helping in-house counsel reach their goals. 

The Future of Professionals report, published by Thomson Reuters in late 2023, shed light on the priorities for in-house counsel over the next 18 months. A considerable 71% named ‘Risk identification and mitigation’ as the top priority. ‘Keeping abreast of regulation and legislation’ (58%), ‘Enabling company growth’ (58%), ‘Internal efficiency’ (48%) and ‘Reducing external spend’ (39%) were also named as key priorities. The report goes on to claim that AI will impact internal operational efficiencies, as well as reducing external spend.

How can AI help in-house legal teams?

1. Risk identification. Generative AI tools can analyse vast amounts of data and provide predictive insight for the business. Simply by plugging in a few quick sentences (to learn more about prompting AI tools, read our primer), you can gain targeted advice for your business and identify risks – see the example prompt after this list.

2. Keeping abreast of regulation and legislation. You can have your AI tool brief you on any changes to regulation or legislation. For example, it can analyse past court decisions and legal precedents to give you predictions for potential litigation outcomes. Armed with this knowledge, you can make informed decisions for the business, faster.

3. Internal efficiency. It often falls to in-house counsel to research complex legal matters and then translate them for the wider organisation, making the information digestible for those outside the discipline. This takes time. Legal AI systems can create quick summaries and memos for you to review and pass on.

4. Become a better advisor. The automation of routine tasks frees corporate counsel to develop their professional skills. Upskilling, or even focusing on becoming a T-shaped or O-shaped lawyer, is all within reach.
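
To make the prompting in point 1 concrete, here is a minimal sketch of a structured risk-identification prompt. The scenario, wording and structure are hypothetical illustrations for this article, not a documented CoCounsel input.

```python
# Hypothetical risk-identification prompt for a legal AI assistant.
# The scenario and wording below are illustrative assumptions only.

prompt = """
Context: We are an Australian retailer about to launch an auto-renewing
subscription service.
Task: Identify the main legal and regulatory risks under Australian
Consumer Law and privacy legislation, ranked by severity.
Format: A numbered list, with a one-sentence mitigation for each risk.
"""

# The prompt would be sent to whichever AI assistant your organisation
# has approved; structuring it as context, task and format tends to
# produce more targeted answers than a single open-ended question.
print(prompt)
```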

Does your organisation use ethical AI tech?

One of the key blockers to AI adoption within corporations is the question of trustworthiness. In the legal and corporate world, data protection, privacy, and precision of information are paramount.

This is why your choice of provider matters. Cutting-edge legal AI tools have been designed to deliver accurate, safe, and reliable results.

Techniques such as ‘retrieval-augmented generation’ (RAG), for example, ensure that a tool only pulls information from approved sources. This helps to prevent so-called hallucinations, where the model draws inaccurate conclusions by making broad interpretations from patterns it finds in vast public data sets.

Tools such as Thomson Reuters’ CoCounsel use this approach, restricting research to verified legal texts from the company’s extensive publications.
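
To illustrate the RAG pattern, here is a minimal sketch in Python. The corpus, the keyword-overlap scoring and the prompt wording are all assumptions made for this example – a production system such as CoCounsel would use a curated legal corpus and far more sophisticated retrieval.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Corpus, scoring and prompt are illustrative placeholders, not any
# vendor's implementation.

from collections import Counter

# Stand-in for a library of verified legal texts.
CORPUS = [
    "The Privacy Act 1988 regulates the handling of personal information.",
    "The Australian Consumer Law prohibits misleading or deceptive conduct.",
    "Anti-discrimination legislation applies to automated decision-making.",
]

def score(query: str, text: str) -> int:
    """Crude relevance score: count of overlapping words."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant verified passages."""
    return sorted(CORPUS, key=lambda t: score(query, t), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Constrain the model to the retrieved passages only."""
    sources = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using ONLY the sources below. If they do not contain "
        f"the answer, say so.\n\nSources:\n{sources}\n\nQuestion: {query}"
    )

# The resulting prompt is what grounds the model: instead of free-styling
# from patterns in public training data, it must answer from vetted text.
print(build_prompt("Which law governs misleading conduct towards consumers?"))
```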

“There are a variety of ethical considerations we apply into our data and AI programs here at Thomson Reuters. The following are some high-level ethical AI concepts we look at: transparency, fairness, explainability, sustainability, responsible partners, security and privacy, based on our Data and AI Ethics Principles,” says Carter Cousineau, Vice President of data and model governance at Thomson Reuters.

“Notwithstanding, we also incorporate embedded governance, some of these concepts include data quality, proper access, data classification, lineage, documentation, monitoring and decommissioning.”

What is ethical AI?

Ethical AI seeks to balance the acceleration of the technology with human values and ethical principles. In essence, this means ensuring that AI does not compete with human interests, but supports them. 

Ethical AI covers various emerging harms presented by the rapidly advancing technology. These include data security and privacy as well as environmental and social issues. 

To mitigate these risks, ethical AI includes recommendations that cover the full life-cycle of the technology. This means developing, implementing and using AI responsibly.

What AI regulations do organisations need to comply with?

In Australia, there is no AI-specific legislation – yet. AI usage is governed by existing legislation which, at the time of writing, ranges from consumer law to anti-discrimination law. However, the government is under pressure from groups such as the Australian Human Rights Commission to introduce further backstops. The commission has outlined four ‘emerging harms’ of AI: privacy, algorithmic discrimination, automation bias, and misinformation and disinformation.

Globally, there is mounting pressure to regulate AI – and Australia has been taking part. In late 2023, Australia signed the Bletchley Declaration along with the EU and 27 other countries, including the UK, U.S., and China. The declaration aims to work towards global legal frameworks to safeguard against emerging harms.

Elsewhere, the European Union has reached a provisional agreement on the EU AI Act. If formalised, it will be the first comprehensive act of its kind, regulating AI on a sliding scale according to assessed risk.

Why do you need an ethical AI framework?

An ethical framework is hugely beneficial for an organisation, both externally and internally. Not only will it build trust on the inside; it can also help to drive client and consumer loyalty in your services. Public ethical guidelines can also help to position your organisation at the forefront of the technological revolution, adding a competitive edge and maintaining relevance.

Internally, it will act as a single source of truth that guides the objectives and processes for AI-enhanced work. Establishing AI governance before implementing processes will best set you up for the future. This is especially true should any regulatory changes occur, as you’ll be well prepared for compliance.

What should an ethical AI framework address?

To guide organisations transitioning to AI, the Australian government released a voluntary ethical framework. Guided by the principles of ethical AI, the framework is a useful reference for organisations developing their own AI governance. 

Per its recommendations, the key concerns that a framework should address are:

  • Human, societal and environmental wellbeing
  • Human-centred values
  • Fairness
  • Privacy protection and security
  • Reliability and safety
  • Transparency and explainability
  • Contestability
  • Accountability

Use the above as a checklist or framework to guide the considerations of your organisation’s AI principles.

You’ll also need to do your industry research. Leading global organisations are driving a culture of transparency and responsibility by publishing their AI ethics principles. These include Microsoft, L’Oreal and Thomson Reuters.

How do you create an ethical AI roadmap?

As humans, we can be resistant to change, and we see this in large organisations too. After you develop a code of ethics, you’ll need to create internal guidelines and a training program.

The key is to take your team on the journey with you, and collaborate with HR on a training path for employees.

Carter Cousineau, Vice President of data and model governance at Thomson Reuters, says that data and AI ethics principles are just the beginning. The principles are a lodestar that should guide the formation of dedicated AI ‘enablement’ teams and practical training programmes. 

Here’s a step-by-step breakdown for guiding the implementation of AI:

1. Set up a multi-disciplinary AI taskforce

As implementing AI technology will affect all areas of the business, an interdisciplinary approach is key. It also ensures trust and buy-in from throughout the organisation. 

Leading global workforce solutions company Manpower Group credits the success of its AI guidelines to its collaborative approach. To draft the guidelines, it first formed an AI committee with members from across disciplines, including participants from technology, legal, marketing, data privacy, human resources, and business operations.

Richard Buchband, Senior Vice President and General Counsel, and Dr. Tomas Chamorro-Premuzic, Chief Innovation Officer, shared the following view with Thomson Reuters:

“Rather than imposing directives from the top, our approach was collaborative. We engaged our teams in a collective effort to gather insights, crowdsource ideas, and synthesise thoughts. 

“It was important to us that the guidelines were not only relevant but also relatable to the experiences of our employees, balancing AI’s capabilities with human strengths. As we state in the document, ‘AI is a powerful tool, but it cannot replace human creativity, intuition and empathy.’”

2. Be your own backstop

Creating a task force is the first step to fostering a collaborative AI culture within your organisation. Then, the task is to maintain it. Carter Cousineau, Vice President of data and model governance at Thomson Reuters, recommends forming a dedicated team to review use cases within the business and ensure compliance with the code of ethics.

“Standing up an AI and data enablement team allows you to be able to ensure these controls are adequately in place. My team and I work with every data and AI use case across all business functions from product, TR labs, Reuters, marketing, commercial excellence and more to ensure every use case adheres to our Data and AI Governance and Ethics policies and standards,” she says.

Being your own ‘backstop’ means constantly reviewing the robustness of your governance and processes in action. Only then can you continue to strengthen your compliance and protect the organisation from emerging harms.

3. Track AI regulations

Throughout the development process, Carter recommends staying across the evolving regulatory landscape. Doing so will keep your governance watertight.

“As we were building the policies and standards, we mapped these to the EU AI Act, NIST AI Framework and Biden’s Executive Order. This was intentional as we wanted to ensure that the work we are doing here at Thomson Reuters is not only of value from a responsible use perspective but also aligns to the global laws and regulatory environment.” 

4. Foster AI literacy

Risks do exist, and always will. They range from inaccuracies created by so-called hallucinations to ‘data-poisoning’ attacks by hackers.

Carter outlines two key factors that help safeguard against these risks. The first is having a basic understanding of how the technology was built and how it works.

“Getting to know the technology is the first step to mitigating red flags through a risk-based AI approach to monitoring the inherent and residual risks of AI,” she says. 

“Now, you don’t always need to know every technical detail, but it is very important to have a clear understanding of how that technology was built, the types of data that feed into the AI model and what outputs the AI model provides.”

The second is vetting your third-party providers. Look out for transparency of practice and methodology.

“Another key step for me is ensuring my team has the answers and information they need to do a proper assessment of our partners. More and more, there are reasons to use third-party AI models, and it is important for you as the customer of that technology to have the transparency and clarity on the technology itself, its limitations, the types of data and possible risks that we know to mitigate for,” adds Carter.

5. Provide staff with sophisticated training

Embracing AI in the workplace relies on teams who feel empowered to use the tools, and that empowerment can only come through training. Hands-on training gives team members practical skills and the confidence to apply them.

Showing practical examples of applying the AI and data principles in day-to-day work means your guidelines shift from high-level to actionable.

Only when employees understand how to use the tools will you a) begin to see the desired uptick in efficiency and b) ensure uptake of your ethical policy.

“We developed a set of policies, standards, guidelines, and framework that directly tie to our Data and AI Ethics Principles. We also created training modules that are open to all Thomson Reuters employees to take and mandatory for those who are involved in the AI lifecycles (from creation, deployment to decommission). We have and continue to spend the majority of our efforts developing, building, and enhancing the practical implementation of ethical AI,” says Carter. 

It’s also worth noting that job security is a huge concern in the age of AI. You can allay fears of job displacement by highlighting your organisation’s commitment to staff’s professional development. This goes beyond offering AI training to looking at the upskilling opportunities automation will free up time for.

6. Choose responsible AI tools

There are various factors to consider when choosing legal AI technology for your organisation. As well as streamlining work and providing cutting-edge insights, AI must draw from legal sources you can trust. That way, you know it can deliver accurate, unbiased information. And as the technology grows in potential, it’s essential to partner with an organisation committed to the ongoing safety of its products.

At Thomson Reuters, ethical AI acts as the foundation of the technology – and the impetus to continue innovating and fine-tuning.

Thomson Reuters is continually innovating its machine learning algorithms, enabling it to respond to emerging harms and to real-world customer and legal requirements. Carter went on to walk us through how this looks in practice.

Machine-learning in action: Preventing bias

“Let’s bring this into an example using the concept of fairness, in the form of bias detection and mitigation. First understanding if there is a risk of bias within the AI model, then developing a means to measure for bias detection and provide appropriate mitigation techniques. 

“There are a variety of techniques or tools that can enable bias detection and mitigation, at Thomson Reuters we’ve released enterprise tooling such as our Bias as a Service, also where required we develop and use custom techniques on AI models,” she says.

“These custom bias detection techniques vary in form. For example, adding further performance evaluation measures to the AI model or measuring for false negative rates. The bias mitigation techniques as well in turn can have a variety of techniques that can be applied: data preprocessing, adding proxies to the data, algorithmic adjustments and/or bias monitoring.”
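
As a rough illustration of the false-negative-rate check Carter describes, here is a minimal Python sketch that compares false negative rates across two groups. The group labels, predictions and 0.2 tolerance are invented for the example; this is not Thomson Reuters’ Bias as a Service tooling.

```python
# Illustrative bias check: compare false negative rates across groups.
# All data and the tolerance threshold are made-up assumptions.

from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    fn = defaultdict(int)   # actual positives the model missed, per group
    pos = defaultdict(int)  # actual positives seen, per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 0:
                fn[group] += 1
    return {g: fn[g] / pos[g] for g in pos}

# Toy evaluation set: (group, true label, model prediction).
predictions = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

rates = false_negative_rates(predictions)
print(rates)  # {'group_a': 0.333..., 'group_b': 0.666...}

# Flag a disparity beyond an (arbitrary) tolerance for human review.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Potential bias: false negative rates diverge across groups.")
```

In practice, a check like this might run on every model release, with any flagged disparity routed to the enablement team described in step 2 for a mitigation decision.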

Finally, lock in your ethical guidelines

In-house counsel who use ethical AI can unlock both the time and the insight to contribute strategic value. Time-consuming tasks, from research to contract drafting, can be outsourced to legal AI tools. Using these tools will give lawyers a further edge through data-driven decision-making.

Organisations will increasingly turn to legal counsel, as valued advisors, to help guide AI adoption in the workplace – especially given concerns over legality, privacy, and data security.

Your work begins with enabling and creating a robust set of AI and data ethics principles. Founding multi-disciplinary AI task forces will ensure you capture a multitude of perspectives and use cases. In turn, this will give you the best chance of transmitting the values across the business. 

This will be one of the most important tasks you undertake for your organisation. Your ethical principles will be the bedrock of your AI governance. They will flow into your processes, the technology you choose, and how you train your staff. They will build trust internally, as well as with clients and the public. And they’ll keep you prepared for any regulatory changes.

Creating and fostering an ethical AI culture will safely lead your organisation into the future.
