AI in the Legal Sector: Vetted AI Tools and Training are Critical

Legal and Tech Expert Advice on the Rise of AI: How to Stay Secure and Compliant 

Written by Danielle Park, Product Manager at Access Legal.

In a highly regulated sector like law, it’s easy to see why some professionals are nervous about embracing AI.
There are currently no laws specifically governing AI in the UK, nor a clearly defined rulebook from the SRA.

So, firms are navigating it in line with existing legislation, like GDPR, and established standards, such as the duty of client confidentiality set out in the SRA’s Code of Conduct.

As we explored in our blog, ‘AI will replace admin, not legal advice’, around half of the UK workforce now uses AI tools – although this drops to 44% in the legal sector.

While most (79%) agree that their firm could benefit from AI, the most common concerns include job replacement (51%), data security (48%), and confidentiality (24%). 

But industry experts are hopeful these concerns can be overcome.

The Master of the Rolls, Sir Geoffrey Vos, has said there’s ‘no real reason’ why the legal profession shouldn’t embrace AI, as long as it’s backed by a strong understanding of what the technology can and cannot do, along with other safeguards.

Long-term, Richard Susskind, President of the Society for Computers and Law and former Technology Adviser to the Lord Chief Justice of England and Wales, believes that AI could improve access to justice. This could take the form of large legal datasets being made available to the profession, protected in the same way that medical records are. He suggests these datasets could include ‘agreements, letters of advice, opinions, skeleton arguments, unpublished decisions, know-how databases, templates and precedents, textbooks, informal guidance notes and even newspaper articles’, with AI being used to make relevant information available to professionals.

Staying ahead of the regulators

The explosion of powerful Generative AI (GenAI), notably ChatGPT, caught many sectors off guard, including legal. Our research found that despite their concerns about confidentiality, 55% of legal professionals still use ChatGPT. 

The rate of AI development has been so fast that it’s difficult for regulators to keep pace, as Brian Rogers, Regulatory Director at Access Legal, explains: 

“There’s no specific guidance from the SRA – which I think is because they want to be flexible in how firms use AI. My concern is that as AI starts to take off, the regulators will stop everyone in their tracks and tell them that what they’ve been doing isn’t right.

“Recently, the law firm Hill Dickinson came under fire for apparently banning ChatGPT – with the Information Commissioner’s Office saying that the use of AI at work shouldn’t be discouraged. However, Hill Dickinson challenged the claims, explaining it hasn’t banned ChatGPT – in fact, it wants staff to embrace AI in a ‘safe and proper way’. This case reflects the tightrope firms find themselves walking between encouraging AI innovation and having oversight of how it’s used.

“We’ll see what guidance emerges in terms of how AI relates to the established principles of confidentiality, integrity, honesty, and acting in the best interests of clients.”

He added:

“One of the big challenges for firms is whether they can actually interrogate their AI systems to satisfy the regulators that the decision-making process is sound. Most lawyers are not technically minded, but they’ll certainly need to understand how the technology works.

“This is why any firm adopting AI needs to prioritise security, compliance and confidentiality to protect themselves from potential risks and future regulations and guidance. Choosing software providers that have clear security protocols and transparency around data use is an important step in this process.”

Weighing up the risks

As with any new technology, law firms first need to understand the risks of adopting AI – as well as the risk of doing nothing.

Such Amin – CEO of client communications technology inCase, solicitor, and former president of the Manchester Law Society – believes those who are nervous about implementing AI should build their knowledge so it feels like less of an unknown.

“If you’re not immersing yourself in the AI landscape, you might assume it’s all about ChatGPT,” he said. 

“Increasingly, GenAI tools, such as Microsoft’s Copilot, are being embedded in legal tech SaaS (software as a service) products that do not use your data to train language models. Data is ring-fenced within a secure environment – and, as with any SaaS solution, customers (i.e. law firms) have a right of redress for losses caused by data breaches. 

“While these safeguards are reassuring, there also needs to be training for both teams and individuals on interpreting AI responses, and on using their professional judgement to understand why a given answer was produced.”
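For readers who want to picture what ‘ring-fencing’ plus human review can look like in practice, here is a minimal Python sketch. It is purely illustrative – the redaction rules and the llm_call placeholder are assumptions, not a description of any particular product:

```python
import re

# Illustrative only: a crude redaction pass so obvious identifiers never leave
# the firm's ring-fenced environment, plus a mandatory human review checkpoint.
# llm_call is a placeholder for whatever vetted model endpoint the firm uses.

NAME_PATTERN = re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b")
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    """Mask simple client identifiers before any text is sent to a model."""
    return EMAIL_PATTERN.sub("[EMAIL]", NAME_PATTERN.sub("[CLIENT]", text))

def summarise_with_review(document: str, llm_call) -> str:
    """Send a redacted document for summarising, then require human sign-off."""
    draft = llm_call(redact(document))
    print("AI draft - requires professional review before use:\n", draft)
    if input("Approve this summary? [y/N] ").strip().lower() != "y":
        raise RuntimeError("Draft rejected - escalate to a fee earner.")
    return draft
```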

Explaining more about the risk of doing nothing, he shared:

“The whole industry has a responsibility to deliver a high standard of service and meet the expectations of clients. Consumers are already used to ultra-fast transactions and personalisation in sectors like ecommerce and fintech – if they don’t have a similar experience with law firms they’ll go elsewhere and probably leave a bad review. 

“Firms need to start implementing AI now, so they can deliver the best outcomes for clients but also ensure that the tools they’re using have been properly vetted. If they leave it too late, they’re more likely to take shortcuts and make mistakes.”

Getting your data in order

Of course, AI is only as effective as the data that feeds it – so firms must prioritise getting their data in order so they can overcome concerns about inaccurate or unreliable results.

David Sparkes, M&A broker and advisor to law firms and legal tech companies, cautioned:

“Many firms are sitting on large amounts of unverified (i.e. inaccurate or out-of-date) data. The pool of data on which AI decisions are made is critical from a regulatory perspective. In some cases, the data is not just inaccurate but also fragmented in different systems and departments, so firms will need to rigorously cleanse and curate their data, and become better at sharing it internally.” 
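As a rough illustration of what ‘cleansing and curating’ can mean day to day, the short Python sketch below flags duplicated or long-unverified client records before they feed any AI workflow. The column names and the one-year threshold are assumptions:

```python
from datetime import datetime, timedelta

import pandas as pd

# Illustrative data-hygiene audit: surface records that are duplicated or that
# haven't been verified within the last year. Column names are assumptions.

STALE_AFTER = timedelta(days=365)

def audit_records(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows duplicated by client_id, or whose last_verified datetime
    is more than a year old, sorted oldest-first for review."""
    duplicated = df[df.duplicated(subset=["client_id"], keep=False)]
    stale = df[df["last_verified"] < datetime.now() - STALE_AFTER]
    return pd.concat([duplicated, stale]).drop_duplicates().sort_values("last_verified")
```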

Brian Rogers added that firms need to be able to trust both the data and the AI itself: 

“As with any technology, it’s dangerous to rely on AI as the sole arbiter – the Post Office Horizon scandal is a good example of that. Relying on historical data can also result in extremely harmful biases. Anyone who uses AI must ensure that the data is correct and that the AI is sound. Then they need to ask the right questions to bring out the right insights.”

Responsible everyday AI

As we saw previously, experts firmly believe that AI is not about replacing legal advice but freeing professionals up to concentrate on delivering a great service. 

Danielle Park, Product Manager of Access Legal, stresses that it should be used to automate the low-value tasks that eat into the profitability of cases:

“There are so many tasks AI can make a difference with – from retrieving data to generating document summaries, to organising daily workloads within case management software – all of which are relatively low risk, as long as they are overseen by a lawyer. Even if it’s not being used for critical legal decision-making, firms should always partner with software vendors who keep data locked down and offer permission-based access for users.”
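A simple way to picture ‘permission-based access’ is a gate that checks both the user’s feature rights and their access to the matter in question before any AI feature runs. The Python sketch below is purely illustrative – the roles and permission names are assumptions:

```python
# Hypothetical permission gate: even low-risk AI features only run for users
# who already hold access to the underlying matter. All names are illustrative.

ROLE_PERMISSIONS = {
    "fee_earner": {"read_matter", "ai_summarise"},
    "paralegal": {"read_matter"},
}

def ai_summary_allowed(role: str, accessible_matters: set, matter_id: str) -> bool:
    """Summarising a matter requires both the AI feature right and matter access."""
    return ("ai_summarise" in ROLE_PERMISSIONS.get(role, set())
            and matter_id in accessible_matters)

# A paralegal with access to matter "M-101" still can't invoke AI summaries.
assert not ai_summary_allowed("paralegal", {"M-101"}, "M-101")
assert ai_summary_allowed("fee_earner", {"M-101"}, "M-101")
```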

Finally, Stu White, Product and Engineering Director for Access Legal, highlights how ‘everyday AI’ can be used to identify training needs that support compliance: 

“Staying on top of compliance training, including information security and data protection, isn’t always easy. AI tools can now be used to notify both users and managers in their daily work feed when they need to undertake training. This ensures everyone is up-to-date with their training requirements, and has the knowledge to use AI in a secure and compliant way.”
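Under the hood, a reminder like this can be as simple as comparing completion dates against a renewal period. The sketch below is a minimal illustration; the one-year renewal period and the data shape are assumptions:

```python
from datetime import date, timedelta

# Minimal sketch of the idea above: surface overdue compliance training in a
# user's daily work feed. Renewal period and data shape are assumptions.

RENEWAL_PERIOD = timedelta(days=365)

def overdue_modules(completions: dict, today: date) -> list:
    """Return modules whose last completion date is older than the renewal period."""
    return [m for m, done in completions.items() if today - done > RENEWAL_PERIOD]

for module in overdue_modules(
    {"Information Security": date(2024, 1, 10), "Data Protection": date(2025, 3, 2)},
    date.today(),
):
    print(f"Reminder: '{module}' training is due for renewal.")
```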

5 key takeaways: Implementing AI, without compromising security

1. Build your knowledge of AI: Increase your awareness of AI tools beyond ChatGPT – how they work and what they’re for – particularly those embedded in trusted SaaS solutions.

2. Weigh up the risks of AI adoption versus the risk of standing still: A thorough risk assessment of AI tools is vital – but firms should also consider the risk of not adopting them. Failing to embrace AI could leave a firm trailing behind competitors, damage client experiences, and prompt reactive, knee-jerk action later.

3. Maintain good data hygiene: Good-quality data is the foundation of AI-led decision-making, so ensure that it’s always accurate, comprehensive and up-to-date. Human oversight and intervention throughout the design, deployment and use of AI are key.

4. Prioritise your partnership with a technology provider that ensures security: A trustworthy SaaS partner, with deep knowledge of the legal sector, will help you navigate regulatory and security challenges, while realising the benefits of the technology.

5. Consider the everyday applications of AI first: AI isn’t the stuff of dystopian nightmares – where it can offer most value is in automating low-risk everyday activities like task management and training requirements. 

This blog is part of the AI in the Legal Sector series. Find out how Evo, Access Legal’s brand new AI tool (launching soon), can help you to cut down admin in your firm, and drive innovation. 

For more insights on securely implementing AI in legal practices, explore our AI for law firms content hub.
