Innovating Justice: Will AI and LawTech Deliver Jack Cade’s Utopia?

This intentionally light-hearted article, in juxtaposition to the seriousness of the subject, is about judicial and legal efficiency. Not efficiency in the management consultant’s sense—less pay and more work make us “efficient” (if not despondent)—but, instead, how we, as court (and thought) leaders, using technology, can better deliver justice to change the lives of a billion people a year.

Before I start down a path of how technology offers a rainbow of hope, I know there are many who are afeared of AI and its kind. Whether they fear for their own future or that of their children, technology is considered a threat, not dissimilar to how the spinning jenny and the combustion engine brought fear to cotton workers and blacksmiths alike. To illustrate the point, perhaps I should draw a parallel between how some people view technology (including AI and LawTech) and Shakespeare’s Henry VI, Part 2, Act IV, Scene 2.

You’ll recall that in that scene Jack Cade sets out his utopian views about how the world will be when he is king: “There shall be no money; all shall eat and drink on my score; and I will apparel them all in one livery, that they may agree like brothers, and worship me their lord.”

But, as Dick the Butcher points out, to achieve this utopia, “The first thing we do, let’s kill all the lawyers.”

And I think today there is a fear (or excitement?) that a technologically perfect, utopian future will fulfill Dick the Butcher’s ambition and mean the death of lawyers, arbitrators, and, some would argue, human judges.

Personally, I’m not sure that’s right, and the need for compassion, mercy, and, dare I say it, judicial humanity will remain the preserve of humans (albeit largely wealthy ones), at least in our lifetimes. As the founder of Rhubarb, Mark Deuitch, has suggested, “Law can be viewed as the frontier battle ground of human judgment versus algorithmic decision making. Law requires, or should require, human judgment because there are both intuitive values and axiomatic dedications at play. AI can handle the axiomatic ones better with no doubt—it’s just computation, but the intuitive ones? Creating the original precepts from which that AI-based logic flows should remain the domain of human judgment given that law is a hybrid of moral and rule-based reasoning.”

However, rather than “kill all the lawyers” I’d like to posit that technology can play a part in helping those not currently served by the legal community. Is there a space for technology, robots, AI, LawTech, and, to use the mot du jour, blockchain to help all those who currently don’t have access to a lawyer, a court, or a human judge?

Let us gauge the scale of the problem.

A recent report by The Hague Institute for Innovation of Law (HiiL), “Understanding Justice Needs: The Elephant in the Courtroom,” brought together the voices of over 80,000 people from around the world and was one of the elements used by the Pathfinders for Peaceful, Just and Inclusive Societies’ Task Force on Justice in shaping its recent report on closing the justice gap and fulfilling the UN’s Sustainable Development Goal 16.3 (the “SDG16+ Report”).

That report estimates that there are, globally, 1 billion legal problems each year, with 60 percent of those problems falling into five categories—family, employment, crime, neighbors, and land.

Of those 1 billion legal problems, half are classified as “serious,” meaning they keep people awake at night, make them sick, put pressure on their family, and reduce their productivity at work. That is, they create systemic and societal issues the sum of which is often greater than the individual problems themselves.

More strikingly, of those 500 million serious legal issues, people see a lawyer in only 16 percent of cases and go to court in only 5 percent—and of those who do go to court, 70 percent are unhappy with the process.

Imagine if we were talking about medical sickness versus legal sickness: a society in which 84 percent of people with a serious illness didn’t see a doctor, only 5 percent went to a hospital, and, of those, 70 percent were unhappy with their treatment. I suspect we would consider that a failing system! But when 95 percent of people with a serious legal issue avoid the state apparatus established to help them, society doesn’t seem to shout as loudly.

The difference between the number of serious legal issues and the number of people who get to see a lawyer or access the courts is sometimes known as the justice gap. It represents approximately 420 million serious legal issues a year around the world (the 84 percent of the 500 million serious issues that never reach a lawyer).

For the United States, that translates (although we must be careful with extrapolative data, and with my mathematical abilities) to around 22.8 million serious legal issues a year for which people won’t, don’t, or can’t afford to see a lawyer to get help.

When it is suggested that an AI-based, machine-learning robot might be able to help those people, it is said that the law is too complicated for technology to handle and, in any event, people won’t trust computers.

So, I thought it worth exploring those objections.

Technology Can’t Handle the Complexity of Law

You may be aware of the battle between AlphaZero and Stockfish 8—the tech version of Goliath vs. Goliath’s big brother. It illustrates AI’s ability to “learn” (and I know the tech engineers will balk at that word, but their equivalent explanation is challenging without an undergraduate degree in computer science). Anyway, Stockfish 8 was the world’s greatest chess computer, its play built on decades of handcrafted rules and accumulated human chess knowledge. It was absolutely unbeatable… until Google’s (well, DeepMind’s) “AlphaZero” came into view, all gleaming and never having given any thought (or processing attention) to pawns, knights, or otherwise. AlphaZero was from a good pedigree, being a relative of the famous AlphaGo, but AlphaZero was a vessel into which DeepMind poured nothing other than the rules of chess.

AlphaZero was then tasked with learning how best to play chess based on the rules, which it did so well that, after just four hours of training, DeepMind estimated AlphaZero was playing better than Stockfish 8; after nine hours, the algorithm decisively defeated Stockfish 8 in a time-controlled 100-game tournament. Not bad evidence of AI’s ability to take a set of rules (or, dare one suggest, laws) and within nine hours beat something already significantly better than the best human.
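For the curious, here is a toy sketch of what “learning from nothing but the rules” can look like in code. It plays tic-tac-toe rather than chess and uses simple tabular Q-learning rather than AlphaZero’s deep neural networks and tree search, so it illustrates only the self-play idea, not DeepMind’s actual method; every detail below is my own simplification.

```python
# A toy self-play learner: the program is given nothing but the rules of
# tic-tac-toe (encoded in winner() and moves()) and improves by playing
# against itself. Illustrative only; AlphaZero's real method is far richer.
import random
from collections import defaultdict

EMPTY = " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None. Board is a 9-char string."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """The rules of the game: any empty square is a legal move."""
    return [i for i, s in enumerate(board) if s == EMPTY]

Q = defaultdict(float)       # (board, move) -> learned value estimate
ALPHA, EPSILON = 0.3, 0.2    # learning rate and exploration rate

def choose(board):
    """Epsilon-greedy: mostly pick the best-known move, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(board, m)])

def self_play_episode():
    board, player, history = EMPTY * 9, "X", []
    while moves(board) and not winner(board):
        m = choose(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"
    w = winner(board)
    for state, m, p in history:  # nudge each move's value toward the final result
        reward = 0.0 if w is None else (1.0 if p == w else -1.0)
        Q[(state, m)] += ALPHA * (reward - Q[(state, m)])

for _ in range(50_000):          # no human games are ever consulted
    self_play_episode()
print("state-action values learned:", len(Q))
```

After enough episodes the program plays markedly better than random, for the same underlying reason AlphaZero overtook Stockfish 8: given the rules and enough self-play, the machine generates its own training data.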

OK, but that’s just chess. What has that got to do with law and justice? Aren’t they much more complicated? Surely they represent a higher level of humanity, involving nuance, understanding, and comprehension, not just moving medieval characters around a checkered board.

The 2011 win on the game show Jeopardy! by IBM’s computer named Watson was an early warning. However, last year, Alibaba’s language-processing AI program outscored humans at Stanford University’s reading-comprehension test (the Stanford Question Answering Dataset), scoring 82.44 against 82.304 on a set of 100,000 questions.

So, we are being beaten at chess and at reading and comprehension—more concerned? But surely reading-comprehension tests aren’t like laws and contracts? Few would suggest that laws and lawyers speak in plain English. Could computers understand the complexity of a legally drafted contract?

Well, LawGeex pitted 20 experienced attorneys against a three-year-old algorithm trained to evaluate contracts.

Participants were given four hours to identify and highlight 30 proposed legal issues in five standard nondisclosure agreements.

In the end, LawGeex’s neural network achieved an average 94 percent accuracy rate, compared to the lawyers’ average of 85 percent. And while it took humans anywhere from 51 minutes to more than 2.5 hours to complete the review of all five nondisclosure agreements, the AI engine finished in 26 seconds.
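LawGeex has not published its engine’s internals, so as a flavor of the underlying technique class, here is a deliberately tiny, hypothetical sketch: treat each clause as text, learn from clauses someone has already labeled as problematic or fine, and flag new ones. The training clauses, labels, and model choice are all my own assumptions, not LawGeex’s system.

```python
# A toy clause-review classifier: hypothetical labeled clauses, off-the-shelf
# text features, and logistic regression. Real contract-review AI is far
# more sophisticated; this only shows the general shape of the approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [  # hypothetical NDA clauses, label 1 = raises a review issue
    "Recipient may disclose Confidential Information to any third party.",
    "Recipient shall hold Confidential Information in strict confidence.",
    "This Agreement never expires and obligations survive forever.",
    "Confidentiality obligations expire three years from the date of disclosure.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

new_clause = "Recipient may share Confidential Information with third parties."
issue_probability = model.predict_proba([new_clause])[0][1]
print(f"Estimated probability this clause needs attention: {issue_probability:.2f}")
```

Scale the hypothetical training set up to thousands of clauses reviewed by real lawyers and you have, in outline, how a contract-review engine can approach expert issue-spotting at machine speed.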

In fact, perhaps robots and AI are even more intelligent than we think. A Russian AI robot called Promobot was given some repetitive tasks. (To quote Marvin, the robot from The Hitchhiker’s Guide to the Galaxy series, “Here I am, brain the size of a planet, and they tell me to take you up to the bridge.”) Not only did Promobot get so bored that it tried to escape, but when police returned it, it ran away again.

Indeed, not only are computers getting better than humans at English, but they are also developing languages of their own.

Facebook provided two AI chatbots with the basics of negotiation and, when researchers came back in the morning, found that the programs had decided English wasn’t the best language for negotiating and were communicating in a shorthand of their own.

Finally, AI is now getting better than human lawyers at predicting case outcomes. UK-based legal-tech start-up CaseCrunch challenged lawyers to see who could predict with greater accuracy the outcome of financial-product mis-selling claims. CaseCrunch’s predictive algorithms and modeling of legal issues came out on top, predicting the success or failure of claims with almost 87 percent accuracy. The English lawyers who took part achieved an overall accuracy of around 62 percent.
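CaseCrunch likewise hasn’t disclosed its models, but outcome prediction of this kind is usually a supervised-learning exercise over structured case features. The sketch below is purely hypothetical: the features, data, and choice of a random-forest model are my assumptions for illustration, not CaseCrunch’s method.

```python
# A toy outcome predictor trained on hypothetical past mis-selling claims.
# Features per claim: [years_since_sale, amount_in_dispute,
#                      documentation_complete (0/1), prior_complaint_upheld (0/1)]
from sklearn.ensemble import RandomForestClassifier

X = [
    [2, 1500, 1, 1],
    [8,  300, 0, 0],
    [1, 5000, 1, 0],
    [6,  800, 0, 1],
    [3, 2500, 1, 1],
    [9,  150, 0, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = claim succeeded, 0 = claim failed

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_claim = [[4, 2000, 1, 1]]
print("Estimated probability of success:", model.predict_proba(new_claim)[0][1])
```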

So, in summary, to those who believe technology isn’t sophisticated enough to handle legal issues, we can say that AI today is enormously powerful and self-learning, good at understanding language, better than lawyers at predicting case outcomes, and better than lawyers at document review too.

People Don’t Trust Computers

Turning to the second contention, that humans will never trust a computer with important or life-threatening decisions, I suspect there is some truth to this. We don’t want the inhumanity of a microprocessor to reveal a challenging medical diagnosis, or to tell us our legal problem is as bad as, or worse than, we thought.

However, for today’s students, there is a degree of trust in technology that is worth considering.

The Georgia Institute of Technology developed the Emergency Guide Robot for a fascinating and funny test of trust.

In the study, sponsored in part by the Air Force Office of Scientific Research, the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

In some cases, the robot, which was controlled by a hidden researcher, led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke—and the robot, which was then brightly lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway—marked with exit signs—that had been used to enter the building. All of the volunteers followed the robot’s instructions, no matter how well it had performed previously. Only when the robot made obvious errors during the emergency part of the experiment did the participants question its directions. In those cases, some subjects still followed the robot’s instructions even when it directed them toward a darkened room that was blocked by furniture.

If people are willing to trust a robot, especially such a funny-looking one, with their lives, are we so far from them trusting it to give legal advice, especially if someone has no alternative?

So I hope that at least puts a question mark over the concerns raised about AI’s ability to help people with legal problems.

With that out of the way, let’s turn to what people say they want to help solve their legal problems. The HiiL survey found that what people sought above all was access to free or low-cost legal advice and to online problem-solving tools.

As such, this article does not look into:

  • LawTech-based legal work-product optimization tools, such as CaseText, FastCase, Kira Systems, Disco, Logikcull, Relativity, Everlaw, ROSS, IronClad, SimpleLegal, CT Corp, PLC, and WestLaw, because, while they make law firms more profitable by replacing lawyers with technology, they don’t solve the 95-percent problem.
  • For the same reason, this article also doesn’t explore tech-based law-firm alternatives like Atrium Law, Axiom Law, Beaumont Law, Riverview Law, and Pangea3.

Access to Legal Advice

Looking at the request for free or low-cost legal advice, can machine-learning-based technology offer scalable solutions to bolster existing legal aid or citizens’ advice schemes? I believe the answer is clearly yes. The SDG16+ Report lists a number of existing technologies, such as Rocket Lawyer, Legal Zoom, Barefoot Law, JustAnswer, Nolo, Jiading Fabao, Free Advice, Justia, LawHelp, LawGeex, Judicata, eBrevia, Legal Robot, LexMachina, and Intelligent Trial 1.0, all operating in this zone. Indeed, the Chinese courts are well advanced in the provision of AI-based legal advice for civil and criminal matters, with the Shanghai High Court offering advice through Tencent’s WeChat platform (the messaging component of which is the Chinese equivalent of WhatsApp). These scalable, online, user-friendly, 24/7, and free (or very low-cost) services, whether purely private-sector, public-private partnerships, or court-created, are providing legal advice to those who otherwise couldn’t or wouldn’t have gone to a lawyer.

Access to a Problem-Solving Platform

The second element of what people wanted to fill the justice gap was access to an efficient online problem-solving platform.

At this time, problem-solving tech remains largely the preserve of the private sector, albeit with court support. It is an area I believe is worthy of much greater focus. Open-source platforms such as Kleros offer great hope, as does the game-theory approach to consensus-based dispute resolution offered by Rhubarb. Resolve Disputes Online (RDO) is making remarkable inroads around the world, especially in Singapore (home to perhaps the world’s most visionary judiciary), as is Colin Rule’s Modria in the United States. These alternatives to the traditional dispute mechanisms of arbitration and litigation offer hope to the millions of people around the world who fall into the justice gap. They help us move from the traditional world, where justice means sanctions, to one where it means solutions.

As court leaders, we can be thankful that the private sector is willing to help fill the justice gap, supporting the 95 percent of people with a serious legal issue who traditionally don’t use the courts. We can continue to focus on delivering justice as we have done for decades, knowing that those who choose to use us will receive stable, tried, and tested justice with an ever-greater commitment to the user experience.

Or we can be concerned that something as critical to society as justice is being outsourced to those with a profit motive.

Should we invest heavily in the current judicial system and hope that it will change sufficiently, fundamentally, and quickly to help more of the 95 percent, or do we let the private sector disrupt the delivery of justice through the use of technology, thus giving citizens what they want, on their terms, and allowing the courts (we hope) to evolve more gradually? A parallel is the way in which arbitration was set up in response to the perceived inability of the courts to handle complex maritime and commercial cases efficiently, and it was hugely successful because it delivered (at least initially) a speedy, relatively inexpensive, and enforceable outcome: The private sector filled the perceived justice gap. In response, courts have, over the past 40 years, adapted, specialized, and delivered more efficient justice in those sectors, and cases have started to come back from the private-sector arbitration centers into the realm of the judiciary. Is that the right approach for the justice gap?

I think we can be clear, based on the examples above, that technology is ready to help. But are we? And, if we are, are we comfortable letting the private sector lead the charge, or should that technology be implemented by and through the courts?

Whichever position you prefer, there are some practical next steps to the development of technology solutions for the 95 percent:

  1. Lawyers need to learn about technology, including coding, blockchain, smart contracts, AI, and even the Kingdom of Asgardia! When they understand technology they will stop fearing it and start to embrace it. Look at the excellent work of the Singapore Academy of Law and its Future Law Innovation Programme in this regard.
  2. Regulators should embrace technology and its potential. They should be open to allowing different legal models based on the needs of the consumer, not the provider.
  3. Governments should be brave enough to open up the regulatory model for providing legal services. Why can’t a robot lawyer give legal advice? Legal services have been opened to new classes of provider before, around the world: conveyancers, wills draftsmen, paralegals, notaries, and so forth. Perhaps now is the time to do the same for technology.
  4. Courts should accept that technology can help and welcome it into the judicial process. The Chinese Supreme People’s Court now accepts evidence stored on a blockchain (the principle is sketched below), and other courts should plan for a world where smart contracts exist simultaneously on thousands of nodes around the world, where jurisdiction is blurred into the ether, and where companies contract from space.
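To make the blockchain-evidence point concrete, here is a minimal sketch of the principle a court relies on when accepting such evidence: only a document’s cryptographic fingerprint (its hash) needs to be recorded on a blockchain at filing time, and any copy tendered later can be checked against it. The on-chain record is simulated by an ordinary variable below; everything else is an assumption for illustration.

```python
# Minimal sketch of blockchain-anchored evidence verification. In a real
# system, anchored_hash would be read from an immutable on-chain transaction;
# here it is just a variable standing in for that record.
import hashlib

def fingerprint(document: bytes) -> str:
    """Return the SHA-256 fingerprint of a piece of evidence."""
    return hashlib.sha256(document).hexdigest()

# At filing time: the hash, not the document itself, goes on-chain.
original = b"Party A shall deliver 100 widgets to Party B by 1 June."
anchored_hash = fingerprint(original)  # hypothetically stored on a blockchain

# At trial: re-hash whatever copy is tendered and compare with the anchor.
tendered = b"Party A shall deliver 100 widgets to Party B by 1 June."
print("Copy matches anchored evidence:", fingerprint(tendered) == anchored_hash)

altered = b"Party A shall deliver 10 widgets to Party B by 1 June."
print("Altered copy detected:", fingerprint(altered) != anchored_hash)
```

Because changing even one character changes the fingerprint entirely, a court can trust the integrity of the document without the document itself ever being stored on the chain.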

Conclusion

We have a huge opportunity to make the world better for a billion people a year.

As the founder of Kleros, Federico Ast, has suggested, “People have already become used to technology giving them solutions almost in real time and accessible from their smartphones. Especially younger generations. We used to need to go to the post office to send letters and to the bank to wire money. Now we can send email and payments from our smartphone. But when we need to access justice, we still need to go to this physical place called ‘court’ which still uses technology from the 19th century.… Change in justice systems is unavoidable, if we see it as part of a wider process of technological change.”

It means courts, lawyers, LawTech entrepreneurs, and AI working together. And by working together, we can narrow the justice gap. Will there be challenges? Almost certainly. But the scale of the problem facing the world is so large, and so damaging to so many people, that we must ask whether we can afford not to embrace technological, scalable solutions. The question for us as court administrators is whether this is our responsibility or that of private enterprise.


ABOUT THE AUTHOR

Mark Beer, OBE, is president of the International Association for Court Administration, a visiting fellow at Oxford University, and a member of the World Economic Forum’s Global Expert Network. He is also chairman of the Board of Trustees of the Global Legal Action Network, advisor to the Board of Resolve Disputes Online, a member of the Innovation Working Group of the Task Force on Justice focused on addressing UN SDG 16.3, and a member of the International Council of the Supreme Court of the Republic of Kazakhstan.