Artificial Intelligence (AI): Early Court Project Implementations and Emerging Issues

In the summer of 2017, the National Association for Court Management (NACM) was kind enough to publish our initial article, “Artificial Intelligence (AI) Coming to a Court Near You.”1 It was our hope to prompt spirited discussion on the possible use of AI technologies in courts and on related policy issues, such as workforce impact and the quality of justice. Just two years later, with technology advancing at a breakneck pace, we now see some use of AI and robotics in courts, albeit by a small group of early adopters.

Admittedly, the initial court-based AI deployments are limited in scope, requiring more time for full-scale implementation and comprehensive evaluation. That said, the first wave of projects seems promising, with much to be learned from these early pioneering efforts.

What follows is a brief update on some recently identified court AI projects, along with discussion on policy considerations and related ethical issues.

Early Implementations in the Courts

Court Operated Robot Assistant (CORA): Recently deployed in the 20th Circuit Court, Ottawa County, Michigan, this robotic mobile device serves as a “concierge” at the local courthouse. CORA leverages an array of integrated technologies, including robotics, AI, voice recognition, and video communications. In phase 1 of the project, CORA provided wayfinding (maps and directions), searchable court dockets, judge biographies, and answers to frequently asked questions (FAQs). These automated services were provided in both English and Spanish.

CORA’s functionality is now being expanded to include natural “voice-to-voice” interactions and a telepresence capability. Phase 2 of the robot project also includes:

  • real-time support in 30 languages;
  • automated on-site payment of child support and traffic tickets, including credit card transactions;
  • instant messaging; and
  • QR codes on documents to quickly assist litigants in navigating court processes.

Kevin Bowling, court administrator for the 20th Circuit Court and past NACM president, reports that initial impressions of CORA were favorable for the most part, particularly among children and young adult visitors. However, while the local funding body, Ottawa County, supports the project, the court will be required to show a return on investment (ROI). Also, and of particular relevance to our policy discussion, a few community members expressed concern about the displacement of court staff positions.

Automated Workflow: In Palm Beach County, Florida, the clerk’s office is using AI to extract unstructured data from e-filed court documents and enter it into the court’s case management system. The automated workflow includes indexing, redacting, and docketing case data.

Launched in March 2018, the system now processes documents in criminal, family, and civil cases. Currently, the automated case processing is limited to subsequent filings. Work is underway to include initial filings, which will involve the processing of filing fees.

As of February 2019, the court was processing approximately 7,700 documents weekly through the AI system, representing approximately 21 percent of incoming documents. Plans call for expanded use of AI, with up to 85 percent of case filings to be auto-processed within the next year. Cases involving sensitive information, such as sexual battery of a minor, are diverted to staff for traditional docketing.

The automated data entry system runs behind the scenes, 24/7, processing documents in real time as received. The resulting reduction in workload has eliminated several staff positions in the docketing area. To date, the staff reduction has been managed without layoffs, both through natural staff attrition and reassignment of some staff to other critical areas, such as managing record retention.
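To make the triage logic concrete, below is a minimal sketch, in Python, of how a filing router might divert sensitive or low-confidence extractions to staff. The case types, confidence threshold, and queue names are illustrative assumptions, not details of the Palm Beach County system.

```python
# Hypothetical triage rule for an automated docketing workflow.
SENSITIVE_CASE_TYPES = {"sexual battery of a minor"}  # illustrative list

def route_filing(doc: dict) -> str:
    """Return the queue a newly e-filed document should be sent to."""
    case_type = doc.get("case_type", "").lower()
    confidence = doc.get("extraction_confidence", 0.0)
    # Divert sensitive matters and low-confidence extractions to staff.
    if case_type in SENSITIVE_CASE_TYPES or confidence < 0.90:
        return "manual_docketing_queue"
    return "auto_docketing_queue"

print(route_filing({"case_type": "Civil Traffic",
                    "extraction_confidence": 0.97}))  # auto_docketing_queue
```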

Document Redaction: Working with two private vendors, the National Center for State Courts (NCSC) undertook an auto-redaction proof-of-concept project in 2017.2 This technology holds promise in three areas related to the goals of NCSC’s “Best Practices for Court Privacy Policy Formulation”:

  • Maximiz[ing] accessibility of court case records;
  • Protect[ing] users of the court from harm; and
  • Mak[ing] effective use of court resources.3

Using machine learning with actual court documents from several states, the project targeted both structured and unstructured data for redaction. Accuracy reached 98 percent on some of the test cases. The optical resolution of the test documents proved critical: low-resolution documents achieved only a 66 percent accuracy rate in the redaction process.
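For a sense of how the structured-data portion of redaction works, the sketch below masks Social Security and phone numbers with simple patterns. This is a deliberately simplified stand-in: the NCSC proof of concept relied on machine learning, which is what makes unstructured data (and imperfect scans) tractable at all.

```python
import re

# Simplified, pattern-based redaction of structured identifiers.
# Machine-learning approaches are needed for unstructured data such as
# names and narrative text; fixed patterns alone cannot find those.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Defendant's SSN is 123-45-6789; call 602-555-0199."))
```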

Juvenile Court Case Management: In Montgomery County, Ohio, the juvenile court is using an AI system adapted from a medical model to scan and aggregate pertinent unstructured case notes from multiple sources, including law enforcement, treatment providers, probation officers, and case workers. Judges ordinarily must read and synthesize hundreds of pages of such information before juvenile court hearings, in preparation for rulings on placement and treatment services for minors. The AI system uses algorithms to identify key “social determinants,” risk factors, and the most effective case-management strategies.
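The aggregation step can be pictured with the short sketch below, which scans notes from multiple sources for risk-factor keywords. The factor list and note format are assumptions made for illustration; the Montgomery County system uses trained models rather than a fixed keyword list.

```python
from collections import defaultdict

# Illustrative only: flag possible risk factors in free-text case notes
# by keyword matching and report which sources mention each factor.
RISK_FACTORS = {
    "housing instability": ["eviction", "homeless", "shelter"],
    "substance use": ["overdose", "relapse", "intoxicated"],
    "school disruption": ["truancy", "suspension", "expelled"],
}

def flag_risk_factors(notes: list[dict]) -> dict[str, list[str]]:
    """Map each flagged risk factor to the sources that mention it."""
    findings = defaultdict(list)
    for note in notes:
        text = note["text"].lower()
        for factor, keywords in RISK_FACTORS.items():
            if any(kw in text for kw in keywords):
                findings[factor].append(note["source"])
    return dict(findings)

notes = [
    {"source": "probation", "text": "Youth reported a recent eviction."},
    {"source": "school", "text": "Third suspension this semester."},
]
print(flag_risk_factors(notes))
# {'housing instability': ['probation'], 'school disruption': ['school']}
```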

Chatbots: Recently, the Superior Court of Arizona in Maricopa County (Phoenix) launched a chatbot service that answers users’ frequently asked questions through a 24/7 online interface. Other chatbot services include online exchange of court forms and instructions, automated juror check-in, and access to juror parking maps using the juror ID number. The chatbot is available on the court’s website and via social-media and messaging platforms such as Facebook Messenger, Twitter, and Skype.

Powered by a natural-language-processing engine, the court’s chatbot application ingests large data sets and develops skills to answer frequent user questions. When the chatbot is asked a question that is not within the bot’s pre-trained knowledge base, the matter is transferred to court staff, staff responses are logged, and the data are used to “train/retrain” the bot on the question. (See the appendix for a more-detailed description of the Maricopa County chatbot features.)
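The escalate-and-retrain loop just described might be sketched as follows, with stubs standing in for the natural-language-processing engine and the staff handoff. The confidence threshold, helper names, and canned answers are hypothetical, not features of the Maricopa County application.

```python
CONFIDENCE_THRESHOLD = 0.75                 # assumed cutoff
training_log: list[tuple[str, str]] = []    # (question, staff answer) pairs

def classify(question: str) -> tuple[str, float]:
    """Stub for the NLP engine: returns (answer, confidence)."""
    known = {"what are your hours?": ("We are open 8 a.m. to 5 p.m.", 0.95)}
    return known.get(question.lower().strip(), ("", 0.0))

def escalate_to_staff(question: str) -> str:
    """Stub: in production this would open a request on the staff dashboard."""
    return "A court representative will follow up on your question."

def answer(question: str) -> str:
    response, confidence = classify(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return response
    # Below threshold: hand off to staff and log the exchange so the
    # staff answer can later be used to retrain the bot.
    staff_response = escalate_to_staff(question)
    training_log.append((question, staff_response))
    return staff_response

print(answer("What are your hours?"))  # answered from the pretrained base
print(answer("Where do I appeal?"))    # escalated and logged for retraining
```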

Pretrial Release: Using a large data set from pretrial criminal cases in New York City, researchers have explored whether “machine learning” predictions can improve the way judges render bail release decisions.4 The research identified explanatory variables, such as criminal history, current charges, and defendant’s age, that may predict the risk of recidivism, flight, or both. Preliminary results suggest that the algorithm could increase the accuracy of identifying high-risk defendants by 25 percent. Another scenario discussed in the study is a 40 percent reduction in the rate of pretrial jailing, with no increase in the crime rate.
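To show the shape of the prediction task, here is a toy risk model trained on synthetic data using plain logistic regression. The three features mirror the variables named above, but the data and labels are fabricated for illustration; the study’s actual data and models were far richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic feature matrix: [prior arrests, charge severity (1-3), age].
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.poisson(2, 1000),        # prior arrests
    rng.integers(1, 4, 1000),    # charge severity
    rng.integers(18, 65, 1000),  # age
])
# Fabricated label: pretrial failure risk loosely tied to the features.
y = (X[:, 0] + X[:, 1] - X[:, 2] / 20 + rng.normal(0, 1, 1000) > 2).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
defendant = np.array([[4, 2, 23]])   # 4 priors, severity 2, age 23
print(f"Estimated risk: {model.predict_proba(defendant)[0, 1]:.2f}")
```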

Litigation Analytics: Publishing companies are now providing detailed information on federal judge profiles; motion outcomes, appeals outcomes, and case disposition times by judge and court; and related comparative analytics.5 The litigation analytics aggregate legal data from court docket systems, case law, and company business information. This information is made available via a paid subscription. Such data will, no doubt, become increasingly available and widely used in litigation strategy, forum selection, and business decisions.   

Policy and Planning Considerations

As more and more courts consider the use of AI systems to augment daily business practices, we must remain cognizant of our responsibility to the principles inherent in the rule of law. Automated intelligence carries a risk of undermining, or at the very least raising concerns about, the fairness, transparency, and accountability of the legal process. As we know, these are key components of our judicial system, and when they are called into question, the quality of what courts do is diminished and public trust and confidence wane. What can court leaders do to support the use of AI, preserve these lofty tenets, and support our mission to provide a fair and impartial forum for the resolution of disputes?

The Internet abounds with research papers, articles, and other publications on artificial intelligence, its current and future use, policy considerations, and ethical issues. These issues will surely grow in both volume and scope as machine learning expands and our understanding of it deepens. As we look toward the future of AI in our courts, we suggest, at a minimum, that leaders give attention to the fundamental questions below as they consider the use of AI systems.

Where is the use of artificial intelligence appropriate in the court environment? 

No one knows with any certainty where AI systems may take us, especially in the public sector. The debate has already begun across our nation’s courts regarding the use of predictive analytics at the core of some pretrial release systems.6 The public sector, including the courts, has begun to follow the private sector’s trend of using “data-driven” analytics to foster improvement. While this appears quite attractive in theory, inserting AI into the core of what courts do is not without its detractors. The authors of the “AI Now 2017 Report” (a group of researchers from New York University, Google, and Microsoft) recommend that core public agencies, such as those responsible for criminal justice, health care, welfare, and education (i.e., “high stakes” domains), no longer use “black box” AI and algorithmic systems.7 Why? Still in their infancy, artificial intelligence systems lack sufficient guidelines, rules, regulations, and processes to protect those who are affected by them.

But that does not necessarily mean the courts should sit on the sidelines. We can, and should, explore advanced technology to assist us in providing services to the public that promote efficiency, effectiveness, and accountability. We believe systems that eliminate or reduce data-entry requirements; support the decision maker in gathering, coordinating, and presenting information; and facilitate real-time information to the public are worthy of serious consideration.

How will AI impact our organization? 

Most certainly, court employees will be affected by this technology. AI systems already exist that can replace, in whole or in part, the work of employees staffing a basic information desk, performing routine data-entry tasks, or supporting judicial officers through legal research. As AI continues to evolve and we grow more comfortable with its use, many other functions and duties may be challenged. AI will inevitably be positioned to take over our most routine, structured, and predictable tasks. As court leaders, we must actively plan for the transition and its effects on our organizations before we engage technology with such an enormous potential impact. This includes how to address employee displacement; what new knowledge, skills, and abilities are needed; public funding of AI systems; justice system stakeholder reaction; and public transparency and accountability.

What are the ethical issues to consider when using AI?  

Ensuring that our AI systems are not biased is of primary concern. We know that AI systems are only as effective as the data on which they are trained, so we may inadvertently introduce bias if our existing data are not free of such faults. Moreover, as noted previously, public-sector AI lacks substantive rules and guidelines for development, assessment, analysis, and transparency, which complicates the court’s responsibility to be fair, open, transparent, and accountable. Our friend and colleague, Alan Carlson, has researched and written extensively on the risks of using AI in our courts, including inherent bias, and we encourage court leaders to review his paper on this topic for an in-depth, comprehensive discussion of the issues.8 In summary, Alan suggests we seek to mitigate these concerns by:

  • using data sets that are more inclusive of the general population to whom the tool will be applied;
  • using training data sets with more relevant data, in both detail and granularity;
  • excluding data irrelevant to the intended recommendation; and
  • analyzing the data before subjecting it to the AI system to clean up and eliminate potential bias.

In addition, we must be diligent in our commitment to accountability and transparency of the algorithms used in our AI systems. In April of 2018, AI Now published its work, “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability.”9 The authors recommend:

  • agencies should conduct a self-assessment of existing and proposed AI systems, evaluating impacts on fairness, justice, and bias;
  • agencies should develop a meaningful external researcher review process;
  • agencies should provide notice to the public disclosing their definition of “automated decision system”;
  • agencies should solicit public comments to clarify concerns; and
  • government should provide enhanced due-process mechanisms for affected individuals to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses.

Closing Thoughts

Given the generally risk-averse nature of the judicial branch, this first wave of court AI projects is a remarkable development. We are struck by the breadth of AI application, ranging from high-volume automated workflow to integrated case management in the juvenile court. As our project listing is by no means all-inclusive, we would be pleased to hear about other initiatives, both from local courts and from industry service providers in this developing space.

The use of AI in our courts is both promising and inherently risky. From the initial project implementations, we have already begun to see how these systems can provide quality services that enhance our ability to serve the public and improve efficiencies in the justice system. But as the technology continues to evolve and gradually advances into all facets of the justice system, court leaders must actively confront the issues that will surely arise, including AI’s appropriate use in our justice environment, the substantive impacts on our organizations, and how we will resolve issues of potential bias, system accountability, and transparency.

Appendix: Maricopa County Chatbot Feature List

Artificial Intelligence and Natural Language Processing: The chatbot is powered by a natural-language-processing engine, which ingests large data sets and develops skills to answer users’ questions. The chatbot improves with experience, as the data set grows with each conversation the chatbot handles.

Multilingual Support: Users can converse with the chatbot in any language. An AI-based language-detection-and-translation feature detects the user’s language (if it is not English), converts the user’s inputs to English for the bot to process, and converts the bot’s responses back into the user’s language.
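The detect-translate-respond loop might look like the sketch below, in which stub functions stand in for whatever detection and translation services the court’s vendor actually uses.

```python
def detect_language(text: str) -> str:
    """Stub: a real deployment would call a language-detection service."""
    return "es" if any(ch in text for ch in "¿¡áéíóúñ") else "en"

def translate(text: str, source: str, target: str) -> str:
    """Stub: a real deployment would call a machine-translation service."""
    return text  # pass-through placeholder

def bot_respond(english_text: str) -> str:
    """Stub for the chatbot's English-language NLP engine."""
    return "The clerk's office is on the first floor."

def handle_message(user_text: str) -> str:
    lang = detect_language(user_text)
    if lang != "en":                 # normalize the input to English
        user_text = translate(user_text, source=lang, target="en")
    reply = bot_respond(user_text)   # the bot reasons in English only
    if lang != "en":                 # answer in the user's own language
        reply = translate(reply, source="en", target=lang)
    return reply

print(handle_message("¿Dónde está la oficina del secretario?"))
```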

Speech-to-Text and Text-to-Speech Conversion: Users can use the microphone of their device to speak their queries, and the bot converts them to text for processing. Likewise, the bot’s responses are read back to users in a human voice.

SMS: Users can chat with the bot over an SMS channel.

Agile Jury Integration: The chatbot is integrated with Agile Jury to complete juror check-in, postponements, and questionnaires.

Frequently Asked Questions: The chatbot has a set of pretrained skills and a knowledge base created by natural language processing to answer frequently asked questions from users.

Website Search: If a user’s query does not map to any of the pretrained skills or the knowledge base, the bot searches the superior court’s entire website, including PDF documents, using natural language to find the most suitable results. The top three results are displayed to the user.

ICJIS/Case Lookup: Users can look up their case details using a case number.

File Upload: Users can upload completed forms to the bot through an upload button, and the bot sends an email, with the file attached, to the relevant office or mailing list.

Agent Transfer and Dashboard: Users can request to speak with an actual representative; the bot collects the user’s name and the nature of the query and opens a request in the “dashboard,” a site where court staff from various departments can log in and take over open requests. Users then chat with the representative through the same window in which they were chatting with the bot. The language-detection-and-translation feature works here, too, so a single English-speaking representative can handle conversations in multiple languages in real time without knowing those languages.

GeoFencing: GeoFencing lets court staff set a perimeter within which a user can check in. Users can check in as a juror, defendant, or plaintiff, but they must be within the perimeter. The chatbot captures the GPS coordinates of users (with their permission, of course) and checks that they are within the boundary. When a user does not grant permission, or there is a GPS error, front-desk staff have an override code that can be used to bypass location-based check-in. A perimeter can be set around court premises through a map interface on the dashboard; court staff simply draw the boundary using a trackpad or mouse.
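The boundary test itself can be done with the standard ray-casting (crossing-number) algorithm: a point is inside a polygon if a ray drawn from it crosses the boundary an odd number of times. The sketch below uses illustrative coordinates; how the production system performs the check is an assumption here.

```python
# Ray-casting point-in-polygon test over (latitude, longitude) pairs.
def inside_geofence(point: tuple[float, float],
                    polygon: list[tuple[float, float]]) -> bool:
    lat, lon = point
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Count edges that straddle the point's latitude and lie east of it.
        if (lat1 > lat) != (lat2 > lat):
            crossing_lon = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < crossing_lon:
                inside = not inside
    return inside

# Illustrative perimeter drawn around a downtown courthouse block.
courthouse = [(33.448, -112.076), (33.448, -112.070),
              (33.444, -112.070), (33.444, -112.076)]
print(inside_geofence((33.446, -112.073), courthouse))  # True
```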

One-Click Integration: The chatbot offers one-click integration with popular social-media platforms and messaging apps, such as Facebook Messenger, Twitter, Slack, and Skype.

Parking Location Lookup: Users can look up their parking location using a juror ID or case number, and a map is displayed to them.

Email or SMS: When users are not satisfied with a response from the bot, they can request that a representative follow up on their query later by email or SMS. Court staff can once again use the dashboard to view and respond to such requests.

Training on the Fly: When court staff handle situations that were not within the bot’s pretrained skills or knowledge base, the staff responses are logged, and the data are used to retrain the bot to answer the new type of question. This improves the bot’s performance when it encounters a similar situation in the future.

Acknowledgments:  Special thanks to Ms. Marretta Mathes, court specialist, Administrative Office of the Courts, Arizona, for the research and insights she provided for this article and related presentations.


ABOUT THE AUTHORS

Marcus W. Reinkensmeyer is director of the Court Services Division, Administrative Office of the Courts, Arizona Supreme Court, and a past NACM president.

Raymond L. Billotte is the judicial branch administrator for the Judicial Branch of Arizona in Maricopa County, Phoenix, and a former NACM board member.


  1. Raymond L. Billotte and Marcus W. Reinkensmeyer, “Artificial Intelligence (AI) Coming to a Court Near You,” Court Manager 32, no. 2 (2017).
  2. Tom Clarke et al., “Automated Redaction Proof of Concept,” National Center for State Courts and State Justice Institute, December 2018.
  3. Id.
  4. Jon Kleinberg et al., “Human Decisions and Machine Predictions,” working paper 23180, National Bureau of Economic Research, February 2017.
  5. Bloomberg Law Analytics and Thomson Reuters.
  6. See, for example, Kade Crockford, “Risk Assessment Tools in the Criminal Justice System: Inaccurate, Unfair, and Unjust?,” ACLU of Massachusetts, March 2018.
  7. Alex Campolo et al., “AI Now 2017 Report,” New York University, 2017.
  8. Alan Carlson, “Using Artificial Intelligence and Big Data to Develop Tools Used in Courts,” version 2a, July 18, 2018.
  9. Dillon Reisman et al., “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” April 2018.