How Fair Is Your Court?

How fair is your court? Take a moment to think about it before you read on.

My guess is that most court professionals would be inclined to answer, “Pretty fair!” After all, you know the hard work and good intentions that staff dedicate to their jobs each day. Fairness must flow from those efforts. But how would you know how fair your court is? Perhaps your answer is based on your own self-assessment, which certainly has value. Or perhaps it’s based on hard data you have, like case-processing times or rates of appeal. In any case, it’s natural to want your court to be fair: fairness is a fundamental goal, and hard work should advance our goals. Indeed, fairness is likely part of every court’s mission statement and may even be one of the reasons you chose your career path. Fairness matters to you.

Assuming that fairness is important to you and fundamental to the function of our courts, how might we get a more accurate read on how we’re doing on that front? You may think fairness is too abstract a concept to pinpoint with any certainty, or that self-assessment is plenty accurate. Fairness can indeed be an abstract concept, and as professionals you are in a very good position to know how your policies and practices align with it from your own perspective. But fairness is inherently subjective, so it must also be considered from the perspective of those it affects most directly: court users.

National opinion polls suggest that court fairness ratings aren’t as high as we might hope. In a 2019 National Center for State Courts public opinion poll—before all the ongoing disruptions of 2020—just 54 percent of voters surveyed felt the courts were fair and impartial.1 Further, only 65 percent of people polled had confidence in the courts, down from 76 percent the year before. Amid uncertainty and changes spurred by the pandemic, a similar poll in June 2020 showed the confidence rating at 70 percent, and 61 percent (a figure consistent across demographics) said they would be very concerned about their ability to receive a fair and impartial trial in a case tried online versus in person.2 Courts have been forced to adapt, but we may not have the public’s full confidence along the way.

As the pandemic and responses to it continue to affect most of us personally and professionally, and as many communities wrestle with other tough questions about the use of public funds, the topic of fairness is timely. Let’s start with the fundamental question we should be asking: “Did the court treat you fairly today?” This article covers concrete, innovative strategies and tools for measuring how your court is doing. By assessing fairness, we can ultimately improve upon it.

Beyond the national polls mentioned above, countless studies of procedural fairness have assessed perceptions of fairness among court users and members of the public. A key finding may seem counterintuitive: the most influential aspects of fairness for most people are not the tangible outcomes of their case, whether they “won” or “lost,” but how they were treated and whether they judged the procedures themselves to be fair. Procedural fairness (also known as procedural justice) has been explored extensively in courts over the past decade. This work has included local and statewide trainings of judges and other court personnel, along with the creation of a number of practitioner tools to support implementation of procedural fairness practices, including self-directed training, bench cards, interviews with practitioners, and improved court websites and signage. Procedural fairness was also the subject of its own resolution and commitment by the Conference of Chief Justices in 2013.

While I’ve had the good fortune of working on many of these efforts, court leaders dedicated to improving fairness sometimes lacked hard data showing how their efforts were paying off. Sure, there were anecdotes that kept the initiatives alive, like “Procedural fairness has given a name and structure to the kind of judge I’ve always wanted to be!” and “Procedural fairness has elevated our commitment to customer service!” But knowing concretely whether court users perceived practices as more fair can be complicated, and I can understand why. Among the many courts I’ve visited and worked with over the years, I’ve seen empty comment card boxes. (“Oh, I’m not sure whose job it is to replace those comment cards.”) I’ve also heard how hard it is to stay on top of the feedback that does come back. (“Honestly, I don’t even know who’s tasked with reading them once they’re submitted.”) Aside from managing formal complaints, collecting and learning from court user feedback rarely appears in court staff job descriptions. When it’s an add-on task, it’s hard to prioritize.

With so much already to do, outsourcing might seem appealing. Some courts participate in occasional studies with a local university or similar organization to survey large numbers of court users. The analysis flowing from these efforts is often rich, but such projects are rare, and larger-scale studies tend to deliver findings long after the data are collected, rendering them less useful for real-time improvements. This is a troubling situation when fairness is one of our cornerstones. Let’s consider how to address it.

Picture a small airport in Costa Rica. (In fact, picture an entire vacation in Costa Rica! You’ve earned it.) I was in the airport there about a year ago with my husband on one of our first getaways without our young children, who were being spoiled back home by Grandma. We hiked through waterfalls, painted ourselves with volcanic mud, and saw a range of exotic wildlife that rivaled any zoo or nature preserve. It was the perfect escape from our hectic lives as working parents in Manhattan. Imagine my surprise, sitting in that airport before we flew home, to see a feedback kiosk at the tiny deli selling snacks. It was a simple stand right next to the checkout counter with a series of smiley and sad face emojis, inviting customers’ feedback about their experience. I’d seen similar feedback stands in taxi lines and at my local IKEA, but I could hardly believe that this little restaurant was making the effort to learn from its customers in this way. That was the final push I needed. Courts, too, can be strategic about collecting customer feedback. We just need better tools.

For the skeptics: I understand that courts, restaurants, and taxi lines are different, and not every court user is smiling when they leave court. Focusing on the differences, however, misses the idea’s simplicity. There is potential in simply asking people for their feedback, and with some adaptations, we can harness it for our purposes. In late 2019 and throughout 2020, I worked with a handful of courts to use real-time feedback tools to hear from court users about their experiences. The most common question we asked was a fundamental one: “Did the court treat you fairly today?” We asked it through in-person formats like an iPad kiosk and through virtual contact. The results were encouraging. Courts could implement these new tools in relatively easy, cost-effective ways, and court users seemed quite amenable, if not eager, to supply valuable feedback.

Here’s some initial guidance based on a structured pilot project I led in 2020 in partnership with the Texas Municipal Courts Education Center, the State Justice Institute, and seven brave municipal courts in Texas.3

1. Decide what feedback you seek and why. There are many possible reasons your court might seek feedback. Perhaps you’re designing a new courthouse and want to better understand what problems court users hope will be addressed by the new space. Or maybe you’ve added new services or resources at the court and want to assess their accessibility or utility. Or perhaps it’s more fundamental, and you want to know how you’re measuring up on fairness. There’s no right answer, of course, but getting court leadership on the same page about the goals of collecting feedback is an important first step.

2. Find the right tool(s). With so many technologies available, it’s tempting to pick the tool first and then figure out how its bells and whistles might help you achieve your goals. That approach risks wasting time and money on a tool that was misaligned with your goals from the beginning. For example, I knew that the emoji-based kiosk I saw in that airport restaurant wasn’t quite right for a court; aiming for a smile felt inappropriate in a setting where people are handling very serious legal matters. Instead, I looked for a feedback tool that would let court leaders draft their own questions and invite feedback more dynamic than a smile or frown. It still needed to be simple: I didn’t want to be lured into using a full-blown survey platform to ask 100 different questions in 100 different formats.

The compromise offered a handful of question formats, including multiple choice, thumbs up/thumbs down, and even open-ended, all of which could be controlled remotely via a user-friendly online portal. We could also ask questions in English and Spanish and use conditional logic to ask follow-up questions based on the initial response.
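To make that branching concrete, here is a minimal sketch of how such a flow might be modeled. It is illustrative only: the question IDs, wording, and structure are my own stand-ins, not the configuration format of any particular feedback vendor.

```python
# Illustrative only: a minimal model of a branching feedback flow.
# The question IDs, wording, and structure below are hypothetical
# stand-ins, not any vendor's real configuration format.

QUESTIONS = {
    "fair": {
        "text": {
            "en": "Did the court treat you fairly today?",
            "es": "¿La corte le trató justamente hoy?",
        },
        "type": "thumbs",                   # thumbs up / thumbs down
        "follow_up": {"down": "improve"},   # branch only on a thumbs-down
    },
    "improve": {
        "text": {
            "en": "What could we improve?",
            "es": "¿Qué podríamos mejorar?",
        },
        "type": "multiple_choice",
        "follow_up": {},                    # end of flow
    },
}

def run_flow(start, lang, answers):
    """Walk the flow and return the question texts shown for the given answers."""
    shown, current = [], start
    while current:
        question = QUESTIONS[current]
        shown.append(question["text"][lang])
        current = question["follow_up"].get(answers.get(current))
    return shown

# A thumbs-down on the fairness question triggers the follow-up question.
print(run_flow("fair", "en", {"fair": "down"}))
# A thumbs-up ends the flow after a single question.
print(run_flow("fair", "en", {"fair": "up"}))
```

The design point is simply that a follow-up appears only when the first answer warrants it, which keeps each interaction short without losing the chance to learn why someone felt unfairly treated.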

We certainly didn’t anticipate that we’d be rolling out the pilot in a pandemic, but it helped that the feedback company we used offered both in-person and remote feedback collection.4 For in-person foot traffic in the courts, court users could use the kiosk, with questions tailored to the location, such as leaving a clerk’s window versus leaving a help desk. For remote interactions with the court, such as a virtual court hearing or an email exchange with court personnel, feedback was invited via a simple one-click link. There are also text-message-based feedback formats (not yet tested in a court, to my knowledge) in which court users scan a QR code to pull up one or more feedback questions on their phone. In any case, these tools deliver real-time input from court users while requiring little time from court personnel.
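The QR-code format is mechanically simple: the code just encodes a link to a short feedback form. As a hypothetical sketch, here is how a court could generate one with the open-source Python qrcode package; the URL is a placeholder for whatever link your feedback vendor provides.

```python
# Hypothetical sketch: generate a QR code that opens a one-question
# feedback form. Uses the open-source qrcode package
# (pip install "qrcode[pil]"). The URL is a placeholder, not a real
# feedback endpoint.
import qrcode

FEEDBACK_URL = "https://example.com/feedback?question=fair&location=clerk-window"

img = qrcode.make(FEEDBACK_URL)  # returns a PIL image of the code
img.save("clerk_window_feedback_qr.png")
print("Saved QR code; post it where court users exit the clerk's window.")
```

Embedding the location in the link, as above, is one way to tailor questions by station the same way the kiosks did.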

3. Embrace “quantity over quality.” When it comes to the questions you ask, less is more, so we kept ours to a minimum. Yes, it would be incredibly illuminating if 500 court users would complete a 100-question survey about their demographics, experiences, and recommended improvements for the court. But not only is that too much to ask of most court users; it’s also a much bigger project for court leaders to undertake, given the possible breadth of the findings. It might require a mind shift, but simple feedback approaches can reap the benefits of quantity over quality. With tools like these, you could collect 125 responses a week and hit the same 500-response goal within a month, with far less effort. Each response covers only a few questions at a time, and those questions can be modified easily once you get a good read on the first batch, and so on.

4. Be transparent with the findings. Asking for feedback is not easy. In all areas of our lives, opening ourselves up to potential criticism can be uncomfortable. Nevertheless, it is incredibly valuable and worthwhile. Once you’ve cleared that hurdle, make the most of it and share the results. Publicize and take pride in your court’s commitment to fairness and its efforts to invite feedback. Indeed, giving court users a voice is a key dimension of procedural fairness, and being transparent about the findings furthers that commitment. If your court mirrors the national average, the first round of feedback might show only a 70 percent fairness rating. Or perhaps your court will beat the national average. Either way, that first round sets an important benchmark to track internally and externally as part of a tangible commitment to fairness.
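Tracking that benchmark requires nothing more than the share of respondents who answer yes to the fairness question. Here is a minimal sketch, assuming a simple export of dated yes/no responses; the records and dates are hypothetical.

```python
# Hypothetical sketch: tally dated yes/no answers to "Did the court
# treat you fairly today?" into a weekly fairness rating. The records
# below stand in for an export from a real-time feedback tool.
from collections import defaultdict
from datetime import date

responses = [                      # (response date, answered "fairly")
    (date(2020, 9, 7), True),
    (date(2020, 9, 8), True),
    (date(2020, 9, 9), False),
    (date(2020, 9, 14), True),
]

weekly = defaultdict(lambda: [0, 0])   # ISO week number -> [yes, total]
for day, fair in responses:
    week = day.isocalendar()[1]
    weekly[week][0] += int(fair)
    weekly[week][1] += 1

for week, (yes, total) in sorted(weekly.items()):
    print(f"Week {week}: {yes / total:.0%} fairness rating ({total} responses)")
```

A running tally like this is what makes the benchmark shareable, week over week, with both staff and the public.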

This is not an expensive undertaking. Courts committing to user feedback for a whole year in the ways we tried in Texas, in person and virtually, would spend less than $1,000: about $300 in one-time equipment costs for the iPad and stand, plus software licensing at about $50 per month. Some courts found they could also use the iPad and stand for other purposes, like court check-ins and announcements, on days they weren’t collecting feedback. Collecting feedback this way may be new, but assessing fairness is not an extra. Unlike so many court improvements, this one is not terribly difficult or expensive to try, and the benefits are many. Inviting feedback gives a voice to courts’ most direct constituents, and it helps leaders focus precious improvement resources on a fundamental area of court work.

If you are committed to fairness, make sure you’re measuring up. The best way to know is to ask. Read the full project toolkit and reach out directly if I can help your court get started with collecting real-time feedback. Learn more about other fairness tools at www.lagratta.com.


ABOUT THE AUTHOR

Emily LaGratta, J.D., is the founder of LaGratta Consulting. She works with courts and other local and national agencies to design, implement, and train on innovative programming and tools that build trust and promote fairness. She has published dozens of resources, including To Be Fair, Procedural Justice in Action, and Measuring Perceptions of Fairness: An Evaluation Toolkit. Before starting LaGratta Consulting, she was the director of Procedural Justice Initiatives at the Center for Court Innovation, a national nonprofit, where she led a training and technical assistance team. Before that, she practiced law in Manhattan, where she still lives with her family.


  1. “State of the State Courts: Survey Analysis,” National Center for State Courts, 2020.
  2. “State of the Courts in a (Post) Pandemic World,” National Center for State Courts, 2020.
  3. These lessons and more are captured in a project toolkit.
  4. See www.surveystance.com.