We should approach AI as a deliberate redesign of the “machinery of justice,” built on careful, small-scale experimentation and AI literacy.
Key insights:
- AI as a design decision, not just a tech add-on — AI gives us a chance to rethink the “machinery of justice” and redesign it for today’s needs rather than simply automating existing systems and processes.
- AI to expand access and usability, without replacing judgment — The most promising value lies in reducing friction for litigants and helping people navigate the process.
- Progress requires disciplined, court-by-court experimentation — We can start small, build AI literacy, set the tone from the top, invite diverse perspectives, and treat legal and ethical issues as design constraints, not deal-breakers.
Today, interest in AI across the judiciary is clearly growing, but most discussions are still constrained by certain fears:
- Fear that AI will replace human judgment — This concern is legitimate, but it focuses almost entirely on endpoints. Judging (and the systems around it) involves far more than final decisions, and focusing only on high-stakes endpoints misses much of what judges and courts do day-to-day.
- Fear of hallucinations, errors, and bias — These fears are also legitimate, but the risks themselves are not new, and there are ways to mitigate them. The source may be different, but we have long needed to protect against errors, bias, and misstated law.
- Fear of change — This is a difficult one to overcome, but a desire to protect the status quo sometimes presupposes that the system as it exists today is working exactly as it should. It isn’t. At least not for everyone.
I’d like to see the narrative shift from fear of AI in courts to the possibilities of AI in courts. AI presents a rare opportunity to upgrade the machinery of justice.
Justice as machinery
Most of us were taught to think about justice as an outcome, something the system delivers. However, justice is also the machinery we use to deliver it, and that machinery is a set of design choices. Rules, procedures, forms, hearings, briefs — we crafted these frameworks to manage conflict and produce decisions that feel fair and legitimate. Like most frameworks, they reflect the era in which they were built.
Once we start thinking about justice as something to be designed rather than simply delivered, the access-to-justice problem looks different. The question is no longer how to get more of the current system to more people; rather, it’s whether the machinery itself is still fit for its purpose.
Reimagining the machinery
The machinery has been redesigned before. Justice was once deeply human because it had to be: Law lived in minds, judges traveled from town to town, decisions were announced aloud. That system was more human and personal, but it was limited, exclusionary, and fickle. It was dependent on local norms and personal relationships. It yielded uneven outcomes.
The first great upgrade was writing, and more importantly, the printing press. It brought stability and protected litigants from arbitrary local power. But it also entrenched a new kind of authority: understanding the written law required literacy, training, and expertise. A professional bar emerged, and ordinary people were pushed further from the center of their own disputes. Then came the digital age. It optimized the process and made more information available. But many people feel overwhelmed by the deluge of information and experience modern justice as a series of obstacles.
Does AI present a different kind of opportunity? Could it deliver an upgrade that finally closes the gap rather than widens it? I’m optimistic that the answer is yes, but our design choices matter and we have to be willing to reimagine justice from the ground up.
What if every litigant had access to an AI agent that could help them navigate forms, understand the process, and translate legalese? What if AI could take messy human stories and translate them into structured information for the court? What if courts offered AI-assisted dispute resolution in the early stages of litigation or at key milestones along the way? Can AI make navigating the legal system feel less like data entry and more like a conversation?
We’re not ready for giant leaps, and we can’t ignore the open questions: Unauthorized practice of law issues, privilege and work product implications, the reliability of AI-assisted work product, and more — but these are not dead ends. They’re current design constraints to account for, and they shouldn’t keep us from reimagining what’s possible.
Where do we start?
The institution of justice will not be redesigned overnight, and there is no central authority to drive change. Rather, it will be redesigned court by court. The principles below apply broadly and reflect a starting point for thinking about AI as a design decision, not just a technology decision.
Set the tone from the top
Fear can be paralyzing, and in courts it often is. If judges and court staff are afraid to experiment, nothing moves. We need environments in which thoughtful, controlled experimentation is encouraged and supported. When more people are engaged in testing ideas and thinking about how to improve their processes, the likelihood of meaningful innovation and redesign increases.
Court leadership can create that space by setting a vision, encouraging responsible experimentation, and supporting innovative mindsets.
Build AI literacy
Encouraging experimentation is an important first step, but it can create risk if not paired with the right training and education. AI requires new competencies in prompting, guardrail development, output verification, bias awareness, iteration, context framing, documentation for auditability, fit-for-purpose judgment, and more. As tools evolve, education should evolve, too. Agentic AI, for example, will require a different set of skills and a different type of supervision than we’re accustomed to now.
Judges and court staff do not need to become technologists, but they need enough training and education to ask the right questions, spot the right issues, and use the tools responsibly.
Rethink the systems, not just the tools
This one is critical. Currently, most conversations about AI focus on use cases, such as whether AI can assist with research or automate certain workflows. These are good questions, but the tougher questions will lead to bigger rewards. Where are our pain points? What can we do better? Which policies and processes are essential, and which have never been re-examined? Which parts of the machinery were built for a different era and have outlived their usefulness? And perhaps most importantly, who is the system failing?
We shouldn’t start with the technology and look for places to apply it. We should start with the people we serve and ask how the technology can help us serve them better.
Invite diverse perspectives
The strongest ideas emerge from the push and pull of different viewpoints. Court leadership can form committees that bring together innovators and skeptics, technologists and traditionalists, those who are excited and those who are concerned. We also need perspectives across different court functions. AI is not something to hand off to IT departments. They are essential partners, but the questions AI raises go far beyond any one department.
Outside perspectives are helpful, too. Many people across the country are already approaching this work with a multidisciplinary lens, and courts can draw on that experience.
Finally, remember to start small
It’s easy to create so much process and deliberation that progress slows. We need concrete steps that move us forward, however incrementally. Start with policies and data governance, then move to small, targeted pilots that address low-hanging fruit. Small adjustments help teams become comfortable with change, and early wins build confidence and create momentum.
Closing thoughts
Justice has been redesigned before, and it is on the brink of being redesigned again. AI will reshape courts whether or not we participate. However, as the people who know the system from the inside and want it to work for everyone, we may be in the best position to guide the next upgrade. The chance to build something more equitable, more accessible, and better designed for today’s world does not come around often; let’s not miss it.
You can find more insights from Judge Braswell here.