NCSC Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/ncsc/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. Fri, 10 Apr 2026 08:47:01 +0000

Pattern, proof & rights: How AI is reshaping criminal justice /en-us/posts/ai-in-courts/ai-reshapes-criminal-justice/ Fri, 10 Apr 2026 08:46:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=70255

Key insights:

      • AI’s greatest strength in criminal justice is pattern recognition – AI can process vast amounts of data quickly, helping law enforcement and legal professionals detect connections, reduce oversight gaps, and improve consistency across investigations and casework.

      • AI should strengthen justice, not substitute for human judgment – Legal professionals are integral to evaluating AI-generated outputs, especially when decisions affect evidence, warrants, and individuals’ constitutional rights.

      • The most effective model is human/AI collaboration – AI handles scale and speed, while judges, attorneys, and investigators provide the context, accountability, and ethical reasoning needed to protect due process.


The law has always been about patterns – patterns of behavior, patterns of evidence, and patterns of justice. Now, courts and law enforcement can leverage a tool powerful enough to see those patterns at a scale and speed no human mind could match: AI.

At its core, AI works by recognizing patterns. Rather than simply matching keywords, it learns from large amounts of existing text to understand meaning and context and uses that learning to make predictions about what comes next. In the context of law enforcement, that capability is nothing short of transformative.
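As a toy illustration of that pattern-learning idea (a deliberately tiny, hypothetical stand-in; real AI models learn vastly richer statistical patterns than simple word pairs):

```python
from collections import Counter, defaultdict

# Toy sketch: learn which word tends to follow which, then "predict"
# the most likely next word. The principle -- learning patterns from
# existing text to predict what comes next -- is the same one large
# models apply at enormous scale.
def train_bigrams(text):
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    candidates = model.get(word.lower())
    if not candidates:
        return None  # pattern never seen in the training text
    return candidates.most_common(1)[0][0]

model = train_bigrams(
    "the court reviewed the evidence and the court issued a ruling"
)
print(predict_next(model, "the"))  # prints "court" -- it follows "the" most often
```

The sketch only counts adjacent word pairs; the point is that prediction falls out of recognized patterns, not hand-written rules.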

These themes were front and center in a recent webinar hosted as part of a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI). The webinar brought together voices from across the justice system, and what emerged was a clear and consistent message: AI is a powerful ally in the pursuit of justice, but only when paired with the judgment, accountability, and constitutional grounding that human professionals can provide.

AI’s pattern recognition is a gamechanger

“AI is excellent,” said Mark Cheatham, Chief of Police in Acworth, Georgia, during the webinar. “It is better than anyone else in your office at recognizing patterns. No doubt about it. It is the smartest, most capable employee that you have.”

That kind of capability, applied to the demands of modern policing, investigation, and prosecution, is a genuine gamechanger. However, the promise of AI extends far beyond the patrol car or the precinct. Indeed, it cascades through the entire arc of justice – from the moment a crime is detected all the way through prosecution and adjudication.

Each step in that chain represents not just an operational and efficiency upgrade, but an opportunity to make the system more fair, more consistent, and more protective of the rights of everyone involved.

Webinar participants considered the practical implications. For example, AI can identify and mitigate human error in decision-making, promoting greater consistency and fairness in outcomes across cases. And by automating labor-intensive tasks such as reviewing body camera footage, AI frees prosecutors and defense attorneys to focus on other aspects of their work that demand professional judgment and legal expertise.

In legal education, the potential of AI is similarly recognized. Hon. Eric DuBois of the 9th Judicial Circuit Court in Florida emphasizes its role as a tool rather than a substitute. “I encourage the law students to use AI as a starting point,” Judge DuBois explained. “But it’s not going to replace us. You’ve got to put the work in, you’ve got to put the effort in.”


Judge DuBois’ perspective aligns with broader judicial sentiment on the responsible integration of AI. In fact, one consistent theme across the webinar was the necessity of maintaining human oversight. The role of the legal professional remains central, participants stressed, because that ensures accuracy, accountability, and ethical judgment. The appropriate placement of human expertise within AI-assisted processes is essential to ensuring a fair and effective legal system.

That balance between leveraging AI and preserving human judgment is not just good practice; it’s a cornerstone of justice. While Chief Cheatham praises AI’s pattern recognition, he also cautions that it “will call in sick, frequently and unexpectedly.” In other words, AI is a powerful but imperfect tool, and the professionals who rely on it must always be prepared to intervene when it falls short. Moreover, the technology is improving rapidly, and the models in use today will likely be the worst models we ever use.

Naturally, that readiness is especially critical when individuals’ rights are on the line. “A human cannot just rely on that machine,” said Joyce King, Deputy State’s Attorney for Frederick County in Maryland. “You need a warrant to open that cyber tip separately, to get human eyes on that for confirmation, that we cannot rely on the machine.” Clearly, as the webinar explained, AI does not replace constitutional obligations; rather, it operates within them, and the professionals who use AI are still the guardians of due process.

The human/AI partnership is where justice is served

Bob Rhodes, Chief Technology Officer for Thomson Reuters Special Services (TRSS), echoed that sentiment with a principle that cuts across every application of AI in the justice system. “The number one thing … is a human should always be in the loop to verify what the systems are giving them,” Rhodes said.

This is not a limitation of AI; instead, it’s the design of a system that works. AI identifies the patterns, and trained, experienced professionals evaluate them, act on them, and are accountable for them.

That partnership is where the real opportunity lives. AI can never replace the detective, the prosecutor, the judge, or the defense attorney. However, it can work alongside them, handling the volume and velocity of data that no human team could process alone. That means the humans in the room can focus on what they do best: applying judgment, upholding the law, and protecting individuals’ rights.

For judicial and law enforcement professionals, this is the moment to lean in. The patterns are there, the technology to read them is here, and the opportunity to use both in service of rights – not against them – has never been greater.


Please add your voice to Thomson Reuters’ flagship global study exploring how the professional landscape continues to change.

The efficiency imperative: AI as a tool for improving the way lawyers practice /en-us/posts/ai-in-courts/improving-lawyers-practice/ Wed, 18 Mar 2026 17:45:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=70024

Key insights:

      • AI brings improved efficiency – AI accelerates tasks like document review and research, freeing lawyers to pursue higher-value work for clients.

      • AI does the work of a team of lawyers – AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, allowing fewer lawyers to do the work of many.

      • Yet AI still needs guardrails – Lawyers must remain accountable, with human oversight and review to ensure that AI outputs are accurate, thereby preserving nuance and professional judgment.


Already, AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms seeking to impress their clients with improved efficiency and cost savings. The practical question now is how to adopt AI in ways that improve the speed and capacity of lawyers without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar hosted as part of a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigation memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it enables careful lawyering rather than shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech and innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscored how broad current AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component – with guardrails, review steps, and clear accountability – are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client's goals and risk tolerance. Even when the law is clear, the right action often is not. Striving for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process – they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical rather than abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach to verification and oversight. The outputs may look polished and sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory step. Rather, it is the core control that protects clients and the court, and the inflection point that turns AI from a novelty into a defensible tool.

In practice, keeping a human in the loop means deciding where AI can assist and where it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences and building repeatable habits that keep teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here.

When courts meet GenAI: Guiding self-represented litigants through the AI maze /en-us/posts/ai-in-courts/guiding-self-represented-litigants/ Thu, 19 Feb 2026 18:20:08 +0000 https://blogs.thomsonreuters.com/en-us/?p=69532

Key insights:

      • Considering courts’ approach – Although many courts do not interact with litigants prior to filings, courts can explore how to help court staff discuss AI use with litigants.

      • Risk of generic AI tools – AI use in legal settings can't be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing – Purpose-built court AI tools offer a safer alternative for litigants, yet they require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar whose panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive, and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools, are trained on broad internet text and may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team’s early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to write effective generative AI prompts.

The team tried to create traffic-light categories that would simplify decision-making; however, they found this approach challenging despite several draft attempts at useful guidance. Indeed, AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding as though the court was endorsing a tool or sending people down a path whose results the court could not guarantee.

The group ultimately shifted to a more practical approach 鈥 training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatch, outdated requirements, and fabricated or hallucinated citations.

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

Legal Services’ Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. And courts have to recognize this balance. Courts are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the World Wide Web – widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift – from teaching litigants how to use AI to teaching court staff and other helpers how to talk with litigants about AI – reflects a practical effort by courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. Realizing that value, however, requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here.

Generative AI in legal: A risk-based framework for courts /en-us/posts/ai-in-courts/genai-risk-based-framework/ Fri, 21 Nov 2025 13:57:31 +0000 https://blogs.thomsonreuters.com/en-us/?p=68524

Key highlights:

      • Risk varies by workflow and context – Practitioners should apply risk ratings based on workflow and context, such as low for productivity, moderate for research, moderate to high for drafting and public-facing tools, and high for decision-support.

      • Courts need their own benchmarks – Courts should develop and regularly review their own independent benchmarks and evaluation datasets instead of relying solely on vendor claims, because vendors may optimize systems for known tests.

      • Benchmarking is needed to detect drift, degradation, and bias – Continuous, rigorous benchmarking of AI models is essential for courts and legal professionals to maintain confidence in these systems, since both the law and AI models change over time.


AI is not a monolithic technology, and a risk-based assessment process is needed when using it. Indeed, courts and legal professionals must scale their scrutiny to match the level of risk.

This approach – which balances innovation with accountability, along with other essential best practices – is detailed in a recent publication.

In a recent webinar, one of the document’s co-authors explained its purpose: “The central aim of what we were thinking about in these best practices is to give courts and legal professionals a principle-based architecture when you’re thinking about the adoption of GenAI tools.”

Risk and human judgment serve as central elements

What is unique about this framework is that it categorizes risk based on key workflow actions of lawyering, for example:

      • Productivity tools carry minimal to moderate risk
      • Research tools are assigned moderate risk
      • Drafting tools range from moderate to high risk
      • Public-facing tools carry moderate to high risk
      • Decision-support tools pose high risk

The framework holds that risk is dynamic rather than static, and there can be shifts in risk levels based on use cases. For example, a scheduling tool typically poses minimal risk; however, the same tool becomes high risk when used for urgent national security cases. And translation tools can shift from lower risk research support to high-risk decision-support depending on their use.
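As a sketch of that dynamic, a simple triage helper could start from the workflow baseline and escalate on context. The category labels mirror the list above, but the escalation rules and flag names here are illustrative assumptions, not part of the published framework:

```python
# Illustrative sketch only: baseline risk by workflow category,
# escalated by context flags. Escalation logic is a hypothetical
# example, not official guidance from the framework.
BASELINE_RISK = {
    "productivity": "minimal",
    "research": "moderate",
    "drafting": "moderate-high",
    "public-facing": "moderate-high",
    "decision-support": "high",
}

LEVELS = ["minimal", "moderate", "moderate-high", "high"]

def assess_risk(category, context_flags=()):
    risk = BASELINE_RISK[category]
    # Context can raise risk: e.g., a scheduling (productivity) tool
    # used in an urgent national security matter becomes high risk.
    if "national-security" in context_flags or "fundamental-rights" in context_flags:
        risk = "high"
    elif "urgent" in context_flags:
        risk = LEVELS[min(LEVELS.index(risk) + 1, len(LEVELS) - 1)]
    return risk

print(assess_risk("productivity"))                          # prints "minimal"
print(assess_risk("productivity", ("national-security",)))  # prints "high"
```

The design point is that the category is only a starting value; context re-scores the same tool, which is exactly why static category labels alone are insufficient.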

Similarly, when tools range from moderate to high risk, users need to be especially discerning in order to understand the underlying risks – and to decide whether the task should be delegated to AI at all.

“You can’t just rely on categories,” explains Judge Kwon of the IP High Court of Korea. “You need to understand the underlying risks and ask yourself: Would I delegate this task to another person? Am I comfortable delegating it publicly? If the answer is no, then you probably shouldn’t be delegating it to an AI either.”

In addition, clear red lines exist for judicial use – situations in which AI should never be used and is classified as an unacceptable risk. “I believe the clear red line is automated final decisions or AI systems that assess a person’s credibility or determine fundamental rights involving incarceration, housing, family,” says Judge Kwon, adding that fundamental rights require human judgment.


The extent of human judgment also has layers. Greenberg, a Shareholder at Greenberg Traurig, believes that AI for any legal use currently requires human oversight. “The human supervision piece … is utterly critical in the real world of practicing lawyers and law firms,” Greenberg says. “You have to supervise the lawyers in the firm that are using the technology, including young lawyers.”

To help distinguish which type of human oversight is appropriate, the framework in the Key Considerations document defines two forms of such oversight: i) human in the loop, which means active human involvement in decisions; and ii) human on the loop, which means monitoring automated processes and intervening when needed.

In a court setting, the difference between the two looks like this: a human in the loop is, for example, a law clerk using AI to research relevant case law and checking that the references are legally sound; a human on the loop is a clerk monitoring an established robotic process that extracts data for the case management system and spot-checking it for accuracy.
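The two oversight modes can be sketched in code. This is an illustration only; the function names and the spot-check rate are assumptions, not taken from the Key Considerations document:

```python
import random

# Human IN the loop: a human actively approves every AI output
# before it is used. Nothing proceeds without sign-off.
def human_in_the_loop(ai_output, human_review):
    return ai_output if human_review(ai_output) else None

# Human ON the loop: the automated process runs on its own; a human
# monitors it, spot-checking a sample and intervening when needed.
def human_on_the_loop(ai_outputs, spot_check, sample_rate=0.1):
    flagged = []
    for item in ai_outputs:
        if random.random() < sample_rate and not spot_check(item):
            flagged.append(item)  # escalate to a human for intervention
    return flagged
```

In the first pattern, human attention is spent on every decision; in the second, work flows automatically and human attention goes to monitoring and exceptions, which is why the framework reserves it for lower-stakes, well-established processes.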

Practical guidance for courts

In addition to judges considering the risk level of AI tools, Judge Kwon, Greenberg, and Carpenter noted the importance of technical AI competence as part of lawyers’ and judges’ ethical duty, especially around verification, transparency, and independent benchmarks as part of accountability, as well as the need for understandable documentation to maintain public trust. To reinforce the latter point, Carpenter, Director in Government Practice for Thomson Reuters Practical Law, states: “It’s very vital, especially as we usher in the age of AI, that the public be informed as much as they can be about how that decision-making process is taking place.”

In addition, Judge Kwon, Greenberg, and Carpenter highlighted additional guidance on the criticality of benchmarking, including:

      • Court-developed benchmarks prevent overreliance on vendor data – Courts should develop their own benchmarks and independent evaluation datasets rather than relying entirely on vendor claims, and they should review evaluation scenarios regularly. Vendors may optimize their systems for known tests, which leads to overfitting, in which a model learns patterns specific to its training data so well that it performs poorly on new, unseen data. This gives a misleading impression of reliability.
      • Ongoing rigorous benchmarking to detect model drift & degradation – To build confidence in AI models, courts and legal professionals must approach AI model evaluation with rigor and ongoing vigilance. Continuous benchmarking is essential; it cannot be a one-time process because the law evolves constantly and precedents shift. In addition, AI models themselves update regularly, and courts need to monitor performance over time to detect degradation or bias drift.
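A minimal sketch of what that continuous benchmarking could look like. The benchmark format, scores, and tolerance threshold are illustrative assumptions, not prescribed by the framework:

```python
# Illustrative sketch: score a model against a court-developed benchmark
# and flag drift when accuracy falls more than a chosen tolerance below
# the previous run. Benchmark items and threshold are hypothetical.
def score_model(model_fn, benchmark):
    """benchmark: list of (question, expected_answer) pairs."""
    correct = sum(1 for q, expected in benchmark if model_fn(q) == expected)
    return correct / len(benchmark)

def detect_drift(history, new_score, tolerance=0.05):
    """Flag degradation relative to the previous benchmark run."""
    if history and new_score < history[-1] - tolerance:
        return True  # performance dropped: investigate before relying on the model
    return False

history = [0.92, 0.91]          # scores from earlier scheduled runs
print(detect_drift(history, 0.84))  # prints True: a large drop since last run
```

Because both the law and the models change, the history list grows with every scheduled run, and a court-owned (not vendor-supplied) benchmark is what keeps the scores meaningful.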

Adopting a thoughtful, risk-informed approach to GenAI in legal practice and courts will help realize its benefits for efficiency and access to justice while protecting ethical obligations, due process, and public trust in the legal system.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

New guide: A three-level approach to AI readiness in state courts /en-us/posts/ai-in-courts/ai-readiness-courts-guide/ Thu, 30 Oct 2025 17:52:31 +0000 https://blogs.thomsonreuters.com/en-us/?p=68252

3 key takeaways:

      • Establish strong governance and principles first – Before implementing AI, courts must create cross-functional oversight committees, define guiding principles that align stakeholders, and develop clear AI use policies with high-quality data governance.

      • Prioritize people-centered implementation – Successful AI adoption requires engaging stakeholders early as co-creators and conducting thorough resource assessments that account for total cost of ownership (including maintenance and compliance).

      • Commit to continuous monitoring and adaptation – AI implementation requires ongoing human oversight to monitor performance, prevent data and model drift, and systematically review governance structures and policies after each project to strengthen courts’ overall AI readiness for future initiatives.


AI has the clear potential to revolutionize courtroom workflows, but AI itself can carry unforeseen risks. Indeed, AI solutions are complex and opaque, with inherent randomness and risk, says a Senior AI Manager in the New Jersey courts.

To help courts leverage AI safely, the NCSC, with support from the State Justice Institute, convened 16 experts to create an AI Readiness guide, which was featured in a recent webinar. This guide provides practical advice and offers a three-level approach for courts adopting AI: strategic planning (level 1); thoughtful project implementation (level 2); and continuous adaptation (level 3). These three levels guide courts from establishing governance and principles to executing measurable, people-centered projects that enhance trust and further the cause of justice.

Establishing governance, principles & policies

To unlock AI’s potential while mitigating hazards, courts must first establish a strong foundation through clear governance, guiding principles, and well-defined policies. More specifically, courts should:

Establish governance with a diverse group of voices – A cross-functional committee sets policy, oversight, and feedback loops. “AI governance is … really the leadership structure for all of the court’s uses of AI,” says one NCSC expert, adding that the AI Governance Tool in the AI Readiness guide should be used to run a structured 12-month plan that covers level 1 readiness steps end to end.

Define your operating philosophy before you start — Guiding principles are not bureaucratic exercises but rather essential blueprints for successful and ethical AI integration. Without them, courts risk misalignment among stakeholders, the development of systems that do not serve their intended purposes, and the possibility of costly failures. These principles provide a constant reference point, ensuring that as AI projects evolve, the court remains true to its core values and objectives.

Indeed, the overarching mindset that directs actions and choices as part of the governing principles should align stakeholders, manage expectations, and anchor future decisions. "The leading cause of software failures historically has been misalignment among stakeholders and changing or poorly documented requirements," says , Assistant Professor of Computer Science at George Mason University, adding that the same is true for AI projects. "Without these guiding principles [for AI use], there's the same risk for misalignment among stakeholders."

Another core tenet of any firm foundation is to set internal rules as part of an AI use policy that provides guardrails and clarity for staff during the transition. And because high-quality, well-governed data is fundamental, courts should pay close attention to data quality. "One of the dirty secrets of data science is the data cleansing process," says Appavoo. "Garbage in, garbage out."

Finally, pick projects by analyzing workflows and identifying pain points; then use a scoring matrix to evaluate potential projects based on criteria such as impact and feasibility.
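The scoring-matrix step can be as simple as a weighted sum over rated criteria. Here is a minimal Python sketch; the criteria, weights, rating scale, and sample projects are illustrative assumptions for this example, not values prescribed by the NCSC guide:

```python
# Hypothetical weighted scoring matrix for ranking candidate AI projects.
# Criteria, weights, and 1-5 rating scale are illustrative, not from the guide.
WEIGHTS = {"impact": 0.4, "feasibility": 0.3, "risk": 0.3}  # weights sum to 1

def score(project: dict) -> float:
    """Weighted sum of 1-5 ratings; 'risk' is inverted so lower risk scores higher."""
    ratings = project["ratings"]
    return (WEIGHTS["impact"] * ratings["impact"]
            + WEIGHTS["feasibility"] * ratings["feasibility"]
            + WEIGHTS["risk"] * (6 - ratings["risk"]))  # invert the 1-5 risk scale

projects = [
    {"name": "Chatbot for court FAQs", "ratings": {"impact": 4, "feasibility": 5, "risk": 2}},
    {"name": "AI-assisted drafting pilot", "ratings": {"impact": 5, "feasibility": 2, "risk": 5}},
]
for p in sorted(projects, key=score, reverse=True):
    print(f"{p['name']}: {score(p):.2f}")
```

Ranking candidates this way makes the trade-offs explicit and auditable, which fits the guide's emphasis on documented, stakeholder-visible decisions.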

Implementing projects that focus on practicality

After foundational planning is complete, the next stage focuses on the practical implementation of AI projects through productive change management, resource assessment, and strategic procurement. Beyond initial deployment, substantial work occurs during this stage.

The most important element in this phase is that successful AI adoption hinges on a strategic, people-centric approach that carefully considers resources and risk. "When people are engaged early and meaningfully, they stop being subjects of change and start being co-creators and co-designers of it," explains , Assistant Professor of Art and Design at Northeastern University. "And that sense of ownership is one of the strongest predictors of adoption."

Indeed, effective change management and person-centered design are paramount. Often, this means actively engaging stakeholders, fostering open communication, and providing comprehensive training and support throughout the project lifecycle.


The most important element… is that successful AI adoption hinges on a strategic, people-centric approach that carefully considers resources and risk.


Perhaps the most challenging task in this phase is for courts to move beyond immediate costs and benefits to better understand the full financial and operational implications of AI projects. This requires an accurate assessment of both tangible and intangible costs, along with clearly defined success metrics.

“What’s really tricky about that is some of those costs are very obvious and simple,鈥 says Dr. Miller. 鈥淪ome of them are very squishy and hard to estimate, and the same goes for the benefits.”

In fact, at this stage there are common pitfalls around cost, according to , Chief of Innovation and Emerging Technologies for Maricopa County, Arizona. "Courts sometimes focus only on the upfront purchase price, or the development budget, and they ignore the updates, the retraining, the legal compliance — and that can multiply the total cost of ownership."
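The total-cost-of-ownership point can be made concrete with simple arithmetic. The sketch below is a hypothetical illustration; the cost categories and dollar figures are invented for the example, not drawn from the webinar:

```python
# Hypothetical total-cost-of-ownership sketch: the upfront price is only
# part of the bill. All figures below are invented for illustration.
def total_cost_of_ownership(upfront: float, annual: dict[str, float], years: int) -> float:
    """Upfront cost plus recurring annual costs over the system's expected lifetime."""
    return upfront + years * sum(annual.values())

annual_costs = {
    "maintenance_updates": 40_000,
    "model_retraining": 25_000,
    "legal_compliance_review": 15_000,
    "staff_training": 10_000,
}
tco = total_cost_of_ownership(upfront=200_000, annual=annual_costs, years=5)
print(f"5-year TCO: ${tco:,.0f} vs. ${200_000:,} purchase price")  # 650,000 vs. 200,000
```

Even with modest recurring costs, the five-year figure here is more than triple the purchase price, which is exactly the multiplication effect the quote warns about.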

Further, courts need to consider their own capabilities, the practicality of their AI solution, its long-term sustainability, and potential risks such as a lack of transparency and vendor dependency. If the decision is to buy a product off the shelf, the procurement process and vetting of vendors will be key. "If we don't clarify who's responsible when the system makes a mistake, we expose ourselves to reputational and legal risk," Judy notes.

Continuous improvement and preparing for the next AI initiative

After implementing an AI project, the journey does not end. Indeed, it evolves, underscoring the critical importance of incorporating lessons learned back into court operations through post-project review.

"It is not about getting in the game when it comes to AI, it is about staying in the game," says Appavoo. "The complexity is actually after you productionize a solution — that is what we see." Courts must keep a human in the loop, stay on top of observability, constantly monitor performance, and continually check that neither the data nor the model is drifting and that the business context is not changing, Appavoo explains.
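One common way to operationalize the kind of drift monitoring Appavoo describes is a population stability index (PSI) check that compares a baseline distribution against current data. The sketch below is illustrative only; the binning, sample proportions, and the 0.2 alert threshold (a widely used rule of thumb) are assumptions, not details from the webinar:

```python
import math

def psi(expected: list[float], observed: list[float]) -> float:
    """Population Stability Index between two proportion distributions over
    the same bins. A common rule of thumb treats PSI > 0.2 as significant
    drift; thresholds and binning here are illustrative assumptions."""
    total = 0.0
    for e, o in zip(expected, observed):
        e = max(e, 1e-6)  # guard against log(0) for empty bins
        o = max(o, 1e-6)
        total += (o - e) * math.log(o / e)
    return total

# Hypothetical share of incoming filings per case type, at deployment vs. today
baseline = [0.50, 0.30, 0.20]
current = [0.35, 0.30, 0.35]
print(f"PSI = {psi(baseline, current):.3f}")  # flag for human review if above threshold
```

A scheduled job computing a statistic like this on model inputs and outputs gives the human in the loop a concrete signal to act on, rather than relying on ad hoc spot checks.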

To help put this into practice, the AI readiness guide has comprehensive feedback checklists courts can use to systematically review the foundational AI program elements for ongoing adaptation. More specifically, the post-project review process should examine whether governance structures remain effective, if guiding principles need refinement, and whether internal policies require updates. This continuous improvement approach transforms each AI implementation into a learning opportunity that strengthens the court’s overall AI readiness for its subsequent initiatives.


You can access the from the National Center for State Courts and the State Justice Institute here

]]>
Reducing invisible burdens in court administration through automation /en-us/posts/government/reducing-burdens-automation/ Thu, 02 Oct 2025 17:18:59 +0000 https://blogs.thomsonreuters.com/en-us/?p=67716

Key insights:

      • Automation and AI can significantly alleviate administrative burdens in courts — Court professionals may be able to reclaim up to nine hours per week over the next five years, according to research.

      • Courts are under pressure to modernize and meet the expectations of digital natives — Courts are facing a generational shift in expectations that is pressuring them to adopt more modern tools and technology.

      • Successful implementation of technology requires a thoughtful and collaborative approach — Collaboration between judges, administrators, and IT staff is essential, and external-facing tools should prioritize user experience to reduce complexity and increase access to justice.


Bringing automation and AI-powered tools to data entry, case-filing processing, and updating court management systems over the next few years could help court professionals use their time more efficiently, according to the Staffing, Operations and Technology: A 2025 survey of State Courts from the 成人VR视频 Institute and the National Center for State Courts (NCSC).

Indeed, the report found that alleviating this invisible administrative burden could help professionals reclaim as much as nine hours per week over the next five years. As private sector law firms embrace automated technology, public sector legal departments and courts risk falling further behind.

The time for innovation is now, as caseloads mount, case complexity increases, and retirements and staffing shortages continue to plague courts. Fortunately, administrative professionals are beginning to warm up to targeted automation efforts and AI-powered tools to expand their efficiency.

The cost of administrative burdens

A produced for the Administrative Conference of the United States defines administrative burdens as "onerous experiences people encounter when interacting with public services." And unfortunately, many people do not access the rights or benefits to which they are entitled because of these onerous administrative processes within stressful, frustrating, and overwhelming government systems. In a legal context, administrative burdens hinder access to justice. In fact, low-income Americans did not receive any legal help or enough legal help for 92% of the problems that impacted their lives, according to the Georgetown study.

Recent years have seen a in civil cases. Given this, the processes that were designed for navigation by attorneys and legal and court professionals need to be simplified to reflect the needs of non-professional court users. A on experiences with state courts in particular notes that court users strongly desire courts to be easier to navigate. Even among those who had previous court experience, 50% indicated that it was a little hard or very hard to navigate court paperwork and steps in a case.

A modernizing court workforce

Millennial-aged workers constitute approximately and are the most prevalent court users today and in the foreseeable future. As digital natives, this generation expects modern tools when navigating the legal system.

A commissioned by the NCSC last year found that large percentages of registered voters surveyed support increased use of AI chatbots to answer court FAQs (with 63% saying this), using AI to translate court documents into other languages (64%), and using AI to break down complex legal jargon and make information more accessible (71%).

Further, this lack of modernization in courts has consequences for judges and court professionals as well. Court staff are feeling strained by their workload, and many report simply not having enough time to catch up. More than half (57%) of court professionals and administrative staff reported not having enough time, according to the Staffing, Operations and Technology report.

The report also found that 91% of court staff report working more than 40 hours each week, with about one-third of them working more than 46 hours per week.


Given all this, the pressure courts are under to modernize is understandable; however, it should be looked at as an impetus for improvement: Courts face a once-in-a-generation opportunity to reimagine their workflows.

Resources available to fund statewide technology improvements

Several states leveraged one-time resources available through the to fund major investments in court technology. The , for example, used $38 million to update a two-decade-old in-house case management system. (The AOC is the operations arm of the state court system, which supports 3,000 employees and more than 400 elected justices, judges, and circuit court clerks.) Kentucky courts' AOC selected that offers online tools for judges, circuit court clerks, and attorneys, as well as a tool for pro-se litigants.

On the other hand, opted to build its own in-house court management system, as the cost was significantly less than vendor rates. Initial estimates to upgrade a legacy system were $70 million, and Arkansas was able to build its own for $20 million, funded through that came from the state legislature. Indeed, Arkansas has been a leader in court technology for more than 20 years and signed contracts for automated document redaction more than a decade earlier.

The state courts' new customized cloud-based solution incorporates multiple vendors, and the development process (now two years underway) has launched Contexte Case Management, an internal-facing tool, and , a public-facing case information tool. All and nearly half of district and juvenile courts already have implemented the system.

Moving forward, slowly and thoughtfully

While private sector legal technology has advanced quickly, courts face unique challenges that often make off-the-shelf solutions an inadequate fit. Investment in court modernization must balance the efficiency gained with fiscal responsibility around such investment.

Successful implementation in courts will take cultural, procedural, and budgetary shifts. Internally, collaboration between judges, administrators, and IT staff is essential; and externally, any public-facing tools should center around user experience and ease-of-use, perhaps offering a dedicated customer service team to guide users so that technology reduces complexity rather than adding to it.

The real return on investment in court systems will be realized when all users can access justice more easily, equitably, and reliably.


You can download a full copy of the Staffing, Operations and Technology: A 2025 survey of State Courts from the 成人VR视频 Institute and the National Center for State Courts AI Policy Consortium for Law and Courts here

]]>
Cultivating practice readiness: New report highlights need for radical change in law school and bar admissions /en-us/posts/government/lawyer-readiness/ Thu, 07 Aug 2025 01:47:42 +0000 https://blogs.thomsonreuters.com/en-us/?p=67079

Key highlights:

      • Education and licensing misalignment — Legal education and attorney licensing are misaligned with the real-world skills and practical competencies new lawyers need to serve clients and address the nation's growing access to justice crisis.

      • Strong support for licensing reform — There is strong momentum and support for reforming traditional pathways to legal licensure, according to research conducted by a body of chief justices and state court administrators.

      • Change will require leadership — Lasting, systemic change requires leadership and collaboration among state supreme courts, law schools, bar examiners, and the practicing bar.


For decades, cracks have widened in the nation's promise of justice for all, with millions of people every year unable to find or afford legal help when they need it most. As the legal system in the United States faces a reckoning, one outline for change has emerged with the recently released (CLEAR), a body of chief justices and court administrators from a variety of states across the country. (CLEAR cited support from the 成人VR视频 Institute in the production of the report.)

The CLEAR group is calling for a radical change in how lawyers are taught and licensed. The report cites several factors driving the need for reform, including:

Increases in legal deserts and self-represented litigants — Judges in courtrooms across the country routinely see self-represented litigants, while so-called legal deserts, especially in rural areas, leave entire communities with few or no attorneys at all. Indeed, according to the American Bar Association, are considered legal deserts, with fewer than one lawyer per 1,000 people. As a result, most litigants are left to navigate a complex court system with inadequate or no legal assistance in family, probate and estate, housing, consumer, and criminal matters, according to the .

Declining interest in public sector work — The public interest sector, which includes civil legal aid, public defenders, and prosecutors, is buckling under the weight of crushing caseloads, stagnant federal and state funding, and a persistent shortage of lawyers. Indeed, students face numerous barriers to pursuing a career in public interest law, according to the CLEAR report, from less predictable career paths as compared to private practice, to a perceived lack of prestige in many schools, to the prospect of managing educational loans on a public interest lawyer's salary.

Rapid technology changes — Compounding these challenges, advanced technology and especially AI are rapidly reshaping the legal profession. This, in part, is leading to that are essential for skill development because AI — which excels in tasks like legal research, writing, and drafting — now is handling work that had been historically assigned to associates and was a big part of how they learned their craft.

Defining practice readiness and minimum competence

Against this backdrop, the CLEAR report calls for overhauling how law schools educate attorneys and how bar admissions assess attorney readiness. More specifically, the report recommends a sharper, modern definition of practice readiness that more clearly defines the blend of knowledge, skills, and professional abilities that new lawyers must possess to competently serve clients from day one across four essential pillars. These pillars are i) foundational legal knowledge and analytical skills; ii) strong ethics and professionalism; iii) durable communication and interpersonal abilities; and iv) practical legal skills like advocacy, negotiation, and client management.

For the report, CLEAR surveyed more than 4,000 judges, 4,000 attorneys, and 600 law students; and the committee's findings consistently reveal that new lawyers struggle with practical legal skills, which include effective client communication, negotiation, and courtroom advocacy in addition to 17 other skills.

Feedback from survey participants points to the fact that these skills, which are crucial for the daily realities of legal practice, are largely not taught in law schools. For example, only 7% of experienced attorneys with more than five years of practice report that newly admitted attorneys, most of whom are right out of law school, were very well or extremely well prepared to communicate effectively with clients. Likewise, 61% of experienced attorneys said new lawyers were not well prepared or only slightly well prepared in negotiation, and 55% of experienced attorneys said the same about new lawyers when it came to questioning and interviewing witnesses.

In addition, 66% of judges say that new attorneys in their first five years of practice sometimes, rarely, or never competently conducted direct and cross examinations.

New pathways to licensure beyond the bar exam

Meanwhile, an additional insight from the CLEAR report highlights how the bar exam continues to focus heavily on theoretical knowledge and memorization, rather than the practical, day-to-day skills that define minimum competence. At the same time, the is more focused on foundational skills, including legal research, legal writing, and issue-spotting and analysis.

To address the dissatisfaction with the traditional bar exam, some states have been piloting innovative licensure pathways that better align with the skills new lawyers need. Such approaches include curricular pathways, such as in the in New Hampshire, and at the University of Wisconsin's law school. Other methods are supervised practice models, such as in Oregon's , , and temporary pandemic-era alternatives that provided graduates with the ability to prove their competence under the guidance of experienced attorneys.

Top recommendations for state supreme courts

The CLEAR group advocates for state supreme courts, as the profession's primary regulators, to lead and foster innovation in licensure and practice readiness. The report urges state supreme courts to take such action as:

Lead collaborative efforts to realign legal education, bar admissions, and new lawyers' readiness with public needs — State supreme courts are uniquely well-positioned to lead efforts to create a legal system that better addresses the legal needs of the communities they serve.

Encourage law school accreditation that serves the public — State supreme courts should encourage an accreditation process that promotes innovation, experimentation, and cost-effective legal education geared toward the goal of having lawyers meet the legal needs of the public.

Reform bar admissions processes to better meet public needs — This reform includes adjusting bar admission by setting passing scores based on evidence and piloting alternative pathways to passing the exam or equivalent assessment.

To put CLEAR's recommendations for state supreme courts into practice, however, bold, coordinated action by law school administrators and the American Bar Association (as the accreditor of law schools) is critical as well. In particular, there is a need to expand experiential learning, such as clinics, externships, and simulation courses, to help students gain meaningful, hands-on experience and direct responsibility with clients. In addition, curricula must be aligned with the realities of practice by integrating practical skills, ethics, and professional identity formation throughout, rather than relegating those elements to optional or add-on courses.

Legal education and licensing must rapidly evolve to meet the nation's urgent access-to-justice challenges, the CLEAR report notes. Law schools and state supreme courts must work together with renewed urgency and vision to lead this transformation. The failure to act by both law schools and courts means the justice gap in the US will only widen. Only with urgent, collaborative innovation to enact these changes can the legal profession deliver on the promise of justice for all in the decades to come.


You can access the full here

]]>
Courts grapple with AI revolution amid staffing crisis /en-us/posts/ai-in-courts/courts-staffing-crisis/ Thu, 10 Jul 2025 14:54:16 +0000 https://blogs.thomsonreuters.com/en-us/?p=66572

Key highlights:

      • Staffing crisis and rising caseloads — US courts are struggling with significant staffing shortages and increasing caseloads, leading to operational delays and overworked employees.

      • Cautious but growing AI adoption — Most courts are proceeding carefully with limited training and a strong focus on protecting confidentiality, ethical standards, and human judgment.

      • Human-centered governance and education needed — Experts stress that successful AI integration requires leadership, comprehensive education for court personnel, and robust governance policies to ensure AI supports, rather than replaces, human roles in the justice system.


A staggering 91% of professionals surveyed at state courts said they believe AI will have a moderate to transformative impact on judicial operations. Indeed, the mainstreaming of AI technology arrives at a critical juncture for America's court systems. Today, courts in the United States are wrestling with the dual challenges of severe staffing shortages coupled with the urgent need to drive adoption of AI tools to alleviate the pain, according to findings from a recent webinar that was part of the , a joint initiative of the National Center for State Courts (NCSC) and the 成人VR视频 Institute (TRI).

More than two-thirds (68%) of judges and court professionals surveyed for the Consortium's 2025 Survey of State Courts report said their courts had experienced staffing shortages in the past year, with 61% anticipating continued shortfalls. Indeed, these challenges are forcing judges and court administrators to navigate between immediate operational needs and long-term technological transformation.

, Executive Officer and Clerk of Court for the Superior Court of Los Angeles County, says that there is also "the challenge of people leaving. We have people in our workforce in Los Angeles that have been here 35, 40, 45 years. Well, they are getting to the end of their careers. They are ready to leave, and they are going to take a lot of knowledge with them. And at the same time, we have the challenge of getting newer folks to the organization to come work for the court."

Current state of court staffing

Because of this staffing crisis, critical positions across the judicial system remain unfilled as courts struggle to hire enough court reporters, interpreters, and administrative staff. These roles are essential to maintain citizens鈥 constitutional rights to due process and access to justice.

The operational impact of these shortages is severe and far-reaching, the data shows. Despite having fewer staff members, 45% of courts report increasing caseloads, creating a perfect storm of overwhelming demand and diminished capacity. For example, 77% of courts encounter hearing delays weekly; and 38% of the court workforce now works 46 or more hours per week — yet only 52% say they feel they have adequate time to fulfill their responsibilities effectively.

"Of the people who are working 50-plus hours, only about 12% said they are feeling like they have enough time to do the things needed," explains , Enterprise Content Manager for risk, compliance, government and courts at TRI. "It seems like we need to find a way to both decrease the hours that people are working, and also make sure that they feel like they're able to get done the things that they need to get done to successfully handle their jobs."

Careful implementation of AI necessary

Only 25% of court systems currently offer AI training to their personnel, according to the report. This cautious approach reflects the judiciary’s measured response to emerging technology by prioritizing careful implementation over rapid adoption. Despite acknowledging AI’s importance, courts remain focused on addressing immediate operational needs rather than rushing into wholesale technological transformation.

The promise of AI applications in court operations extends across multiple domains, from making administrative efficiency improvements in document processing, scheduling, and case management to enhancing public services through AI-powered chatbots for basic inquiries. For example, Slayton says that the Superior Court of Los Angeles County has deployed an AI-powered jury duty rescheduling system that allows citizens to interact conversationally with backend systems.


Join the AI Policy Consortium — a joint initiative of the National Center for State Courts and the 成人VR视频 Institute — for its next webinar on


Still, technical barriers loom large, as courts struggle with limited technology budgets, dependence on external IT support, and the critical need for private, secure AI systems rather than public tools that could compromise sensitive case information. Ethical concerns also weigh heavily on judicial leaders, who must balance innovation with fundamental principles of judicial independence, the critical role of humans in decision-making, confidentiality, and maintaining public trust.

Add to that worries that rising attrition among court staff could be another barrier to wide-scale AI adoption. However, the , Chief Administrative Judge of the New York State Unified Court System, says he does not see this as a threat. "I really do not anticipate job loss being an issue, because AI is something that requires human monitoring," Judge Zayas explains, adding that AI tools can help address staffing shortages, since they are capable of assisting with tasks like transcription and language translation, "and there's just not enough people for us to hire to handle these things."

The goal in deploying AI, he continues, is "not [to] avoid hiring other people but filling the void that has [been] created by the sort of mass resignations that we see happening just generally with certain generations."

Looking ahead

To leverage the opportunity of AI and address some of the concerns involved, , Founder & CEO of Creative Lawyers, recommends that state courts adopt a measured, human-first and AI-forward philosophy so that court leadership can move forward with implementation. Specifically, Leonard observes that leadership matters more than specific tools in addressing mounting operational challenges and achieving successful implementation. To that end, Judge Zayas, Slayton, and Leonard recommend that courts start with these actions as part of any successful AI integration efforts:

Begin with training & education — Comprehensive training programs are needed across all court levels to build AI literacy among judges, staff, and attorneys. "I think the best way to overcome concerns [about AI] is through education and training in the use of AI technology, even if it just means getting familiar with that technology — for everyone involved in the daily work of the court system," including judges, court staff, attorneys, and unrepresented litigants, says Judge Zayas.

Focus on the human-centered approach — Centering human needs around technology ensures that AI serves as a supportive tool rather than a replacement for human judgment.

Institute good governance — The development of policies and procedures remains key, as courts need governance mechanisms for establishing clear AI usage guidelines. The Judicial Council of California, for example, is adopting statewide rules requiring every court to develop AI policies, addressing ethical considerations including confidentiality, bias prevention, and appropriate use boundaries.

As courts navigate unprecedented staffing challenges, AI emerges not as a replacement for human workers, but as a critical tool to fill gaps and enhance efficiency. Success will depend on thoughtful implementation, comprehensive training, and maintaining the delicate balance between innovation and judicial integrity.


You can access a full copy of the 2025 Survey of State Courts report from the , a joint initiative of the National Center for State Courts and the 成人VR视频 Institute here

]]>
AI in court translation: Navigating opportunities, risks & the human factor /en-us/posts/ai-in-courts/navigating-language-translation/ Fri, 27 Jun 2025 13:23:13 +0000 https://blogs.thomsonreuters.com/en-us/?p=66460

Key insights:

      • Using AI-assisted translation tools — Such tools are being utilized in court systems to address language barriers, improve efficiency, and maintain public trust through effective governance and human involvement.

      • Courts developing proprietary systems — Orange County Superior Court developed its CAT system to address translation issues, starting with Spanish and Vietnamese languages.

      • Establishing ethical guardrails — This is an essential method for the successful deployment of AI-assisted translation tools to build confidence and maintain public trust.


New ways of utilizing AI are showing up in the nation's court system regularly. Recently, an made its way into the courtroom at a sentencing hearing. Still, less-publicized AI innovations, such as improving internal workflows within courts and influencing evidence in trials, are arising across court systems around the world every day.

Yet one of the most debated areas of AI use in courts in the United States involves the fundamental challenge of providing timely and accurate language access for individuals with limited English proficiency. The National Center for State Courts (NCSC) and the 成人VR视频 Institute, through their , hosted a recent webinar focused on how AI can assist in the translation of written documents from one language to another. (The webinar did not address in detail how AI can support interpretation, or spoken-language conversion from one language to another.) However, the webinar captured key insights from experts on how AI-assisted translation of documents can enhance court services, the pros and cons of using these tools, and the critical risks that must be managed.

Indeed, a core challenge facing courts today is a critical shortage of qualified human translators and interpreters. Not meeting these language demands creates substantial barriers to due process and potentially affects many individuals’ liberties, housing rights, access to justice, and other fundamental legal protections, while simultaneously undermining public trust in the judicial system.

“There’s a very high demand for translators and interpreters, and a shortage of both, particularly in less common language pairs and more rural areas,” said , an American Translators Association certified translator and CEO of Transcend Translations. 鈥淚f there isn’t a translator and interpreter available, that can mean that hearings have to be postponed… [and] people may spend more time in limbo.”

Using AI to pioneer translation

To address this long-standing translation issue, courts are turning to AI-powered tools that are specifically designed for use by court systems — something the Orange County Superior Court saw firsthand. After testing other tools to address language barriers in the justice system, the Superior Court took the initiative to develop its Court Application for Translation (CAT) system, powered by Microsoft Azure, according to Deputy COO Blanca Escobedo.

Orange County developed the system with the Spanish and Vietnamese languages first and trained the model using court-specific terms and words. The court system took a thoughtful approach with robust governance, oversight, and input from multidisciplinary teams, said Escobedo, adding that the project's first phase focused on low-risk use cases, mainly translating educational materials and video scripts. Later phases will concentrate on collaborative court essays and juvenile reports that typically run longer than 100 pages.

Escobedo explained that rigorous quality control has been a key pillar of CAT since its inception, which was essential to ensuring trust and confidence in the output. Each output is then reviewed by certified translators and given a score assessing the performance of the AI-assisted translation. Results showed that 80% of Spanish translations were usable as-is, with 17% requiring minor corrections and 3% containing major errors; Vietnamese translations were usable as-is 57% of the time, with 39% needing minor adjustments and 4% containing major errors. The difference reflects the greater availability of Spanish training materials.

Governance, human involvement & ethical guardrails

As the webinar pointed out, effective governance, human participation, and ethical guardrails are all key ingredients in deploying AI-assisted translation tools in ways that build confidence and maintain the trust of the public. Spulak, Principal Court Management Consultant at NCSC, outlined the consequences of using AI-assisted translation tools without such safeguards. "If people do not feel the information they are getting from the courts is accurate and reliable, people will not trust the courts," Spulak said. "They won't look to the courts as a source of authority, and they won't use information that they get from the courts."

Spulak, Russ, and Escobedo outlined the necessary mechanisms to ensure public trust is maintained with translated documents using AI, including:

Effective governance – Spulak, who is leading NCSC's efforts to provide guidance on AI translation, recommends that courts have policies in place "if they are going to use AI in any context for translation, so that people understand what it is the court is doing and what things are being translated." These protocols should set "clear limits on how AI is used, how it will be reviewed, and how the court will ensure that it's providing quality translations to folks," she adds. In addition, there should be established guidelines for implementation, human review processes, and feedback mechanisms for internal and external stakeholders.

Humans in the loop for accuracy and quality control – Having individuals review outputs and oversee development, testing, and ongoing quality control is a vital mechanism for using AI to translate documents. "There has to be a human in the loop who has read and has done machine-translation post-editing to ensure that those translations are accurate" for effective use in a legal context, explained Russ, of Transcend Translations.

Transparency – Transparency forms the cornerstone of ethical AI translation implementation in courts. Escobedo described how the Orange County Superior Court is purposeful in how it deploys AI-assisted translation. In the court's process, information is shared with executive and judicial leadership to ensure there is a level of comfort with the solutions the court is developing. This includes clearly informing users when AI translation has been used, through disclaimers on documents, and maintaining open communication with staff and the judiciary. Transparency is enhanced when there are "clear guidelines for how a document translated with AI-assisted translation has to be disclosed, where it is disclosed, and whether it be a watermark or an oral declaration," Russ said.

Privacy & security – Privacy and security considerations are equally critical. Orange County's approach demonstrates best practices with an on-premises solution protected by firewalls, ensuring that confidential information remains secure. Additionally, reviewers sign confidentiality agreements to provide another layer of protection.
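Two of these safeguards, mandatory human post-editing and disclosure of AI assistance, can be enforced mechanically. The sketch below, with illustrative function and message names not drawn from any court's actual system, shows a release gate that refuses to publish a translation until a human post-editor has signed off, and stamps every released document with a disclosure notice.

```python
from typing import Optional

# Disclosure text appended to every released document (illustrative wording).
DISCLAIMER = ("This document was translated with AI assistance "
              "and reviewed by a certified translator.")

def release_translation(ai_output: str,
                        post_edited: Optional[str] = None) -> str:
    """Release a translation only after human post-editing has occurred.

    ai_output: the raw machine translation (kept for audit purposes).
    post_edited: the human-reviewed text; None means no sign-off yet.
    """
    if post_edited is None:
        raise ValueError("Human post-editing is required before release.")
    # The human-reviewed text, not the raw AI output, is what gets released,
    # and the disclosure travels with the document.
    return f"{post_edited}\n\n[{DISCLAIMER}]"

final = release_translation(
    ai_output="Su audiencia fue reprogramada.",
    post_edited="Su audiencia ha sido reprogramada.",
)
print(DISCLAIMER in final)  # True
```

Encoding the policy in the release path, rather than relying on reviewers to remember it, is what turns a governance document into an operational guardrail.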

Using AI's powerful capability, with caveats

AI-assisted translation has improved language access in courts and addressed resource constraints while enhancing efficiency. Indeed, Orange County’s CAT tool resulted in “a reduction in translation expenditures” and significant improvement in turnaround times, noted Escobedo.

For other courts, the Orange County Superior Court's approach to AI-assisted translation can serve as a guide to thoughtful implementation through transparency, phased rollout, and continuous quality assessment. Courts modeling their efforts on the experience of Escobedo and the CAT application will need to balance innovation that expands language access with active identification and mitigation of the risks that innovation brings.

With careful planning and commitment to quality, courts can harness AI to enhance justice for all community members, but as former Minnesota State Supreme Court Chief Justice Bridget Mary McCormack said in concluding the webinar: “We are still at a stage where we want humans supervising this work, and we want humans who are certified and experts to be the ones supervising the work.”


You can view the recent webinar, featuring insights from the latest joint research by the TR Institute and the NCSC, here.

Technology and staffing are the biggest challenges facing courts, says new report (May 28, 2025)

Courts stand at a crossroads across the nation, with state, county, and municipal courts facing significant staffing shortages and operational inefficiencies, such as delays and continuances, according to the Staffing, Operations and Technology: A 2025 survey of State Courts from the Thomson Reuters Institute and the National Center for State Courts. At the same time, however, courts are also challenged on how to implement technologies, including generative AI (GenAI), that could potentially address many of those staffing and efficiency problems.


The report draws on a survey of 443 state, county, and municipal court judges and court professionals, conducted in March and April 2025.

Shortages of skilled labor are widespread, with nearly half of survey respondents saying they have a transformational or high impact on their court's operations. Respondents also said they expect staffing shortages to continue over the next year, particularly among court clerks and clerk staff. Court employees are typically working long hours yet still struggling to manage their workloads, respondents said.

The report notes that efficiency continues to be a major challenge for courts. While respondents say they are handling higher caseload volumes with decreasing backlogs, they are also more likely to report increases in case delays and continuances.

Technology challenges

The majority of courts have adopted many key automated tools, but technology gaps remain and budgets for additional investments may be limited, even as AI and GenAI increasingly take hold across the legal landscape, respondents said.

At one end of the technology scale, virtual hearings are now widespread: the vast majority of respondents (80%) said their court conducts or participates in them. A similar share feel that virtual procedures can increase litigants' access to justice by letting them attend hearings online without having to miss work or find childcare or caregiving assistance.

In addition, the majority of respondents (58%) said they believe that virtual hearings have improved efficiency by reducing the number of litigants that fail to appear in court, thus easing the need for rescheduling.

At the same time, courts must deal with the same transformational GenAI tidal wave that is facing the rest of the legal ecosystem. The majority of court respondents (55%) rate the rise of AI and GenAI as the most significant trend that they are facing, saying it will have a transformational or high impact on their work over the next five years.

Despite the potential for significant efficiency improvements and time savings, courts have generally been slow to adopt AI and GenAI. Currently, only 17% of respondents said their court was using GenAI. In fact, the vast majority of respondents (70%) said their court currently does not allow employees to use AI-based tools for court business. Further, just 17% said their court was planning to adopt GenAI over the next year. This means most courts may not currently have plans or strategies to evaluate or implement GenAI-driven tools and solutions. Given that respondents see GenAI as having transformational potential to streamline their operations and improve efficiency, this raises major questions about courts' timelines for adoption.

Generational workforce shifts

The ability of courts to incorporate GenAI into their operations also intersects with another major trend facing the courts, the report shows: the generational shifts in workforce and leadership that are taking place. As Baby Boomers and Gen Xers leave the workforce, Gen Zers are entering it, and Millennials are increasingly moving into leadership positions. Respondents rated these shifts nearly as high as the rise of AI and GenAI in terms of their impact on the courts.


Courts must deal with the same transformational GenAI tidal wave that is facing the rest of the legal ecosystem.


This carries major implications for courts' ability to adopt technology to improve efficiency. As Gen Zers increasingly move into the workforce and Millennials into leadership positions, it is significant that both groups are digital natives – people who grew up after the birth of the Internet – so they are very comfortable using technology and may find it easier to manage automated workflows. They are also likely to transition faster and require less training as courts continue to move from manual to automated workflows.

At the same time, they may be resistant to jobs that still rely heavily on manual work, such as entering and managing case information and other data. A requirement to handle high levels of manual tasks could hinder talent recruitment and retention, the report notes.

In all, these factors give even more reason for courts to continue, if not accelerate, their adoption of advanced AI-driven technology.

Looking ahead

The Staffing, Operations and Technology: A 2025 survey of State Courts shows that our nation's courts are facing an unprecedented convergence of major waves of change, especially the far-reaching impacts of both GenAI and generational shifts in workforce and leadership roles. And courts must deal with these changes while continuing to face challenges around managing staff shortages and case backlogs. Resources, including often-limited budgets, must be strategically balanced between current operations and investments in technology that could improve future operations.

Within a few years, courts will likely look and operate much differently than they do today. The question is whether courts will be able to successfully manage these changes, or, if not, continue to face growing struggles with workloads, staffing, backlogs, and delays. At the same time, judges and court professionals hope they can move their courts forward to emerge on the other side of these changes with more efficient, technology-driven operations that will facilitate faster handling of cases and result in improved access to justice for all citizens.


You can access a full copy of the Staffing, Operations and Technology: A 2025 survey of State Courts from the Thomson Reuters Institute and the National Center for State Courts AI Policy Consortium for Law and Courts here.
