AI Governance Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/ai-governance/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

Architecting the data core: How to align governance, analytics & AI without slowing the business /en-us/posts/technology/architecting-data-core-aligning-ai-governance-analytics/ Thu, 12 Feb 2026 19:02:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=69436

Key takeaways:

      • Legacy data architectures can’t keep up with modern demands – Traditional, centralized data cores were designed for stable, predictable environments and are now bottlenecks under continuous regulatory change, rapid M&A, and AI-driven business needs.

      • AXTent aims to unify modern data principles for regulated enterprises – The AXTent framework integrates data mesh, data fabric, and composable architecture to create a data core built for distributed ownership, embedded governance, and adaptability.

      • A mindset shift is required for lasting success – Organizations must move from project-based data initiatives to perpetual data development, focusing on reusable data products and decision-aligned outcomes rather than one-off integrations or platform refreshes.


This article is the second in a 3-part blog series exploring how organizations can reset and empower their data core.

For more than a decade, enterprises have invested heavily in data modernization – new platforms, cloud migrations, analytics tools, and now AI. Yet for many organizations, especially in regulated industries, the results remain underwhelming. Data integration is still slow, regulatory reporting still requires manual remediation, M&A still exposes hidden data liabilities, and AI initiatives struggle to move beyond pilots because trust and reuse in the underlying data remain fragile.

The problem is not effort; it is architecture. Since 2022, the buildup around AI has been something out of science fiction – self-learning, easy to install, worker-displacing, autonomous, even Terminator-like. And while AI may indeed revolutionize research, processes, and profits, the fundamental challenge is not the advancing technology; rather, it is the data used to train and cross-connect these rapidly expanding capabilities.

Most data cores in use today were designed for an earlier operating reality – one in which data was centralized, reporting cycles were predictable, and governance could be applied after the fact. That model breaks down under the modern pressures of continuous regulation, compressed deal timelines, ecosystem-based business models, and AI systems that consume data directly rather than waiting for curated outputs.

So, why is the AI hype not living up to the anticipated benefits? Why is the data that underpinned process systems for decades failing to scale across interconnected AI solutions? The solution is not another platform refresh but a structural reset of the data core itself.

That reset treats data mesh, data fabric, and modern composable architecture as a single, integrated system, aligned to the AXTent architectural framework, which is designed explicitly for regulated, data-intensive enterprises.

Why the traditional data core no longer holds

Legacy data cores were built to optimize control and consistency. Data flowed inward from operational systems into centralized repositories, where meaning, quality, and governance were imposed downstream. That approach assumed stable data producers, limited use cases, human-paced analytics, and periodic regulatory reporting.

Unfortunately, none of those assumptions hold today. Regulatory expectations now demand traceability, lineage, and auditability at all times (not just at quarter-end). M&A activity requires rapid integration without disrupting ongoing operations. And AI introduces probabilistic decision-making into environments built for deterministic reporting, with business leaders expecting insights in days, not months.

The result is a growing mismatch between how data is structured and how it is used. Centralized teams become bottlenecks, pipelines become brittle, and semantics drift. Compliance then becomes reactive, and the cost of change increases with every new initiative.

The AXTent framework starts from a different premise: The data core must be designed for continuous change, distributed ownership, and machine consumption from the outset. Indeed, AXTent is best understood not as a product or a platform, but as an architectural framework for reinventing the data core. It combines three design principles into a coherent operating model:

      1. Data mesh – Domain-owned data products
      2. Data fabric – Policy- and metadata-driven connectivity
      3. Data foundry – Composable, evolvable data architecture

Individually, none of these ideas is new. What is different – and necessary – is treating them as a single system rather than as independent initiatives, as conceptually illustrated below:

Fig. 1: The AXTent model of operation

The 3 operating principles of AXTent

Let’s look at each of these three design principles individually and at how they interact with each other.

Data mesh: Reassigning accountability where it belongs

In regulated enterprises, data problems are rarely technical failures. Instead, they are accountability failures. When ownership of data meaning, quality, and timeliness sits far from the domain that produces it, errors propagate silently until they surface in regulatory filings, audit findings, or failed integrations.

A structured framework applies data mesh principles to address this directly. Data is treated as a product, owned by business-aligned domains that are then accountable for semantic clarity, quality thresholds, regulatory relevance, and consumer usability.

This is not decentralization without guardrails, however. AXTent enforces shared standards for interoperability, security, and governance, ensuring that domain autonomy does not fragment the enterprise. For executives, the benefit is practical: faster integration, fewer semantic disputes, and clearer accountability when things go wrong.
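
To make this concrete, here is a minimal sketch, in Python, of what a domain-owned data product descriptor could look like. The DataProduct class, its fields, and the quality-threshold check are illustrative assumptions, not part of any AXTent specification:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """Hypothetical record that makes a domain's accountabilities explicit."""
    name: str                # e.g., "customer_master"
    owner_domain: str        # business domain accountable for the product
    schema_version: str      # semantic clarity is versioned, not implied
    quality_thresholds: dict = field(default_factory=dict)  # e.g., {"completeness": 0.98}
    regulatory_tags: list = field(default_factory=list)     # e.g., ["GDPR"]

    def meets_threshold(self, metric: str, observed: float) -> bool:
        """Check an observed quality metric against the domain's published threshold."""
        return observed >= self.quality_thresholds.get(metric, 0.0)

product = DataProduct(
    name="customer_master",
    owner_domain="retail-banking",
    schema_version="2.1.0",
    quality_thresholds={"completeness": 0.98},
    regulatory_tags=["GDPR"],
)
print(product.meets_threshold("completeness", 0.95))  # False -> block publication
```

In a real deployment, descriptors like this would live in an enterprise catalog and gate whether a domain may publish a new version of its product.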

Data fabric: Embedding control without re-centralization

However, distributed ownership alone does not solve enterprise-scale problems. Without a unifying layer, decentralization simply recreates silos in new places.

A proper framework addresses this through a data fabric that operates as a control plane across the data estate. Rather than moving data into a single repository, the fabric connects data products through shared metadata, lineage, and policy enforcement.

This allows the organization to answer critical questions continuously, such as the following (a minimal lookup sketch appears after the list):

      • Where did this data come from?
      • Who owns it?
      • How has it changed?
      • Who is allowed to use it – and for what purpose?
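
Here is a minimal sketch of how a fabric-style control plane could answer those questions programmatically, assuming a simple in-memory dictionary standing in for a real metadata service; the catalog structure, field names, and may_use helper are illustrative assumptions, not a real product API:

```python
# Each entry records provenance, ownership, change history, and allowed uses.
catalog = {
    "customer_master": {
        "source_systems": ["crm", "core-banking"],         # Where did it come from?
        "owner": "retail-banking",                         # Who owns it?
        "change_log": ["2026-01-10: added consent flag"],  # How has it changed?
        "allowed_uses": {"marketing": False, "regulatory_reporting": True},
    }
}

def may_use(dataset: str, purpose: str) -> bool:
    """Policy check: is this use of the dataset permitted for this purpose?"""
    entry = catalog.get(dataset)
    return bool(entry) and entry["allowed_uses"].get(purpose, False)

print(may_use("customer_master", "regulatory_reporting"))  # True
print(may_use("customer_master", "marketing"))             # False
```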

In this way, governance is no longer a downstream reporting activity; rather, it is embedded into how data is produced, shared, and consumed. Compliance becomes a property of the architecture, not a periodic remediation effort.

And in M&A scenarios, the fabric enables incremental integration, allowing acquired data domains to remain operational while being progressively aligned, rather than forcing immediate and costly consolidation.

Composable architecture: Designing for evolution, not stability

The third pillar of the AXTent model is a modern data architecture that’s designed to absorb change rather than resist it. Traditional architectures usually rely heavily on rigid pipelines and tightly coupled schemas. These work when requirements are stable, but they may collapse under regulatory change, new analytics demands, or AI-driven consumption.

AXTent replaces pipeline-centric thinking with composable services, including event-driven ingestion and processing, API-first access patterns, versioned data contracts, and separation of storage, computation, and governance.
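
One such composable pattern is a versioned data contract guarding event-driven ingestion. The sketch below is illustrative only; the contract format and field names are assumptions, and real systems typically delegate this work to a schema registry:

```python
import json

# A published contract: downstream consumers depend on this version,
# not on whatever shape the producer happens to emit today.
CONTRACT_V2 = {
    "version": "2.0.0",
    "required_fields": {"account_id": str, "balance": float, "as_of": str},
}

def validate_event(event: dict, contract: dict) -> bool:
    """Reject events that do not satisfy the published contract version."""
    for name, expected_type in contract["required_fields"].items():
        if not isinstance(event.get(name), expected_type):
            return False
    return True

event = json.loads('{"account_id": "A-123", "balance": 1050.25, "as_of": "2026-02-01"}')
print(validate_event(event, CONTRACT_V2))  # True -> safe to process downstream
```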

This approach supports both human analytics and machine users, including AI agents that require direct, trusted access to data. The result is a data core that evolves without constant re-engineering, which is critical for organizations operating under continuous regulatory scrutiny or frequent structural change. AXTent allows acquired entities to plug into the enterprise architecture as domains while preserving context and enabling progressive harmonization.

The architectural compass

This framework exists for one purpose: to provide a practical, business-oriented methodology for building a reusable, decision-aligned, compliance-ready data core. It is neither a product nor a platform. It is a vocabulary backed by building blocks, patterns, and repeatable workflows – one that executives can use to organize data around outcomes instead of systems.


Overall, the AXTent model prioritizes data clarity over system modernization, decision alignment over model sophistication, continuous compliance over intermittent remediation, reusable data products over disconnected pipelines, and enterprise knowledge codification over one-off integration work.

In essence, organizations should move away from project thinking and toward perpetual data development, in which every output contributes to a compound knowledge base. This is the mindset shift the industry has been missing as it prioritizes AI engineering over business purpose.


In the final post in this series, the author will explain how to shift from “build and operate” to “build and evolve” via a data foundry. You can find more blog posts by this author here.

2026 AI in Professional Services Report: AI adoption has hit critical mass, but now come the tough business questions /en-us/posts/technology/ai-in-professional-services-report-2026/ Mon, 09 Feb 2026 13:05:35 +0000 https://blogs.thomsonreuters.com/en-us/?p=69356

Key findings:

      • AI adoption accelerates across professional services – Organization-wide use of AI in professional services almost doubled to 40% in 2026, with most individual professionals now using GenAI tools, and many preparing for the next wave of tools such as agentic AI.

      • Strategic integration and measurement lag behind usage – While AI use is widespread, only 18% of respondents say their organization tracks ROI of AI tools, and even fewer measure AI’s impact on broader business goals such as client satisfaction or revenue generation.

      • Communication around AI use remains inconsistent – While most corporate departments want their outside firms to use AI on client matters, less than one-third are aware whether their firms are doing so. Meanwhile, firms report receiving conflicting instructions from clients about AI use, highlighting a need for clearer dialogue and shared strategy around AI adoption.


Over the past several years, AI usage within professional services industries has come into focus. As we enter 2026 in earnest, the early adoption phase of generative AI (GenAI) has come and gone. Today, most professionals have experimented with some form of GenAI, many organizations have integrated GenAI into their workflows – and now a number are preparing for the next wave of technological innovation, such as agentic AI.

Given this, the question for professionals and organizational leaders has now become: What will be AI’s long-term impact on my business?


To delve into this question further, the Thomson Reuters Institute has released its 2026 AI in Professional Services Report, which takes a broad view of current usage and planning, sentiment toward AI, and its business impact across legal, tax & accounting, corporate functions, and government agencies. Based on a survey of more than 1,500 respondents across 27 countries, the report finds a professional services world that has embraced AI’s use but is continuing to evolve its business strategy around implementation.

For instance, the report shows that organization-wide AI adoption almost doubled to 40% in 2026, compared to 22% in 2025 – and for the first time, a majority of individual professionals reported using publicly available tools such as ChatGPT. Additionally, a majority of respondents said they feel either excited or hopeful about GenAI’s prospects in their respective industries, and about two-thirds said they felt GenAI should be applied to their work in some manner.

At the same time, however, many are exploring GenAI tools without much guidance as to how that use will be quantified or measured. Only 18% of respondents said they knew their organization was tracking the return on investment (ROI) of AI tools in some manner, roughly the same proportion as last year. And even among those tracking AI metrics, most are tracking mainly internally focused, operational metrics; only a small proportion analyzed AI’s impact on their organization’s larger business goals, such as client satisfaction, external revenue generation, and new business won.


This slow move to strategic thinking also impacts client-firm relationships. Although more than half of both corporate legal departments and corporate tax departments want their outside firms to use AI on client matters, less than one-third said they were aware whether their firms were doing so. From the firm standpoint, meanwhile, confusion reigns: 40% of firm respondents said they have received instructions from some clients to use AI on matters and from other clients not to.

Indeed, about three-quarters of corporate respondents and firm respondents agreed that firms should be taking the lead in starting these conversations around proper AI use. Yet these discussions have not yet happened en masse. “Firms are reluctant – they claim it would compromise quality and fidelity,” said one U.S.-based corporate chief legal officer. “I think they are threatened by it.”

All the while, technological innovation progresses ever more quickly. This year’s version of the report measures agentic AI use for the first time, finding that 15% of organizations have already adopted some type of agentic AI tool. Perhaps more interesting, however, is that an additional 53% report their organizations are either actively planning for agentic AI tools or considering whether to use them, suggesting an even more rapid pace of adoption than we’ve already seen with the speedy rise of GenAI.


Overall, the report makes it clear that most professionals understand that change, driven by AI in the workplace, is undoubtedly here. Even compared with 2025, a higher proportion of professionals said they believe that AI will have a major impact on jobs, billing and revenue, and even the need for legal or tax & accounting professionals as a whole. The percentage of lawyers who called AI a major threat in terms of the unauthorized practice of law rose to 50% in 2026 from 36% in 2025.

Further, this report paints the picture of a professional services world that has embraced AI, begun to see its impact, and realized that it will have broader business and industry implications than previously imagined. As a result, the time for professionals and organizations to begin planning in earnest for an AI future has already arrived.

As a corporate general counsel from Sweden noted: “We cannot keep up with the modern-day corporations’ demands unless we also develop and adapt our way of working.”

You can download a full copy of the Thomson Reuters Institute’s 2026 AI in Professional Services Report here.


Responsible AI use for courts: Minimizing and managing hallucinations and ensuring veracity /en-us/posts/ai-in-courts/hallucinations-report-2026/ Wed, 28 Jan 2026 10:51:10 +0000 https://blogs.thomsonreuters.com/en-us/?p=69181

Key insights:

      • AI usage in courts needs verifiable reliability – Unlike in other fields, errors and hallucinations caused by AI in a court setting can create due-process issues.

      • Skepticism is professional responsibility – Judges’ interrogation of AI sources and accountability concerns are vital guardrails for minimizing these problems.

      • Governance over perfection – Courts and legal professionals should focus on systematic management of AI hallucinations through clear protocols, human oversight, and mandatory verification to ensure veracity.


AI hallucinations have become one of the most urgent and most misunderstood issues in professional work today. As generative AI (GenAI) moves from an interesting experiment to common usage across workplace infrastructures, these issues can cause significant problems, especially for courts and the professionals and individuals who use them.


Today, AI can be used in everything from assisted research to guided drafting of documents, court briefs, and even court orders. With the development of tools supported by GenAI and agentic AI, the very infrastructure of professional work has shifted to include these offerings.

Yet in most business settings, a wrong answer is an inconvenience: it requires minor corrections and has minimal impact. In the justice system, a wrong answer can be a due-process problem, which underscores the need for courts and legal professionals to ensure that their AI use is verifiably reliable when it counts.

At the same time, the direction of travel is clear: AI adoption isn’t a fad we can simply wait out, and it isn’t inherently at odds with high-stakes decision-making. Used well, these tools can reduce administrative burden, speed up access to relevant information, and help court professionals navigate large volumes of material more efficiently. The real question is not whether courts will encounter AI in their workflows, but how they will define responsible use, especially in moments when accuracy isn’t a feature but the foundation.


“Whether you are a judge [or] an attorney, credibility is everything, particularly when you come before the court.”

– Justice Tanya R. Kennedy, Associate Justice of the Appellate Division, First Judicial Department of New York


To examine these issues more deeply, the Thomson Reuters Institute has published a new report, Responsible AI use for courts: Minimizing and managing hallucinations and ensuring veracity, which frames hallucinations not as a sensationalistic gotcha but as a practical risk that must be managed with policy, process, and professional judgment. The report also features valuable insight on this subject from judges and court stakeholders who are evaluating AI today in the real operating environment of legal proceedings, courtroom expectations, and the daily administration of justice.

This perspective is essential. Technical teams can explain how models generate language and why they sometimes produce confident-sounding errors. However, judges and court staff can explain something equally important – what accuracy actually means in practice. In courts, accuracy isn’t just about getting the gist right; rather, it’s about precise citations, faithful characterization of the record, correct procedural posture, and language that withstands scrutiny. As the report points out, relied-upon hallucinated information isn’t merely bad output; it can lead to a distortion of justice.

Managing AI as professional responsibility

Crucially, the report reflects that judicial skepticism about AI is not simple technophobia – it’s professional responsibility. Judges are trained to interrogate sources, weigh credibility, and understand the downstream consequences of errors. Judges may ask: What is the provenance of this information? Can I reproduce it independently? And who is accountable if it’s wrong? These questions aren’t barriers to innovation; indeed, they are the guardrails that this innovation requires.

What emerges is a pragmatic middle ground that embraces the upside of AI use in courts while treating hallucinations as a predictable occurrence that can be managed systematically. Rather than concluding “AI hallucinates, therefore AI can’t be used,” the more workable conclusion is “AI can hallucinate, therefore AI outputs must be designed, handled, and verified accordingly,” likely with the help of other advanced tech tools. As the report points out, courts don’t need a perfect AI; rather, they need repeatable protocols that keep human decision-makers in control and keep the record clean.
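
As an illustration of what such a repeatable protocol could look like in code, the sketch below flags every citation in an AI draft that cannot be verified against a trusted source before a human resolves it. The TRUSTED_CITATIONS set and verify_citation lookup are hypothetical stand-ins for a real citator service, not part of any court’s actual workflow:

```python
# Stand-in for an authoritative citator or docket database.
TRUSTED_CITATIONS = {"Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)"}

def verify_citation(citation: str) -> bool:
    """Stand-in for querying an authoritative citation database."""
    return citation in TRUSTED_CITATIONS

def review_draft(citations: list[str]) -> list[str]:
    """Return citations a human reviewer must resolve before filing."""
    return [c for c in citations if not verify_citation(c)]

draft_citations = [
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Doe v. Roe, 999 U.S. 1 (2030)",  # hallucinated-looking cite
]
print(review_draft(draft_citations))  # flags the unverifiable citation
```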

As the report ultimately demonstrates, managing hallucinations in courts isn’t about chasing perfection; it’s about protecting veracity. It’s about using the right advanced tech tools to build workflows in which the technology consistently supports the truth-finding process instead of quietly eroding it. And it’s about recognizing that in the legal system, responsibility doesn’t disappear when a new tool arrives – it becomes even more important to ensure the new tool doesn’t erode that responsibility either.


You can download a full copy of the Thomson Reuters Institute’s Responsible AI use for courts report here.

AI literacy: The courtroom’s next essential skillset /en-us/posts/ai-in-courts/ai-literacy-court-skillset/ Fri, 12 Dec 2025 14:04:03 +0000 https://blogs.thomsonreuters.com/en-us/?p=68733

Key insights:

      • AI literacy is role-specific and essential – Courts need to move beyond general AI conversations and focus on concrete, role-based strategies that support AI readiness.

      • Balanced AI adoption is crucial – The goal for courts is not to automate blindly but to adopt a balanced, AI-forward mindset.

      • Ongoing education and adaptability are vital – AI literacy requires continuous learning and upskilling that focus on building managers’ comfort and capability to lead their teams.


For today’s court system, AI literacy is quickly becoming a core professional skill, not just a technical curiosity. In the recent webinar AI Literacy for Courts: A New Framework for Role-Specific Education, panelists emphasized that courts need to move from holding abstract conversations around AI to enacting concrete, role-based strategies that support judicial officers and court professionals throughout their AI journey.

The webinar is part of a series from the AI Policy Consortium for Law and Courts, a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for AI literacy is great

Courts are being urged to treat AI literacy as a foundational pillar of AI readiness, not as an optional add-on training. AI literacy is “the knowledge, attitudes, and skills needed to effectively interact with, critically evaluate, and responsibly use AI systems,” the NCSC’s presenter said, adding that it cannot be one-size-fits-all: “The important thing to know about the definition of AI literacy is it’s going to be different for every single personnel role.”

Building a serious AI literacy strategy therefore begins with defining what success looks like for each role, and then aligning recruitment, training, and evaluation practices around those expectations.


You can find out more here.


To support this, policy and security concerns must come before (and alongside) AI use. Webinar panelist Griffin, Chief Human Resources Officer at Los Angeles County Superior Court, described how the court started by clarifying the sandbox for safe AI use. First, the court’s generative AI (GenAI) policy set parameters, such as prohibiting staff from using court usernames or passwords to create accounts on external AI tools. Only then, after those guardrails were in place, did the training lean into the technical how-to of writing prompts and experimenting with tools. Policy development and skills development happened in tandem, Griffin explained.

To make space for learning in an already overloaded environment, her team lit a creativity spark with managers first, she said, giving them concrete use cases – such as drafting performance evaluations, coaching documents, and job aids. As a result, these managers, in turn, feel motivated to create room for their teams to experiment.

This, Griffin added, is all anchored in a clear, people-centered message from leadership: “We have a lot of work to do, and not enough people to do our work – and so AI is going to help us serve the court users and help us provide access to justice.”


You can register here.


How to make AI “work”

During the webinar, the conversation repeatedly returned to what lawyers and court professionals are actually doing with AI tools today and where they’re getting stuck. Leonard, Founder of Creative Lawyers, noted that despite AI’s rapid advance, many professionals are still at a surprisingly basic stage in how they use it. For example, Leonard said that users tend to treat AI as a one-way question-and-answer box instead of using it as an expertise extractor that asks them targeted questions. To combat this, she suggested that users ask the AI to pose questions back to them to draw out their expertise.

When thinking about how to interact with AI generally, users should treat it like a smart colleague and ask themselves (and implicitly the AI) these questions, which the sketch after the list folds into a reusable prompt:

      • What information would this colleague need from me to do the assignment well?

      • What questions would I want them to ask me?

      • What specific task do I actually want them to execute?

      • What feedback would I give them to make the work product better?
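
As a rough sketch, those questions can be folded into a reusable prompt template. The wording and the build_prompt helper below are illustrative assumptions, not a template endorsed by the panelists:

```python
def build_prompt(task: str, context: str) -> str:
    """Ask the AI to interview the user like a skilled colleague before drafting."""
    return (
        f"You are assisting with this task: {task}\n"
        f"Background I can provide: {context}\n"
        "Before drafting anything, ask me the questions a skilled colleague "
        "would need answered to do this assignment well. Then wait for my "
        "answers and propose a draft I can give feedback on."
    )

# Example use, echoing a use case mentioned above:
print(build_prompt(
    task="draft a performance evaluation for a court clerk",
    context="annual review; focus on accuracy and public service",
))
```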

As the webinar examined, leadership messaging needs to be explicit. AI is being adopted to augment human work, reduce burnout, and expand access to justice – not to eliminate jobs, particularly in courts that are already understaffed. For example, LA Superior Court has been meeting with unions around its GenAI policy, repeatedly affirming that it is not using AI to replace court staff, Griffin said. Instead, the court demonstrates use cases and shows how AI can offload repetitive tasks, making the remaining work more meaningful.

At the same time, managers themselves often feel unprepared to talk about AI, which is why building their comfort and capability – especially around explaining where the court is going – is becoming a critical managerial competency, panelists noted.

Supporting the journey

To support all of this, the TRI/NCSC AI Policy Consortium has built practical training resources that courts can plug into their own strategies. For example, the consortium offers curated materials mapped to specific roles such as judges, court administrators, court reporters, clerks, and interpreters. Courts can use these resources as targeted supplements when rolling out AI projects to better prepare staff members who are just starting their AI journey.

Complementing this is a sandbox environment in which staff can safely experiment with GenAI tools without sending data back to the open internet. This gives judges and staff a place to practice prompt-writing, ask follow-up questions, and give feedback, all while staying inside a controlled environment and within the bounds of most court AI policies.

Looking ahead, the panelists argued that the most durable “future skills” may not be specific technical proficiencies but human capabilities, such as adaptability, creativity, critical thinking, and change leadership. In fact, HR leaders across industries largely agree that they cannot predict exactly which tools or skill sets will dominate in a few years, Griffin said; instead, courts should focus on helping managers craft better prompts, interpret outputs critically, and lead their teams through repeated waves of technological change.

Leonard similarly urged legal organizations to move beyond basic adoption use cases – such as document summarization and email refinement – and start exploring more creative, transformative uses that could redesign legal services and court systems to be more responsive to the public.

Finally, the webinar stressed that AI literacy cannot be a one-and-done initiative. Guidance published by the NCSC encourages courts to treat AI projects as catalysts for revisiting their overall literacy strategy and HR practices.


You can find out more about the work that NCSC is doing to improve courts here

Reducing invisible burdens in court administration through automation /en-us/posts/government/reducing-burdens-automation/ Thu, 02 Oct 2025 17:18:59 +0000 https://blogs.thomsonreuters.com/en-us/?p=67716

Key insights:

      • Automation and AI can significantly alleviate administrative burdens in courts – Court professionals may be able to reclaim up to nine hours per week over the next five years, according to research.

      • Courts are under pressure to modernize and meet the expectations of digital natives – Courts are facing a generational shift in expectations that is pressuring them to adopt more modern tools and technology.

      • Successful implementation of technology requires a thoughtful and collaborative approach – Collaboration between judges, administrators, and IT staff is essential, and external-facing tools should prioritize user experience to reduce complexity and increase access to justice.


Bringing automation and AI-powered tools to data entry, case-filing processing, and updating court management systems over the next few years could help court professionals use their time more efficiently, according to Staffing, Operations and Technology: A 2025 Survey of State Courts from the Thomson Reuters Institute and the National Center for State Courts (NCSC).

Indeed, the report found that alleviating this invisible administrative burden could help professionals reclaim as much as nine hours per week over the next five years. As private sector law firms embrace automated technology, public sector legal departments and courts risk falling further behind.

The time for innovation is now, as caseloads mount, case complexity increases, and retirements and staffing shortages continue to plague courts. Fortunately, administrative professionals are beginning to warm up to targeted automation efforts and AI-powered tools to expand their efficiency.

The cost of administrative burdens

A report produced for the Administrative Conference of the United States defines administrative burdens as “onerous experiences people encounter when interacting with public services.” And unfortunately, many people do not access the rights or benefits to which they are entitled because of these onerous administrative processes within stressful, frustrating, and overwhelming government systems. In a legal context, administrative burdens hinder access to justice. In fact, low-income Americans did not receive any legal help, or received insufficient help, for 92% of the problems that impacted their lives, according to the Georgetown study.

Recent years have seen a rise in self-represented litigants in civil cases. Given this, processes that were designed for navigation by attorneys and legal and court professionals need to be simplified to reflect the needs of non-professional court users. A survey on experiences with state courts in particular notes that court users strongly desire courts to be easier to navigate. Even among those who had previous court experience, 50% indicated that it was a little hard or very hard to navigate court paperwork and the steps in a case.

A modernizing court workforce

Millennial-aged workers are the most prevalent court users today and in the foreseeable future. As digital natives, this generation expects modern tools when navigating the legal system.

A survey commissioned by the NCSC last year found that large percentages of registered voters support increased use of AI chatbots to answer court FAQs (with 63% saying this), using AI to translate court documents into other languages (64%), and using AI to break down complex legal jargon and make information more accessible (71%).

Further, this lack of modernization in courts has consequences for judges and court professionals as well. Court staff are feeling strained by their workload, and many report simply not having enough time to catch up. More than half (57%) of court professionals and administrative staff reported not having enough time, according to the Staffing, Operations and Technology report.

The report also found that 91% of court staff report working more than 40 hours each week, with about one-third of them working more than 46 hours per week.


Given all this, the pressure courts are under to modernize is understandable; however, it should be looked at as an impetus for improvement: Courts face a once-in-a-generation opportunity to reimagine their workflows.

Resources available to fund statewide technology improvements

Several states leveraged one-time federal funding to make major investments in court technology. The Kentucky Administrative Office of the Courts (AOC), for example, used $38 million to update a two-decade-old in-house case management system. (The AOC is the operations arm of the state court system, which supports 3,000 employees and more than 400 elected justices, judges, and circuit court clerks.) Kentucky’s AOC selected a vendor platform that offers online tools for judges, circuit court clerks, and attorneys, as well as a tool for pro se litigants.

On the other hand, Arkansas opted to build its own in-house court management system, as the cost was significantly less than vendor rates. Initial estimates to upgrade a legacy system were $70 million, and Arkansas was able to build its own for $20 million, funded through an appropriation from the state legislature. Indeed, Arkansas has been a leader in court technology for more than 20 years and signed contracts for automated document redaction more than a decade ago.

The state courts’ new customized cloud-based solution incorporates multiple vendors, and the development process (now two years underway) has launched Contexte Case Management, an internal-facing tool, and a public-facing case information tool. All circuit courts and nearly half of district and juvenile courts have already implemented the system.

Moving forward, slowly and thoughtfully

While private sector legal technology has advanced quickly, courts face unique challenges that often make off-the-shelf solutions an inadequate fit. Investment in court modernization must balance the efficiency gained with fiscal responsibility around such investment.

Successful implementation in courts will take cultural, procedural, and budgetary shifts. Internally, collaboration between judges, administrators, and IT staff is essential; and externally, any public-facing tools should center around user experience and ease-of-use, perhaps offering a dedicated customer service team to guide users so that technology reduces complexity rather than adding to it.

The real return on investment in court systems will be realized when all users can access justice more easily, equitably, and reliably.


You can download a full copy of the Staffing, Operations and Technology: A 2025 Survey of State Courts report from the Thomson Reuters Institute and the National Center for State Courts AI Policy Consortium for Law and Courts here.

Law at the speed of innovation: Thinking beyond our systems and structures /en-us/posts/ai-in-courts/law-at-the-speed-of-innovation/ Thu, 11 Sep 2025 16:48:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=67512

Key insights:

      • The rapid pace of AI development is testing the limits of a legal system built for process and deliberation – The shortcomings create uncertainty and challenges for both lawmakers and innovators.

      • Specialized tribunals might offer a solution – They could provide a forum for faster, more precise guidance and directives, while also ensuring decisions are grounded in expertise and applied to concrete facts.

      • A balanced approach is needed to regulate AI – If AI is constrained too much, it risks stifling innovation and reducing access to justice; however, if AI is not constrained enough, it could create serious risks.


AI is moving fast – faster than our legal system is built to move. Our courts and legislatures are designed to be contemplative, cautious, and process-driven. When technology moves at lightning speed, this deliberative process creates a gap between the questions being raised and the answers we have available. Lawmakers are left to catch up and patch up, and innovators are discouraged by the legal uncertainty.

To bridge that gap, we have to think outside the box. If we don’t, AI-related disputes could pile up. Indeed, some are already in court, and claims of algorithmic bias, AI-generated harm to vulnerable individuals, and discriminatory outcomes from automated systems are growing. Meanwhile, courts and legislatures are working at their necessary pace – and it’s not fast. This means AI systems will continue to be developed and deployed without clear rules or remedies.

This isn鈥檛 the first time technology has outpaced the law. The industrial revolution forced lawmakers to confront unprecedented questions of workplace safety and labor rights. Then-existing laws, systems, and structures were ill-suited to address those issues, and we were forced to adapt.

The same can be said about the AI revolution. Our laws, systems, and structures will have to adapt. If they don鈥檛, we may face serious risks that may one day become existential risks. On the other hand, if we do too much to constrain AI, we risk slowing biomedical advances, missing educational opportunities, reducing access to justice, compromising national security, and limiting the prosperity that might flow from these technologies. The balance is delicate, and the consequences are profound.

Learning from established models

In the wake of the industrial revolution, Congress created the Occupational Safety and Health Review Commission (OSHRC) as part of the Occupational Safety and Health Act of 1970. The OSHRC, an Article I independent federal administrative agency court, adjudicates disputes between the U.S. Secretary of Labor and employers, when the Occupational Safety and Health Administration (OSHA) issues citations on behalf of the Secretary for violations of the Act.


The industrial revolution forced lawmakers to confront unprecedented questions of workplace safety and labor rights. Then-existing laws, systems, and structures were ill-suited to address those issues, and we were forced to adapt.

The same can be said about the AI revolution.


Under this structure, federal administrative law judges (ALJs) issue decisions utilizing a structured system that ensures fairness and due process. The ALJs do not have regulatory or enforcement authority 鈥 that rests with the Secretary of Labor and OSHA 鈥 but their decisions have significant impact. They interpret the law, resolve disputes, and guide employers, employees, and OSHA in their understanding and application of the law.

There may be lessons to draw from this and other models such as the U.S. Tax Court and the Court of Appeals for Veterans Claims, because specialized tribunals that respond to emerging needs have proven effective.

The need for specialization & speed

What sets AI apart from other challenges is the combination of speed and reach.

It was less than three years ago, in November 2022, that ChatGPT captured public attention. In just a few months, ChatGPT reached 100 million monthly active users, becoming the fastest-growing consumer application in history. Within a year, companies began harnessing AI for scientific and medical breakthroughs, and today over a billion people use AI chatbots on a regular basis. The conversation has also shifted from basic text-based models to fully agentic AI systems. Some even warn that artificial general intelligence is on the horizon, if we’re not careful.

This trajectory underscores a simple truth: even if foundational models never achieve the kind of artificial general intelligence that some proponents predict, the systems we have today are already very powerful, adaptable, and certain to be leveraged in ways their developers may never have intended. Moreover, the speed of AI development means we only have a short window within which to build the right guardrails.

Yet, our current systems make it nearly impossible to put these guardrails in place at the pace of innovation. There is no comprehensive federal legislation addressing AI, and efforts in that direction have met resistance out of concern that broad rules could stifle innovation. In the absence of a unified framework, states have begun to act on their own, creating a patchwork of laws that address some issues while leaving significant gaps in others. At the same time, individual disputes that could shape the legal landscape move slowly through our backlogged courts, where judges – generalists by design – must divide their attention among a wide range of cases and cannot realistically conduct detailed inquiries into every emerging technology that comes before us.

Beyond our current structures

A tribunal with the right expertise and built for efficiency might be a useful tool for building the right guardrails.

Certainly not every AI-related dispute requires AI expertise. However, when a dispute concerns the guardrails on AI development, deployment, and use, adjudicators would benefit from learning how these systems work, where they fail, and the societal risks they pose. Training on ethical frameworks, human-centered design, the evolving legal and regulatory landscape, and the dynamics of AI innovation could help adjudicators appreciate both the benefits and risks of imposing certain limitations. Additionally, with some fluency, adjudicators would be better positioned to ask the right questions, recognize when expert testimony is needed, and issue decisions that are not only legally sound, but technologically informed.


Even if foundational models never achieve the kind of artificial general intelligence that some proponents predict, the systems we have today are already very powerful, adaptable, and certain to be leveraged in ways their developers may never have intended.


To be sure, designing jurisdiction for a specialized tribunal would require great care. The sheer breadth of the AI revolution may make a federal agency adjudication structure – like the OSHRC – inadequate. A specialized tribunal could instead be built on a consent model, which would allow it to handle private disputes, with mutual agreement from both parties, for expert and speedy resolution. Or a specialized tribunal could serve as a resource for existing courts, which could certify technical questions for non-binding resolution, allowing them to tap into the tribunal’s expertise without surrendering their authority to decide cases.

There is certainly much to consider, including the potential drawbacks of a specialized tribunal. But this is not a call for a set system or framework. Instead, it is an invitation to think beyond the current limits of our system and ask, “What will it take for our legal system and institutions to keep up with AI advances? And how can we mitigate major risks while continuing to promote and support innovation?”

If we do not explore solutions beyond the limits of our current system, we risk: i) delays that allow unsafe AI practices to advance unchecked; ii) fragmentation in the absence of comprehensive legislation; and iii) overbroad, one-size-fits-all regulations that could inhibit critical innovation.

Looking for balance

A specialized AI tribunal might offer something for everyone. For those who worry that regulation is too sparse or too slow, it would provide a forum for faster, more precise guardrails and guidance. For those concerned that sweeping regulations would limit innovation or miss the mark, a specialized tribunal could deliver narrowly tailored decisions, rather than broad, over-inclusive rules.

A specialized tribunal might also spare us the impossible task of trying to legislate for every hypothetical future problem. Instead, issues could be resolved as they come 鈥 with decisions grounded in expertise and applied to concrete facts.

No framework is perfect, but when the pace of change is unprecedented, the competing interests critical, and the consequences profound, we need fresh ideas that bring all concerns to the table.

Judge Braswell wishes to thank Judge Patrick Augustine for his OSHRC insights.


You can find out more here.

AI & human rights: The importance of explainability by design for digital agency /en-us/posts/sustainability/explainability-by-design-digital-agency/ Thu, 15 May 2025 14:56:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=65829 AI systems increasingly shape access to rights, services, and opportunities, which makes the ability to understand, evaluate, and respond to AI-driven decisions a structural requirement for exercising human rights. This condition, called digital agency, ensures that individuals retain autonomy and accountability in environments governed by automated systems.

Rosenberg, a recognized AI governance and data protection expert and Co-Founder of Women in AI Governance, calls for the formal recognition of digital agency as a fundamental human right. Securing digital agency requires embedding explainability into AI systems at the design level, making system outputs understandable, accessible, and actionable. Without digital agency, individuals are exposed to systems that decide without visibility, affect without consent, and deny the possibility of meaningful redress.




Today, many AI systems operate without meaningful explanation, creating an explainability gap that prevents individuals from recognizing or responding to the impact AI-driven decisions may have on their lives. This unchecked deployment of opaque AI can systematically displace individual agency, creating environments in which decisions are made without visibility or contest, Rosenberg warns.

Current legal frameworks, including the European Union’s AI Act, attempt to mitigate systemic risks through classification and documentation requirements. However, they do not secure operational explainability for individuals affected by AI-driven decisions. Rosenberg argues that recognizing digital agency as a human right is essential to correcting this failure. She advocates embedding explainability into AI systems as a condition for preserving autonomy within increasingly automated governance structures.

Preserving digital agency through explainability

AI governance frameworks often conflate transparency with explainability, although the two concepts serve different functions. Transparency provides limited information about a system’s existence or purpose, while explainability ensures that individuals can understand how decisions are made, what influences them, and how they can respond. Most legal frameworks mandate transparency but do not compel explainability, leaving individuals without the means to navigate or challenge AI-driven outcomes.

Embedding explainability by design requires systems to support functional understanding from the outset. Rosenberg defines this threshold as minimum viable explainability: ensuring that AI systems make influencing factors and decision outcomes intelligible enough for individuals to assess, understand, and act upon meaningfully, if necessary. Systems designed without explainability embed opacity as a structural feature, cutting individuals off from seeing how decisions affect them, questioning outcomes, and seeking correction when needed.
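
As a minimal sketch of what minimum viable explainability could look like in practice, the code below bundles a decision outcome, its main influencing factors, and a redress path into one structured payload. The field names and the explain_decision helper are assumptions for illustration, not terms defined by Rosenberg or any regulation:

```python
import json

def explain_decision(outcome: str, factors: list[tuple[str, float]], appeal_url: str) -> str:
    """Bundle the outcome, its main influencing factors, and a redress path."""
    payload = {
        "outcome": outcome,
        "influencing_factors": [
            {"factor": name, "weight": weight} for name, weight in factors
        ],
        "how_to_contest": appeal_url,  # actionable: a route to meaningful redress
    }
    return json.dumps(payload, indent=2)

print(explain_decision(
    outcome="loan_application_denied",
    factors=[("debt_to_income_ratio", 0.62), ("credit_history_length", 0.21)],
    appeal_url="https://example.org/appeals",  # hypothetical endpoint
))
```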

Mandating minimum viable explainability ensures that individuals retain agency within AI-mediated environments. Digital agency must serve as the foundation of regulatory frameworks because, without such agency, legal protections remain abstract and unenforceable, Rosenberg explains.

Learning from the history of privacy

The human right to privacy was recognized internationally in 1948, but it did not meaningfully shape digital regulation before systemic harms emerged. AI systems now operate in a similarly underregulated space. Rosenberg argues for anchoring AI regulation to digital agency, warning that without this foundation, systemic harms will again outpace the regulatory response.

In the area of privacy, for example, the United Nations’ Special Rapporteur role helped consolidate regulatory momentum already underway. A Special Rapporteur for AI & Human Rights would be tasked with accelerating global recognition and protections that have yet to fully emerge. Establishing this role requires a UN Human Rights Council resolution that has not been formally proposed, reflecting the delayed global response to technologies already impacting individual rights.

Privacy protections emerged reactively, and digital agency protections must be built proactively to prevent further erosion of autonomy. Recognizing digital agency as a human right is a crucial step to ensure that digital agency protections are established before dependencies erode autonomy beyond repair.

Enshrining digital agency

As AI evolves, protecting human agency becomes imperative. However, recognition must come first: enshrining digital agency as a human right will create the foundation for systemic accountability.

To get there, we need to pursue a three-part strategy that includes:

      1. Recognizing the right to digital agency – Concerned individuals and organizations need to advocate for the establishment of a UN Special Rapporteur for AI Governance and the formal recognition of digital agency as a protected human right. Advocates should also mobilize support from human rights organizations, policymakers, and legal experts to initiate and advance a UN Human Rights Council resolution affirming digital agency as fundamental to autonomy and dignity.
      2. Establishing minimum viable explainability standards – Next, supporters should define standards for AI systems that set clear guidelines for what individuals need to preserve agency. International collaboration is essential to develop these standards and integrate them into certification and compliance processes.
      3. Mandating explainability by design – Requiring that new AI systems embed explainability from the outset, ensuring usability and intelligibility, is a critical step. Regulatory frameworks must ensure that explainability becomes a baseline condition for AI deployment, with voluntary leadership strengthening early adoption.

Today, AI is reshaping the systems that govern individuals, determine rights, and affect autonomy. Protecting digital agency ensures that individuals can understand, navigate, and challenge the decisions that shape their lives. Securing digital agency now is essential to ensuring that technology strengthens human dignity rather than eroding it.


You can find more information here about where current regulations are going concerning AI and its impact

Scaling Justice: Bridging the justice gap with advanced technology /en-us/posts/ai-in-courts/scaling-justice-bridging-justice-gap/ Fri, 02 May 2025 14:14:02 +0000 https://blogs.thomsonreuters.com/en-us/?p=65687

This article is part of an ongoing series titled Scaling Justice, by Maya Markovich and others in consultation with the Thomson Reuters Institute. This series aims to explore not only how justice technology fits within the modern legal system, but also how technology companies themselves can scale as businesses while maintaining their access-to-justice mission.


Millions of people worldwide face barriers when seeking legal help, and this justice gap disproportionately affects low- to middle-income individuals and members of historically excluded communities.

The United States ranks 107th of 142 countries in affordability of legal support, and approximately 92% of low-income individuals receive inadequate or no legal assistance for their civil legal problems. Worse yet, in 75% to 95% of civil cases, at least one party is unrepresented, leaving more than 120 million people each year navigating the US legal system without support. And these broad numbers mask the racial and socioeconomic disparities that pervade the legal system, resulting in unjust outcomes and an overrepresentation of those without access to legal services in the justice system.

Moreover, our criminal and civil justice systems feed into each other in a negative loop for many people. An unpaid fine can lead to crushing debt and criminal liability, while the wait for public representation for a criminal offense can prevent a person from dealing with life-changing personal issues like maintaining housing, employment, or financial stability. The ripple effect of this extends beyond individuals to impact families, communities, and entire demographics.

Factors contributing to the justice gap

Certain factors within society – both currently and in the past – may contribute to the continual rise in the justice gap, including:

Economic disparities

Financial constraints are one of the most significant barriers to accessing justice. Attorney fees and court costs make legal representation inaccessible to most of those who need it, leaving them no alternative but to navigate complex legal issues alone. Financial burdens associated with legal disputes can also deter people from filing, defending, or following through with legitimate legal claims.

Implicit bias within the legal system

Implicit bias refers to unconscious attitudes and stereotypes that can affect decision-making within the legal system. Indeed, marginalized groups such as ethnic minorities, women, and immigrants often face systemic discrimination in both civil and criminal justice systems across the world. This bias can lead to unequal treatment, harsher penalties, and diminished trust in the legal system.

In addition, an entrenched bias towards pro se litigants impacts their experience of the legal system and can also influence the outcome. Unrepresented parties have reported that even when their documents are flawless, the statute is clear, and a letter citing the relevant law is included in their filings, some judges, prosecutors, and clerks assume they are incorrect when the other side is represented by counsel. When those represented by legal counsel are presumed more likely to have meritorious claims, unrepresented parties are denied justice. And while pro bono services and legal aid organizations are powerful drivers of justice equity, they are chronically under-resourced and overwhelmed.

Geographic limitations and digital divides

Geographic location can significantly impact access to legal services, especially for individuals who live in remote or underserved areas. These so-called legal deserts have limited availability of legal professionals and court facilities, often forcing community members to expend significant resources to travel for legal assistance. The digital divide further complicates these challenges – individuals with unreliable internet access or lower digital literacy are often restricted in their ability to access support for their legal problems.

How tech can bridge access to justice

Fortunately, justice tech – which encompasses an expansive array of digital tools, including online legal platforms, document automation software, virtual courtrooms, AI-powered legal assistance, and much more – has emerged as a way to address these disparities by helping expand access to legal resources, streamline processes, and improve outcomes.

In addition, several specific legal areas can be well served by justice tech tools:

      • Civil justice – Document-preparation tools can help individuals navigate issues including family law, tenant rights, and small claims cases.

      • Criminal justice – Litigants can access digital solutions that improve interactions with law enforcement, offer support for incarcerated individuals, and facilitate post-incarceration reintegration.

      • Family law – Digital platforms can transform how individuals navigate family and estate matters, such as divorce, child custody, bankruptcy, and trust management.

Other tools also offer comprehensive litigation support for unrepresented litigants to help manage their cases through guided legal education, document preparation, and case strategy. And some digital platforms support entrepreneurs, immigrants, and civil rights advocates by providing legal information, compliance tools, and resources for addressing discrimination and harassment.

This explosion of justice tech tools and platforms even offers the opportunity to reduce the likelihood of recidivism by connecting returning citizens with training, employment, housing, and other services – or by streamlining expungement and record-sealing to help users overcome legal obstacles that often hinder employment, housing, and reintegration.

Conclusion

The justice tech sector is actively transforming outdated and costly legal systems, while helping individuals overcome financial, geographic, and systemic barriers to level the playing field. Indeed, AI-driven solutions can provide more affordable and streamlined legal support – provided they are developed and delivered with a laser focus on benefiting consumers, mitigating consumer harm, and ensuring transparent and unbiased results.

In this way, justice tech is not just an instrument for efficiency. It also represents a fundamental shift in how we approach legal access. With a culture of innovation, mission focus, and accountability, justice tech can and should be part of the solution for a more accessible and fair legal system.


You can find out more about the impact of justice tech here

Legal training in the age of AI: A leadership imperative /en-us/posts/ai-in-courts/legal-training-ai-leadership/ Wed, 30 Apr 2025 12:34:35 +0000 https://blogs.thomsonreuters.com/en-us/?p=65728 AI has the potential to transform work across various industries. Stanford University's most recent AI Index Report reflects significant advancement in AI capabilities, as well as increased adoption rates.

The legal profession is no exception. The recent 2025 Generative AI in Professional Services Report from the 成人VR视频 Institute shows that legal and other professionals are increasingly positive about AI, with more respondents (55%) reporting excitement and hope than hesitation and concern. Almost two-thirds of respondents (62%) said they believe AI should be used for work, and most (89%) said they can think of specific use cases. The report also suggests that as AI tools become more integrated into professional workflows, they'll reduce costs and free up professionals for higher-value activities. Indeed, 95% of respondents said they believe AI "will be central to their organization's workflow within the next five years."

To me, this suggests AI has staying power.

As AI becomes more integrated into legal workflows, it stands to reason that the traditional training ground for lawyers and other professionals will narrow. Legal tasks such as basic research, first drafts of memos, cite-checking, and document review – once rites of passage for junior attorneys – will be largely automated. This is especially true with the rise of agentic AI systems, which go beyond prompt-based assistance to operate far more autonomously. Some predict these digital agents will eventually absorb entire tasks and orchestrate full workflows.

In conversations I鈥檝e had with law students and early-career attorneys, I sense a healthy degree of unease about this shift. They often ask:

"Will entry-level jobs even exist in five years?"

"Am I learning the right things in the age of AI?"

"How will I learn if AI is doing the work I'm supposed to learn from?"

As new lawyers search for employment opportunities, they may be looking for more than just a paycheck – indeed, they may want stability and growth potential. They may want teams that are investing in the future – not just the future of technology, but of people. They may look for leaders who are committed to training the next generation of lawyers, not replacing them.

What is the role of leaders in this evolving landscape?

Broadly speaking, I see three types of leaders when it comes to AI adoption:

      1. Those who are eager to adopt AI;
      2. Those who are uncomfortable with AI and may resist adoption; and
      3. Those who are curious but unsure where to begin.

While it may seem obvious, organizations and professionals that are AI-resistant risk falling behind as AI advances. But the hyper-enthusiastic, tech-first adopters may also risk missing something crucial: the parallel responsibility to develop the next crop of lawyers. If the pursuit of efficiency overshadows professional development, we could lose something fundamental.

How do we cultivate good judgment, for example, in an era in which AI could reduce opportunities for practical experience? If we optimize everything for speed and scale, how will the next generation of lawyers develop basic skills?

Leaders who understand the evolving landscape will be most effective in forecasting professional development needs, adapting training strategies, and attracting top talent that can excel alongside AI.

AI for productivity and for training

Could the problem created by AI also be solved by AI? With intention and vision, leaders might deploy AI not just to do the work, but to help teach it. These methods could include:

AI-powered simulations – What if simulations could help junior lawyers develop and test skills in a controlled environment? For instance, a new litigator might practice cross-examining a virtual witness, with AI offering real-time critiques on tone, evidence use, and questioning style. Or perhaps a junior lawyer could conduct a virtual negotiation with an AI tool that offers real-time feedback. Exercises like this could help build confidence and competence.

AI draft assistants that coach – Could drafting assistants go beyond offering clauses, and instead offer interactive drafting experiences? For example, AI programs that explain the reasoning behind certain clause structures, ask questions about the lawyer's intent, or pose hypotheticals to help a lawyer think through the implications of different language, might help a junior lawyer prepare an agreement, and understand why certain language does or does not work.

AI as a Socratic partner – AI company Anthropic recently released an educational model for academic settings that asks probing questions instead of offering an immediate answer. Could a similar model train new lawyers, too? For example, before beginning legal research, a junior lawyer might engage with an AI tool designed to challenge their assumptions, test their reasoning, and probe for gaps in their logic. In other words, instead of jumping straight into legal research, the associate might begin by discussing their plan with an AI tool that pushes them to think more critically about the questions, helps them carefully consider the structure of their approach to the research, and perhaps identifies potential pitfalls. This kind of dialogue could foster deeper learning and sharper thinking.
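In practice, this Socratic pattern amounts to little more than a carefully framed system instruction plus a short dialogue loop. The sketch below is a minimal, hypothetical illustration in Python – the `ask_model` helper, the prompt wording, and the canned responses are all assumptions standing in for whatever chat interface an organization actually uses, not a reference to any particular product.

```python
# A minimal sketch of a "Socratic partner" loop. ask_model(system, messages)
# is a hypothetical stand-in for whatever chat API is actually in use.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic legal research coach. Do not answer the question or "
    "perform the research. Instead, ask one probing question at a time that "
    "challenges the associate's assumptions, tests their reasoning, or "
    "exposes gaps in their research plan."
)

def socratic_session(ask_model, opening_plan: str, turns: int = 3) -> list:
    """Run a short pre-research dialogue and return the full transcript."""
    messages = [{"role": "user", "content": opening_plan}]
    for _ in range(turns):
        question = ask_model(SOCRATIC_SYSTEM_PROMPT, messages)
        messages.append({"role": "assistant", "content": question})
        answer = input(f"\n{question}\n> ")  # the junior lawyer responds
        messages.append({"role": "user", "content": answer})
    return messages

if __name__ == "__main__":
    # Canned questions so the sketch runs without any external service.
    canned = iter([
        "What is the precise legal question you are trying to answer?",
        "Which jurisdiction controls, and why?",
        "What authority would most undermine your position?",
    ])
    socratic_session(lambda system, msgs: next(canned),
                     "I plan to research whether our client can enforce the NDA.")
```

The design point is that the model never supplies the research itself; it only returns questions, so the learning work stays with the associate.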

If tools like this don't exist already, they will soon.

The tools themselves, however, are not as important as a commitment to training and development. In a profession that's contemplating its future in the face of AI, legal organization leaders who demonstrate a genuine desire to invest in the next generation of legal professionals will undoubtedly set themselves apart.

Looking forward

I don't have all the answers – I can't imagine anyone does. However, we can at least keep asking the right questions. How might AI transform the practice of law? What tasks might be most susceptible to automation? What aspects of our work lend themselves to automation, but should nevertheless remain in the hands of a human? What skills will be important in a future in which AI is increasingly integrated into our workflows?

Clearly, now is the time for critical questions and great legal leadership – the kind that fosters a culture of continuous learning and that recognizes training is not a cost but an investment.


You can download a full copy of the 2025 Generative AI in Professional Services Report here

Risk assessment and ethical guardrails: A how-to guide for courts to implement AI responsibly /en-us/posts/ai-in-courts/guide-ai-implementation/ Thu, 10 Apr 2025 16:25:45 +0000 https://blogs.thomsonreuters.com/en-us/?p=65481 According to the 成人VR视频 Future of Professionals 2024 report, more than three-quarters (77%) of professionals surveyed said they believe that AI will have a high or transformational impact on their work over the next five years.

Interestingly, there has been a shift in sentiment toward AI: professionals are moving from their initial fears – of using AI, or of AI eliminating legal industry jobs – to an increasingly optimistic view of AI as a transformative force in their professions.


Framework for responsible AI

Even amid this trend of optimism, using AI responsibly is critical if courts are to take advantage of the opportunities AI brings while mitigating risks and addressing individuals' concerns. More specifically, the integration of AI into court systems demands a comprehensive ethical framework to ensure justice is served while public trust is maintained.


As Lam, Director for Responsible AI Strategic Engagements at 成人VR视频, emphasized in a webinar hosted by the 成人VR视频 Institute (TRI) and the National Center for State Courts (NCSC): "AI systems should be designed and used in a way by courts that promotes fairness and avoids discrimination." That commitment rests on an ethical AI framework, which includes:

      • Privacy and security that serve as foundational elements, requiring robust systems with proper data protection measures. Courts must implement encryption, secure storage protocols, and establish rigorous access controls to safeguard the highly sensitive information they manage.
      • Transparency represents another critical pillar, combined with thorough testing and monitoring of AI tools and continuous communication with the communities they impact.
      • Human oversight stands as perhaps the most crucial element because it ensures that AI augments rather than replaces human judgment. “Human oversight of AI is vital to prevent bias,” particularly in contexts in which decisions impact individuals’ rights and liberties, Lam explains.
      • Societal impact means that courts must conduct impact assessments to understand the consequences for their constituents.

Assessing risk is a necessary part of any ethical framework

Implementing an ethical framework for AI in the judicial system is crucial to ensure that judges, court administrators, and legal professionals can use technology competently and ethically. To help with this process, the Governance & Ethics working group (created as part of the TRI-NCSC partnership) recently published a white paper that essentially acts as a how-to guide for judges, court staff, and legal professionals as they seek to responsibly use AI in courts.

As the paper makes clear, the central premise of an ethical framework involves understanding the levels of risk – from minimal to unacceptable – associated with AI and then applying these insights to determine appropriate use cases.


Join us for the next TRI-NCSC webinar on April 16


Assessing risk and classifying each tool's impact into low, moderate, high, and unacceptable categories results in a structured framework based on the tool's application and the context in which it is used. For example, classifying risks through the lens of their impact would include:

Low-risk applications – These include predictive text and basic word processing, which require only minimal human oversight, with a supervisor intervening when necessary.

Moderate-risk applications – These include tools used for drafting legal opinions, which demand more direct human involvement to ensure accuracy and reliability.

High-risk applications – These can significantly impact human rights and necessitate stringent human oversight to prevent errors and biases.

Unacceptable-risk applications – Lastly, tools posing unacceptable risks, such as those automating life-and-death decisions, should be avoided altogether.
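To make the tiering concrete, here is one minimal, hypothetical sketch of how a court technology team might encode such a classification in Python. Only the four tier names come from the white paper; everything else – the `RiskTier` enum, the example applications, and the oversight descriptions – is an illustrative assumption, not part of the working group's framework.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the white paper's four categories."""
    LOW = 1           # e.g., predictive text and basic word processing
    MODERATE = 2      # e.g., drafting assistance for legal opinions
    HIGH = 3          # e.g., tools that can impact rights and liberties
    UNACCEPTABLE = 4  # e.g., automating life-and-death decisions

# Hypothetical inventory mapping example court applications to tiers.
APPLICATION_TIERS = {
    "predictive_text": RiskTier.LOW,
    "opinion_drafting_assistant": RiskTier.MODERATE,
    "self_help_chatbot_for_litigants": RiskTier.HIGH,
    "automated_sentencing": RiskTier.UNACCEPTABLE,
}

def required_oversight(application: str) -> str:
    """Return the human-oversight posture implied by an application's tier."""
    tier = APPLICATION_TIERS.get(application)
    if tier is None:
        # Unclassified tools must be risk-assessed before any use.
        return "Unclassified: conduct a risk assessment before use."
    return {
        RiskTier.LOW: "Spot-check outputs; intervene only when necessary.",
        RiskTier.MODERATE: "Require direct human review of every output.",
        RiskTier.HIGH: "Mandate stringent, documented human oversight.",
        RiskTier.UNACCEPTABLE: "Do not deploy.",
    }[tier]

print(required_oversight("opinion_drafting_assistant"))
# -> Require direct human review of every output.
```

Even a toy inventory like this makes the framework auditable: every tool a court uses appears in one place, alongside its tier and the oversight it requires.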

However, Daniel Linna, Director of Law and Technology Initiatives at Northwestern Pritzker School of Law & McCormick School of Engineering and fellow member of the TRI-NCSC Governance & Ethics working group, cautions that risks need to be monitored no matter what level of application is being used.

"Even among tools that may seem low impact, there may be in your particular context usages where there could be impacts, errors that could actually create greater harm than may have been recognized early on," Linna explains. "So, when you're engaging in these discussions, you should be talking about, 'Well, how is this tool going to be used? What do we think its accuracy is going to be? And if it makes an error, is it going to be a low impact error?'"

Community engagement is key to responsible AI

The use of AI in courts presents both opportunities and challenges, and understanding these is crucial for responsible implementation. This is why community engagement is critical for maintaining public trust in the judicial system, particularly when implementing high-risk AI solutions.

By involving the community in the decision-making process and being transparent about responsible AI implementation, courts can ensure that the public understands the benefits and risks of AI solutions and can trust the judicial system to use these technologies responsibly.

As one member of the TRI-NCSC Governance & Ethics working group, who is responsible for technology and fundamental rights at Microsoft, explains: "We have an opportunity with use of AI in the courts to be leaders in the criminal justice system around transparency, community engagement, and ensuring that the community is part of the conversation." Indeed, the ethical concept of transparency reinforces the importance of building and maintaining the trust of court constituents, and courts must be open about their use of AI technology, especially in high-risk areas that impact individual liberties.

In addition, responsible implementation of AI helps to avoid other ethical pitfalls, such as:

Overreliance on AI systems – This is one of the significant pitfalls that can lead to a lack of human oversight in critical decision-making areas. "We can't use these tools as if in a deterministic way," Linna cautions, adding that users need to realize that any AI-provided answer is simply "the output you got that one time that you tried it with that specific prompt." Having a human in the loop is important to address the fact that AI tools based on probabilistic models do not always yield consistent results.

Privacy issues – Breaches of privacy are a critical concern for courts. Indeed, sensitive legal data must be protected to maintain public trust and meet ethical obligations.

Presence of biases – Further, AI can amplify existing biases if not carefully managed, leading to unfair outcomes. More broadly, understanding the difference between probabilistic and deterministic tools is essential for judicial professionals to use AI effectively while safeguarding justice and fairness.

Looking ahead at the promise of AI in courts

As the judicial system continues to evolve, AI can help bridge gaps for self-represented litigants, provide language support, and ensure faster case resolutions, all while maintaining the integrity and fairness that underpin the legal system. However, it is crucial for courts to approach AI integration with caution, ensuring that ethical frameworks guide its use in order to prevent bias, protect privacy, and uphold public trust.

"Judges and court administrators should be empowered to use technology competently and consistently with ethical obligations to best serve the public," says the Administrative Director of the Courts of the Idaho Supreme Court, also a member of the TRI-NCSC AI Governance & Ethics working group. "The checklist provided as part of the white paper is a practical tool that can be used to ensure those who work in the courts are meeting this goal."

And this also helps to ensure AI is a tool to advance justice while safeguarding the fundamental rights of individuals.


You can find out more about how courts are using AI-driven technology here
