Human in the Loop Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/human-in-the-loop/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

The Human Layer of AI: How to build human rights into the AI lifecycle
/en-us/posts/sustainability/ai-human-layer-building-rights/ (Mon, 24 Nov 2025)

Key takeaways:

      • Build due diligence into the process: Make human-rights due diligence routine from the decision to build or buy through deployment by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on: Use practical methods to identify risks early by engaging end users and running responsible foresight workshops and "bad headlines" exercises.

      • Use due diligence to build trust: Treat due diligence as an asset, not a compliance box to tick, by using it to de-risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) to ask for emotional help and support in grieving and coping during difficult times. "Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself," says Poynton, co-Founder & Principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle, from the decision to build or buy through deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. In fact, they first emerged in the content moderation work used to train frontier LLMs and are now showing up elsewhere. For example, the data enrichment workers who refine training data and the data center staff who power these systems are most likely to face labor risks. Often located in lower-income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, and these can further undermine rights to health and political participation. Likewise, design choices often can translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising pattern in the field exacerbates the risk: people increasingly use AI for therapy-like support and disclose issues related to emotional crises and self-harm. This intimacy widens product and policy obligations, which include age-aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That's why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the lifecycle of AI, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Primarily, they need to answer the question, "What happens if this technology gets in the hands of a bad actor?"

From there, the process demands an analysis of severity, which assesses the scale, scope, and remediability of potential harm, alongside the likelihood of each use. The final step involves evaluating current controls across supply chains, model design, deployment, and use phases to identify gaps.
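To make that triage concrete, here is a minimal sketch in Python of how a team might rank intended and unintended uses by severity and likelihood. The scoring scales, field names, and example use cases are hypothetical illustrations, not a published due-diligence methodology.

```python
# Hypothetical sketch: rank AI use cases by severity x likelihood.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    scale: int          # how many people could be affected (1-5)
    scope: int          # how gravely rights could be impacted (1-5)
    remediability: int  # how hard the harm is to undo (1-5, 5 = irreversible)
    likelihood: int     # how probable the harm is (1-5)

def priority(u: UseCase) -> int:
    """Severity (scale + scope + remediability) weighted by likelihood."""
    return (u.scale + u.scope + u.remediability) * u.likelihood

uses = [
    UseCase("chatbot offers crisis-support advice", 4, 5, 5, 3),
    UseCase("model output reused to target individuals", 2, 5, 4, 2),
]

# Close control gaps on the highest-priority uses first.
for u in sorted(uses, key=priority, reverse=True):
    print(f"{u.name}: priority {priority(u)}")
```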

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to put out minimally viable products, accompanied by competitive pressure, can eclipse robust governance, yet early due diligence may prevent costly pullbacks and bad headlines. Article One's Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early "ensures that when it does launch, it has the trust of its users," she adds.

How to embed safeguards without slowing teams

The most efficient path to translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires the "engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products," Poynton explains. More specifically, this includes:

Identifying unexpected harms: One of the most critical, yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, "What are some issues that we may not be considering from the perspectives of accessibility, trust, safety and privacy?" Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad headlines exercise that can be used to anticipate front-page failures. Then, ship with these protections in place, pre-launch.

Implementing concrete controls: Embedding safety-by-design should cover both content and contact, a lesson from gaming in which grooming risks require more than just filters. Build age-aware and self-harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse-response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking: Crucially, frame due diligence as an asset rather than a liability. "Make your product better and ensure that when it does launch, it has the trust of its users," Poynton adds.

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI's environmental footprint is a human rights issue. "There is a human right to a clean and healthy environment," Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here

The false comfort of AI engineering: Building the reusable enterprise
/en-us/posts/technology/ai-engineering-building-reusable-enterprise/ (Thu, 20 Nov 2025)

Key takeaways:

      • Shifting from engineering to architecture: Focusing solely on building better AI models and engineering solutions leads to isolated, non-reusable outputs. Instead, organizations should build AI into the broader enterprise, emphasizing reusable, machine-readable intelligence that integrates with business operations and data structures.

      • Regulation as opportunity for reusability and efficiency: Regulatory frameworks are not just compliance burdens; they also are catalysts for sustainable AI. By mandating standardized, machine-readable data, these regulations force organizations to design systems for reuse, enabling operational efficiency and scalable innovation.

      • Reusable enterprise is the path to sustainable reinvention: The future of AI leadership lies in building adaptable, reusable data and AI infrastructures. When standardized data, AI models, and regulatory compliance reinforce each other, organizations can continuously reinvent themselves, support multiple business outcomes from the same information assets, and achieve compound returns on their investments.


Across industries, executives are confronting an uncomfortable truth: AI projects are delivering outputs, not outcomes.

For years, organizations have poured time and capital into the mechanics of AI: the algorithms, the computation power, the data pipelines, and the engineering teams to support them. Yet results remain uneven. Models keep getting larger, but lasting, reusable business value hasn't followed.

The problem isn't the math; it's the mindset.

Too many enterprises have tried to engineer AI into existence instead of architecting it into the enterprise. The focus has been on perfecting models, not integrating them into the broader data and operational fabric of the business. The assumption has been that a technically superior model naturally creates a competitive edge. It doesn't.

Without consistent governance, shared definitions, and reusable data structures, every AI initiative becomes its own isolated experiment. One line of business builds a credit-risk model. Another develops an environmental, social, and governance (ESG) classifier. A third deploys a generative assistant for customer support. Each team moves fast, but none build on each other's work. The result is a proliferation of proofs of concept: impressive on paper but disconnected in practice.


For years, organizations have poured time and capital into the mechanics of AI: the algorithms, the computation power, the data pipelines, and the engineering teams to support them. Yet results remain uneven.


And this fragmentation carries a financial cost. Every new model adds complexity: new pipelines, new monitoring requirements, and additional governance checkpoints. These systems rarely scale together, and as integration demands grow, executives find themselves in a paradox: they make massive investments in AI infrastructure yet see declining agility and uncertain ROI.

The AI engineering mindset has optimized the parts of a production solution set, not the whole. In general, it has produced models that predict, but not organizations that learn.

In short, the AI engineering mindset has reached its limit, a sign that AI is entering sustainable growth cycles. Many leaders are beginning to realize that they don't need more AI engineers; rather, they need system designers who can embed intelligence into reusable business frameworks, all while navigating a regulatory environment increasingly defined by machine-readable data standards such as the Financial Data Transparency Act (FDTA) and Standard Business Reporting (SBR).

Regulation as catalyst, not constraint

At first glance, FDTA and SBR may appear to be just another layer of regulatory complexity. They are not. In fact, they represent one of the most powerful architectural opportunities available to organizations today.

By mandating machine-readable data standards, these frameworks force companies to design for reuse. They turn what once felt like a compliance exercise into an infrastructure strategy, one that connects regulatory requirements directly to operational efficiency. Build once. Reuse often.

For decades, compliance has been treated as a cost of doing business. Under FDTA and SBR, it can become the scaffolding of reinvention. Machine-readable, standardized data provides the foundation for models that are verifiable, shareable, and reusable across domains. Reporting ceases to be an afterthought and becomes a living data layer that fuels forecasting, stress testing, and product innovation.
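As a minimal sketch of what "build once, reuse often" can mean in code, the Python below shows one standardized, machine-readable record serving both a regulatory report and an internal analytics input. The field names and report shape are hypothetical illustrations, not an actual FDTA or SBR schema.

```python
# Hypothetical sketch: one standardized record, two downstream uses.
import json
from statistics import mean

# A machine-readable filing record (illustrative fields, not a real schema).
filings = [
    {"entity": "ExampleBank", "period": "2025-Q3", "capital_ratio": 0.142},
    {"entity": "ExampleBank", "period": "2025-Q4", "capital_ratio": 0.138},
]

def regulatory_report(records: list[dict]) -> str:
    """Reuse 1: serialize the standardized records for a regulator."""
    return json.dumps(records, indent=2)

def stress_test_feature(records: list[dict]) -> float:
    """Reuse 2: the same records feed internal forecasting and analytics."""
    return mean(r["capital_ratio"] for r in records)

print(regulatory_report(filings))
print(f"Average capital ratio for the stress-test model: {stress_test_feature(filings):.3f}")
```

The design point is that neither function reshapes the data: because the record is standardized once, every new consumer is a reader, not another pipeline.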

When viewed through this lens, regulation isn't an obstacle; it's the blueprint for sustainable AI. It forces clarity, consistency, and interoperability, qualities every enterprise says it wants, but few achieve voluntarily. Regulation may finally deliver what AI engineering alone could not: the discipline of reusability.

From proofs of concept to proofs of architecture

For most organizations, AI success has been measured by the number of proofs of concept completed, or how fast a model moves into production. However, the real test of maturity isn't how many experiments you run, it's how easily those experiments can be scaled, reused, or extended.

That's where the next evolution lies. We are now shifting from proofs of concept to proofs of architecture. And that means the question leaders should be asking isn't, "Did it work once?" but "Can it work again, and with half the effort?" Only when a single domain's data can serve multiple regulatory, compliance, and analytical purposes can the enterprise start to gain compound returns on its information assets.


When viewed through this lens, regulation isn't an obstacle; it's the blueprint for sustainable AI. It forces clarity, consistency, and interoperability, qualities every enterprise says it wants, but few achieve voluntarily.


This approach turns data from a static resource into a dynamic capability. AI is no longer something you deploy; rather, it's something you design for reuse.

Engineering adaptability

Organizations that embrace this shift are learning to engineer adaptability rather than one-off innovation. Their data and AI systems act like interchangeable components, each capable of supporting new regulations, mergers, or market disruptions without starting from scratch.

Some industry examples of this development include:

      • Financial services: Stress-testing data used for regulatory compliance can also inform pricing analytics and liquidity simulations, reducing cycle time between audit and strategy.
      • Healthcare: Patient outcome models built for quality reporting can be reused to predict staffing needs or optimize clinical supply chains, extending beyond compliance and into operations.
      • Legal and compliance sectors: AI used for document classification under discovery protocols can be repurposed for internal policy audits or ESG disclosure mapping, turning regulatory data into a strategic asset.
      • Manufacturing and supply chain: Sensor and maintenance data initially used for safety reporting can drive predictive production planning and carbon-emission forecasting under emerging sustainability standards.
      • Public sector and critical infrastructure: Data collected for transparency and open-data mandates can be reused to model risk exposure across utilities, cybersecurity, and climate resilience programs.

In each of these cases, the same information infrastructure supports different outcomes. That's the hallmark of a reusable enterprise.

[Chart: AI engineering]

The above chart's interconnected components illustrate how standardized data, reusable AI, and regulatory compliance can reinforce one another to create a continuous cycle of enterprise reinvention: standardized data supports reusable AI, which in turn enhances reporting and regulatory alignment. The result is a virtuous loop that replaces isolated projects with scalable, data-driven reinvention.

A call to reusable leadership

The next phase of digital leadership won't be defined by how sophisticated a company's models are, but instead by how seamlessly those models integrate into decision-making.

The leaders who succeed will be those who align AI investments with evolving regulatory and data standards. Their organizations will speak a common data language in which AI, compliance, and analytics operate within a shared architectural framework.

As FDTA and SBR converge globally, the line between compliance and competitiveness will blur. What once felt like regulatory overhead will become the foundation of reusable intelligence. Reinvention, in this sense, isn't a campaign or initiative; it's a discipline. This is not AI as a project; it's AI as infrastructure and the architecture of continuous reinvention.

For executives navigating 2026's convergence of regulation, consolidation, and automation, the difference between thriving and merely surviving will depend on whether they can build organizations that learn, adapt, and continuously reinvent themselves through data.


You can find more blog posts by this author here

2025 Emerging Technology and Generative AI Forum: Human creativity and feedback drive ethical AI adoption
/en-us/posts/technology/emerging-technology-generative-ai-forum-ethical-ai-adoption/ (Tue, 30 Sep 2025)

Key takeaways:

      • Embrace value, risk, and execution, for good and bad: Professional services firms must weigh the value of AI applications against potential risks, embracing both successes and failures as learning opportunities to improve responsible adoption.

      • Ethical oversight is everyone's responsibility: Ensuring responsible AI use in professional services requires active participation from all members of an organization, not just legal or IT teams.

      • Human creativity and feedback remain essential: While AI can generate ideas and accelerate processes, human judgment, creativity, and continuous feedback provide the proper pathways for ethical decision-making and successful integration.


AUSTIN, Texas – With the professional services world now squarely into the AI era, it's clear that the speed of business is quicker than ever. Clients expect results in hours or even minutes rather than days, while generating documents can happen at the click of a button. Ask a research question, and a machine can intuit what you're looking for with striking accuracy.

Alongside these business changes, however, it's clear that the ethics of technology usage within professional services is shifting just as quickly. "Every time you come and do a talk with a group of people, within four weeks if not sooner, it's changed," says Betsy Greytok, Associate General Counsel in Responsible Technology at IBM. "So, it really does require you to keep on your toes."

Ensuring that AI is used responsibly is even more paramount within professional services than in other professions, given the ethical and regulatory constraints placed on legal, tax, audit & accounting, financial services and risk, and more. During a recent session, A Unified Field: Ethical Considerations amid AI Development and Deployment, at the Thomson Reuters Institute's 2025 Emerging Technology and Generative AI Forum, panelists described an ethical world that should be tackled as a challenge, rather than shied away from as an unsolvable risk.

Or, as Paige L. Fults, Head of School at the AI-centric Alpha School & 2-Hour Learning program, put it: "Not being afraid of replacement, but leaning into repurpose."

Embracing success and failure

John Dubois, the Americas AI Strategy Leader at Big 4 consultancy Ernst & Young, says he regularly gets questions from customers about AI and how they should use it, given that there are new AI applications arising seemingly every day. "The way we describe it is a balance," Dubois explains. "Let's start with value. If we know there's value in something, then we can figure out the risk behind it, then we can figure out how we can execute."

Just as importantly, however, this focus on value, risk, and execution can also aid professional services firms when an AI plan fails. For example, Dubois cites an MIT report from August 2025 that found the vast majority of enterprise AI pilots were failing to deliver measurable returns, often because of flawed integration. Embracing the value, risk, and execution strategy from the beginning not only allows for better chances of success, but even in the event of failure, "we actually have a better shot at mitigating, when it does fall down."

This sort of planning is not limited to just one group, Dubois says, noting that ethical oversight is seen as a key responsibility of everyone in the organization. He explains that E&Y has an internal implementation of OpenAI that has 150,000 distinct users each month. Because of an internal process called SCORE that removes customer data at the source, E&Y's instance of OpenAI is largely clear of customer data, but it's still not perfect.

E&Y has set a culture so that if someone sees proprietary data when using GenAI to develop a proposal or create a PowerPoint, they not only delete the data before use, but work to scrub it from the system entirely. "It is all of our job to ensure that whatever you're putting into that system or extracting out of that system, you're cleansing," Dubois says. "It's not the job of the general counsel, or the risk team, or the IT team, it's all of our job."
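As an illustration of what removing identifying data at the source can look like, here is a minimal Python sketch of a scrubbing filter applied before a prompt reaches a model. SCORE itself is an internal E&Y process and is not public; the patterns below are hypothetical, and production systems would use far more robust entity detection.

```python
# Hypothetical sketch: mask obvious customer identifiers before model use.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN-like numbers
    (re.compile(r"\bClient:\s*\S+", re.IGNORECASE), "Client: [REDACTED]"),
]

def scrub(text: str) -> str:
    """Replace recognizable customer identifiers with placeholders."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Draft a proposal for Client: ExampleCo, contact jane.doe@example.com"
print(scrub(prompt))  # identifiers are masked before the text reaches the model
```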


When it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity.


IBM's Greytok agreed, noting that she's part of an internal review board that examines major AI-related projects for ethical issues. There is a board review at the beginning of the development process to determine how risky a use case is, and then the system will give a response, considerations, and steps. If there is an issue, the board is empowered to stop development, even on a major project.

She drew an analogy to writing a paper in high school, in which there is a marked difference between simply turning in the paper, proofreading your own work, and asking a friend for peer review feedback. "That's what you want, is that disagreement, because that's critical thinking."

She adds: "The researchers sometimes get so excited about what they've discovered that they forget to look at the other side of what can happen. You should want that. You shouldn't be punished for saying, 'Is this the right thing or not?'"

The importance of feedback

Fults says that at the Alpha School, AI is not only baked into the curriculum, it leads it. Students spend just two hours a day on academics, led by AI tools that are supplemented by off-line learning on a variety of subjects by in-person instructors who fill in the gaps that AI is not able to provide.

It's a revolutionary concept but not a static one. Fults notes that "the two-hour learning model has already changed so much since I've been part of the school," and the instructors have a Slack channel on ways to find improvement that receives hundreds of messages a day.

It's through this marrying of human intuition and the possibilities of the technology that Fults says she believes the school has found success and used AI ethically within education. "Even though we have this tool, the human levers, the motivational levers that are happening day to day, actually make it work," she says, insisting that she "can't just hand [the technology] to any school" without the corresponding processes in place.

Dubois and Greytok also call feedback a crucial part of the process in order to overcome AI barriers. Dubois tells the story of a large retailer that bought satellite images to determine footfall within a store. Shoppers, however, felt that was a privacy risk, and the idea was almost scrapped. Then, however, the legal and IT teams worked together to come up with an idea: Can you track clothing, but not faces, to get the same information about where shoppers were going within the store?

"It's a creative workaround to get us to the same thing," Dubois explains. "When you have a constraint, what's a clever way to work around this so we're not taking a brand risk or a compliance risk?"

Indeed, when it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity. AI can provide information and context more rapidly than ever before, but ultimately, professionals themselves will be the ones relied upon to make sure AI is used ethically and responsibly.

"AI is an idea generator," Greytok says. "The solution comes from the human."


You can find out more about how emerging technologies are impacting professional services here

The role of humans: Integrating human judgment in court systems in the AI era
/en-us/posts/government/human-judgment-ai-court-systems/ (Mon, 24 Mar 2025)

Interacting with the court as a pro se litigant is inherently challenging, and it would be significantly more difficult without compassion. While the law is written in black and white, its applications often involve nuanced interpretations.

Relying on an entity that operates strictly within these binary constraints to make complex decisions affecting real lives underscores the necessity of human involvement in the legal system. Therefore, it is imperative that any technology integrated into the judicial process incorporate human oversight, ensuring that regardless of the extent of AI integration, human participation remains essential to respect these nuances and uphold the integrity of the legal proceedings.

The judicial process encompasses various roles, some of which could theoretically be replaced or supported by AI. And this is even more probable when considering generative AI (GenAI) and its ability to complete, with little to no human assistance, tasks that have traditionally been done by people.

While clerks, paralegals, and court administrators might employ AI to direct individuals to necessary information more promptly and consistently, it is conceivable that technology could be developed in which judges, juries, and mediators could be replaced by algorithms capable of analyzing facts and rendering decisions. Further, case workers, forensic experts, and probation officers could utilize AI to standardize pretrial decision-making processes. Transcripts may also achieve higher accuracy if generated by AI rather than traditional court reporters. Of course, all of this raises the question of whether the benefits outweigh the risks.


…It is conceivable that technology could be developed in which judges, juries, and mediators could be replaced by algorithms capable of analyzing facts and rendering decisions.


While each of these options presents certain advantages, it is crucial to maintain human involvement in the judicial process, explains Judge Schlegel of Louisiana's Fifth Circuit Court of Appeal. "The practice of law isn't simply about applying rules to facts," he says. "Similar to intricate political maneuvering, it requires a nuanced understanding, careful navigation of precedent, and the ability to craft arguments that resonate with human experience. It necessitates what lawyers often refer to as the feel of a case, an intuitive grasp of the issues derived from years of experience and critical analysis."

How many humans?

In the current economy and political climate, courts, like many other government departments and agencies, are faced with the dilemma of providing greater service with less fiscal ability. Quite simply, as the number of staff declines, the number of cases remains steady or, in some jurisdictions, increases. Enter the shiny toy of GenAI, having been developed in private industry and seemingly ripe to help the public sector. As adaptations begin, the first question is a compound one: What ought to be done? And how many people are necessary to get it done properly?

The initial step is to ensure a fully staffed IT department and a well-funded budget to develop and implement an effective program within the court system. This requires allocating resources to develop technology that can function properly within government programs. An assessment of this sort will provide a clearer indication of the number of personnel needed to implement those programs that could benefit each system.

Where in the loop?

The application of GenAI in the legal sector is demonstrated, for example, by the chatbot created by the People's Law School in Canada. The chatbot can answer basic legal questions, directing a person to the correct statute, rule, form, or other resource in seconds. Although the People's Law School is not a court system, it examines the functionality of the court system to improve user interaction. Human involvement in the development and testing of such GenAI systems could help ensure that AI supports legal processes while maintaining human oversight.

As the use of chatbots progresses, the outputs they generate require human review and verification. This may involve programming the bots to refer specific issues to humans for resolution and periodic human assessment of the chatbot's output. This places humans at the beginning of the loop, allowing for control of the output.
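A minimal sketch of that referral pattern might look like the Python below: the bot answers routine questions but hands specific, sensitive issues to a human. The topic keywords and messages are hypothetical illustrations, not the logic of any actual court chatbot.

```python
# Hypothetical sketch: refer sensitive questions to a human at the
# beginning of the loop instead of letting the bot answer them.
ESCALATE_TOPICS = ("domestic violence", "self-harm", "emergency eviction")

def handle(question: str, bot_answer: str) -> str:
    """Return the bot's answer, or a referral for sensitive topics."""
    if any(topic in question.lower() for topic in ESCALATE_TOPICS):
        return "This question has been referred to court staff for human help."
    return bot_answer

print(handle("Where do I file a small claims form?", "Use the small claims filing page..."))
print(handle("I am facing an emergency eviction tonight", "..."))
```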

In other instances, AI can be used as a research or drafting aid. In these situations, the AI acts as a paralegal or first-year associate, meaning the human in the loop is a more experienced or well-trained individual. Humans can serve as intermediaries in this process, and it is crucial that they not become complacent with the work performed by the system; they must always rely on their own expertise with the justice system.

Closing the loop

The fear voiced by most people in this process concerns the other instances in which AI can be used in the legal process. For example, one key fear is that AI could become the final arbiter of a case; although that is not a likely outcome, it is one which runs contrary to what most judges want.


In the current economy and political climate, courts, like many other government departments and agencies, are faced with the dilemma of providing greater service with less fiscal ability.


Judge Schlegel notes that every day, courts determine who raises children, whether someone is evicted, and who goes to jail or receives a second chance at life. These aren't abstract data points or business metrics; rather, they're profound decisions that demand empathy, experience, and the kind of nuanced judgment that comes only from years of practice.

To this end, we have to be careful with new iterations of AI, such as agentic AI, which operates autonomously, making decisions and adapting to changes, similar to a human employee, while performing tasks with minimal supervision. Indeed, we have to prevent agentic AI from taking over the final part of the litigation process. The growth of agentic AI alone necessitates an important discussion around maintaining human oversight in AI operations.

Examples of agentic AI include autonomous vehicles, virtual assistants, robotic process automation, AI in gaming characters, industrial robots, and algorithmic trading systems. Although these programs are advancing, their involvement in courts remains a distant prospect, thus far.

Conclusion

There will always be a human involved in the judicial process. From technical support to referral attorneys, human presence is essential to verify the work completed by AI. Therefore, it is crucial to train attorneys not only in their legal professions but also as proficient users of new technologies. Indeed, developing new AI models must prioritize both clarity and user-friendliness; this is not optional, but rather it is imperative for an effective system.


You can find more about how courts are using AI-driven technology here

More than data: AI, law & the indispensable human
/en-us/posts/ai-in-courts/ai-law-indispensable-human/ (Wed, 19 Mar 2025)

A recent post by Thomas Wolf of Hugging Face, challenging certain points made by Dario Amodei of Anthropic, has captured the attention of many. The debate is focused on the future of scientific discoveries, but it holds some relevance for the legal profession.


You can hear more insights from Judge Maritza Dominguez Braswell here


Let’s start with a brief overview of the two posts.

In an essay entitled "Machines of Loving Grace," Amodei describes "a country of geniuses in a datacenter." He envisions very powerful AI that is "smarter than a Nobel Prize winner across most relevant fields," skilled enough to "prove unsolved mathematical theorems," and capable of directing experiments and executing many tasks fully autonomously. Noting the current slow pace of groundbreaking discoveries in biology and medicine, Amodei lands at his central point: "powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years."

He refers to this as the "compressed 21st century" because the progress of the entire 21st century will be possible within a few years. According to Amodei, this progress includes reliable prevention and treatment of nearly all natural infectious diseases, elimination of most cancers, prevention of Alzheimer's, and the potential to double our lifespan. Amodei admits his vision is radical, but believes most people underestimate "just how radical the upside of AI could be."

Wolf then challenges Amodei's vision. In a blog post written in response, Wolf argues that today's systems are fundamentally constrained by their training data. He describes AI models as very "obedient students," but not genius revolutionaries capable of true paradigm shifts. To create a data center of true geniuses, Einstein-level geniuses, Wolf argues, "we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask."

What does this debate say about law?

The Amodei-Wolf debate, while focused on the technological possibilities of AI, led me to a different question: beyond what AI can do, what will we choose for it to do in our legal system?

Even assuming Amodei is right, that AI models will become capable of true genius, do we want those geniuses deployed in all aspects of law? Amodei ponders whether powerful AI could "improve our legal and judicial system by making decisions and processes more impartial[.]" However, impartiality alone doesn't guarantee justice. Can AI, however brilliant, truly understand the human condition and context that shapes legal outcomes? And what relevance do Wolf's observations hold in the legal context? Is there a need for more than just answers? Do we also need legal professionals who ask the questions that "nobody else has thought of or dared to ask"?

Think of Brown v. Board of Education, for example. Thurgood Marshall's triumph was far more than a mechanical application of the law. It was a strategic challenge to decades of entrenched precedent, driven by a profound moral imperative. This required more than information-processing; it demanded the uniquely human courage to confront the status quo and the ingenuity to forge a new path.

The inherent duality of the law

This is where the legal field reveals its inherent duality. On the one hand, our work involves the methodical processing of vast datasets. We collect data (facts, evidence, legal precedent), we analyze this data, and then we generate new data, a process that mirrors the capabilities of generative AI models like ChatGPT and suggests that certain aspects of our work are ripe for automation.

On the other hand, many of our core functions, deeply rooted in human qualities, decisively counsel against AI overreliance. For instance:

      • Judgment & discretion: The law is not a rigid set of rules. Indeed, many of our balancing tests call for the inexact weighing of various factors, requiring the careful exercise of judgment and discretion.
      • Advocacy & persuasion: Legal practice is about representing clients and persuading others. This requires empathy, emotional intelligence, and the ability to connect with human decision-makers, whether a judge, a jury, or opposing counsel. AI might mimic human connection, but it cannot truly achieve it because, of course, it is not human.
      • Adapting to novelty: New technological, economic, and social frameworks require legal professionals to think creatively and adapt to situations with no precedent. Consider the rise of social media and its impact on the collection of evidence. Or the need to adapt established legal and regulatory frameworks to entirely new financial instruments. Current AI is trained on past data and may struggle with the truly new.
      • Careful reasoning & societal attunement: While the legal system must not be swayed by fleeting whims of public opinion, it also must possess the capacity to evolve alongside shared norms. This adaptation requires careful reasoning about justice and fairness; and, in light of changing social structures and values, we need to ensure the law remains both grounded in principle and responsive to societal needs.

Thus, while many of our tasks are amenable to automation, much of our work demands a uniquely human perspective. As AI becomes more capable and integrated into our workflows, the defining question will not be, "What can AI do in the legal profession?" but rather, "What should AI do in the legal profession?"

The Amodei-Wolf debate is a helpful reminder that AI is advancing quickly, and our choices will be critical. We cannot resign ourselves to the inevitability of AI; instead, we must approach it with a clear-eyed understanding of our agency. By defining clear boundaries, establishing ethical frameworks, and carefully integrating AI where appropriate, we can ensure it serves as a powerful instrument for justice, not a force that undermines it.


You can find out more about how courts are using AI to improve efficiency and access to justice here

Chatbots for justice: Building AI-powered legal solutions step by step
/en-us/posts/ai-in-courts/chatbots-for-justice-building-ai-powered-legal-solutions/ (Wed, 12 Mar 2025)

Low-income people in the United States can't afford adequate legal help in 92% of civil matters, and the promise of AI could potentially make legal services more affordable, according to the Legal Services Corporation. In fact, several court systems and nonprofits are demonstrating this promise, a couple of which were recently highlighted in a webinar series hosted by the National Center for State Courts (NCSC).

For example, the People's Law School developed the chatbot Beagle+ to assist people with step-by-step guidance on everyday legal problems. Jackson, Digital & Content Lead at the People's Law School, led the efforts to create Beagle+ with technical assistance from McGrath, Founder of Tangowork. And the Alaska Court System (ACS) used a grant from the NCSC to develop an AI-powered chatbot called the Alaska Virtual Assistant, or AVA. Jeannie Sato, Director of Access to Justice Services at ACS, worked with Martin, CEO and Founder of LawDroid, to develop the tool.

How courts can successfully experiment with AI

Jackson, McGrath, Sato, and Martin all offered their step-by-step guidance on how courts and nonprofits can experiment and use AI successfully within court systems.

Step 1: Determine the problem

When starting a generative AI (GenAI) legal assistance project, it is crucial to first pinpoint the specific legal needs and challenges faced by your target audience. McGrath noted that he sees several common examples, including providing public access to legal information, creating internal resources like bench books for judges, and automating court document preparation.

To properly identify the problem, conduct thorough user research to understand pain points related to accessing and applying legal information. For instance, Martin suggests starting by speaking with court staff. "I think we sometimes get caught up in the excitement about wanting to throw AI at the problem and create a solution," Martin explains. "And there are many use cases, but I think the part that's really important is to meet with your staff, meet with everyone who's being impacted by the burden of work, and then determine, based on that, what is the best choice."

Taking the time upfront to clearly define the problem will help ensure that any AI solution being developed is truly meeting a demonstrated need.

Step 2: Craft a vision

Shifting from problem identification to crafting a vision for the GenAI-powered solution is crucial. The People's Law School's Beagle+ chatbot illustrated this well. "Begin with the end in mind," says Jackson. "When you begin a project, keep in mind what you're trying to achieve and what success looks like because that's going to be different for each person."

Jackson further described how in 2018, the initial vision was to create a chatbot capable of intelligently answering questions about consumer and debt law in British Columbia. Today, while that vision is realized, the ability of GenAI technology to adapt and improve over time necessitates a continuous and evolving vision.

Step 3: Allocate realistic resources

Assessing available resources is crucial before embarking on a GenAI project, with a realistic evaluation considering such factors as existing legal content, technological capabilities, staff expertise and capacity, and budget.

It's important to examine the state of the organization's existing legal information, including its documents and web pages, to determine quality and consistency. Indeed, conflicting information across sources often can confuse GenAI models.

For staff capacity, Sato explains how the ACS started with a small team of people, which included the court administrator, the chief technology officer, a webmaster, and two to three staff attorneys, who were necessary for content review, testing, and feedback. It is not uncommon for an initial project to consume about 30% of each team member's time.

Technological expertise is also a key consideration in resource assessment. In fact, Martin says this underscores the importance of working with a technology partner that can help navigate the different choices and options available, including the need to understand options for AI model selection, vector databases, and embedding strategies. While some may consider self-hosting large language models (LLMs) to reduce costs, the expenses for setup and maintenance often outweigh the benefits compared to using established services like OpenAI.

Financial resources are also a consideration, of course; however, it is worth noting that the cost of OpenAI tokens is often surprisingly low compared to other project expenses. For the creators of Beagle+, for example, using OpenAI's tool has cost no more than $75 per month, according to Tangowork's McGrath.


Courts can explore the possibilities of AI tools in tackling their specific legal challenges by experimenting within


Addressing common concerns

Our experts say that two common concerns often arise when considering the use of GenAI to solve justice gaps: one is the need for multilingual capabilities; and the second is how to handle AI-generated inaccurate information, or so-called hallucinations.

"Advanced LLMs like GPT-4 demonstrate impressive multilingual capabilities and are able to understand and respond in numerous languages on-the-fly without requiring additional training or configuration," explains McGrath. "Multilingual support is a key advantage of modern LLMs, enabling chatbots to serve diverse populations with minimal additional development effort."

However, hallucinations are a significant concern when using LLMs for legal applications. Fortunately, the combination of several advanced strategies can mitigate hallucinations, as the sketch after this list illustrates:

      • First, grounding responses in providing context through techniques like retrieval-augmented generation can help tether outputs to verified source material.
      • Second, careful prompt engineering and relevancy scoring can further constrain responses.
      • And finally, automated checks that compare model outputs to source documents can flag potential hallucinations.
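Here is a minimal sketch of the third strategy: an automated check that compares an answer against the retrieved source passages and flags low overlap for human review. The token-overlap heuristic and threshold are hypothetical illustrations; production systems typically use stronger entailment or citation checks.

```python
# Hypothetical sketch: flag answers that stray from their sources.
def overlap_score(answer: str, sources: list[str]) -> float:
    """Crude fraction of answer words that also appear in the sources."""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(sources).lower().split())
    return len(answer_words & source_words) / len(answer_words) if answer_words else 0.0

def check(answer: str, sources: list[str], threshold: float = 0.6) -> str:
    return "ok" if overlap_score(answer, sources) >= threshold else "flag for human review"

sources = ["A tenant must receive 14 days written notice before an eviction hearing."]
print(check("Tenants must receive 14 days written notice.", sources))            # ok
print(check("Tenants can be evicted immediately without any notice.", sources))  # flag
```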

At the same time, manual expert review by humans, known colloquially as human in the loop, remains crucial, even with automated safeguards in place. Therefore, it is key to periodically sample responses for human verification and focus more intensive review on higher-risk conversations.
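One way to implement that risk-weighted sampling, as a hypothetical sketch: review a small baseline fraction of all conversations and a much larger fraction of those touching higher-risk topics. The rates and topic list below are illustrative only.

```python
# Hypothetical sketch: send a risk-weighted sample of conversations
# to the human review queue.
import random

BASE_RATE = 0.05       # review 5% of routine conversations
HIGH_RISK_RATE = 0.50  # review 50% of higher-risk conversations
HIGH_RISK_TOPICS = ("eviction", "custody", "immigration", "criminal")

def needs_human_review(conversation: str) -> bool:
    high_risk = any(t in conversation.lower() for t in HIGH_RISK_TOPICS)
    rate = HIGH_RISK_RATE if high_risk else BASE_RATE
    return random.random() < rate

print(needs_human_review("Question about eviction notice periods"))
```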

Creating a successful AI-powered chatbot for legal information requires careful consideration of the several steps cited above. By following these actions and staying up to date with the latest developments in AI technology, courts and organizations working to close the justice gap can create effective and responsible chatbots that provide valuable legal information to those who need it most.


You can register here for the upcoming NCSC webinar on March 19, which will explore the

How to navigate ethics for common AI use cases in courts
/en-us/posts/ai-in-courts/navigating-ethics/ (Wed, 30 Oct 2024)

The integration of AI into court proceedings necessitates a careful balance between technological advancement and ethical considerations to best maintain the integrity of the justice system.

Judicial and legal professionals must stay informed about AI’s capabilities and limitations, ensure human oversight (often known as Human in the Loop methodology), address potential biases, and carefully evaluate AI tools for language interpretation, legal research, and transcription to uphold the principles of fairness and due process in the courts.

Codes of conduct help to navigate ethical maze

The adoption of AI into judicial operations requires competence and fairness to uphold the trustworthiness of the court system, says Sachar, Director at the Center of Judicial Ethics at the National Center for State Courts (NCSC). "AI might be scary to some of us, and we don't understand it, but we are required to know about it as lawyers and as judges," Sachar says. "We have ethical responsibilities that are written into our codes."

More specifically, judges have an ethical responsibility to stay informed about technological advancements, including the benefits and risks associated with AI, as outlined in legal and judicial codes of conduct, Sachar explains, adding that this often involves staying informed about how generative AI (GenAI) operates, its drawbacks, and how to mitigate biases, all the while ensuring careful monitoring and supervision of outputs.

Additionally, judges must carefully evaluate AI tools, considering their accuracy, data privacy implications, and the potential impact on judicial fairness. By doing so, they can effectively integrate AI into their work while safeguarding the principles of justice.

Ethical inquiries for GenAI use cases

A recent webinar hosted by the NCSC and the Thomson Reuters Institute as part of their partnership on the AI Policy Consortium analyzed how court and legal professionals should think about ethics in consideration of using AI in court operations around several use cases, which include language interpretation, legal research, and transcription.

Some of the more detailed analysis of the use cases included:

Using AI for language interpretation: The use of AI for language interpretation in courts raises several critical ethical inquiries, some of which include significant deliberations for translation accuracy and reliability, especially for rare languages, and maintaining fairness and avoiding bias. In addition, protecting privacy and data security while adhering to legal requirements for interpreters are some of the ethical factors to contemplate. "Even the best AI interpretation tools will make errors, so courts must establish robust mechanisms of human oversight that can account for these limitations," says Strohm, Lead for Data & Model Ethics at Thomson Reuters. "This could include a panel of multilingual experts to test AI interpretation tools, clear procedures for challenging and correcting AI-generated translations, and establishing processes for giving involved parties the opportunity for post-hoc corrections." This process underscores the vital importance of human review for any AI output, including translations.

Legal research by clerks: Courts and legal professionals can ethically use AI tools, but with important caveats. Any AI-assisted legal research system should utilize Retrieval-Augmented Generation (RAG), so that specific, trusted legal databases and domain-engineered systems, rather than the entire Internet, are consulted when the system generates its outputs, according to Sachar and Strohm. Users must be trained on and understand the drawbacks of AI, including potential biases in training data, and why they must always review any output from an AI system.
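As a minimal sketch of the RAG pattern described here: retrieve passages from a trusted corpus and instruct the model to answer only from them. The toy keyword retrieval, the corpus, and the `generate` stub are hypothetical illustrations; a real system would use vector search over curated legal databases and an actual LLM client.

```python
# Hypothetical sketch: ground legal research answers in a trusted corpus.
TRUSTED_CORPUS = [
    "A motion to dismiss must be filed before the responsive pleading.",
    "Summary judgment is proper when no genuine dispute of material fact exists.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Toy keyword retrieval; real systems use vector search."""
    def score(text: str) -> int:
        return sum(word in text.lower() for word in query.lower().split())
    return sorted(TRUSTED_CORPUS, key=score, reverse=True)[:k]

def generate(prompt: str) -> str:
    """Stub standing in for a real LLM client call."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = ("Answer using ONLY the sources below; if they are silent, say so.\n"
              f"Sources:\n{context}\n\nQuestion: {query}")
    return generate(prompt)

print(answer("When must a motion to dismiss be filed?"))
```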

AI-generated court transcripts: Current ethical frameworks generally do not support fully replacing human court reporters with AI systems. While AI may assist human transcriptionists, it cannot yet match the accuracy and nuanced understanding provided by trained professionals, especially for complex legal proceedings. It is important for courts to maintain transcript accuracy and integrity, protect the privacy of court proceedings, and uphold existing professional standards and certifications for court reporters.

Steps to evaluate GenAI for common use cases

Implementing AI solutions requires careful planning and management to ensure ethical and effective use. The first step is to identify the specific areas where AI can enhance processes within the courts, such as automating repetitive tasks or improving decision-making accuracy.

"Courts should adopt a proactive stance in managing AI risks, from data protection to ethical implementation," says Thomson Reuters' Vice President of Data & Model Governance. "This approach should include rigorous vetting of AI vendors, requiring them to demonstrate robust data measures, transparent AI governance frameworks, and adherence to established ethical standards. By doing so, courts can ensure the responsible use of AI technologies while maintaining the integrity of the judicial process."

Once potential GenAI-assisted tasks are identified, the second step is selecting the right tools, including their compatibility with courts' existing systems. It is also important to consider the ethical implications, such as data privacy and potential biases, and ensure that chosen tools comply with relevant regulations and standards.


Courts should adopt a proactive stance in managing AI risks, from data protection to ethical implementation.


Managing AI tools involves continuous monitoring and evaluation to guarantee that they deliver the expected benefits without any unintended consequences. As a result, the key third step is to establish clear guidelines for AI usage, including human oversight and accountability, as this helps courts maintain control over their AI operations.

Finally, courts also should provide regular staff training and encourage feedback loops among the staff as these can enhance understanding and proficiency in using AI tools. In fact, maintaining open communication channels for feedback ensures that any issues are promptly identified and addressed, leading to ongoing improvements and successful implementation of AI-powered solutions.

The incorporation of GenAI into court proceedings demands thoughtful examination of ethical duties for those presiding over court cases. Remaining knowledgeable about AI's capacities and its limitations and subjecting the outputs of GenAI-driven tools to human analysis, including regularly analyzing GenAI-driven outputs for potential bias, prejudice, and unjust results, are all part of a comprehensive ethical framework.

Utilizing this approach can help courts and court professionals positively impact decision-making and case outcomes, while also maintaining the human element that is so crucial to interpreting laws and administering justice.


You can register for the next upcoming webinar in the TRI/NCSC AI Policy Consortium series here.

Using human-centered design to power AI for contract analysis
/en-us/posts/legal/human-centered-ai-contract-analysis/ (Thu, 12 May 2022)

The incredible developments happening today in artificial intelligence (AI) and natural language processing (NLP) are allowing for an increased sophistication of innovative use in legal tech, regulatory tech, tax & accounting, and corporate work. Further, these innovations are challenging our understanding of knowledge work on one hand, and our understanding of collaboration between AI systems and human experts on the other.

The AI industry has started to use the umbrella term human-centered AI to describe the methods and research questions around such concepts as:

      • humans-in-the-loop systems;
      • how AI features are explained and understood;
      • research on trust and mental models on the interaction with AI systems;
      • balancing human domain expertise and AI analysis;
      • collective intelligence; and
      • collaborative decision-making.

Yet, how can a human-centered approach to design, data science experimentation, and agile development be applied to a real-world use case, such as AI-powered contract analysis?

Empowering contract analysis

Contract analysis itself encompasses various activities around contract review, clause extraction, comparisons of positions, deviation detection, and risk assessment.

In our research, we studied how review and reporting on contracts is structured into finding answers to specific questions, such as which entities are involved in a contract, or what the different parties' obligations are, the answers to which are based on the interpretation and assessment of related legal language. State-of-the-art information extraction methods that apply NLP techniques to identify and extract specific clauses, positions, or obligations can greatly assist such task-driven review.

However, this review and analysis is not a one-way street. While a reviewer benefits from AI assistance, user input such as annotation, acceptance or rejection of AI-powered suggestions, or flagging of potential legal issues can serve as good feedback into the system. Ideally, an end-user might engage in a dialog with the machine that not only speeds up the review but also makes use of such feedback to improve extraction and analysis algorithms.
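A minimal sketch of that feedback loop, in Python: an extractor proposes clauses, the reviewer accepts or rejects each suggestion, and the stored decisions accumulate as labeled data for improving the models. The regex patterns and labels are hypothetical illustrations, far simpler than production NLP extraction.

```python
# Hypothetical sketch: clause suggestions plus reviewer feedback.
import re

CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws of [\w\s]+", re.I),
    "term": re.compile(r"for a term of [\w\s]+", re.I),
}

feedback: list[tuple[str, str, bool]] = []  # (label, text, accepted) pairs become training data

def suggest(contract: str) -> list[tuple[str, str]]:
    """Propose clauses for the reviewer to accept or reject."""
    return [(label, match.group(0))
            for label, pattern in CLAUSE_PATTERNS.items()
            if (match := pattern.search(contract))]

def review(label: str, text: str, accepted: bool) -> None:
    """Record the reviewer's decision as a labeled example."""
    feedback.append((label, text, accepted))

contract = "This lease, for a term of five years, shall be governed by the laws of England."
for label, text in suggest(contract):
    review(label, text, accepted=True)
print(feedback)
```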

While initial definition and design of an AI-powered system ideally starts with an in-depth understanding of the problem space, user needs, and user goals, successful product innovation builds on lean experimentation and co-creation with domain experts and end-users. In this way, we can structure the design process in a participatory, human-centered way that enables various stakeholders to contribute, evaluate and shape the requirements and design of the system.

Moving to lean discovery

Design and AI communities are sharpening their toolkit for problem discovery and solution exploration. Various methods and techniques that have been borrowed from Human Computer Interaction (HCI) and User Experience (UX) also can be applied to design and experiment methods for AI innovation.

First, however, we start with a focus on an understanding of the information flow and the aspects of distributed cognition that occurs between different stakeholders and end users involved in contract review and analysis.

Through shadowing and co-creation workshops, user researchers elicit crucial detail about contract review processes, specific cognitive tasks, and the handling of information, as well as common pain points.

When looking at contract analysis workflows, we might observe core activities in more detail, such as the comparison of a contract under review to guidance and other documents as well as legal professionals鈥 own expertise and knowledge. Lawyers or paralegals might review contracts based on internal documents such as a heads of terms to identify acceptable and unacceptable positions in comparison to a client-specific playbook. They may also compare a contract to some form of a standard or precedent contracts.

Innovation and research teams can benefit strongly from working closely with subject matter experts, such as legal experts on commercial real estate. Getting a grasp of the legalese and terminology involved in the review, as well as the legal weight of specific terms, such as, in a lease review, the difference between putting or keeping a premises in good condition, good repair, or substantial repair, might prove particularly useful for the framing of AI research questions that could help provide the right answers to legal professionals.

Enabling rapid experimentation

Interdisciplinary teams of data scientists, designers, and engineers can explore various alternative solutions and evaluate different aspects of this ongoing process. Data scientists research state-of-the-art AI techniques; engineers explore aspects of production and how algorithms are put into action; and designers evaluate requirements and investigate how to best translate capabilities to the end-user.

A guiding principle for this kind of human/AI collaboration focuses its experimentation on AI-assisted workflows that "keep the human in the loop." As pointed out by Ben Shneiderman, automation while maintaining some level of human control is not a contradiction. While automating tasks such as search for relevant clauses, clause classification, and facts extraction, ideally as much control as possible still resides with the legal professional and end-user of the system. AI features need to be made accessible and comprehensible for a non-technical audience, so that it is still entirely up to the legal professional to decide which suggestions to use in a report or further analysis. Ideally, this process would be easy to use and fall back on more manual workflows and simpler mechanisms, such as simple keyword search or Ctrl-F.

Imagine the notion of a task-driven review that essentially lets the user easily select a number of questions he or she wants to analyze in a contract. Selecting specific review tasks and consciously assigning them to the machine can serve as a mediator, both to explain the capability of the system and to allow an easy interaction between user and underlying AI models. Human-centered design methodology provides a framework to run experiments, making use of mock-ups and semi-functional prototypes that allow end-users to explore data science questions, as well as interaction and display of model output.
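As a minimal sketch of such a task-driven review: each user-selected question maps to an extraction routine that the machine runs over the contract, and the user stays in control of which tasks are assigned. The question names and regex extractors below are hypothetical illustrations.

```python
# Hypothetical sketch: user-selected review questions drive extraction.
import re

def find_parties(text: str) -> str:
    match = re.search(r"between (.+?) and (.+?)[.,]", text, re.I)
    return f"{match.group(1)} / {match.group(2)}" if match else "not found"

def find_term(text: str) -> str:
    match = re.search(r"term of (\d+ (?:months|years))", text, re.I)
    return match.group(1) if match else "not found"

REVIEW_TASKS = {
    "Which entities are involved?": find_parties,
    "What is the contract term?": find_term,
}

contract = "This lease is made between Landlord Ltd and Tenant LLC, for a term of 5 years."
selected = ["Which entities are involved?", "What is the contract term?"]  # the user's choice
for question in selected:
    print(question, "->", REVIEW_TASKS[question](contract))
```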

User testing and productionization

It can be particularly helpful to evaluate any contract analysis system as early as possible, with input from domain experts and legal professionals. Indeed, you might want to focus on the assessment of the quality of AI models, the evaluation of end-user experience, and the perceived quality of the system. Experimental user studies showed a gain in efficiency and increased levels of perceived task support when using an AI-powered system, as compared to a manual workflow. And of course, early user testing with law firms and legal professionals can inform the design and iterations of any new product.

However, it is crucial to leave enough lead time for designers, researchers, and business stakeholders to flesh out the details of the solution and define the requirements in the first place, before they move on to development. With sufficient resources and time, further questions can be explored as the development cycle goes forward.

The idea to "start small and scale up later" is yet another core concept that might help teams focus resources early on. In the context of contract analysis, users might want to focus on specific document review use cases, such as due diligence, re-papering, or contract negotiation; or on specific practice areas or domains, such as real estate, service license agreements, or employment records. Once a system works, it can always be scaled to other use cases, and capabilities can always be added post-launch.

The future of human-centered AI

AI systems offer fantastic opportunities to support and assist professional workflows. It is crucial, however, not to ignore the incredible value of human expertise and professionals' ability to relate information to a broader context and to "connect the dots."

Taking a human-centered approach, we can inform the way forward into a future for knowledge work and professional services that builds on collective intelligence and human/AI collaboration.

For legal tech innovation, this human-centered approach and a focus on systems that "keep the human in the loop" seems particularly appropriate. Legal evaluation, risk assessment, and legal advice require assisting systems that are explainable and interpretable. Audit trails and design patterns that allow end-users to overwrite machine suggestions, offer feedback, and re-train models will go far to support better understanding and trust in the process, and ultimately increase the adoption of AI systems that assist legal work.
