Ethics Archives – Thomson Reuters Institute

Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Scaling Justice: Unlocking the $3.3 trillion ethical capital market

Key takeaways:

      • An additional funding stream, not a replacement – Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure – AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is – The true bottleneck is not the availability of funds; rather, it's the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem – the idea that there are too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. The result is that most individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital – held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles – remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream – one capable of supporting high-impact cases that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in the cases themselves, alongside the funding that already supports technology?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like Village Capital's have helped demystify the sector and catalyze funding for the technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years, along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks, it reframes certain categories of legal action as dual-return opportunities that deliver both financial and social returns.

This is not philanthropy repackaged. It's the idea that measurable justice outcomes can form the basis of an investable asset class, if they're properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards, and by mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.
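
To make "encoding assessment criteria" concrete, here is a minimal sketch of what a dual-return screening function might look like. The criteria, weights, and funding threshold are illustrative assumptions – not Edenreach's actual model – and in practice the score would only rank cases for expert review:

```python
from dataclasses import dataclass

@dataclass
class CaseProfile:
    merit_score: float         # legal merit, 0-1, from expert review
    recovery_potential: float  # expected recovery, normalized 0-1
    impact_score: float        # mapped social/environmental metrics, 0-1
    compliance_ok: bool        # passed regulatory screening

def assess_case(case: CaseProfile,
                weights=(0.4, 0.2, 0.4),
                funding_threshold=0.6) -> dict:
    """Combine legal, financial, and impact criteria into one score.
    Hard compliance gates run first; the weighted score only ranks
    cases for the human experts who make final determinations."""
    if not case.compliance_ok:
        return {"eligible": False, "reason": "failed compliance screen"}
    w_merit, w_recovery, w_impact = weights
    score = (w_merit * case.merit_score
             + w_recovery * case.recovery_potential
             + w_impact * case.impact_score)
    return {"eligible": score >= funding_threshold,
            "score": round(score, 3),
            "route_to": "human review"}

# A consumer-protection claim: modest recovery, high social impact
print(assess_case(CaseProfile(0.8, 0.3, 0.9, True)))
```

The point of the encoding is consistency: the same criteria, applied in the same order, to every matter – which is what lets a small expert team evaluate many more cases without lowering standards.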

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms like Edenreach can function as the connective tissue. Through such a platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.

When incentives align

It's no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men yet are less likely to obtain formal assistance. Women also control significant pools of global wealth and are more likely to invest it responsibly. Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as male counterparts to prioritize environmental, social and corporate governance (ESG) factors when making investment decisions.

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women represent a notably higher share of founders in access-to-justice legal tech than the 13.8% they represent across legal tech overall.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks – cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

The Human Layer of AI: How to build human rights into the AI lifecycle

Key takeaways:

      • Build due diligence into the process – Make human-rights due diligence routine, from the decision to build or buy through deployment: map uses to standards, assess severity and likelihood, and close control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on – Use practical methods to surface risks early by engaging end users and running responsible foresight workshops and bad-headlines exercises.

      • Use due diligence to build trust – Treat due diligence as an asset, not a compliance box to tick, by using it to de-risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) to ask for emotional help and support in grieving and coping during difficult times. "Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself," says Poynton, co-founder and principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle, from the decision to build or buy through deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. They first emerged in the content moderation work used to train frontier LLMs and are now showing up elsewhere. For example, the data enrichment workers who refine training data and the data center staff who power these systems are most likely to face labor risks. Often located in lower-income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, and these can further undermine rights to health and political participation. Likewise, design choices often can translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising field pattern exacerbates the risk: people increasingly use AI for therapy-like support and disclose issues related to emotional crises and self-harm. This intimacy widens product and policy obligations, which include age-aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That's why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the lifecycle of AI, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Above all, they need to answer the question, "What happens if this technology gets in the hands of a bad actor?"

From there, the process demands an analysis of severity – which assesses scale, scope, and remediation – and of the likelihood of each use. The final step involves evaluating current controls across supply chain, model design, deployment, and use phases to identify gaps.
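
As a rough sketch of that severity-and-likelihood step, the snippet below scores potential uses so the most salient risks get controls first. The 1-to-5 scales, the averaging rule, and the example uses are assumptions made for illustration, not a prescribed methodology:

```python
def severity(scale: int, scope: int, remediation: int) -> float:
    """Each dimension rated 1 (low) to 5 (high); remediation is rated
    high when the harm would be hard to remedy."""
    return (scale + scope + remediation) / 3

def prioritize(uses: dict) -> list:
    """Rank potential uses by severity x likelihood so the most salient
    human rights risks get controls first."""
    scored = {name: severity(*dims[:3]) * dims[3] for name, dims in uses.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# dims = (scale, scope, remediation, likelihood) -- all illustrative
uses = {
    "therapy-like support with self-harm disclosures": (5, 3, 5, 3),
    "bad actor generating disinformation": (4, 5, 2, 4),
    "routine document summarization": (1, 2, 1, 5),
}
for name, risk in prioritize(uses):
    print(f"{risk:5.1f}  {name}")
```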

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to put out minimally viable products, accompanied by competitive pressure, can eclipse robust governance, yet early due diligence may prevent costly pullbacks and bad headlines. Article One's Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early "ensures that when it does launch, it has the trust of its users," she adds.

How to embed safeguards without slowing teams

The most efficient path to translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires the "engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products," Poynton explains. More specifically, this includes:

Identifying unexpected harms – One of the most critical, yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, "What are some issues that we may not be considering from the perspectives of accessibility, trust, safety and privacy?" Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad-headlines exercise that can be used to anticipate front-page failures. Then, ship with these protections in place, pre-launch.

Implementing concrete controls – Embedding safety-by-design should cover both content and contact, a lesson from gaming in which grooming risks require more than just filters. Build age-aware and self-harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse-response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking – Crucially, frame due diligence as an asset rather than a liability. "Make your product better and ensure that when it does launch, it has the trust of its users," Poynton adds.

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI's environmental footprint is a human rights issue. "There is a human right to a clean and healthy environment," Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here

New guide: A three-level approach to AI readiness in state courts

3 key takeaways:
      • Establish strong governance and principles first – Before implementing AI, courts must create cross-functional oversight committees, define guiding principles that align stakeholders, and develop clear AI use policies with high-quality data governance.

      • Prioritize people-centered implementation – Successful AI adoption requires engaging stakeholders early as co-creators and conducting thorough resource assessments that account for total cost of ownership (including maintenance and compliance).

      • Commit to continuous monitoring and adaptation – AI implementation requires ongoing human oversight to monitor performance, prevent data and model drift, and systematically review governance structures and policies after each project to strengthen courts' overall AI readiness for future initiatives.


AI has the clear potential to revolutionize courtroom workflows, but AI itself can carry unforeseen risks. Indeed, AI solutions are complex and opaque, with inherent randomness and risk, says Appavoo, Senior AI Manager in the New Jersey courts.

To help courts leverage AI safely, the National Center for State Courts (NCSC), with support from the State Justice Institute, convened 16 experts to create an AI readiness guide, which was featured in a recent webinar. This guide provides practical advice and offers a three-level approach for courts adopting AI: strategic planning (level 1); thoughtful project implementation (level 2); and continuous adaptation (level 3). These three levels guide courts from establishing governance and principles to executing measurable, people-centered projects that enhance trust and further the course of justice.

Establishing governance, principles & policies

To unlock AI’s potential while mitigating hazards, courts must first establish a strong foundation through clear governance, guiding principles, and well-defined policies. More specifically, courts should:

Establish governance with a diverse group of voices – A cross-functional committee sets policy, oversight, and feedback loops. "AI governance is … really the leadership structure for all of the court's uses of AI," says one NCSC expert, adding that the AI Governance Tool in the AI Readiness guide should be used to run a structured 12-month plan that covers level 1 readiness steps end-to-end.

Define your operating philosophy before you start – Guiding principles are not bureaucratic exercises but rather essential blueprints for successful and ethical AI integration. Without them, courts risk misalignment among stakeholders, the development of systems that do not serve their intended purposes, and the possibility of costly failures. These principles provide a constant reference point, ensuring that as AI projects evolve, the court remains true to its core values and objectives.

Indeed, the overarching mindset that directs actions and choices as part of the governing principles should align stakeholders, manage expectations, and anchor future decisions. "The leading cause of software failures historically has been misalignment among stakeholders and changing or poorly documented requirements," says Dr. Miller, Assistant Professor of Computer Science at George Mason University, adding that the same is true for AI projects. "Without these guiding principles [for AI use], there's the same risk for misalignment among stakeholders."

Another core tenet of any firm foundation is to set internal rules as part of an AI use policy that provides guardrails and clarity for staff during the transition. And because high-quality, well-governed data is fundamental, pay attention to the quality of the data. "One of the dirty secrets of data science is the data cleansing process," says Appavoo. "Garbage in, garbage out."

Finally, pick projects through workflow analysis and by identifying pain points; then use a scoring matrix to evaluate potential projects based on criteria such as impact and feasibility, as in the sketch below.
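
Here is a minimal sketch of the mechanics of such a matrix; the criteria, weights, and candidate projects are invented for the example and are not taken from the NCSC guide:

```python
CRITERIA = {"impact": 0.35, "feasibility": 0.30, "risk": 0.20, "cost": 0.15}

def score_project(ratings: dict) -> float:
    """ratings maps criterion -> 1-5; risk and cost are rated so that
    5 means low risk / low cost."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

candidates = {
    "AI-assisted docket summarization":
        {"impact": 4, "feasibility": 5, "risk": 4, "cost": 4},
    "Chatbot for self-represented litigants":
        {"impact": 5, "feasibility": 3, "risk": 2, "cost": 3},
}
ranked = sorted(candidates, key=lambda name: score_project(candidates[name]),
                reverse=True)
for name in ranked:
    print(f"{score_project(candidates[name]):.2f}  {name}")
```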

Implementing projects that focus on practicality

After foundational planning is complete, the next stage focuses on the practical implementation of AI projects through productive change management, resource assessment, and strategic procurement. Beyond initial deployment, substantial work occurs during this stage.

The most important element in this phase is that successful AI adoption hinges on a strategic, people-centric approach that carefully considers resources and risk. "When people are engaged early and meaningfully, they stop being subjects of change and start being co-creators and co-designers of it," explains one panelist, an Assistant Professor of Art and Design at Northeastern University. "And that sense of ownership is one of the strongest predictors of adoption."

Indeed, effective change management and person-centered design are paramount. Often, this means actively engaging stakeholders, fostering open communication, and providing comprehensive training and support throughout the project lifecycle.


The most important element… is that successful AI adoption hinges on a strategic, people-centric approach that carefully considers resources and risk.


Perhaps the most challenging action in this phase is for courts to move beyond immediate costs and benefits to better understand the full financial and operational implications of AI projects. This requires an accurate assessment of both tangible and intangible costs, along with clearly defined success metrics.

“What’s really tricky about that is some of those costs are very obvious and simple,鈥 says Dr. Miller. 鈥淪ome of them are very squishy and hard to estimate, and the same goes for the benefits.”

In fact, there are common pitfalls around cost at this stage, according to Judy, Chief of Innovation and Emerging Technologies for Maricopa County, Arizona. "Courts sometimes focus only on the upfront purchase price, or the development budget, and they ignore the updates, the retraining, the legal compliance – and that can multiply the total cost of ownership."

Further, courts need to consider their own capabilities, the practicality of their AI solution, its long-term sustainability, and potential risks such as transparency and vendor dependency. If the decision is to buy a product off the shelf, the procurement process and vetting vendors will be key. “If we don’t clarify who’s responsible when the system makes a mistake, we expose ourselves to reputational and legal risk,” Judy notes.

Continuous improvement and preparing for the next AI initiative

After implementing an AI project, the journey does not end. Indeed, it evolves, underscoring the critical importance of incorporating lessons learned back into court operations through post-project review.

“It is not about getting in the game when it comes to AI, it is about staying in the game,鈥 says Appavoo. 鈥淭he complexity is actually after you productionize a solution 鈥 that is what we see.鈥 You have to have a human in the loop, stay on top of things in terms of observability, constantly monitor the performance, constantly check the data or the model are not drifting, or the business context is changing, Appavoo explains.

To help put this into practice, the AI readiness guide has comprehensive feedback checklists courts can use to systematically review the foundational AI program elements for ongoing adaptation. More specifically, the post-project review process should examine whether governance structures remain effective, if guiding principles need refinement, and whether internal policies require updates. This continuous improvement approach transforms each AI implementation into a learning opportunity that strengthens the court’s overall AI readiness for its subsequent initiatives.


You can access the AI readiness guide from the National Center for State Courts and the State Justice Institute here

5 steps for fostering ethical corporate cultures

This blog post was written by Max Beilby, an organizational psychologist specializing in applying behavioral science to enhance culture and risk management within financial services; Antoine Ferrere, the CEO of lumenx.ai and a recognized leader in applying behavioral science, data science, and AI for good; and Brian R. Spisak, PhD, a leading voice at the intersection of digital transformation and workforce management. The views expressed in this article are Max's alone and do not reflect the views or opinions of his employer.

Key insights:

      • Ethics must be embedded, not bolted on – Corporate leaders should move beyond legal compliance to proactively weave ethics into decision-making and define their legacy by how results are achieved, not just what is achieved.

      • Misconduct is usually systemic, not individual – Unethical behavior often arises from environments in which there are misaligned incentives, pressure, and self-deception. Thus, redesigning rewards and evaluations to balance short- and long-term outcomes is pivotal.

      • A practical playbook exists – Use a catalytic anchor event, measure the ethical climate rigorously, empower local teams with data and AI-driven tools, align policies and incentives globally with core values, and run small, iterative experiments to refine what works.


We're at a pivotal moment in history, one in which rapid societal change and mounting crises are intersecting with awe-inspiring technical advancements. This convergence – undoubtedly dangerous – is also an opportunity for leaders and their teams to turn the tide and demonstrate how principled decisions can lead to transformative outcomes for business and society.

As we navigate this critical juncture, it's clear that the path forward requires more than mere compliance with laws and regulations. It calls for proactively embedding ethics into all aspects of organizational decision-making. Indeed, this is corporate leaders' opportunity to redefine their legacies – to be remembered not just for what they achieved, but for how they achieved it.

The challenge

First, we acknowledge the trade-offs and ethical dilemmas that often seem insurmountable in the corporate world. The perceived tug-of-war between people and profit, speed and safety, or quality and quantity poses significant challenges. However, our belief is that, despite these hurdles, it is indeed possible to create ethical ecosystems that not only survive but thrive.

What is important to understand is that ethical lapses are often gradual. Employees don't suddenly become dishonest; rather, in the absence of an evidently ethical culture, rationalization and post-justification can make it seem as if one is grappling with complex ethical dilemmas of balancing benefits against potential risks. While this dilemma is often real, it can also be fundamentally self-serving, cloaking the profit motive in the guise of societal benefits – be it patient care or financial well-being.

Further, misconduct in business arguably most often stems not from a couple of rogue actors but rather from the broader environment that either promotes, or fails to curb, unethical behavior. In other words, it's less about individual bad actors and more about a work environment that allows ordinary people to succeed professionally through what they perceive to be acceptable compromises. Misconduct therefore doesn't occur in a vacuum; it festers in conditions in which flawed incentives, unreasonable commercial pressures, and ethical blindness prevail. Similarly, hyper-competitive performance evaluations that incentivize individuals to compete in a zero-sum contest, rather than rewarding their contributions and their ethical conduct, can encourage unethical cultures to spread.

5 steps toward ethical cultures

Admittedly, redesigning these systems requires a paradigm shift in business. However, there are several practical steps that enlightened business leaders can take to foster ethical organizational cultures.

1: Identify an anchor

To initiate such a transformation, it's crucial to identify an anchor – a notable event that can serve as a catalyst. For example, this could be prompted by a scandal or a change in leadership. The key is to use this event not just as a standalone occurrence, but to signal a shift in how seriously ethics is taken within the workplace.

Use your anchor event to announce your intention to enhance your organization's ethical infrastructure. This should be the moment that captures people's attention.

2: Establish a baseline

While data from engagement surveys may offer some insights, they often lack the rigor needed to assess the ethical climate of the typical workplace. To create a precise measure, consider other factors such as employees' perceptions of fairness and trust. These perceptions can be evaluated using methods such as anonymous surveys and confidential interviews.

By establishing a robust analytical system, you will be able to produce a clear picture of the current ethical climate across your organization, while identifying those business areas that need intervention.
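
As a rough illustration of such an analytical system, the sketch below aggregates anonymous survey items into a per-unit baseline. The survey items, the 1-to-5 scale, and the data are purely illustrative:

```python
import statistics as stats

# Each row: (business_area, fairness, trust, speak_up_safety) on a 1-5 scale
responses = [
    ("trading", 2, 3, 2), ("trading", 3, 2, 2),
    ("retail",  4, 4, 5), ("retail",  5, 4, 4),
]

def baseline(rows):
    """Average all climate items per business area."""
    by_area = {}
    for area, *scores in rows:
        by_area.setdefault(area, []).extend(scores)
    return {area: round(stats.mean(scores), 2) for area, scores in by_area.items()}

# Lowest-scoring areas flag where intervention is needed first
for area, score in sorted(baseline(responses).items(), key=lambda kv: kv[1]):
    print(f"{area:10s} {score}")
```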

3: Empower locally

Presenting data can ignite interest and spark meaningful conversations about ethics, which in turn can help shift the narrative and establish a common understanding.

Focus on creating interactive sessions in which leaders can digest data and discuss implications for their business areas. Empower HR, Risk & Compliance, Legal, Finance, and other corporate functions with the tools and training needed to analyze and interpret the data. This could involve training modules, workshops, and the integration of AI tools to provide nuanced insights.

4: Act globally

While empowering local teams to address ethical dilemmas is crucial, it is equally important to ensure their policies and processes are aligned with the organization's overarching values. While such alignment can be a challenge for large multinational organizations, emphasizing these universal values is achievable.

For example, revisit incentive systems and check that they promote desirable behavior, rather than solely focusing on financial performance. Also, ensure that these systems are transparent and communicated clearly and consistently to all employees.

5: Embrace experimentation

Finally, foster a mindset of experimentation. Run a series of small-scale pilots to test various interventions. Approach this with humility and scientific rigor, acknowledging that adjustments may be necessary, and that success is rarely straightforward.

This approach, while it may sound daunting, is actually quite manageable. By embracing the challenge with the curiosity and methodology of a scientist, you can pave the way for genuine and lasting improvements.

Moving forward to an ethical environment

Today's business leaders are navigating a multitude of hazards, ranging from rising geopolitical tensions and rapidly evolving AI-driven technology to society's shifting attitudes and expectations. Yet in these challenges lies an unprecedented opportunity for leaders to redefine their legacy by embedding ethical principles deeply into the heart of their organizations. In this way, business leaders can turn these risks into sources of long-term competitive advantage.


You can find out more on how leaders can foster ethical cultures within their workplaces here

The AI Law Professor: When you need AI governance that just works

Key points:

      • Governance must be practical – Complex policies that lawyers ignore are worse than no policies at all. Governance leaders should focus on daily workflows, not academic perfection. Yet, the most elegant policy fails if it cannot adapt to the pace of AI tool evolution.

      • The four pillars still matter – Transparency, autonomy, reliability, and visibility provide a tested framework for AI governance that can be scaled from solo practitioners to AmLaw 100 firms.

      • Risk stratification drives adoption – Not every AI use case deserves the same scrutiny. Smart governance distinguishes between drafting a motion and scheduling a meeting.


Welcome back to The AI Law Professor, my monthly column. Last month, I unpacked GPT-5's rollout and argued for maintaining human control even as AI systems become more autonomous. This month, I am delivering on my promise to outline governance that actually works, governance that lawyers will use rather than circumvent.

Good governance feels invisible until something goes wrong. In legal practice, we already have this – we use conflict check systems, document retention schedules, and billing protocols that capture time. AI governance should work the same way: structured enough to prevent problems, yet flexible enough to evolve with the technology, and practical enough that busy lawyers actually follow it.

Why most AI policies fail

The AI governance documents I see in practice fall into two categories: the overwrought and the undercooked. The overwrought policies read like academic treatises on algorithmic fairness – they're impressive in scope, but impossible to implement. The undercooked policies amount to "don't put client data in ChatGPT" and a prayer that nothing bad happens… or worse, such as absolute bans on generative AI (GenAI).

However, both approaches miss the mark because they treat AI as either a silver bullet or a loaded gun, when the reality is somewhere in between and much more mundane. AI tools are productivity enhancers with specific strengths, specific blind spots, and the same change management challenges as any other technology adoption.

The practical problem is that lawyers need guidance on Tuesday afternoon when the brief is due Wednesday morning. Abstract principles about algorithmic bias do not help, while detailed workflows that account for real deadlines and actual capabilities do.

Building on the four pillars

In previous columns, I have argued for four deployment principles: transparency, autonomy, reliability, and visibility. These are not just theoretical constructs; rather, they are the foundation of any governance framework that legal teams can actually implement successfully.

In the context of these four pillars, the most practical governance frameworks start with risk classification. Indeed, a three-tier system works well for most legal teams and includes:

High-risk uses – These include client-facing documents, substantive legal analysis, court filings, and anything involving privileged communications. You don't want to get sanctioned for hallucinations! These tasks require mandatory human review, detailed documentation, and oftentimes, obtaining client disclosure.

Medium-risk uses – These usually cover internal research, document review, draft preparation, and administrative analysis. These tasks benefit from AI assistance but need quality checkpoints and clear limitations on autonomy.

Low-risk uses – These more mundane uses encompass scheduling, formatting, basic summarization, and routine administrative tasks. These can run with minimal oversight, although they still require basic security controls.

This framework lets legal teams deploy AI tools confidently in low-risk contexts while maintaining appropriate caution for high-stakes work. It also creates a clear path for expanding AI use as tools improve and teams gain experience. Team leaders can also choose what roles have access to each tier.
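
As a concrete illustration, here is a minimal sketch of how such a tier map and role-based access might be encoded. The task list, role mappings, and fail-safe default are illustrative assumptions, not a recommended policy:

```python
from enum import Enum

class Tier(Enum):
    HIGH = "mandatory human review + documentation + client disclosure"
    MEDIUM = "AI-assisted with quality checkpoints"
    LOW = "minimal oversight, baseline security controls"

TASK_TIERS = {
    "court_filing": Tier.HIGH,
    "privileged_communication": Tier.HIGH,
    "internal_research": Tier.MEDIUM,
    "document_review": Tier.MEDIUM,
    "scheduling": Tier.LOW,
    "formatting": Tier.LOW,
}

ROLE_ACCESS = {  # which tiers each role may use AI for
    "partner": {Tier.HIGH, Tier.MEDIUM, Tier.LOW},
    "associate": {Tier.MEDIUM, Tier.LOW},
    "staff": {Tier.LOW},
}

def may_use_ai(role: str, task: str) -> str:
    tier = TASK_TIERS.get(task, Tier.HIGH)  # unknown tasks default to HIGH
    allowed = tier in ROLE_ACCESS.get(role, set())
    return f"{role}/{task}: {'allowed -- ' + tier.value if allowed else 'blocked'}"

print(may_use_ai("associate", "internal_research"))
print(may_use_ai("staff", "court_filing"))
```

Defaulting unknown tasks to the high-risk tier is the design choice that matters: new uses get scrutiny first and relaxation later, not the reverse.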

Change management as governance

AI tools evolve faster than traditional legal technology, and GPT-5's rollout demonstrated how vendor decisions can disrupt established workflows overnight. Effective governance must account for this pace of change. It's inconvenient, but it is our new reality.

Pin specific versions of AI models: for example, when using OpenAI's API, you can specify 'gpt-5-2025-08-07' rather than 'gpt-5,' which refers to the latest version of the model. This provides stability for mission-critical work. When you rely on specific AI behavior for document review or contract analysis, lock in the model version that delivers consistent results. Do not let automatic updates become uncontrolled experiments with client work.
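
A minimal sketch of what pinning looks like with the OpenAI Python SDK, using the snapshot name mentioned above; it assumes an OPENAI_API_KEY in the environment, and the prompt is invented for illustration:

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

PINNED_MODEL = "gpt-5-2025-08-07"  # dated snapshot: behavior stays fixed
FLOATING_MODEL = "gpt-5"           # alias that silently tracks the latest version

response = client.chat.completions.create(
    model=PINNED_MODEL,  # never the floating alias for client work
    messages=[{"role": "user",
               "content": "Summarize the indemnification clause below: ..."}],
)
print(response.choices[0].message.content)
```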


For further help getting started, see here


Testing protocols create confidence in AI upgrades. Before deploying a new model or tool, run it through the same tasks you use for daily work and make it your AI model test set. Compare accuracy, consistency, and completeness against your current baseline. Determine and record what improves and what degrades.
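
A testing protocol can be as simple as a fixed task set scored against a recorded baseline. In the sketch below, `run_model` is a placeholder for your provider call, and the test items and fact-containment scoring are assumptions, not a standard benchmark:

```python
def run_model(model: str, prompt: str) -> str:
    raise NotImplementedError("call your provider's API with `model` pinned")

TEST_SET = [  # (prompt, key facts the output must contain)
    ("Extract the governing-law clause from: <contract text>", ["New York"]),
    ("List the parties in: <contract text>", ["Acme Corp", "Beta LLC"]),
]

def evaluate(model: str) -> float:
    """Share of test tasks where the output contains every expected fact."""
    hits = 0
    for prompt, expected in TEST_SET:
        output = run_model(model, prompt)
        hits += all(fact in output for fact in expected)
    return hits / len(TEST_SET)

# Compare accuracy and completeness before switching models; record both.
# baseline = evaluate("gpt-5-2025-08-07")   # last-known-good snapshot
# candidate = evaluate("gpt-5")             # proposed upgrade
# If candidate scores below baseline, roll back to the pinned snapshot.
```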

Rollback procedures provide insurance against AI failures. When a new model produces inferior results, you need quick paths back to last-known good configurations. This may require maintaining access to legacy models or alternative tools.

Making governance stick

Even the best governance framework fails if lawyers do not follow it. Implementation requires attention to three practical realities:

      1. Integration with existing habits – This means building AI governance into systems lawyers already use. If your conflicts-check system can track AI tool usage, use it. If your document management system can flag AI-assisted work, configure it. Do not create parallel processes that compete with established habits.
      2. Training that focuses on competence – Such training teaches lawyers how to use AI tools effectively, not just safely. Include prompt engineering best practices, output validation techniques, and quality assessment skills. Lawyers who understand AI capabilities are more likely to respect AI limitations.
      3. Policies that evolve – Anticipate change rather than resisting it. Build quarterly review cycles into your governance framework and establish triggers for policy updates when new tools emerge or existing tools change capabilities. Plan for the next disruption rather than just responding to the last one.

The firms that get AI governance right will not just avoid problems; they will deliver better work more efficiently. Governance frameworks that emphasize quality control, appropriate use cases, and continuous improvement create the foundation for sustained AI value.

This requires moving beyond the defensive mindset that treats AI as a compliance burden. Instead, think of governance as the infrastructure that enables confident, reliable AI adoption. Good governance lets lawyers push AI tools harder because they have systems to catch failures and processes to maintain quality.

The legal profession has managed similar technology transitions before. We survived the shift from typewriters to word processors, from law libraries to legal databases, from paper filing to electronic court systems. Each transition required new governance approaches that balanced innovation with professional responsibility.

AI is no different in principle, although it is certainly happening at an exponential pace. The firms that adapt their governance frameworks to the speed of AI evolution, while maintaining the quality standards that clients expect, will lead the profession through this transition.

Implementation starts Monday

Governance policies work best when they start small and expand with experience. Begin with a pilot program that covers specific AI tools and specific use cases. Test the framework with real work under real deadlines. Refine the processes based on what actually happens, not what you think should happen.

Focus on the intersection of high-value tasks and low-risk scenarios. Document review for routine matters, such as contract clause libraries or research summaries for internal use – these are the sweet spots in which AI delivers clear value with manageable risk.

Build feedback loops that capture both successes and failures. I can't emphasize this enough: Feedback loops are how we learn and improve! When AI tools work well, document why and what worked. When they fail, analyze the failure modes. Then, use this information to refine your risk categories, improve your testing protocols, and adjust your quality controls.

Most importantly, remember that governance is not a destination but rather a process. The AI tools available next year will differ from those available today, and your governance framework must be robust enough to handle current tools and flexible enough to evolve with future capabilities.

The legal profession has always balanced innovation with responsibility. AI governance is simply the latest chapter in that ongoing story. Those firms that write that chapter thoughtfully, with practical frameworks that evolve with the technology, will shape the future of legal practice.


In my next monthly column, we’ll explore what happens when we ask, “What if AGI?,” and discover how this simple question can reshape our thinking, refocus our priorities, and position us for greater success as lawyers

How companies are fostering the creation of human & AI agent teams

Key takeaways:

      • AI is blurring the lines between HR and IT – The increasing integration of AI agents into workflows is prompting organizations to merge their HR and IT functions, with a significant percentage of IT leaders saying they expect this trend to continue.

      • Strategic actions for success – To effectively integrate HR and IT and leverage AI, organizations need strong, cross-functional leadership, a clear strategic direction for AI adoption, cultivation of AI literacy and adaptability, and a data-driven approach to cultural transformation.

      • Human adaptation is key – Unlocking organizational potential can only occur with thoughtful leadership that prioritizes human adaptation and intelligently orchestrates the integration of people and AI.


As AI continues to transform the future of work, a growing number of organizations are breaking down traditional departmental silos by merging their HR and IT functions under unified leadership – and many expect this trend to continue. Indeed, 64% of IT leaders surveyed say they believe HR and IT will converge. This strategic convergence is a fundamental shift that is compelling organizational leaders to reimagine how work gets done while presenting other complex challenges.

The convergence of HR and IT functions

The rapid ascent of AI agents from current tools to a future state in which they are expected to be colleagues is blurring the lines between traditional technical and non-technical roles. As Skillsoft's Chief People Officer Ciara Harrington puts it: "There is no role that's not a tech role."

For many forward-thinking organizations, merging these departments paves the way for a more integrated, data-driven, and agile organization, offering benefits that include:

Holistic workforce architecture – Merging HR and IT enables leaders to design how work is done and better align human skills with hardware, software, and AI. Moderna, for example, has organized around workflows by segmenting what technology should do and where humans add irreplaceable value, explains Tracey Franklin, the company's chief people and digital technology officer.

Streamlined innovation and agility – When HR and IT co-own transformation, organizations can adapt faster to new tech and processes. In another example, one company moved HR and IT to sit within the same bigger team because they both are building systems that support the rest of the business.

Transformation brings about challenges

Navigating the risks of merging HR and IT is not without challenges, however, and organizations must carefully address several barriers to progress, which include:

Loss of specialist expertise – The most pressing concern involves diluting critical professional knowledge. "Merging the departments risks losing or diluting the specialist expertise organizations need to thrive," warns David D'Souza of the Chartered Institute of Personnel and Development. Indeed, the skillsets of HR and IT professionals involve few areas of overlap.

Cognitive depletion – A risk that is just starting to get the attention it deserves is the danger of over-dependence on AI that can cause reduced cognitive capabilities. "If AI agents do everything with us, we lose skills," says Skillsoft's Harrington. Indeed, this long-term capability risk is multifaceted, resulting in core human skills atrophying and bench strength, adaptability, and ethical discernment weakening.

New roles, metrics, and leadership – Perhaps the most challenging questions are who will manage the AI agents, how human-AI team performance will be evaluated, and what professional development looks like for hybrid human and AI agent teams. Answering these key questions remains an area of ongoing deliberation.

Likewise, traditional HR metrics don't fit human-AI teams. Organizations must redefine performance, learning paths, and career progression for both humans and AI. In addition, organizations still need leaders who can navigate the human side of transformation as AI integration accelerates.

Key actions for companies

To unlock real returns on AI investments and from HR and IT integration, organizations need visionary, cross-functional leadership that sets clear strategy, aligns operations, and equips people to work differently. Must-do actions include:

Developing strategic direction – Research in the recent Thomson Reuters Future of Professionals Report 2025 identifies four interconnected layers for AI success: a clear, visible plan for AI adoption (strategy), committed leaders who model the right behaviors (leadership), adjusted workflows and roles to leverage AI (operations), and ways to empower people to learn and set personal AI goals (individuals).

While organizations that align all four layers unlock the greatest value from AI, the first layer, strategy, is the single strongest predictor of AI return on investment, according to the report. And the same is true for successful HR and IT integration. It requires leaders who can bridge both worlds without necessarily being technical experts, while also setting direction and providing vision, allocating capital effectively, removing obstacles, fostering culture, and engaging employees.

Cultivating AI literacy and adaptability – Organizations also must develop comprehensive AI training grounded in responsible AI, including clear usage policies. This includes incentivizing employees to experiment with recreating their own workflows, allowing AI to execute repetitive tasks while team members focus on more complex problem-solving.

Data-driven cultural transformation – Success requires using data strategically to shift the culture toward collaborative human/AI ecosystems. Some companies are using data and building accountability mechanisms to ensure leaders are culture promoters and data stewards. Without this data-centric approach to cultural change, organizations likely will fail to realize the full potential of integrated HR and IT functions.

Yet, no matter what functions are merged, organizational potential is unlocked by thoughtful leadership that centers human adaptation and intentionally orchestrates how people and AI integrate to do the work. Now is the moment for companies to define a clear roadmap, invest in capability-building, and pilot human/AI teams with measurable guardrails to help organizations learn fast, acclimate to new work realities, and scale what works.


You can find out more about how organizations are addressing issues of talent development and management here

2025 Emerging Technology and Generative AI Forum: Human creativity and feedback drive ethical AI adoption

Key takeaways:

      • Embrace value, risk, and execution – for good and bad – Professional services firms must weigh the value of AI applications against potential risks, embracing both successes and failures as learning opportunities to improve responsible adoption.

      • Ethical oversight is everyone's responsibility – Ensuring responsible AI use in professional services requires active participation from all members of an organization, not just legal or IT teams.

      • Human creativity and feedback remain essential – While AI can generate ideas and accelerate processes, human judgment, creativity, and continuous feedback provide the proper pathways for ethical decision-making and successful integration.


AUSTIN, Texas – With the professional services world now squarely in the AI era, it's clear that the speed of business is quicker than ever. Clients expect results in hours or even minutes rather than days, while generating documents can happen at the click of a button. Ask a research question, and a machine can intuit what you're looking for with striking accuracy.

Alongside these business changes, however, it's clear that the ethics of technology usage within professional services is shifting just as quickly. "Every time you come and do a talk with a group of people, within four weeks if not sooner, it's changed," says Betsy Greytok, Associate General Counsel in Responsible Technology at IBM. "So, it really does require you to keep on your toes."

Ensuring that AI is used responsibly is even more paramount within professional services than in other professions, given the ethical and regulatory constraints placed on legal, tax, audit & accounting, financial services and risk, and more. During a recent session, A Unified Field: Ethical Considerations amid AI Development and Deployment, at the Thomson Reuters Institute's 2025 Emerging Technology and Generative AI Forum, panelists described an ethical landscape that should be tackled as a challenge, rather than shied away from as an unsolvable risk.

Or, as Paige L. Fults, Head of School at the AI-centric Alpha School & 2-Hour Learning program, put it: "Not being afraid of replacement, but leaning into repurpose."

Embracing success – and failure

John Dubois, the Americas AI Strategy Leader at Big 4 consultancy Ernst & Young, says he regularly gets questions from customers about AI and how they should use it, given that there are new AI applications arising seemingly every day. "The way we describe it is a balance," Dubois explains. "Let's start with value. If we know there's value in something, then we can figure out the risk behind it, then we can figure out how we can execute."

Just as importantly, however, this focus on value, risk, and execution can also aid professional services firms when an AI plan fails. For example, Dubois cites an MIT report from August 2025 that found the vast majority of enterprise AI pilots fail to deliver measurable returns, often because of flawed integration. Embracing the value, risk, and execution strategy from the beginning not only allows for better chances of success, but even in the event of failure, "we actually have a better shot at mitigating, when it does fall down."

This sort of planning is not limited to just one group, Dubois says, noting that ethical oversight is a key responsibility of everyone in the organization. He explains that E&Y has an internal implementation of OpenAI that has 150,000 distinct users each month. Because of an internal process called SCORE that removes customer data at the source, E&Y's instance of OpenAI is largely clear of customer data – but it's still not perfect.

E&Y has set a culture so that if someone sees proprietary data when using GenAI to develop a proposal or create a PowerPoint, they not only delete the data before use, but work to scrub it from the system entirely. "It is all of our job to ensure that whatever you're putting into that system or extracting out of that system, you're cleansing," Dubois says. "It's not the job of the general counsel, or the risk team, or the IT team, it's all of our job."
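
SCORE's internals are not public, but as a generic illustration of removing client data at the source, a filter along these lines could scrub identifiers before text ever reaches a GenAI system. The client names and patterns are invented for the example:

```python
import re

CLIENT_NAMES = ["Acme Corp", "Beta LLC"]  # e.g., from an engagement registry
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def scrub(text: str) -> str:
    """Replace known client names and identifier patterns with tokens
    before the text is sent to any GenAI system."""
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

prompt = "Draft a proposal for Acme Corp; contact jane.doe@acme.com"
print(scrub(prompt))  # -> "Draft a proposal for [CLIENT]; contact [EMAIL]"
```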


When it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity.


IBM's Greytok agrees, noting that she's part of an internal review board that examines major AI-related projects for ethical issues. There is a board review at the beginning of the development process to determine how risky a use case is, and then the system will give a response, considerations, and steps. If there is an issue, the board is empowered to stop development, even on a major project.

She draws an analogy to writing a paper in high school, in which there is a marked difference between simply turning in the paper, proofreading your own work, and asking a friend for peer-review feedback. "That's what you want, is that disagreement, because that's critical thinking."

She adds: "The researchers sometimes get so excited about what they've discovered that they forget to look at the other side of what can happen. You should want that. You shouldn't be punished for saying, Is this the right thing or not?"

The importance of feedback

Fults says that at the Alpha School, AI is not only baked into the curriculum, it drives it. Students spend just two hours a day on academics, led by AI tools and supplemented by offline learning on a variety of subjects from in-person instructors who fill in the gaps that AI is not able to provide.

It's a revolutionary concept but not a static one. Fults notes that "the two-hour learning model has already changed so much since I've been part of the school," and the instructors have a Slack channel on ways to find improvement that receives hundreds of messages a day.

It's through this marriage of human intuition and the possibilities of the technology that Fults says she believes the school has found success and used AI ethically within education. "Even though we have this tool, the human levers, the motivational levers that are happening day to day, actually make it work," she says, insisting that she "can't just hand [the technology] to any school" without the corresponding processes in place.

Dubois and Greytok also call feedback a crucial part of the process in order to overcome AI barriers. Dubois tells the story of a large retailer that bought satellite images to determine footfall within a store. Shoppers, however, felt that was a privacy risk, and the idea was almost scrapped. Then the legal and IT teams worked together to come up with an idea: Can you track clothing, but not faces, to get the same information about where within the store shoppers were going?

"It's a creative workaround to get us to the same thing," Dubois explains. "When you have a constraint, what's a clever way to work around this so we're not taking a brand risk or a compliance risk?"

Indeed, when it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity. AI can provide information and context more rapidly than ever before, but ultimately, professionals themselves will be the ones relied upon to make sure AI is used ethically and responsibly.

"AI is an idea generator," Greytok says. "The solution comes from the human."


You can find out more about how emerging technologies are impacting professional services here

GenAI hallucinations are still pervasive in legal filings, but better lawyering is the cure

Key insights:

      • Hallucinatory case citations a concern – Hallucinations continue to be an issue for attorneys, as courts across the U.S. have sanctioned attorneys and pro se litigants for submitting filings with AI-generated, non-existent case citations.

      • Attorneys still responsible – While these hallucinations may occur as a result of improper AI usage, it is the duty of the attorney to check all facts and citations before submitting a document to court, just as it has always been.

      • Ethical guidelines and accuracy checks needed – As GenAI remains central to legal workflows, lawyers must integrate AI responsibly by following ethical guidelines and always checking the accuracy of AI-generated content before submission.


As the legal world marches on toward three years with generative AI (GenAI) in the public sphere, one key risk has risen above all others: hallucinations. These hallucinations are false "facts" generated by GenAI systems and can occur due to a number of issues, including incomplete or inaccurate data sets, confusing or misworded prompts, or answers that are irrelevant to a given question.

It's clear that no matter a hallucination's provenance, however, the possibility of a false fact has slowed GenAI's growth among legal professionals. Among respondents to the Thomson Reuters 2025 Generative AI in Professional Services Report who said they felt GenAI should not be a part of their daily work, 40% cited accuracy and reliability as their primary concern – nearly double any other major concern, including a lack of human touch (22%), generality of outputs (19%), or biased data (12%).

Perhaps this should not be a surprise, given continued press coverage of hallucinations. Even with accuracy top of mind, AI misuse continues with regularity in courts across the United States. In fact, a recent study of cases across the US for the month of July found numerous false case citations, leading in many instances to attorney sanctions or discipline.

These AI errors and related sanctions are easily avoidable, however. It just takes awareness of how GenAI tools operate, and – as lawyers have had to do for years with non-AI-generated research and briefs – a commitment to verifying any material before it is submitted to the court.

Hallucinations abound

According to a study of cases between June 30 and August 1, conducted through 成人VR视频 Westlaw, hallucinations and citations of non-existent legal cases continue to be pervasive across courts. The search identified 22 different matters in which courts or opposing parties found non-existent citations within filings, leading to discipline motions or sanctions in many instances.

Notably, although much of the discussion around AI in law has tended to focus on large-scale litigation or complex corporate law, many of the AI errors came from local disputes. These include a fight between a family and a local school board (Powhatan County School Board v. Skinger, U.S. District Court for the Eastern District of Virginia), a divorce case (In re Marriage of Haibt, Colorado Court of Appeals), and a Chapter 13 bankruptcy case (In re Martin, U.S. Bankruptcy Court for the Northern District of Illinois). As the research makes clear, hallucinations are prevalent in all areas of law, which may not be a surprise given how pervasive ChatGPT and other public GenAI tools have become.

These cases also show that hallucinations could be a particular stumbling block for pro se litigants who may be looking to public GenAI tools as an easy lawyering fix. In Powhatan County School Board, a pro se defendant was found to have submitted pleadings "laden with more than three dozen (42 to be exact) of citations to nonexistent legal authorities that made it exceedingly difficult, and often impossible, to make sense of the contentions made therein, to assess the purported 'support' for them, and properly to address them."

Given the defendant's pro se status, the court originally offered the chance to fix the filing, but after the defendant doubled down and tried to claim the court "wrongly assumed" the citations were AI generated, the court dismissed the defendant's motion to strike the original opinion from the record.


These AI errors… are easily avoidable, however. It just takes awareness of how GenAI tools operate, and – as lawyers have had to do for years with non-AI-generated research and briefs – a commitment to verifying any material before it is submitted to the court.


"The fact that her citations to nonexistent legal authority are so pervasive, in volume and in location throughout her filings, can lead to only one plausible conclusion: that an AI program hallucinated them in an effort to meet whatever [the defendant's] desired outcome was based on the prompt that she put into the AI program," the court wrote in denying the defendant's motion.

That does not mean that professional lawyers are using GenAI perfectly, however. One immigration case out of U.S. District Court in New Mexico, Deghani v. Castro, illustrates the problem of attorneys not understanding the technology and its potential for misuse. In this case, the plaintiff's attorney, Felipe Millan, contracted with a freelance attorney to conduct research; according to Millan, the freelancer returned a brief with several hallucinated cases, which Millan did not check. The court referred Millan to the state bar for sanctions, but he filed a motion to stay, arguing that the punishment did not fit the crime.

The court did not buy that argument, however, upholding the sanctions ruling. "Mr. Millan's primary grievance is that [the Judge] did not appropriately weigh his good intentions. He emphasizes that he himself did not invent the citations, did not expect the contracted attorney to do so, and has been candid and remorseful regarding the mistake," the court wrote.

"But, as discussed above, the standard under Rule 11 is one of objective reasonableness – the imposition of sanctions does not require a finding of subjective bad faith by the offending attorney. An attorney who acts with 'an empty head and a pure heart' is nonetheless responsible for the consequences of his actions."

Check and check again

Indeed, all of these issues have one key factor in common: the lawyers in question did not check their sources. Millan may have been remorseful and attempted to correct the mistake, but a mistake was made nonetheless. And this can happen even among knowledgeable attorneys when time and deadlines get in the way.

In another case, Kaur v. Desso from the U.S. District Court for the Northern District of New York, the court explicitly found that the plaintiff's attorney "admits that he was aware at the time that AI tools are known to 'hallucinate' or fabricate legal citations and quotations," but he felt pressured to rush the pleading due to an imminent deportation. Nevertheless, the court imposed a $1,000 fine and mandated CLE training on AI for the attorney, holding that the need to check whether AI-generated assertions and quotations are accurate trumps all other pressures.


Although much of the discussion around AI in law has tended to be around large-scale litigation or complex corporate law, many of the AI errors came from local disputes.


Some lawyers have expressed a reluctance to use GenAI tools unless they are 100% accurate. And to be sure, some tools are more accurate than others, especially those with access to more robust and trusted data sets. However, given that the technology works by predicting the next word in a sequence – generated legal citations included – by definition no GenAI tool will be accurate 100% of the time.
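To make that point concrete, here is a minimal, purely illustrative Python sketch of autoregressive generation; the toy vocabulary and probabilities are invented for the example and merely stand in for a trained model's learned distributions. The key observation: a citation-shaped string emerges from the same probability-driven loop as any other phrase, and nothing in the loop consults a case-law database.

```python
import random

# Toy next-token probability table standing in for a trained language model.
# Real models learn distributions like these from text; they do not look up citations.
NEXT_TOKEN_PROBS = {
    ("See",): {"Smith": 1.0},
    ("See", "Smith"): {"v.": 1.0},
    ("See", "Smith", "v."): {"Jones,": 1.0},
    # The reporter volume is *sampled*, not retrieved -- either branch reads
    # equally plausibly, and neither is checked against a real reporter.
    ("See", "Smith", "v.", "Jones,"): {"512": 0.5, "847": 0.5},
    ("See", "Smith", "v.", "Jones,", "512"): {"F.3d": 1.0},
    ("See", "Smith", "v.", "Jones,", "847"): {"F.3d": 1.0},
}

def generate(prompt_tokens, max_tokens=6):
    """Sample one token at a time from the (toy) learned distribution."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:  # no continuation learned; stop generating
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

# Prints a fluent, citation-shaped string such as "See Smith v. Jones, 512 F.3d";
# whether that authority actually exists is never verified anywhere in the loop.
print(generate(["See"]))
```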

At the same time, however, no associate or partner is 100% accurate either. The key to preventing errors has always been human intuition and a check of research results before any brief or document is submitted. This does not change whether research is done manually with books, electronically with historic research systems, or with modern research systems that use AI.

This is true not only from an operational lens but from an ethical one as well. A number of state bars, as well as the American Bar Association, have issued guidance around the proper use of GenAI by attorneys. However, much of this guidance simply reframes pre-existing ethical rules for an AI world. The need for competent representation (Model Rule 1.1), to communicate with clients (Rule 1.4), to keep information confidential (Rule 1.6), and more has not changed. Lawyers need to understand how AI fits into the frameworks that have always been a guiding light for proper lawyering.

More than 90% of legal professionals say they believe AI will be central to their workflow within the next five years, according to the GenAI Report, so the fear of hallucinations is unlikely to disappear any time soon. That means attorneys need to adapt to an AI future by figuring out how AI fits into their pre-existing research workflows and by preventing hallucinations from making their way into briefs or documents.
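What that workflow adjustment can look like in practice is a mandatory verification gate before filing. The sketch below is purely illustrative: the simplified citation regex and the verified-citations set are invented for the example, and in real practice the check means confirming each authority in a trusted research system, not a set-membership test.

```python
import re

# Simplified pattern for citation-shaped strings such as "512 F.3d 1034".
# Real citation formats are far more varied; this is for illustration only.
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]+\s+\d{1,4}\b")

# Stand-in for authorities a human has actually confirmed in a trusted
# research system before the draft goes anywhere near a court.
VERIFIED_CITATIONS = {"512 F.3d 1034"}

def flag_unverified(draft: str) -> list[str]:
    """Return every citation-shaped string in the draft not yet verified."""
    return [c for c in CITATION_RE.findall(draft) if c not in VERIFIED_CITATIONS]

draft = "See Smith v. Jones, 512 F.3d 1034; see also Roe v. Poe, 999 F.4th 123."
print(flag_unverified(draft))  # ['999 F.4th 123'] -- check it before filing
```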


Register now for The Emerging Technology and Generative AI Forum, a cutting-edge conference that will explore the latest advancements in GenAI and their potential to revolutionize legal and tax practices.

Preserving ethical business: What should corporations do during this period of perceived human rights de-prioritization? /en-us/posts/human-rights-crimes/preserving-ethical-business-human-rights-de-prioritization/ Tue, 29 Apr 2025 14:48:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=65722 In the first quarter of 2025, the administration of new President Donald J. Trump sharply cut US foreign aid; and in late February, the Trump administration paused enforcement of the Foreign Corrupt Practices Act (FCPA) for 180 days while the new U.S. Attorney General reviews existing FCPA actions and issues new enforcement guidance.

Both of these moves reinforce the perception of a global rollback in human rights, underscored by the European Union's moves to reduce corporate accountability in human rights due diligence.

"Corruption is an enabler of human rights violations, [and] the rollbacks reduce accountability for bribery," according to human rights experts Wong and Cobb of FTI Consulting. Indeed, a reduction in accountability could embolden companies and potentially increase human rights abuses, they explain.

Risks of relaxing FCPA compliance

Over the years, many multinational companies have invested significantly in developing robust internal compliance programs to adhere to FCPA requirements. Weakening these frameworks could lead companies to divert resources away from maintaining compliance, which could allow bad actors to exploit the reduced scrutiny and result in increased fraud, misconduct, and human rights abuses.


Join us for a free online webinar, World Day Against Trafficking in Persons, to learn more about the complexities of human trafficking, the impact on victims, and effective strategies for prevention and intervention


"While these rollbacks in the US may indicate a temporary decrease in regulatory pressure within [the country], it is essential for companies to recognize that global regulatory trends are moving towards greater corporate accountability," not less, say Wong and Cobb. US companies operating internationally must adhere to these emerging standards, and the pause on domestic FCPA enforcement does not eliminate companies' legal and reputational risks.

Wong and Cobb point out that FCPA enforcement has historically been cyclical, and companies reducing compliance efforts now might find themselves unprepared when enforcement resumes. Indeed, the statute of limitations for FCPA violations is five years for anti-bribery offenses and six years for accounting violations.

Recommendations for companies to navigate uncertainty

As businesses face a shifting regulatory landscape, navigating the path forward requires both immediate action and strategic foresight. The following guidance from Wong and Cobb offers a framework for maintaining ethical business practices and stakeholder confidence while adapting to evolving global standards.

In the short term, for instance, companies must adopt proactive strategies to prepare for the shifting landscape created by these rollbacks, including:

      • Monitoring global regulatory trends – Companies should actively track global regulatory developments to stay ahead of compliance requirements, even those that do not originate from the United States.
      • Engaging with stakeholders – It is crucial to maintain open communication with investors and stakeholders regarding ongoing anti-corruption and human rights commitments. This engagement ensures transparency and reinforces the company's dedication to ethical practices.

In addition, companies should ensure that they maintain a zero-tolerance policy for bribery and corruption, and they should keep open anonymous hotlines for reporting potential ethics violations in order to prevent the erosion of a culture of ethics, which often takes years of effort to build. Likewise, companies need to continue monitoring their third-party vendors, consultants, and suppliers because, over the past decade, about 90% of FCPA enforcement resolutions have involved third-party representatives or consultants engaged in corruption.

Meanwhile, Cobb and Wong also suggest that companies focus on aligning with international standards and best practices, because adhering to well-recognized international frameworks is crucial to remaining competitive. For example, the UN's Guiding Principles on Business and Human Rights offer a flexible approach to maintaining ethical practices around human rights, according to Wong. Likewise, Cobb suggests that companies voluntarily embrace the EU's Corporate Sustainability Due Diligence Directive and its Corporate Sustainability Reporting Directive, once the amendments are finalized, as robust options for compliance reporting.

Regardless of these rollbacks, the overarching recommendation is for companies to maintain robust corporate compliance and human rights risk management programs. This proactive approach not only prepares companies for potential regulatory changes but also positions them as leaders in ethical business practices on the global stage.

By continuing to prioritize compliance and human rights, companies can navigate the evolving regulatory landscape effectively, ensuring long-term business success and sustainability.


You can find more information on how organizations are managing their regulatory obligations here

Risk assessment and ethical guardrails: A how-to guide for courts to implement AI responsibly /en-us/posts/ai-in-courts/guide-ai-implementation/ Thu, 10 Apr 2025 16:25:45 +0000 https://blogs.thomsonreuters.com/en-us/?p=65481 According to the 成人VR视频 Future of Professionals 2024 report, more than three-quarters (77%) of professionals surveyed said they believe that AI will have a high or transformational impact on their work over the next five years.

What is interesting is that there has been a shift in sentiment towards AI: professionals have moved from their initial fears around using AI, or around AI eliminating legal industry jobs, to an increasingly optimistic view of AI as a transformative force in their professions.


Framework for responsible AI

Even with this trend of optimism towards AI, using AI responsibly is a critical ingredient for courts seeking to take advantage of the opportunities AI brings while mitigating risks and addressing the concerns of individuals. More specifically, the integration of AI into court systems demands a comprehensive ethical framework to ensure justice is served while public trust is maintained.


As Lam, Director for Responsible AI Strategic Engagements at 成人VR视频, emphasized in a webinar hosted by the 成人VR视频 Institute (TRI) and the National Center for State Courts (NCSC): "AI systems should be designed and used in a way by courts that promotes fairness and avoids discrimination" based on an ethical AI framework, which includes:

      • Privacy and security that serve as foundational elements, requiring robust systems with proper data protection measures. Courts must implement encryption, secure storage protocols, and establish rigorous access controls to safeguard the highly sensitive information they manage.
      • Transparency represents another critical pillar, combined with thorough testing and monitoring of AI tools and continuous communication with the communities they impact.
      • Human oversight stands as perhaps the most crucial element because it ensures that AI augments rather than replaces human judgment. “Human oversight of AI is vital to prevent bias,” particularly in contexts in which decisions impact individuals’ rights and liberties, Lam explains.
      • Societal impact means that courts must conduct impact assessments to understand the consequences for their constituents.

Assessing risk is a necessary part of any ethical framework

Implementing an ethical framework for AI in the judicial system is crucial to ensure that judges, court administrators, and legal professionals can use technology competently and ethically. To help with this process, the Governance & Ethics working group (created as part of the TRI-NCSC collaboration) recently published a white paper that essentially acts as a how-to guide for judges, court staff, and legal professionals as they seek to use AI responsibly in courts.

As the paper makes clear, the central premise of an ethical framework involves understanding the levels of risk associated with AI – from low to unacceptable – and then applying these insights to determine appropriate use cases.


Join us for the next TRI-NCSC webinar on April 16


Assessing risk and classifying impact into low, moderate, high, and unacceptable categories results in a structured framework for AI tools based on their application and the context in which they are used. For example, classifying risks through the lens of their impact would necessarily include the following categories (a short illustrative sketch in code follows the list):

Low-risk applications – These include predictive text and basic word processing, which require minimal human oversight, with a supervisor intervening only when necessary.

Moderate-risk applications – These include tools used for drafting legal opinions, which demand more direct human involvement to ensure accuracy and reliability.

High-risk applications – These can significantly impact human rights and necessitate stringent human oversight to prevent errors and biases.

Unacceptable-risk applications – Lastly, tools with unacceptable risks, such as those automating life-and-death decisions, should be avoided altogether.
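A hedged sketch of how such a tiered classification might be expressed in code appears below. The use-case-to-tier mapping and the oversight rules are hypothetical examples invented for illustration; a real court would build its own classification through a governance process, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "minimal oversight; supervisor intervenes only when necessary"
    MODERATE = "direct human review of every output before use"
    HIGH = "stringent, documented human oversight with sign-off"
    UNACCEPTABLE = "do not deploy"

# Hypothetical mapping for illustration only -- each court would classify
# its own use cases in context, as the working group's guidance suggests.
USE_CASE_TIERS = {
    "predictive text": RiskTier.LOW,
    "drafting legal opinions": RiskTier.MODERATE,
    "risk scoring affecting liberty": RiskTier.HIGH,
    "automating life-and-death decisions": RiskTier.UNACCEPTABLE,
}

def required_oversight(use_case: str) -> str:
    """Look up a use case's tier, treating unknown uses as high-risk by default."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case!r} is in the unacceptable tier and must be avoided")
    return f"{use_case}: {tier.name} risk -> {tier.value}"

print(required_oversight("drafting legal opinions"))
```

Defaulting unknown use cases to the high-risk tier in this sketch reflects the caution, discussed next, that even seemingly low-impact tools can create unanticipated harm.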

However, Linna, Director of Law and Technology Initiatives at Northwestern Pritzker School of Law & McCormick School of Engineering and a fellow member of the TRI-NCSC Governance & Ethics working group, cautions that risks need to be monitored no matter what level of application is being used.

"Even among tools that may seem low impact, there may be in your particular context usages where there could be impacts, errors that could actually create greater harm than may have been recognized early on," Linna explains. "So, when you're engaging in these discussions, you should be talking about, 'Well, how is this tool going to be used? What do we think its accuracy is going to be? And if it makes an error, is it going to be a low impact error?'"

Community engagement is key to responsible AI

The use of AI in courts presents both opportunities and challenges, and understanding both is crucial for responsible implementation. This is why community engagement is critical for maintaining public trust in the judicial system, particularly when implementing high-risk AI solutions.

By involving the community in the decision-making process and being transparent about responsible AI implementation, courts can ensure that the public understands the benefits and risks of AI solutions and can trust the judicial system to use these technologies responsibly.

As one member of the TRI-NCSC Governance & Ethics working group, who is responsible for technology and fundamental rights at Microsoft, explains: "We have an opportunity with use of AI in the courts to be leaders in the criminal justice system around transparency, community engagement, and ensuring that the community is part of the conversation." Indeed, the ethical concept of transparency reinforces the importance of building and maintaining the trust of court constituents, and courts must be open about their use of AI technology, especially in high-risk areas that impact individual liberties.

In addition, responsible implementation of AI helps to avoid other ethical pitfalls, such as:

Overreliance on AI systems – This is one of the most significant pitfalls, as it can lead to a lack of human oversight in critical decision-making areas. "We can't use these tools as if in a deterministic way," Linna cautions, adding that users need to realize that any AI-provided answer is simply "the output you got that one time that you tried it with that specific prompt." Having a human in the loop is important because AI tools based on probabilistic models do not always yield consistent results (a distinction illustrated in the sketch after this list).

Privacy issues – Breaches of privacy are a critical concern for courts. Indeed, sensitive legal data must be protected to maintain public trust and meet ethical obligations.

Presence of biases – Further, AI can amplify existing biases if not carefully managed, leading to unfair outcomes. Understanding the difference between probabilistic and deterministic tools is therefore essential for judicial professionals to use AI effectively while safeguarding justice and fairness.
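The probabilistic-versus-deterministic distinction is easy to demonstrate. The sketch below is purely illustrative, with a toy answer distribution invented for the example: the deterministic lookup returns the same answer on every call, while the sampled "model" can return different answers for an identical prompt.

```python
import random

def deterministic_lookup(query: str) -> str:
    # A database-style tool: same input, same output, every time.
    table = {"filing deadline": "30 days"}
    return table.get(query, "not found")

def probabilistic_model(query: str) -> str:
    # A language-model-style tool: the answer is sampled from a distribution,
    # so repeated calls with the identical prompt can disagree.
    candidates = ["30 days", "21 days", "60 days"]
    weights = [0.7, 0.2, 0.1]  # plausible-sounding answers, not verified facts
    return random.choices(candidates, weights=weights)[0]

query = "filing deadline"
print([deterministic_lookup(query) for _ in range(3)])  # always ['30 days', '30 days', '30 days']
print([probabilistic_model(query) for _ in range(3)])   # may vary from run to run
```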

Looking ahead at the promise of AI in courts

As the judicial system continues to evolve, AI can help bridge gaps for self-represented litigants, provide language support, and ensure faster case resolutions, all while maintaining the integrity and fairness that underpin the legal system. However, it is crucial for courts to approach AI integration with caution, ensuring that ethical frameworks guide its use in order to prevent bias, protect privacy, and uphold public trust.

"Judges and court administrators should be empowered to use technology competently and consistently with ethical obligations to best serve the public," says the Administrative Director of the Courts of the Idaho Supreme Court, a TRI-NCSC AI Governance & Ethics working group member. "The checklist provided as part of the white paper is a practical tool that can be used to ensure those who work in the courts are meeting this goal."

And this also helps to ensure AI is a tool to advance justice while safeguarding the fundamental rights of individuals.


You can find more about how courts are using AI-driven technology here
