Thomson Reuters Institute — a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. https://blogs.thomsonreuters.com/en-us/topic/thomson-reuters/

Pattern, proof & rights: How AI is reshaping criminal justice
/en-us/posts/ai-in-courts/ai-reshapes-criminal-justice/ — Fri, 10 Apr 2026

Key insights:

      • AI’s greatest strength in criminal justice is pattern recognition — AI can process vast amounts of data quickly, helping law enforcement and legal professionals detect connections, reduce oversight gaps, and improve consistency across investigations and casework.

      • AI should strengthen justice, not substitute for human judgment — Legal professionals are integral to evaluating AI-generated outputs, especially when decisions affect evidence, warrants, and individuals’ constitutional rights.

      • The most effective model is human/AI collaboration — AI handles scale and speed, while judges, attorneys, and investigators provide the context, accountability, and ethical reasoning needed to protect due process.


The law has always been about patterns — patterns of behavior, patterns of evidence, and patterns of justice. Now, courts and law enforcement can leverage a tool powerful enough to see those patterns at a scale and speed no human mind could match: AI.

At its core, AI works by recognizing patterns. Rather than simply matching keywords, it learns from large amounts of existing text to understand meaning and context and uses that learning to make predictions about what comes next. In the context of law enforcement, that capability is nothing short of transformative.

These themes were front and center in a recent webinar from a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI). The webinar brought together voices from across the justice system, and what emerged was a clear and consistent message: AI is a powerful ally in the pursuit of justice, but only when paired with the judgment, accountability, and constitutional grounding that human professionals can provide.

AI’s pattern recognition is a gamechanger

“AI is excellent,” said Mark Cheatham, Chief of Police in Acworth, Georgia, during the webinar. “It is better than anyone else in your office at recognizing patterns. No doubt about it. It is the smartest, most capable employee that you have.”

That kind of capability, applied to the demands of modern policing, investigation, and prosecution, is a genuine gamechanger. However, the promise of AI extends far beyond the patrol car or the precinct. Indeed, it cascades through the entire arc of justice — from the moment a crime is detected all the way through prosecution and adjudication.

Each step in that chain represents not just an operational and efficiency upgrade, but an opportunity to make the system more fair, more consistent, and more protective of the rights of everyone involved.

Webinar participants considered the practical implications. For example, AI can identify and mitigate human error in decision-making, promoting greater consistency and fairness in outcomes across cases. And by automating labor-intensive tasks such as reviewing body camera footage, AI frees prosecutors and defense attorneys to focus on other aspects of their work that demand professional judgment and legal expertise.

In legal education, the potential of AI is similarly recognized. Hon. Eric DuBois of the 9th Judicial Circuit Court in Florida emphasizes its role as a tool rather than a substitute. “I encourage the law students to use AI as a starting point,” Judge DuBois explained. “But it’s not going to replace us. You’ve got to put the work in, you’ve got to put the effort in.”


AI can never replace the detective, the prosecutor, the judge, or the defense attorney; however, it can work alongside them, handling the volume and velocity of data that no human team could process alone.


Judge DuBois’ perspective aligns with broader judicial sentiment on the responsible integration of AI. In fact, one consistent theme across the webinar was the necessity of maintaining human oversight. The role of the legal professional remains central, participants stressed, because human involvement ensures accuracy, accountability, and ethical judgment. Placing human expertise appropriately within AI-assisted processes is essential to a fair and effective legal system.

That balance between leveraging AI and preserving human judgment is not just good practice; it’s a cornerstone of justice. While Chief Cheatham praises AI’s pattern recognition, he also cautions that it “will call in sick, frequently and unexpectedly.” In other words, AI is a powerful but imperfect tool, and the professionals who rely on it must always be prepared to intervene when it falls short. Moreover, the technology is improving extremely rapidly, and the models we are using today will likely be the worst models we ever use.

Naturally, that readiness is especially critical when individuals’ rights are on the line. “A human cannot just rely on that machine,” said Joyce King, Deputy State’s Attorney for Frederick County in Maryland. “You need a warrant to open that cyber tip separately, to get human eyes on that for confirmation, that we cannot rely on the machine.” Clearly, as the webinar explained, AI does not replace constitutional obligations; rather, it operates within them, and the professionals who use AI are still the guardians of due process.

The human/AI partnership is where justice is served

Bob Rhodes, Chief Technology Officer for Thomson Reuters Special Services (TRSS), echoed that sentiment with a principle that cuts across every application of AI in the justice system. “The number one thing … is a human should always be in the loop to verify what the systems are giving them,” Rhodes said.

This is not a limitation of AI; instead, it’s the design of a system that works. AI identifies the patterns, and trained, experienced professionals evaluate them, act on them, and are accountable for them.

That partnership is where the real opportunity lives. AI can never replace the detective, the prosecutor, the judge, or the defense attorney. However, it can work alongside them, handling the volume and velocity of data that no human team could process alone. That means the humans in the room can focus on what they do best: applying judgment, upholding the law, and protecting an individual’s rights.

For judicial and law enforcement professionals, this is the moment to lean in. The patterns are there, the technology to read them is here, and the opportunity to use both in service of rights — not against them — has never been greater.


Please add your voice to Thomson Reuters’ flagship survey, a global study exploring how the professional landscape continues to change.

Helping the legal profession get AI-ready: A new advisory board takes shape
/en-us/posts/legal/ai-advisory-board/ — Thu, 26 Mar 2026

Key insights:

      • AI is already reshaping the legal profession — AI is already embedded in lawyers’ day-to-day work: a significant share of both law firm attorneys and in-house legal teams actively use GenAI tools, and many expect it to become central to their work within the next five years.

      • AIFLP Advisory Board was formed to prepare lawyers for an AI-reshaped profession — TRI convened 21 respected leaders from legal education, private practice, the judiciary, and AI ethics and governance to help ensure lawyers and law students are prepared for a profession reshaped by AI.

      • Human judgment remains central in an AI-enabled legal future — Becoming AI-ready is not simply about learning to use new tools; the Advisory Board emphasizes that strengthening irreplaceable human capabilities is critical.


In today’s tech-driven environment, AI is no longer a future concept for the legal profession — it’s already here, and it’s changing how lawyers work, learn, and serve clients. Recognizing just how fast the evolution is moving, the Thomson Reuters Institute (TRI) has launched the AI and the Future of Legal Practice (AIFLP) Advisory Board, bringing together a group of respected leaders from across the legal ecosystem to help guide what comes next.

The board includes 21 accomplished voices from legal education, private practice, the judiciary, and AI ethics and governance. Their shared goal is simple but ambitious: Help ensure that both today鈥檚 lawyers and tomorrow鈥檚 law students are prepared for a profession being reshaped by AI.

Why now?

Because the shift is already underway. According to TRI’s recent 2026 AI in Professional Services Report, 41% of law firm attorneys say their organizations are already using some form of generative AI (GenAI), and nearly half of those at corporate legal departments report that AI tools are being rolled out there too. Even more telling, most professionals said they expect GenAI to become central to their day-to-day work within the next five years.

That pace of change raises big questions about competence, ethics, education, risk, and access to justice. And those questions don’t have easy answers.

What the Advisory Board will focus on

The AIFLP Advisory Board is designed to tackle those challenges head-on. Its work will center on four key areas that are already under pressure as AI adoption accelerates:

      • Legal education and talent development
      • Ethics, professional competence, and accountability
      • Governance, risk management, and client counseling
      • Access to justice and modern service delivery

The Advisory Board’s early focus areas will look at how AI is actually changing legal practice today, what future-ready lawyers really need to know, and how legal education and real-world practice can better align. The emphasis is not just on using AI tools, but on strengthening the human skills that matter most, such as sound judgment, critical thinking, and careful verification of AI-generated outputs.

Shaping the future, not reacting to it

Citing the critical need for this Advisory Board’s creation, Mike Abbott, Head of the Thomson Reuters Institute, notes that the legal profession is at a crossroads, and it can either react to AI-driven disruption or take an active role in shaping how these technologies are used to support lawyers, courts, and the public.

“By assembling a board of distinguished leaders, our goal is to help practicing lawyers and the lawyers of the future navigate a rapidly evolving landscape,” Abbott said. “Ensuring that legal education strengthens irreplaceable skills such as critical thinking, human judgment and effective communication helps make AI use safe and effective. The Board’s efforts will ultimately help shape a future-ready profession, leading to better outcomes for all.”

Meet the AIFLP Advisory Board Members

By convening experienced leaders from across the profession, TRI hopes to help lawyers navigate this landscape with confidence. Advisory Board Members include:

      • Michael Abbott, Head of the Thomson Reuters Institute
      • Soledad Atienza, Dean, IE Law School
      • The Honorable Jennifer D. Bailey, (Ret.), Partner, Bass Law
      • Benjamin Barros, Dean, Stetson University College of Law
      • Professor Sara J. Berman, University of Southern California, Gould School of Law
      • Megan Carpenter, Dean Emeritus, University of New Hampshire Franklin Pierce School of Law
      • Ronald S. Flagg, President, Legal Services Corporation
      • Donna Haddad, AI Ethics and Governance expert, and founding member, IBM AI Ethics Board
      • Johanna Kalb, Dean and Professor of Law, University of San Francisco School of Law
      • The Honorable Nelly Khouzam, Florida Second District Court of Appeal
      • The Honorable William Koch, Dean, Nashville School of Law, and former Tennessee Supreme Court Justice
      • Sheldon Krantz, retired partner, DLA Piper, and a founder, DC Affordable Law Firm
      • Stefanie A. Lindquist, Dean, School of Law, Washington University in St. Louis
      • The Honorable Mark Martin, Founding Dean and Professor of Law, Kenneth F. Kahn School of Law at High Point University, and former Chief Justice, Supreme Court of North Carolina
      • Caitlin (Cat) Moon, Professor of the Practice and founding co-director, Vanderbilt AI Law Lab, Vanderbilt Law School
      • Hari Osofsky, Myra and James Bradwell Professor and former Dean, Northwestern Pritzker School of Law; Founding Director, Northwestern University Energy Innovation Lab; and Founding Director, Rule of Law Global Academic Partnership
      • Joanna Penn, Chief Transformation Officer, Husch Blackwell
      • The Honorable Morris Silberman, Florida Second District Court of Appeal
      • The Honorable Samuel A. Thumma, Arizona Court of Appeals, Division One
      • Mark Wasserman, Partner and CEO Emeritus, Eversheds Sutherland
      • Donna E. Young, Founding Dean, Lincoln Alexander School of Law, Toronto Metropolitan University

What’s next?

The Advisory Board held its first meeting in February and will meet quarterly going forward. As the work progresses, TRI plans to publish research findings, best practices, and practical recommendations for legal educators, law firms, and courts.

In a profession built on precedent and careful reasoning, the rise of AI presents both opportunity and responsibility. The AIFLP Advisory Board is an effort to make sure the legal community meets that moment thoughtfully and on its own terms.


You can learn more about the impact of advanced technology on the legal profession here.

The efficiency imperative: AI as a tool for improving the way lawyers practice
/en-us/posts/ai-in-courts/improving-lawyers-practice/ — Wed, 18 Mar 2026

Key insights:

      • AI brings improved efficiency — AI accelerates tasks like document review and research, freeing lawyers to pursue more high-value work for clients.

      • AI does the work of a team of lawyers — AI levels the playing field for small law firms and solo practitioners by providing additional capacity without adding headcount, thereby allowing fewer lawyers to do the work of many.

      • Yet, AI still needs guardrails — Lawyers must remain accountable, with human oversight and review to ensure that AI outputs are accurate, thereby preserving nuance and professional judgment.


Already, AI is no longer a theoretical concept for legal professionals, nor is it a nice-to-have for law firms seeking to impress their clients with improved efficiency and cost savings. The practical question now becomes how to adopt AI in ways that improve lawyers’ speed and capacity without compromising accuracy, confidentiality, or professional judgment.

The strongest near-term value shows up where modern practice is most strained: high-volume inputs and relentless timelines. In that environment, AI can be most helpful as an accelerant for the first pass through large bodies of material.

The possibilities, opportunities, and challenges of using AI in this way were discussed by a panel of experts in a recent webinar from a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

One panelist, Mark Francis, a partner at Holland & Knight, described one way that AI can be an enormous help. “Anything where we’re dealing with large volume of materials that need to be reviewed [such as] large sets of documents, large sets of legal research, large sets of discovery. Obviously, AI can be leveraged in all of those circumstances.” That framing is important because it anchors AI’s utility in a familiar workflow: review, triage, and synthesis at scale.

AI also has a role earlier in the workflow than many attorneys expect. In addition to sorting and summarizing, it can help generate starting structures. For lawyers drafting motions, client advisories, demand letters, contract markups, or internal investigations memos, the hardest step can be getting traction from a blank page. “It’s really good at content or idea generation,” Francis said, adding that lawyers can ask AI to “generate some ideas for me on this topic, or generate an outline of a document to cover a particular issue.”


“AI is definitely going to benefit some of the small law firms who cannot actually afford the workforce. AI can be an extension when it comes to the automation.”


Of course, that does not mean letting an AI model decide what the law is; rather, it means using AI to produce an initial outline, identify possible issues to consider, or propose alternate ways to organize an argument. Then, the attorney should apply their own judgment to accept, reject, refine, and verify the AI’s output.

For legal teams, the ideal mindset is that AI can compress the time between intake and a workable first draft, whether that draft is a research plan, a deposition outline, a set of contract fallback positions, or a motion framework. However, speed is only valuable if it facilitates careful lawyering, not just taking shortcuts.

Efficiency that scales down, not just up

AI’s impact is not limited to large law firms with dedicated tech & innovation budgets. In fact, the benefits may be most transformative for smaller legal organizations that feel every hour of administrative drag and every unstaffed matter. Panelist Ashwini Jarral, a Strategic Advisor at IGIS, underscores how broad the current level of AI adoption already is. “AI is already being used in a lot of legal research, contract analysis, and in office operations,” Jarral explained. “Whether that’s in a small law firm or a large law firm, everybody can benefit from that automation with this AI.”

For many practices, that list maps directly onto the work that consumes lawyers’ time without always adding commensurate value: repetitive research steps, first-pass contract review, intake and scheduling, matter administration, and other operational tasks.

Historically, scale favored organizations that could hire more associates, paralegals, and support staff to push volume through the pipeline. Now, AI offers a different form of leverage: additional capacity without adding headcount. “It is definitely going to also benefit some of the small law firms who cannot actually afford the workforce,” Jarral said, adding that “AI can be an extension when it comes to the automation.” For a solo or small firm, that extension can show up as faster first-pass review of contracts, quicker summarization of records, more consistent intake workflows, and reduced time spent on repetitive back-office tasks.

At the same time, it is crucial to be clear-eyed about what is being automated. While AI can help deliver efficiency, it does not offer legal judgment itself. The legal profession still must decide, matter by matter, what level of review is required and what risks are acceptable.


“Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”


And that’s where implementation discipline becomes a strategic differentiator. Law firms that treat AI as a general-purpose shortcut tend to create risk, while firms that treat AI as a workflow component, with guardrails, review steps, and clear accountability, are more likely to capture value without compromising quality.

The non-negotiable: lawyers remain accountable

Any serious conversation about AI in legal practice must address these limits, panelists agreed. The Hon. Linda Kevins, a Justice on the Supreme Court in the 10th Judicial District of New York (Suffolk County), offered the most direct articulation of the boundary line: “Lawyers are trained a certain way, and AI is never going to be trained that way. AI misses nuances. We’re always going to need lawyers; we’re always going to need the human in the loop.”

Indeed, legal work is saturated with nuance. The same set of facts can carry different weight depending on jurisdiction, judge, forum, procedural posture, and the client’s goals and risk tolerance. Even when the law is clear, the right action often is not. To strive for true justice requires judgment about timing, framing, business consequences, reputational risk, and settlement dynamics. Those are not merely inputs for an AI to process — they are human decisions that define legal representation.

As the webinar made clear, this is the point at which responsible use becomes practical, not abstract. If AI is used for research support, contract analysis, or document review, lawyers need an explicit approach for verification and oversight. The outputs may look polished and sound confident; however, confidence is not accuracy, and professional responsibility does not shift to a vendor or an AI model. Human review is not a ceremonial or perfunctory formality. Rather, it is the core control that protects clients and the court, and it is the inflection point that turns AI from a novelty into a defensible tool.

In practice, keeping a human in the loop means deciding where AI can assist and where it cannot. It also means reserving an attorney’s time for the decisions that carry legal and ethical consequences and building repeatable habits that prevent teams from drifting into overreliance on AI, especially under deadline pressure.

The legal profession can capture real benefits from AI, including speed, scalability, and improved access, but only if it adopts the technology in a way that preserves what Justice Kevins highlighted: training, nuance, and human accountability.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here.

New data reveals AI governance gap between policy and practice, creating ESG risks
/en-us/posts/sustainability/ai-governance-gap-esg-risks/ — Mon, 23 Feb 2026

Key highlights:

      • The governance-implementation gap is alarming — While nearly half of companies have AI strategies and 71% include ethical principles, a massive disconnect in execution persists.

      • AI governance is now a material investor risk — AI disclosure among S&P 500 companies jumped to 72% in 2025 from 12% in 2023, and investors are treating AI governance as a critical factor in overall corporate governance.

      • Regional disparities signal competitive risks — European, Middle Eastern, and African companies are leading in AI governance (driven by regulatory pressure), while only 38% of US companies have published AI policies despite being innovation leaders.


A new analysis of 1,000 companies indicates a significant gap between the speed at which businesses are embracing AI and their preparedness to govern it effectively. These findings from the AICDI, which offers a panoramic view across 13 sectors, are a wake-up call for every CEO, board member, and investor.

Indeed, nearly half (48%) of the companies sampled disclosed that they had AI strategies or guidelines in place, yet significant transparency gaps related to the environmental, social and governance (ESG) impacts of AI adoption remain.

When “ethical” principles lack substance

It is encouraging to see that 71% of companies with an AI strategy include AI principles invoking concepts such as ethical, safe, or trustworthy, because this signals an awareness of the critical conversations happening around responsible AI. However, the AICDI data reveals a significant gap between stated principles and actual practice. More specifically:

      • Environmental blind spots — A staggering 97% of companies failed to consider the environmental impact of their AI systems, such as energy consumption and carbon footprint, when making deployment decisions. As AI models grow in complexity and scale, their energy demands will only increase. In addition, investors are likely to adopt green AI as a non-negotiable concept in the future.
      • Narrow social lens could open up reputational issues — More than two-thirds (68%) of companies with AI strategies did not adequately assess the broader societal implications of their AI technologies. Failure to understand and mitigate potential negative impacts on communities, vulnerable populations, or democratic processes is a recipe for reputational damage and legal challenges on the full spectrum of the human side of AI. Indeed, investors are growing more sophisticated in their understanding of these systemic risks.
      • Governance on paper and not in practice — While 76% of companies with an AI strategy reported management-level oversight, only 41% made their AI policies accessible to employees or required their acknowledgement. That means these policies are just words on paper if they are not understood, embraced, and actively practiced by those on the front lines of AI development and deployment. This gap in governance can lead to inconsistencies, unforeseen risks, and a fundamental breakdown in trust, both internally and externally.

Gaps in AI governance exist across regions and sectors

The AICDI data reveals fascinating regional and sectoral differences as well. For instance, companies in Europe, the Middle East, and Africa are generally ahead in publishing AI policies and establishing dedicated AI governance teams — action that is likely driven by the European Union’s looming AI Act. This highlights the proactive stance some regions are taking and offers a glimpse into what might become a global standard.

Despite the United States being a hub for AI innovation, only 38% of companies in the Americas published an AI policy. This discrepancy suggests a potential future competitive disadvantage for those lagging in governance.

Not surprisingly, sectors also varied in corporate oversight of AI initiatives. Financial, communication services, and information technology firms were more likely to have responsible AI teams than companies in energy and materials. This makes sense given their direct engagement with data and often consumer-facing AI applications, but it again points to a broader need for cross-sectoral AI governance best practices.

How companies can meet investor expectations

AI has rapidly become a mainstream enterprise risk. Fully 72% of S&P 500 companies disclosed at least one material AI risk in 2025, up from just 12% in 2023, according to the Harvard Law School Forum on Corporate Governance.

To attract and retain investor confidence, companies need to take concrete steps, including:

      1. Conducting a comprehensive AI audit — Companies need a thorough understanding of where AI is currently deployed across their products, operations, and services. The AICDI offers a resource to help with this, which allows companies to evaluate current AI governance maturity and benchmark themselves against peers.
      2. Establishing robust, transparent, and accessible AI governance frameworks — Companies need to move beyond vague principles by developing clear, actionable policies that address environmental impact, societal implications, data privacy, fairness, and accountability. Critically, these policies must be accessible to all employees, and their acknowledgement should be a requirement. Training and continuous education are paramount in order to embed these principles into daily operations.
      3. Proactively disclosing AI governance practices — Companies should seek to anticipate investors’ concerns by incorporating specific disclosures on AI oversight mechanisms, transparency measures (including environmental and risk assessments), and how they’re preparing for evolving regulatory landscapes. Companies that showcase their commitment to responsible AI as a strategic advantage will gain stakeholder trust.
      4. Embracing industry standards and collaboration — By using global frameworks, such as the one that grounds the AICDI’s work, companies can strengthen standardization efforts. They should also participate in collaborative efforts and industry forums to share best practices and collectively raise the bar for responsible AI.
      5. Comparing your performance with peers — Companies can benchmark their responses against sector and regional peers. Also, they need to identify leaders and laggards to understand where a company stands and where it needs to improve. AI is an evolving field, and therefore, corporate AI governance frameworks must evolve as well — and the key ingredient for this is responsible innovation.

By any measure, AI is transforming our world; however, its benefits will only be fully realized if companies prioritize its responsible governance. For investors, AI governance is fast becoming a material risk and opportunity. And for companies, it’s no longer an option but rather a strategic imperative that can go a long way toward building trust, mitigating risks, and securing a sustainable future.


You can learn more about the corporate foundation of Thomson Reuters here.

When courts meet GenAI: Guiding self-represented litigants through the AI maze
/en-us/posts/ai-in-courts/guiding-self-represented-litigants/ — Thu, 19 Feb 2026

Key insights:

      • Considering courts’ approach — Although many courts do not interact with litigants prior to filings, courts can explore how to help court staff discuss AI use with litigants.

      • Risk of generic AI tools — AI use in legal settings can’t be simply categorized as safe or risky; jurisdiction, timing, and procedure are vital factors, making generic AI tools unreliable for court-specific needs.

      • Specialty AI tools require testing — Purpose-built court AI tools offer a safer alternative for litigants, yet these require development and extensive testing.


Self-represented litigants have always pieced together legal help from whatever sources they can access. Now that AI is part of that mix, courts are working to help people use this advanced technology responsibly without implying an endorsement of any particular tool or even the use of AI.

Many litigants cannot afford an attorney; others may distrust the representation they have or may not know where to begin. In any case, people need a meaningful way to interact with the legal system. Used carefully and responsibly, AI can support access to justice by helping self-represented litigants understand their options, organize information, and draft documents, while still requiring litigants to verify their information and consult official court rules and resources.

These issues were discussed in a recent webinar. The panel explored the potential benefits of AI for access to justice and the operational challenges of integrating AI into public-facing guidance for litigants.

The problem with “Just ask AI”

Angela Tripp of the Legal Services Corporation noted that people handling legal matters on their own have long relied on a mix of resources, “some of which were designed for that purpose, and some of which were not.” AI is simply a new tool in that environment, she added. The primary challenge is that court processes are rule-based and time-sensitive, and a mistake can mean missing a deadline, submitting the wrong document, or misunderstanding a requirement that affects the case.

Access to justice also requires more than just access to information in general. Court users need information that is relevant, complete, accurate, and up to date. Generic AI systems, such as most public-facing tools trained on broad internet text, may not reliably deliver that level of specificity for a particular court, case type, or stage of a proceeding. Jurisdiction, timing, and procedure all matter. Unfortunately, AI can omit key steps or emphasize the wrong issues, and self-represented litigants may not have the legal experience to recognize what is missing.

At the same time, AI offers several potential benefits to self-represented litigants. It can explain concepts in plain language, help users structure a narrative, and produce a first draft faster than many people can on their own. The challenge is aligning those strengths with the precision that court processes demand.

A strategic pivot: from teaching litigants to equipping staff

In the webinar, Stacey Marz, Administrative Director of the Alaska Court System, described her team's early efforts to give self-represented litigants clear guidance about safer and riskier uses of AI, including examples of how to properly prompt generative AI queries.

The team tried to create traffic-light categories to simplify decision-making but found the approach very challenging despite several drafts. AI use can shift from low-risk to high-risk depending on context, and it was hard to provide examples without sounding like the court was endorsing a tool or sending people down a path for which the court could not guarantee results.

The group ultimately shifted to a more practical approach 鈥 training the people who already help litigants. The new guidance targets public-facing staff such as clerks, librarians, and self-help center workers. Instead of teaching litigants how to prompt AI, it equips staff to have informed, consistent conversations when litigants bring AI-generated drafts or AI-based questions to the counter.

The framework emphasizes acknowledgment without endorsement. It suggests language such as:

“Many people are exploring AI tools right now. I’m happy to talk with you about how they may or may not fit with court requirements.”

From there, staff can explain why court filings require extra caution and direct users to court-specific resources.

This approach also assumes good faith. A flawed filing is often a sincere attempt to comply, and a litigant may not realize that an AI output is incomplete or incorrect.

Purpose-built tools take time

The webinar also discussed how courts are exploring purpose-built AI tools, including judicial chatbots designed around court procedures and grounded in verified information. Done well, these tools can reduce common problems associated with generic AI systems, such as jurisdiction mismatch, outdated requirements, or fabricated or hallucinated citations.

However, building reliable court-facing AI demands significant time and testing. Marz shared Alaska’s experience, noting that what the team expected to take three months took more than a year because of extensive refinement and evaluation. The reason is straightforward: Court guidance must be highly accurate, and errors can materially harm someone’s legal interests. In fact, even after careful testing, Alaska still included cautionary language, recognizing that no system can guarantee perfect answers in every situation.

The path forward

Legal Services' Tripp highlighted a central risk: Modern AI tools can be clear, confident, and easy to trust, which can lead people to over-rely on them. Courts have to strike a balance here. They are not trying to prevent AI use; rather, many are working toward realistic norms that treat AI as a drafting and organizing aid but require litigants to verify claims against official court sources and seek human support when possible.

Marz also emphasized that courts should generally assume filings reflect a litigant’s best effort, including in those cases in which AI contributed to confusion. The goal is education and correction rather than punishment, especially for people navigating complex processes without representation.

Some observers describe this moment as an early AOL phase of AI, akin to the very early days of the World Wide Web – widely used, evolving quickly, and uneven in its reliability. That reality makes clear guidance and consistent messaging more important, not less.

This shift among courts from teaching litigants to use AI to teaching court staff and other helpers how to talk to litigants about AI reflects a practical effort on the part of courts to reduce the risk of harm while expanding access to understandable information.

As is becoming clearer every day, AI can make legal processes feel more navigable by helping self-represented litigants draft, summarize, and prepare. For courts to realize that value, however, requires clear guardrails, court-specific verification, and careful implementation, especially when a missed detail can change the outcome of a case.


You can find out more about how AI and other advanced technologies are impacting best practices in courts and administration here

]]>
Invisible no more: Confronting the missing and murdered Indigenous women crisis in Canada /en-us/posts/human-rights-crimes/indigenous-women-crisis-canada/ Thu, 16 Oct 2025 15:53:42 +0000 https://blogs.thomsonreuters.com/en-us/?p=68043

Important highlights:

    • Technology-enhanced tools needed – Key recommendations in the fight against these disappearances include establishing a national database for Indigenous disappearances, using facial recognition technology to match missing persons with sex ads, and leveraging data analysis to identify patterns.

    • Disproportionate impact driven by systemic factors – Though Indigenous peoples are about 5% of Canada's population, about half of women and girls trafficked are Indigenous.

    • Geographic patterns and cross-border links – Urban hotspots show concentrated disappearances and trafficking activity, with evidence of connections between Canadian and US sex ads.


The crisis of missing and murdered Indigenous women in Canada represents an urgent human rights concern, with Indigenous women disproportionately affected by violence and exploitation. This issue, often obscured by geographical and societal barriers, demands the attention and action of governments and law enforcement.

Research completed by Thomson Reuters in late July illuminates the alarming intersection between missing and murdered Indigenous women and human trafficking. These insights are captured in a report titled Missing and Stolen: Disappearance and Trafficking of Indigenous Peoples in Canada. Findings in the report shed light on the systemic factors that contribute to these tragedies, and the report offers actionable recommendations to address and prevent further injustices against potential victims in Canada.

Examining disappearances and trafficking activities

Missing and murdered Indigenous women and girls are overrepresented in cases of violence and trafficking, the report shows. Indigenous peoples (First Nations, Inuit, and Métis) comprise roughly 5% of Canada's total population; despite this low figure, however, the 2014 National Task Force on Sex Trafficking of Women and Girls in Canada found that 51% of women and 50% of girls trafficked in Canada were Indigenous.

Systemic factors, including a history of sexual abuse, also contribute to the crisis. Other adverse childhood experiences are likewise disproportionately prevalent among Indigenous communities in Canada, and these childhood abuse experiences contribute to heightened vulnerability to gender-based violence, sexual exploitation, and human trafficking into adulthood.

Further, these systemic issues are compounded by previous experience with the child welfare system, which continues to disrupt Indigenous family structures. Although Indigenous children represent only about 8% of the population under the age of 15, they accounted for a disproportionate share of children in care as of 2021. Research also shows that many survivors of sexual exploitation and trafficking have prior involvement with the child welfare system.

Sex ads point to cross-border activity

By analyzing data from reported Indigenous disappearances and sex ads, the study identified urban areas as hotspots in which these issues are most prevalent. Notably, cities such as Vancouver and Edmonton and the Windsor-Toronto-Ottawa corridor emerge as key centers of disappearances and trafficking. Edmonton is also a point of interest because of its high Indigenous population and relative remoteness compared with other hotspots.

Additionally, the study highlights the cross-border nature of trafficking, with connections between Canadian sex ads and those in the United States. This tracks with Canada's general population demographics, in which much of the population lives within driving distance of the US border. Indeed, many of the ads examined in urban areas along the border involved cross-border connections.

Recommended actions

To address the crisis of missing and murdered Indigenous women and human trafficking, several key actions are recommended for government agencies and law enforcement, including:

Consolidate reporting into a central repository – Establishing a national database for Indigenous disappearances is crucial for improving the speed and effectiveness of investigations.

Use advanced technology and data analysis – Likewise, using advanced technology to integrate and analyze data on missing and murdered Indigenous women and comparing it with sex ad data using facial recognition technology could help to quickly identify and locate missing individuals featured in sex ads. Technology could also be used to identify potential victims by homing in on specific terms used in ads, although this is tricky: Ads may falsely state ethnicity due to prejudices against Indigenous peoples, and some ads mislabel individuals to avoid devaluation or risk. At the same time, some ads used derogatory terms and specific tribal affiliations associated with demand from sex buyers.

Put a face on the data – It is easy to see how the stories of these women and girls get lost as data points. This is why it is important to amplify the stories of survivors and build awareness of the problem. Behind each data point is a person's and a family's heartbreak, pain, and loss; those stories should be emphasized and disseminated.

Prioritize investigative resources in known epicenters and across borders – Investigations should focus on hotspots in which significant patterns of disappearances and sex trafficking have been identified.

Addressing the crisis of missing and murdered Indigenous women and sex trafficking is of paramount importance. Policymakers, communities, and individuals must unite to support these recommended actions to help ensure that every effort is made to prevent future tragedies and uphold the rights and dignity of Indigenous peoples.


You can find more about the ongoing fight against sex trafficking here

]]>
2025 Emerging Technology and Generative AI Forum: Human creativity and feedback drive ethical AI adoption /en-us/posts/technology/emerging-technology-generative-ai-forum-ethical-ai-adoption/ Tue, 30 Sep 2025 14:45:38 +0000 https://blogs.thomsonreuters.com/en-us/?p=67743

Key takeaways:

      • Embrace value, risk, and execution – for good and bad – Professional services firms must weigh the value of AI applications against potential risks, embracing both successes and failures as learning opportunities to improve responsible adoption.

      • Ethical oversight is everyone's responsibility – Ensuring responsible AI use in professional services requires active participation from all members of an organization, not just legal or IT teams.

      • Human creativity and feedback remain essential – While AI can generate ideas and accelerate processes, human judgment, creativity, and continuous feedback provide the proper pathways for ethical decision-making and successful integration.


AUSTIN, Texas – With the professional services world now squarely in the AI era, it's clear that the speed of business is quicker than ever. Clients expect results in hours or even minutes rather than days, and generating documents can happen at the click of a button. Ask a research question, and a machine can intuit what you're looking for with striking accuracy.

Alongside these business changes, however, it's clear that the ethics of technology usage within professional services is shifting just as quickly. “Every time you come and do a talk with a group of people, within four weeks if not sooner, it's changed,” says Betsy Greytok, Associate General Counsel in Responsible Technology at IBM. “So, it really does require you to keep on your toes.”

Ensuring that AI is used responsibly is even more critical within professional services than in other professions, given the ethical and regulatory constraints placed on legal, tax, audit & accounting, financial services and risk, and more. During a recent session, A Unified Field: Ethical Considerations amid AI Development and Deployment, at the Thomson Reuters Institute's 2025 Emerging Technology and Generative AI Forum, panelists described an ethical landscape that should be tackled as a challenge rather than shied away from as an unsolvable risk.

Or, as Paige L. Fults, Head of School at the AI-centric Alpha School & 2-Hour Learning program, put it: “Not being afraid of replacement, but leaning into repurpose.”

Embracing success – and failure

John Dubois, the Americas AI Strategy Leader at Big 4 consultancy Ernst & Young, says he regularly gets questions from customers about AI and how they should use it, given that new AI applications arise seemingly every day. “The way we describe it is a balance,” Dubois explains. “Let's start with value. If we know there's value in something, then we can figure out the risk behind it, then we can figure out how we can execute.”

Just as importantly, however, this focus on value, risk, and execution can also aid professional services firms when an AI plan fails. For example, Dubois cites an MIT report from August 2025 that showed , often because of flawed integration. Embracing the value, risk, and execution strategy from the beginning not only allows for better chances of success, but even in the event of failure, “we actually have a better shot at mitigating, when it does fall down.”

This sort of planning is not limited to just one group, Dubois says, noting that ethical oversight is seen as a key responsibility of everyone in the organization. He explains that E&Y has an internal implementation of OpenAI that has 150,000 distinct users each month. Because of an internal process called SCORE that removes customer data at the source, E&Y's instance of OpenAI is largely clear of customer data – but it's still not perfect.

E&Y has set a culture so that if someone sees proprietary data when using GenAI to develop a proposal or create a PowerPoint, they not only delete the data before use, but work to scrub it from the system entirely. “It is all of our job to ensure that whatever you're putting into that system or extracting out of that system, you're cleansing,” Dubois says. “It's not the job of the general counsel, or the risk team, or the IT team, it's all of our job.”


When it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity.


IBM's Greytok agreed, noting that she's part of an internal review board that examines major AI-related projects for ethical issues. There is a board review at the beginning of the development process to determine how risky a use case is, and then the system will give a response, considerations, and steps. If there is an issue, the board is empowered to stop development, even on a major project.

She drew an analogy to writing a paper in high school, in which there is a marked difference between simply turning in the paper, proofreading your own work, and asking a friend for peer review feedback. “That's what you want, is that disagreement, because that's critical thinking.”

She adds: “The researchers sometimes get so excited about what they've discovered that they forget to look at the other side of what can happen. You should want that. You shouldn't be punished for saying, ‘Is this the right thing or not?’”

The importance of feedback

Fults says that at the Alpha School, AI is baked into the very core of the curriculum. Students spend just two hours a day on academics, led by AI tools and supplemented by offline learning on a variety of subjects from in-person instructors who fill in the gaps that AI is not able to provide.

It's a revolutionary concept but not a static one. Fults notes that “the two-hour learning model has already changed so much since I've been part of the school,” and the instructors have a Slack channel for suggesting improvements that receives hundreds of messages a day.

It's through this marrying of human intuition and the possibilities of the technology that Fults says she believes the school has found success and used AI ethically within education. “Even though we have this tool, the human levers, the motivational levers that are happening day to day, actually make it work,” she says, insisting that she “can't just hand [the technology] to any school” without the corresponding processes in place.

Dubois and Greytok also call feedback a crucial part of overcoming AI barriers. Dubois tells the story of a large retailer that bought satellite images to determine footfall within a store. Shoppers, however, felt that was a privacy risk, and the idea was almost scrapped. Then the legal and IT teams worked together to come up with an alternative: Could they track clothing, but not faces, to get the same information about where shoppers were going within the store?

“It's a creative workaround to get us to the same thing,” Dubois explains. “When you have a constraint, what's a clever way to work around this so we're not taking a brand risk or a compliance risk?”

Indeed, when it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity. AI can provide information and context more rapidly than ever before, but ultimately, professionals themselves will be the ones relied upon to make sure AI is used ethically and responsibly.

“AI is an idea generator,” Greytok says. “The solution comes from the human.”


You can find out more about how emerging technologies are impacting professional services here

]]>
Cultivating practice readiness: New report highlights need for radical change in law school and bar admissions /en-us/posts/government/lawyer-readiness/ Thu, 07 Aug 2025 01:47:42 +0000 https://blogs.thomsonreuters.com/en-us/?p=67079

Key highlights:

      • Education and licensing misalignment – Legal education and attorney licensing are misaligned with the real-world skills and practical competencies new lawyers need to serve clients and address the nation's growing access to justice crisis.

      • Strong support for licensing reform – There is strong momentum and support for reforming traditional pathways to legal licensure, according to research conducted by a body of chief justices and state court administrators.

      • Change will require leadership – Lasting, systemic change requires leadership and collaboration among state supreme courts, law schools, bar examiners, and the practicing bar.


For decades, cracks have widened in the nation's promise of justice for all, with millions of people every year unable to find or afford legal help when they need it most. As the legal system in the United States faces a reckoning, one outline for change has emerged with a recently released report from CLEAR, a body of chief justices and court administrators from a variety of states across the country. (CLEAR cited support from the Thomson Reuters Institute in the production of the report.)

The CLEAR group is calling for a radical change in how lawyers are taught and licensed. The report cites several factors driving the need for reform, including:

Increases in legal deserts and self-represented litigants – Judges in courtrooms across the country routinely see self-represented litigants, while so-called legal deserts, especially in rural areas, leave entire communities with few or no attorneys at all. Indeed, according to the American Bar Association, many US counties are considered legal deserts, with fewer than one lawyer per 1,000 people. As a result, most litigants are left to navigate a complex court system with inadequate or no legal assistance in family, probate and estate, housing, consumer, and criminal matters.

Declining interest in public sector work – The public interest sector, which includes civil legal aid, public defenders, and prosecutors, is buckling under the weight of crushing caseloads, stagnant federal and state funding, and a persistent shortage of lawyers. Indeed, students face numerous barriers to pursuing a career in public interest law, according to the CLEAR report, from less predictable career paths as compared to private practice, to a perceived lack of prestige in many schools, to the prospect of managing educational loans on a public interest lawyer's salary.

Rapid technology changes – Compounding these challenges, advanced technology and especially AI are rapidly reshaping the legal profession. This, in part, is eroding early-career learning opportunities that are essential for skill development, because AI – which excels in tasks like legal research, writing, and drafting – now handles work that historically was assigned to associates and was a big part of how they learned their craft.

Defining practice readiness and minimum competence

Against this backdrop, the CLEAR report calls for overhauling how law schools educate attorneys and how bar admissions assess attorney readiness. More specifically, the report recommends a sharper, modern definition of practice readiness – one that clearly delineates the blend of knowledge, skills, and professional abilities that new lawyers must possess to competently serve clients from day one across four essential pillars. These pillars are i) foundational legal knowledge and analytical skills; ii) strong ethics and professionalism; iii) durable communication and interpersonal abilities; and iv) practical legal skills like advocacy, negotiation, and client management.

For the report, CLEAR surveyed more than 4,000 judges, 4,000 attorneys, and 600 law students; the committee's findings consistently reveal that new lawyers struggle with practical legal skills, which include effective client communication, negotiation, and courtroom advocacy, in addition to 17 other skills.

Feedback from survey participants points to the fact that these skills, which are crucial for the daily realities of legal practice, are largely not taught in law schools. For example, only 7% of experienced attorneys with more than five years of practice report that newly admitted attorneys, most of whom are right out of law school, were very well or extremely well prepared to communicate effectively with clients. Likewise, 61% of experienced attorneys said new lawyers were not well prepared or only slightly well prepared in negotiation, and 55% said the same about questioning and interviewing witnesses.

In addition, 66% of judges say that new attorneys in their first five years of practice sometimes, rarely, or never competently conducted direct and cross examinations.

New pathways to licensure beyond the bar exam

Meanwhile, an additional insight from the CLEAR report highlights how the bar exam continues to focus heavily on theoretical knowledge and memorization, rather than the practical, day-to-day skills that define minimum competence. At the same time, the is more focused on foundational skills, including legal research, legal writing, and issue-spotting and analysis.

To address the dissatisfaction with the traditional bar exam, some states have been piloting innovative licensure pathways that better align with the skills new lawyers need. Such approaches include curricular pathways, such as in the in New Hampshire, and at the University of Wisconsin's law school. Other methods are supervised practice models, such as in Oregon's , , and temporary pandemic-era alternatives that provided graduates with the ability to prove their competence under the guidance of experienced attorneys.

Top recommendations for state supreme courts

The CLEAR group advocates for state supreme courts, as the profession's primary regulators, to lead and foster innovation in licensure and practice readiness. The report urges state supreme courts to take such actions as:

Lead collaborative efforts to realign legal education, bar admissions, and new lawyers' readiness with public needs – State supreme courts are uniquely well-positioned to lead efforts to create a legal system that better addresses the legal needs of the communities they serve.

Encourage law school accreditation that serves the public – State supreme courts should encourage an accreditation process that promotes innovation, experimentation, and cost-effective legal education geared toward the goal of having lawyers meet the legal needs of the public.

Reform bar admissions processes to better meet public needs – This reform includes adjusting bar admission by setting passing scores based on evidence and piloting alternative pathways to passing the exam or an equivalent assessment.

To put CLEAR's recommendations for state supreme courts into practice, however, bold, coordinated action by law school administrators and the American Bar Association (as the accreditor of law schools) is critical as well. In particular, there is a need to expand experiential learning, such as clinics, externships, and simulation courses, to help students gain meaningful, hands-on experience and direct responsibility with clients. Another necessary reform is aligning curricula with the realities of practice by integrating practical skills, ethics, and professional identity formation throughout, rather than relegating those factors to optional or add-on courses.

Legal education and licensing must rapidly evolve to meet the nation's urgent access-to-justice challenges, the CLEAR report notes. Law schools and state supreme courts must work together with renewed urgency and vision to lead this transformation. Failure to act by both law schools and courts means the justice gap in the US will only widen. Only with urgent, collaborative innovation to enact these changes can the legal profession deliver on the promise of justice for all in the decades to come.


You can access the full report here

]]>
Bridging the AI gap: How professionals can turn awareness into action /en-us/posts/technology/bridging-ai-gap/ Thu, 24 Jul 2025 14:02:28 +0000 https://blogs.thomsonreuters.com/en-us/?p=66850

Key findings:

    • There's a gap between AI awareness and understanding – The recent 2025 Future of Professionals Report shows that while 96% of professionals have at least a basic awareness of AI, 71% lack a strong understanding of its practical applications. This gap limits professional services organizations from truly maximizing their investment in AI tools.

    • AI strategy drives both professional development and ROI – Professionals with good or expert AI knowledge were found to be 2.8 times as likely to see organizational benefits from AI compared with those with lesser knowledge. Similarly, companies and firms with a visible, top-down AI strategy are 3.5 times as likely to see positive returns on investment from AI. Both findings show a clear benefit to aligning skills training with overall organizational AI strategy.

    • Identifying and addressing AI-related skills gaps is essential – Nearly half of professionals said they see skills gaps within their teams, including both the technical and soft skills needed for successful AI adoption. Leaders who identify these gaps and tailor AI training to specific team needs will maximize the benefits of AI in the workplace.


In the nearly three years since ChatGPT introduced generative AI (GenAI) to a wide public audience, AI applications have increasingly been making their way into business tools and workflows. Whether AI, GenAI, or (increasingly) agentic AI, professionals in the legal, tax & accounting, and government industries have been introduced to new AI concepts at a dizzying speed.

As many professionals try to keep up, they are understandably having trouble staying ahead of the pace of change. According to results from the recent Thomson Reuters 2025 Future of Professionals Report, which examines the trends impacting professionals' careers, most professionals at this point know what AI can do. Where they're struggling, however, is in taking the next step: determining how those use cases apply to them. This gap is even more pronounced among more senior members of many organizations, the report shows.

Although many professionals now have access to these next-generation tools, it's clear that despite their best efforts, some don't quite know how to apply AI, GenAI, and other related technologies to full advantage. For senior leaders of organizations, this means a change in approach is needed – and that may mean crafting an overarching AI strategy that allows professionals to achieve real goals that also help the organization at large.

A gap between awareness and understanding

The idea that professional services organizations are behind the times with regard to technology may be an antiquated one. Law firms of all sizes, for instance, have continued to invest in technology at a rate above inflation for the past decade, according to the Thomson Reuters Institute's Law Firm Financial Index. Studies in tax & accounting and government have yielded similar results. Further, interest in GenAI has amplified, with many large organizations adopting GenAI technologies and even beginning to build their own proprietary systems.

With this in mind, it's not surprising that the Future of Professionals Report found that 96% of surveyed professionals said they have some basic awareness of AI capabilities. AI has been baked into the systems that underpin daily work product and back-office functions at a rapid pace.

However, when asked whether they have an understanding of AI's practical applications, rather than simply awareness, many professionals begin to falter. In fact, 71% said they do not have a good understanding of the practical applications of AI to their own careers. This percentage is even higher among Baby Boomers, who due to seniority are more likely to hold positions of leadership.


There are a number of reasons why this gap may have occurred, according to the research. For one, less than half (39%) of all professionals say they have personal goals linked to AI adoption, creating less of an impetus to set aside precious time to discover the practical applications of these tools. Some professionals also reported that they do not feel they have input into AI policy, or do not feel encouraged to explore new ways of working, particularly at more junior levels.

The business implications of this are clear. The research found that knowledge of AI's applications correlates directly with receiving benefits from AI's use in the organization: Professionals with good or expert AI knowledge were found to be 2.8 times as likely to see organizational benefits from AI compared with those with lesser knowledge.

The evolution of the modern professional

Given the rapid rate of AI adoption, it's no surprise that corporations and firms alike are increasingly looking to develop more business strategy around AI usage. And indeed, the Future of Professionals research indicates that organizations with a visible AI strategy are 3.5 times as likely to be experiencing at least one form of positive return on investment from overall AI usage.

So, how does that top-down strategy filter down to legal, tax & accounting, and government professionals themselves? According to the research, the discrepancy between AI awareness and AI understanding is not a matter of desire. Professionals want to be upskilled in this area. In fact, more than three-quarters of professionals said they are voluntarily reading reports and articles about AI in their industry, and more than two-thirds said they've voluntarily experimented with AI tools or held informal learning sessions with their colleagues.


Yet the difference between awareness and understanding persists, even with these increased opportunities for learning. According to the research, no single way of learning best closes this gap. Instead, the biggest predictor of AI proficiency is engaging in a wide variety of learning methods, at both the organizational and personal level. Put another way, what works is a plan for comprehensive training and education, rather than a single training session or module.

This clearly indicates that organizational leaders need to take an active role in developing a more comprehensive strategy to convert awareness into understanding – and to map AI understanding to the skills their teams need to grow.


Almost half of professionals reported skills gaps within their teams that need to be addressed before the team can become a fully actualized contributor to the organization. In many cases, these gaps may involve technology or data skills, including the ability to use technologies such as GenAI. In other cases, however, there may be gaps in softer skills – areas that touch technology but are not inherently technical – such as organizational and efficiency skills, interpersonal effectiveness, and higher-order thinking.

Closing the gap between AI awareness and AI understanding will not look the same for every person and every team. The most effective leaders will be those who take the time to identify where those gaps exist and determine the specific use cases in which AI can be leveraged to address those deficiencies. As the 2025 Future of Professionals Report shows, taking this time can yield tangible results – both in getting the most out of these new technologies and in helping professionals reach their true potential in an AI-enabled future.


You can download a copy of the Thomson Reuters Future of Professionals 2025 Report here

Future-proofing the message: Leveraging corporate sustainability strategies and communication (Fri, 18 Jul 2025)

Key takeaways:

      • Frame sustainability as future-proofing the business – Corporate leaders should characterize sustainability investments this way to better communicate their value and importance to stakeholders.

      • Strong governance enables clear sustainability messaging – Effective board oversight and governance can help companies maintain internal clarity and emphasize their commitment to sustainability.

      • Prioritize present action over future ambitions – Focusing on current sustainability actions and progress, rather than long-term promises alone, can help corporate leaders build credibility and trust with stakeholders.


Sustainability leaders find themselves at a crossroads in a volatile landscape. While the urgency for climate action and responsible business has never been greater, the external environment is rife with uncertainty, politicization, and hostility. Indeed, the challenge for corporate leaders is how they can keep internal momentum, communicate with credibility, and maintain resilience in the face of skepticism and shifting regulatory winds.

At Reuters Events' recent Responsible Business USA 2025 conference, sustainability professionals came to learn how their peers are approaching sustainability action and corporate communications during this tumultuous time. Community played a big part in the learning, as attendees were organized into buddy groups categorized by their primary learning objectives, such as how best to communicate with stakeholders with varying interests or how to navigate changing regulatory and compliance rules.

Across the board, attendees learned the essential tenets of effective sustainability action and messaging. Indeed, a key insight heard multiple times from the event's speakers was the success of characterizing sustainability investments as future-proofing the business in an environment in which the only certainty is uncertainty.

Elements for sustainability messaging & engagement

Achieving clear and impactful sustainability messaging, coupled with genuine engagement, necessitates a strategic approach grounded in several fundamental elements, including:

Rethinking sustainability to focus on how it secures future performance – By aligning communication and action to withstand external shocks – be they political, regulatory, or reputational – leaders can take the first step in future-proofing company operations. This lies at the heart of strategic sustainability activities, and it starts by reinforcing sustainability's connection to the company's core purpose and ensuring that every team member understands why these actions are being taken. Indeed, in the words of one speaker: “Gaining buy-in is easier when it is closely tied to purpose.” If a sustainability activity does not tie into the company's purpose, it is time to rethink it.

To put this into practice, leaders should convey a consistent internal message that sustainability is not a passing trend but rather a vital strategy for long-term value and risk management. As one executive noted: “Clients are willing to pay for future-proofing and resilience.”

This future-ready mindset also means that leaders should seek to build agility and adaptability into their companies' operations. And today, given the current politicized atmosphere, companies face a challenge in operating in a “volatile and even polarized” environment, said Jennifer Duran of Kenvue, adding that this only underscores the need for “value protection” and a “resilience-building program.”

Enabling internal clarity through strong governance – In the words of one executive: “Strong governance is the foundation for steadfast commitment to sustainability.” Clear messaging is easier when there is effective board oversight and strong governance with clearly defined roles and responsibilities, from the C-Suite down to individual contributors.

When the external conversation grows noisy or hostile, internal clarity – from the board, the C-Suite, and operational managers – becomes the organization's shield. As boards grapple with key issues, sustainability is an effective strategic lens, and during these debates cost and return on investment (ROI) are often major components. That said, several conference speakers highlighted another ROI – the risk of inaction – on which chief sustainability officers must consistently keep their boards focused.

Building trust through data, transparency & accountability – Robust, actionable data is the foundation of credible sustainability communication. Stakeholders expect transparency not just on companies' successes, but also on their challenges and setbacks. “It is important to keep every stakeholder on the same page and invite them to engage more,” said Dave Stangis of Apollo Global Management.

Internally, sustainability is a team sport. “Getting people on board and keeping them on board” is the key to embedding sustainability across the organization, said Estee Lauder's Al Iannuzzi. For example, consistent efforts to collect data from data owners, while reminding them of the important role that data plays, are key to operationalizing sustainability data for transparent and accountable reporting.

However, the biggest data challenge, according to several speakers, is the reliability of data coming from the supply chain, particularly from partners based overseas. While there is no magic pill for this problem, embedding data-sharing requirements in vendor agreements is a useful way of operationalizing this area of data collection.

Engagement & communication actions in hostile times

Sustainability executives shared their best lessons learned to ensure their corporate sustainability strategies remain funded and move forward during this tumultuous time, including:

Prioritize action now over ambitions in the future – In an era of skepticism, ambitious long-term promises, such as 2050 net-zero targets, can sound hollow because of the long time frame. Effective sustainability messaging conveys the urgency of now, because stakeholders – whether employees, customers, or regulators – want to know what the company is doing today.

Executives from pharmaceutical giant Novartis and tech heavyweight Ericsson highlighted the power of storytelling that's rooted in current action. The key message from both companies was: “Don't focus on 2050; communicate what you are doing now.”

Urging “actions over commitments,” Sonya Gafsi Oblisk of Whole Foods Market echoed this attitude as well: “We can impact change and lead change every day, and small actions across the stakeholder board is the way to get there.”

Institute audience-centric, authentic messaging – Authenticity and transparency, rooted in the specific needs and context of each audience, are non-negotiable in both effective engagement and sustainability messaging. When speaking with investors, framing sustainability risks as business issues is crucial. Mindy Lubber framed the challenge succinctly: “Climate issues, water issues are business issues – climate change is a fundamental risk to our economy.”

Establish strength in numbers for collaboration & advocacy – Success in sustainability communications in a politicized environment is sometimes achieved through strength in numbers. Indeed, industry coalitions and trade associations offer credibility in a hostile political environment. “We have to collaborate, and we need to make coalitions,” said Gina McCarthy, former White House climate advisor. “That is how change works.” Likewise, working together on standards, advocacy, and best-practice sharing not only amplifies the message but also provides a buffer against sector-specific backlash, other attendees said.

Communication as a tool for resilience

Insights from Reuters Events' Responsible Business USA 2025 conference made it clear that framing sustainability through the lens of resiliency is now mission-critical for sustainability leaders. By anchoring messaging in purpose, focusing on present action, and collaborating broadly, companies can weather potential backlash while building lasting value.

“If you're not adopting change, you are succumbing to it,” Kenvue's Duran explained, adding that sustainability leaders should let their communication be a tool for resilience, not retreat, in order to keep pushing forward, together, toward a sustainable future.


You can find more information in our Sustainability Resource Center here
