Workflow Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/workflow/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

The 4 Plates: Are you measuring the real value of AI in your legal department?
/en-us/posts/corporates/4-plates-measuring-efficiency/
Wed, 01 Apr 2026 13:15:21 +0000

Key takeaways:

      • Efficiency is a means, not an end: Gains from AI only count when you can show what they enabled, whether better advice, stronger protection, or smarter business support.

      • Narrow measurement invites cuts: Legal departments that measure AI value only through cost savings are telling C-Suites that legal costs less, thereby inviting budget and headcount reductions.

      • Measure across all four plates: A framework that captures effectiveness, risk, and enablement alongside efficiency is what shifts perception of the legal department from cost center to strategic asset.


Your legal department has invested in AI tools, adoption is growing, your team is saving time on routine work and, by most accounts, work operations are running faster. Then your CFO asks a simple question: What has AI delivered for the legal department?

If your answer centers on hours saved and cost reduced, you are not alone. However, you may be leaving your most important value story untold. And in a climate in which legal departments are under more scrutiny than ever to demonstrate the full return on their AI investment, that gap matters.

This is the fourth and final part of our series on the "Four Spinning Plates" model, which frames the GC's evolving responsibilities as:

      1. delivering effective advice
      2. operating efficiently
      3. protecting the business, and
      4. enabling strategic ambitions.

This article focuses on the Efficient plate and specifically on the risk of letting it do too much of the talking.


The Efficient plate under pressure

For a GC, making the best use of what are often limited resources is a constant pressure. The Efficient plate sits alongside, not above, the other three plates and must always be kept spinning. Right now, however, for many in-house legal teams the Efficient plate is receiving disproportionate attention, and for understandable reasons.

AI adoption in corporate legal departments is accelerating quickly. According to the Thomson Reuters Institute's AI in Professional Services Report 2026, nearly half (47%) of corporate legal respondents surveyed said their department has already integrated generative AI (GenAI) into their work, more than double the figure from the previous year. A further 18% reported that they're already using agentic AI, with more than half expecting agentic AI to be central to their workflow within the next two years.

GCs are genuinely excited about what this makes possible. As one GC said in the survey that underpinned the AI in Professional Services Report: "It presents the promise of getting out of low-value work and into higher-value work that supports the business." Another described their vision of a legal department that is "boldly digital-first, relentlessly innovative, and tightly woven into business priorities."

Clearly, the opportunity is real, but so is the risk of measuring it badly.

The measurement trap

Our 2026 research found that only one-quarter of legal departments are currently measuring the ROI of their AI tools. That alone is striking given the pace of adoption, but the follow-up finding is where the real problem lies: of those departments that are measuring ROI, 80% are tracking it in terms of internal cost savings.

Reducing external spend, automating high-volume processes, and bringing more work in-house are all legitimate efficiency gains and worth reporting, of course. However, when cost reduction becomes the only story being told, two things can happen. Your C-Suite learns to associate your department's value with how little it costs, a frame that is very difficult to escape once it's established. And the wider value that efficiency enables, in terms of sharper risk identification, faster business support, and higher-quality advice, goes unmeasured and therefore unrecognized.


If your metrics only capture time saved and cost reduced, and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end.


Think about what GCs themselves say they want from AI. As several GCs said in the survey, they're hoping AI will provide them with "better output on more meaningful tasks," "proactive, strategic insight," and a way of "getting out of low-value work." These are not efficiency outcomes, per se; rather, they are effectiveness, protection, and enablement outcomes, made possible by improved efficiency.

So, if your metrics only capture the input (time saved, cost reduced) and not what that freed-up capacity actually delivered, you are measuring the means and ignoring the end. This is the efficiency trap: measuring the plate so narrowly that it starts to work against you.

Reframing how you measure efficiency

Measuring efficiency well does not mean measuring it more. It means measuring it differently, and always in relation to the business you support. A few principles worth applying include:

Present spend in a business context: Legal spend as a percentage of company revenue tells a more credible story than a raw cost figure. It scales with the business and can be benchmarked meaningfully against peers.

Show what technology investment actually delivered: Time saved through automation is a useful starting point, but the stronger case is what the team did with that time. Tracking the shift from routine to strategic work over time is a far more compelling ROI story.

Connect efficiency gains to business outcomes: An efficiency gain that enabled a faster product launch, prevented a compliance risk, or improved stakeholder satisfaction has a value that no cost metric will capture. Build those connections explicitly into how you report the value of the legal department to the C-Suite.
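To make the first two principles concrete, here is a minimal sketch of how a legal operations team might compute these ratios from quarterly figures. Every number, label, and field name below is hypothetical, invented purely for illustration, and not drawn from the report.

```python
# Hypothetical quarterly figures, illustrating two metrics: legal spend as a
# percentage of company revenue, and the share of hours spent on strategic work.
quarters = [
    # (label, legal_spend_usd, company_revenue_usd, routine_hours, strategic_hours)
    ("Q1 2025", 2_100_000, 180_000_000, 6_200, 1_400),
    ("Q4 2025", 1_950_000, 210_000_000, 4_100, 2_900),
]

for label, spend, revenue, routine, strategic in quarters:
    # Spend in a business context: scales with revenue, benchmarkable vs. peers
    spend_pct = 100 * spend / revenue
    # The routine-to-strategic shift: what the freed-up capacity delivered
    strategic_share = 100 * strategic / (routine + strategic)
    print(f"{label}: legal spend {spend_pct:.2f}% of revenue, "
          f"strategic work {strategic_share:.1f}% of hours")
```

Tracked this way, a quarter in which raw spend barely moved can still show a clear story: spend falling relative to revenue while the strategic share of the team's hours rises.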

New resources to help

To support GCs in getting this right, the Thomson Reuters Institute has added two new resources to its Value Alignment Toolkit that directly address this measurement gap.

The Metrics Library brings together more than 100 metrics organized across all four spinning plates. It is a practical starting point for GCs to browse, select, and adapt to the specific goals of their departments, making it easier to build a measurement framework that reflects everything departments do, not just the part that appears in a budget line.

The AI Success Metrics guide addresses the AI measurement gap head-on with a best practice guide and a hands-on worksheet designed specifically for legal departments navigating AI adoption and asking: How do we actually know whether this is working? It looks beyond cost savings to capture the fuller picture of AI value including quality, capacity, strategic contribution, and risk.

Getting the balance right

In today's environment, every GC needs to consider their answer when their C-Suite asks what the legal department delivers. Are your department's metrics giving them the full answer or just the part that's easiest to count?

Efficiency is not the enemy of strategic value. A department that runs well, uses its resources wisely, and embraces technology thoughtfully can in turn create the conditions for everything else the business needs from its legal function. However, that case only lands if your metrics measure across all four plates, not just one.


You can explore the new Metrics Library and AI Success Metrics guide, along with the Thomson Reuters Institute's full Value Alignment Toolkit, here.

Helping the legal profession get AI-ready: A new advisory board takes shape
/en-us/posts/legal/ai-advisory-board/
Thu, 26 Mar 2026 11:31:32 +0000

Key insights:

      • AI is already reshaping the legal profession: AI is already embedded in lawyers' day-to-day legal work, with a significant share of both law firm attorneys and in-house legal teams actively using GenAI tools, and many expect it to become central to their work within the next five years.

      • AIFLP Advisory Board was formed to prepare lawyers for an AI-reshaped profession: TRI convened 21 respected leaders from legal education, private practice, the judiciary, and AI ethics and governance to help ensure lawyers and law students are prepared for a profession reshaped by AI.

      • Human judgment remains central in an AI-enabled legal future: Becoming AI-ready is not simply about learning to use new tools; the Advisory Board emphasizes that strengthening irreplaceable human capabilities is critical.


In today's tech-driven environment, AI is no longer a future concept for the legal profession; it's already here, and it's changing how lawyers work, learn, and serve clients. Recognizing just how fast this evolution is moving, the Thomson Reuters Institute (TRI) has launched the AI and the Future of Legal Practice (AIFLP) Advisory Board, bringing together a group of respected leaders from across the legal ecosystem to help guide what comes next.

The board includes 21 accomplished voices from legal education, private practice, the judiciary, and AI ethics and governance. Their shared goal is simple but ambitious: Help ensure that both today's lawyers and tomorrow's law students are prepared for a profession being reshaped by AI.

Why now?

Because the shift is already underway. According to TRI's recent 2026 AI in Professional Services Report, 41% of law firm attorneys say their organizations are already using some form of generative AI (GenAI), and nearly half of those at corporate legal departments report that AI tools are being rolled out there too. Even more telling, most professionals said they expect GenAI to become central to their day-to-day work within the next five years.

That pace of change raises big questions about competence, ethics, education, risk, and access to justice. And those questions don't have easy answers.

What the Advisory Board will focus on

The AIFLP Advisory Board is designed to tackle those challenges head-on. Its work will center on four key areas that are already under pressure as AI adoption accelerates:

      • Legal education and talent development
      • Ethics, professional competence, and accountability
      • Governance, risk management, and client counseling
      • Access to justice and modern service delivery

The Advisory Board's early focus areas will look at how AI is actually changing legal practice today, what future-ready lawyers really need to know, and how legal education and real-world practice can better align. The emphasis is not just on using AI tools, but on strengthening the human skills that matter most, such as sound judgment, critical thinking, and careful verification of AI-generated outputs.

Shaping the future, not reacting to it

Citing the critical need for this Advisory Board's creation, Mike Abbott, Head of the Thomson Reuters Institute, notes that the legal profession is at a crossroads, and it can either react to AI-driven disruption or take an active role in shaping how these technologies are used to support lawyers, courts, and the public.

"By assembling a board of distinguished leaders, our goal is to help practicing lawyers and the lawyers of the future navigate a rapidly evolving landscape," Abbott said. "Ensuring that legal education strengthens irreplaceable skills such as critical thinking, human judgment and effective communication helps make AI use safe and effective. The Board's efforts will ultimately help shape a future-ready profession, leading to better outcomes for all."

Meet the AIFLP Advisory Board Members

By convening experienced leaders from across the profession, TRI hopes to help lawyers navigate this landscape with confidence. Advisory Board Members include:

      • Michael Abbott, Head of the Thomson Reuters Institute
      • Soledad Atienza, Dean, IE Law School
      • The Honorable Jennifer D. Bailey, (Ret.), Partner, Bass Law
      • Benjamin Barros, Dean, Stetson University College of Law
      • Professor Sara J. Berman, University of Southern California, Gould School of Law
      • Megan Carpenter, Dean Emeritus, University of New Hampshire Franklin Pierce School of Law
      • Ronald S. Flagg, President, Legal Services Corporation
      • Donna Haddad, AI Ethics and Governance expert, and founding member, IBM AI Ethics Board
      • Johanna Kalb, Dean and Professor of Law, University of San Francisco School of Law
      • The Honorable Nelly Khouzam, Florida Second District Court of Appeal
      • The Honorable William Koch, Dean, Nashville School of Law, and former Tennessee Supreme Court Justice
      • Sheldon Krantz, retired partner, DLA Piper, and a founder, DC Affordable Law Firm
      • Stefanie A. Lindquist, Dean, School of Law, Washington University in St. Louis
      • The Honorable Mark Martin, Founding Dean and Professor of Law, Kenneth F. Kahn School of Law at High Point University, and former Chief Justice, Supreme Court of North Carolina
      • Caitlin (Cat) Moon, Professor of the Practice and founding co-director, Vanderbilt AI Law Lab, Vanderbilt Law School
      • Hari Osofsky, Myra and James Bradwell Professor and former Dean, Northwestern Pritzker School of Law; Founding Director, Northwestern University Energy Innovation Lab; and Founding Director, Rule of Law Global Academic Partnership
      • Joanna Penn, Chief Transformation Officer, Husch Blackwell
      • The Honorable Morris Silberman, Florida Second District Court of Appeal
      • The Honorable Samuel A. Thumma, Arizona Court of Appeals, Division One
      • Mark Wasserman, Partner and CEO Emeritus, Eversheds Sutherland
      • Donna E. Young, Founding Dean, Lincoln Alexander School of Law, Toronto Metropolitan University

What's next?

The Advisory Board held its first meeting in February and will meet quarterly going forward. As the work progresses, TRI plans to publish research findings, best practices, and practical recommendations for legal educators, law firms, and courts.

In a profession built on precedent and careful reasoning, the rise of AI presents both opportunity and responsibility. The AIFLP Advisory Board is an effort to make sure the legal community meets that moment thoughtfully and on its own terms.


You can learn more about the impact of advanced technology on the legal profession here.

2026 State of the Corporate Law Department Report: GCs align strategy to corporate imperatives, but C-Suites want more
/en-us/posts/corporates/state-of-the-corporate-law-department-report-2026/
Tue, 24 Mar 2026 12:09:01 +0000

Key takeaways:

      • Disconnect between legal departments and C-Suite perceptions: While many general counsel believe their departments are significant contributors to business success, most C-Suite executives do not share this view. Fully 86% of GCs say they believe their department is a significant contributor, but only 17% of C-Suite executives agree.

      • A need to find new ways to demonstrate value: Legal departments are under increasing pressure to do more with less, as nearly half of GCs surveyed cite staffing and resource constraints as their top barrier to delivering additional value. Despite these limitations, expectations from the C-Suite continue to rise.

      • AI adoption accelerates, business strategy comes next: Legal departments are rapidly embracing technology to improve efficiency, manage resources, and address cost pressures. Not surprisingly, the proportion of GCs calling AI a strategic imperative has doubled.


Over the past several years, general counsel and corporate law departments at large have transformed their operations. Many have become more efficient enterprises, leveraging technology, in particular AI, at an increased pace. GCs have adjusted their hiring practices to conform with the modern corporation, taking new ways of working into account. And they have embraced data-driven decision-making, evaluating outside counsel and their own operations alike with a wider suite of new metrics and KPIs.

But do you know who hasn't yet realized the fruits of that labor? The corporate C-Suite.


The 2026 State of the Corporate Law Department Report, released today by the Thomson Reuters Institute, reveals a disconnect between how GCs and their corporate law departments view their own alignment to the wider business, and what C-Suite executives believe the legal department contributes. Within this gap, the message is clear: GCs not only need to align with their organizations' overall business strategy, they need to learn how to prove that alignment to the rest of the company.

Indeed, when asked how they view legal's contribution to the rest of the business, 86% of GCs surveyed said they viewed the legal function as a significant contributor. However, only 17% of other C-Suite executives said the same, and 42% said legal contributes little or not at all.


As the report explains, this disconnect lays the groundwork for the tension facing many GCs today. While they are increasingly aiming to align with business standards, the rest of the organization is not recognizing those actions. Instead, many C-Suites are looking for even more from today's legal departments to prove their contributions to organizations' business imperatives.

As in past years, many in-house legal departments are being tasked to do more with less. Nearly half of GCs cited staffing and resource constraints as the top barrier they face to delivering additional value. Indeed, many said they expected outside counsel spend in some key areas, such as regulatory work and mergers & acquisitions, to remain high. As of the fourth quarter of 2025, more than one-third (36%) of GCs said they expect to increase overall spend on outside counsel over the next year, while only 20% said they plan to decrease their spend.


Despite legal departments’ gains, their C-Suites are looking for them to take the next step, turning operational excellence into business success.


Not surprisingly, many GCs said they view technology as one of the primary ways they have to combat these resourcing and cost issues. In fact, the proportion of GCs mentioning technology as a strategic priority entering 2026 doubled over the year prior. Legal departments have begun to feel the positive effects of AI in their own organizations, the report notes, such as increased efficiency or time freed up for strategic work.

Despite these gains, C-Suites are looking for their legal functions to take the next step, turning operational excellence into business success. This can take a number of different forms, such as explicitly tying advice to client business objectives, presenting legal spend in the context of the business by showing it as a percentage of revenue, or approaching risk management with the goal of aiding business imperatives. "When we have a risky legal subject, the company never prefers just to see the legal opinion," said one retail GC. "They're also requesting you to drive them how to make a decision."

AI and technology should also be approached in this same way, the report argues. Although almost half of all corporate legal departments have some type of enterprise-wide GenAI tool, according to the survey, very few are collecting success metrics around AI's implementation or linking its use to business revenue. Put a different way, many legal departments are focused on unlocking capacity, rather than deploying capacity in a business-centric way, much to the chagrin of their C-Suites.


Although legal departments have established a solid foundation upon which a business can stand, ultimately, C-Suites don't want just a foundation. They want help building the entire house, the report shows, directly enabling the services that companies provide to customers. In that, GCs and legal departments have more work to do, not only tying strategy to overall business initiatives but actively communicating how the legal function's work aids the company as a whole.


You can download a full copy of the Thomson Reuters Institute's "2026 State of the Corporate Law Department Report" here.

Move over, "Death of the billable hour," Legalweek 2026 has found a new existential crisis
/en-us/posts/legal/legalweek-2026-new-existential-crisis/
Thu, 19 Mar 2026 13:25:16 +0000

Key takeaways:

      • Structural change in firms: The traditional law firm pyramid, in which junior lawyers perform high-volume work at billable rates, is losing its foundation as AI compresses tasks that once took hours and clients increasingly bring more work in-house.

      • Finding new ways to train: AI-powered simulations are emerging as a concrete answer to the associate training problem, allowing new lawyers to build courtroom skills faster and fail safely behind closed doors.

      • The associate role isn't dying, it's being redefined: Those law firms that figure out the right mix of legal training, technological fluency, and management skills will have a significant edge over those that are still debating it.


NEW YORK – On more than one occasion, I have written seriously and at length about the death of the billable hour. I've argued that alternative fee arrangements (AFAs) are the future, that the economic logic of hourly billing is irreconcilable with AI-driven productivity gains, and that the industry needs to prepare for a fundamentally different pricing model. I meant every word. I still do.

Yet, at last week's event, one attendee pointed out they've been hearing about the death of the billable hour since the 1990s. At this point, it's less a prediction and more of a tradition. Indeed, Matthew Kohel, a partner at Saul Ewing, said that despite the legal press coverage connecting AI to the billable hour's demise, that narrative is now entering its third or fourth decade. And Kohel said his firm simply isn't seeing meaningful client-driven movement toward AFAs.

So let's be honest: the billable hour is not dead, and in fact, it may not even be close to dead.

However, if you’re looking for something that is facing a genuine existential reckoning 鈥 something the legal industry whispered about in the early days of generative AI (GenAI) and is now discussing openly 鈥 Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger, rather it鈥檚 the person billing the hours.

It’s the associate.

The question nobody wanted to ask out loud

The future of the junior lawyer surfaced in virtually every breakout session across the three-day event, and while Legalweek may not be the point of inception for the question, it was certainly the moment this idea graduated from a half-whispered aside to a main-stage conversation.

Moreover, the problem has grown more urgent since its inception in the early GenAI days, when the question was simply whether a firm would need fewer associates. Now, that question hasn’t gone away, but it’s been joined by harder ones concerning training, hiring, and legal and technical skills. For example, what if AI is already better than a junior associate at some of the tasks that defined the role in the past? And what happens if someone says it out loud?

Someone said it out loud.


If you’re looking for something that is facing a genuine existential reckoning, Legalweek 2026 may have found it. It turns out the billable hour was never the thing in danger, rather it鈥檚 the person billing the hours.听It’s the associate.


During a panel on Measuring What Matters, the conversation turned to client trust. Clients want to know: How can you be sure AI will catch everything? How do you trust it to find what matters across 5,000 pages of documents?

The response from the panel was direct, and it landed like a brick in the room: it's 5,000 pages, and someone has always been reading those 5,000 pages. That someone is an associate. If that associate, who, more often than not, is one of the least experienced lawyers in the building, is the one reading all those pages, why would you trust them to do it better than a machine?

While that question hung in the air during the panel, it does deserve to sit with you for a moment afterward. Because embedded in it is the uncomfortable arithmetic that drives the entire associate question. The traditional law firm pyramid is built on a base of junior lawyers performing high-volume, lower-complexity work such as document review, due diligence, first-pass research, and doing so at rates that generate revenue while the activity is simultaneously (in theory) training the next generation of partners. If AI can do that base-layer work faster, cheaper, and with accuracy that one panelist described as “beyond very good,” then the pyramid doesn’t just shrink. It loses its foundation.

Barclay Blair, Senior Managing Director of AI Innovation at DLA Piper, noted that tasks like due diligence on some types of financial contracts are already being compressed to two hours, down from 15 to 20, with zero hours a realistic possibility in the near future.

Further, as one attendee observed, clients increasingly are adopting AI internally, and they're bringing work in-house that was previously sent to outside counsel. Clearly, the work that trained generations of associates isn't just being automated; in some cases, it's leaving the firm entirely.

Fewer reps, greater weight

Yet here is where it would be easy (and wrong) to write the doom-and-gloom version of the future, in which AI replaces associates, the pipeline collapses, nobody knows how to train lawyers anymore, civilization crumbles, etc. It’s a clean narrative, but it’s also not what Legalweek panels actually said.

Because alongside the anxiety, something else was happening. People were building answers.

In another panel, Developing the Future Lawyer, panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down, and the conversation was far more concrete than you might expect.


Panelists spent an hour in the weeds of what associate training actually looks like when the old model breaks down, and the conversation was far more concrete than you might expect.


Panelist Abdi Shayesteh, Founder and CEO of AltaClaro, laid out the core problem with precision, noting that there's a growing gap in critical thinking among associates: templates are getting copy-pasted without relevance analysis, and associates often don't know what they don't know. And traditional training methods, such as videos, lectures, and passive learning, don't fix it. Indeed, those outdated models may be making it worse. Shayesteh's analogy was blunt: You don't learn to swim by watching videos; you need to jump into the deep end.

His solution is AI-powered simulations. Not hypothetical ones, but working deposition simulations available today, with real-time AI feedback, in which associates can practice cross-examination, deal with opposing counsel objections, and build the muscle memory that used to require years of live experience.

Kate Orr, Managing Director of Practice Innovation at Orrick, picked up the thread with two observations that reframed the stakes. First, AI simulations allow associates to fail behind closed doors, a radical improvement over the old model, in which blowing it had real consequences because failure often happened directly in front of the partners. Second, the tool isn't just for juniors. Even experienced lawyers are using simulations to test different approaches, tweak personas, and sharpen arguments. Orrick's own Supreme Court team had a lawyer use AI to review a draft brief and identify paragraphs that could be tighter.

Todd Heffner, Partner at Smith, Gambrell & Russell, said the real question isn’t whether associates will use AI, but rather whether it gets them to lead at trial in year 10 instead of year 20. Right now, most associates are lucky to see the inside of a courtroom in their first seven years, and even then, they spend most of their time back in the hotel prepping for the more experienced attorneys instead of arguing themselves. If simulations can compress that learning curve, the associate’s career doesn’t disappear, rather, it gets accelerated.

The dinosaur that adapted

During the Measuring What Matters panel, Mitchell Kaplan, Managing Director of Zarwin Baum, introduced himself with a memorable bit of self-deprecation: He's a dinosaur, but one, he clarified, who understands how AI can revolutionize what he does.

Kaplan's perspective threaded through both days of programming like a quiet counterweight to the anxiety. He'd seen this before: not AI specifically, but the fear of it. He watched the legal industry transition from physical libraries to digital research tools, and he watched attorneys adapt. And his message was consistent: the work changes, but the need for lawyers doesn't disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.

They’re developing differently than his generation did, Kaplan said, but it鈥檚 the same way every generation develops differently from the one before it. And different doesn’t mean wrong.


The work changes, but the need for lawyers doesn’t disappear. Associates may be taking shortcuts, but they still need to read, still need to review, and still need to think.


It’s a perspective that found an unexpected echo in the Enterprise Alignment panel. Mark Brennan, a partner at Hogan Lovells, relayed a comment he heard at a previous AI conference: The next generation of entry-level jobs will be managers 鈥 because they’ll be managing agents and other tech tools. Brennan admitted he didn’t have all the answers on what that means for legal training, but the implication was clear. The associate role isn’t dying, instead, it’s being redefined. And the firms that figure out what that redefined role looks like, what mix of legal training, technological fluency, critical thinking, and management skills it requires, will have a significant advantage over those firms that are still debating it.

Another panelist, Andrew Medeiros, Managing Director of Innovation at Troutman Pepper Locke, made a prediction that felt like the sharpest version of this idea. He said that at some point, new lawyers are going to be doing simulated matters as a standard part of the development process. Eventually, there’s going to be a generation that walks in as new attorneys and finds themselves litigating right away.

That’s not the death of the associate. Rather, that’s the beginning of a different kind of associate 鈥 one who arrives at the courtroom sooner, with different preparation, carrying different tools.

The billable hour, for all the prophecies, refuses to die. The associate, it turns out, has no intention of dying either, just evolving. Mitchell Kaplan called himself a dinosaur, but Legalweek was full of dinosaurs, and every one of them was adapting, and in that adaptation, thriving. The harder question is whether the firms that forged them will be brave enough to follow.


You can find more of our coverage of Legalweek events here.

Corporate tax teams eager for AI, but frustrated by pace of change, new report shows
/en-us/posts/corporates/corporate-tax-department-technology-report-2026/
Mon, 16 Mar 2026 13:06:11 +0000

Key insights:

      • Possibilities vs. practicality: There is a growing frustration gap between what corporate tax professionals want to achieve and what their current technological tools will allow.

      • Expectations about AI: Tax professionals have significantly accelerated the timeframe in which they expect AI to become a central part of their workflow.

      • Proactive progress: Automation is enabling a gradual shift toward more strategic, proactive tax work, although not as quickly as many tax professionals would like.


The recently released 2026 Corporate Tax Department Technology Report, from the Thomson Reuters Institute and Tax Executives Institute, reveals that while automation of routine tax functions is indeed enabling a long-desired shift toward more strategic, proactive tax work in some corporate tax departments, a majority of tax leaders surveyed say upgrading their department’s tax technology is still a relatively low priority at their company.

Jump to ↓

2026 Corporate Tax Department Technology Report

The report surveyed 170 tax leaders from companies of all sizes to find out how corporate tax professionals are using technology, overcoming obstacles, and planning for the future.

A growing “frustration gap”

In general, the report found that while many companies (especially larger ones) are actively upgrading their tax department’s technological capabilities, there is a growing frustration gap between what tax professionals know they can accomplish with more robust technologies and what their current tools allow them to do.

Adding to this frustration is a growing discrepancy between the additional budget and resources tax departments hope to get each year and the harsher reality they often face. Indeed, even though tax leaders remain optimistic that their budgets and capabilities will expand and improve in the coming years, fewer than half of the respondents surveyed said their departments received a budget increase last year, and many saw budget cuts.



Further, the report shows that the prospect of incorporating ever more sophisticated forms of AI and AI-driven tools into tax workflows is also very much on the minds of tax professionals. Even though actual usage of AI in corporate tax departments is still relatively low, the report reveals that tax professionals now expect AI to become a central part of their workflow within one to two years, much faster than they did in last year’s report.

Indeed, as the report explains, this expectation of more imminent AI adoption represents a significant shift in attitude, because most corporate tax departments are rather circumspect about how, when, and why they incorporate new tech tools into their established routines.

If today鈥檚 technological capabilities continue to accelerate, companies that have been slow to invest in the infrastructure necessary to keep pace may soon find themselves struggling to catch up with their more tech-savvy counterparts, the report warns.

Moving toward more proactive work, albeit slowly

For companies that have invested in the technological infrastructure necessary to support advanced tax technologies, the payoff is becoming increasingly evident.

According to the report, about two-thirds (67%) of tax professionals surveyed said their company’s investment in technology had enabled a shift toward more proactive tax work within their departments. This shift is particularly noticeable at large corporations, at which, unsurprisingly, investment in tax technology has been more generous.

The 2026 Corporate Tax Department Technology Report also explores other aspects of corporate tax departments, including their hiring practices, tech training, purchasing strategies, what they see as the most popular tech tools for tax, and numerous other factors that affect how tax departments operate.


You can download a full copy of the Thomson Reuters Institute’s 2026 Corporate Tax Department Technology Report here

]]>
Reinventing the data core: The arrival of the adaptable AI data foundry /en-us/posts/technology/reinventing-data-core-adaptable-data-foundry/ Thu, 05 Mar 2026 16:08:59 +0000 https://blogs.thomsonreuters.com/en-us/?p=69795

Key takeaways:

      • The gap between AI ambition and readiness is widening – The distance between AI ambition and data readiness keeps growing, making the adoption of an adaptable data foundry essential for scalable, explainable, and compliant AI outcomes.

      • A data foundry model directly addresses the root cause – A data foundry model enables organizations to industrialize data production, automate compliance, and ensure consistent data lineage, thereby overcoming the limitations of brittle, legacy data architectures.

      • Incorporate the data core into your AI planning – Reinventing the data core is now a strategic imperative for those enterprises that aim to thrive in 2026 and beyond, as agentic AI, regulatory demands, and integration complexity accelerate.


This article is the third and final installment in a 3-part blog series exploring how organizations can reset and empower their data core.

A defining theme of this year so far is the widening distance between organizational ambition and data readiness. Leaders want the capabilities they believe agentic AI delivers instantly: automated compliance, predictive integration for M&A, and decision-intelligence pipelines that reduce operational friction.

Without a data foundry, however, much of that will be impossible. Instead, workflows will remain brittle, AI agents will hallucinate under inconsistent semantics, and data lineage will break down across federated sources. Further, without a data foundry, regulatory mappings involved with the Financial Data Transparency Act (FDTA) and the Standard Business Reporting (SBR) framework cannot be automated, cross-functional insights will require manual reconciliation, and auditability will collapse under scrutiny.

This is not a failure of leadership. It is a failure of architectural design to recognize that consolidating the data core must come before new technologies, alongside the critical priorities of data security, auditability, and lineage.


For decades, organizations built monolithic systems that were optimized for stability and reporting. Today鈥檚 world demands modularity, continuous adaptation, and agent-driven interoperability. Architecture has shifted from build and operate to build and evolve. This is precisely what a data foundry enables.

Why reinvention can no longer wait

Throughout 2025 and now into the early months of 2026, data and AI have quietly shifted from innovation topics to enterprise constraints. Leaders across regulated markets are starting to recognize that the obstacles limiting their AI ambitions are neither mysterious nor narrowly technical; they are structural. These obstacles sit inside the data core, in the silent architecture that determines whether any form of automation, intelligence, or compliance can scale beyond a pilot.

The data bears this out. When you examine the work coming from Tier-1 research bodies, supervisory institutions, and global transformation benchmarks, a consistent narrative emerges beneath the headlines: AI is accelerating, regulation is hardening, and integration demands are expanding. Moreover, organizational data remains pinned to assumptions that were forged in static, pre-AI operating environments. This gap is not theoretical; rather, it is measurable, persistent, and directly correlated to business performance.


Let鈥檚 look at the AI results first. Across industries, organizations continue to experience a familiar pattern: early promise, limited adoption, and rapid degradation once the model encounters inconsistent semantics or fragmented lineage. Global studies show that the vast majority of enterprise AI initiatives still struggle to reach full production maturity, and among those that do, many encounter performance drift within the first year.

The driver is remarkably consistent. It is not the sophistication of the model nor the skill of the data science team; it is the quality, clarity, and traceability of the data that is feeding the system.

Taken together, these signals deliver a clear message. The gap between AI ambition and data readiness is widening, not narrowing. This is why the data foundry conversation matters now. It is not an abstract architectural concept. It is a response to the full stack of quantitative pressures the market has been telegraphing for years: costs rising, compliance hardening, AI accelerating, and integration straining under inconsistent semantics and fragile lineage.

A data foundry model directly addresses the root cause of this by industrializing the creation of consistent, reusable, explainable data products that can fuel agentic AI, support regulatory defensibility, and accelerate enterprise reinvention.

The numbers point to a simple conclusion. Reinvention is no longer optional, and the window to address the data core before agentic AI becomes standard practice is narrow and closing. The organizations that act now will be the ones that define what compliant, explainable, interoperable AI looks like in the next decade. Those that defer the work will find themselves restructuring under pressure rather than reinventing by choice.

This is the inflection point. In truth, the quantitative signals have made the case more clearly than a multitude of strategy narratives ever could.

The data foundry: A model for continuous alignment

Unsurprisingly, agentic AI introduces new, more demanding requirements, including:

      • machine-interpretable semantics;
      • context-preserving lineage across federated systems;
      • decomposition of enterprise knowledge into reusable data products;
      • dynamic trust-scoring tied to source reliability and timeliness;
      • automated compliance overlays and regulatory logic; and
      • cross-domain metadata orchestration.

These capabilities are not optional; they are non-negotiable. Indeed, they determine whether AI elevates risk or mitigates it, whether it accelerates productivity or introduces unrecoverable inconsistencies. And they determine whether AI augments decision quality or produces volatility.

A data foundry shifts organizations from artisanal, one-off data preparation toward industrialized data production, in which patterns replace pipelines and building blocks replace custom engineering. This shift means that lineage is generated, not documented; semantics are governed, not patched; and compliance is automated, not reconstructed. In this way, reuse becomes the default, not the exception.

In fact, this process is analogous to manufacturing. Instead of producing bespoke components for each need, the enterprise creates standardized, high-fidelity data assets that can be assembled into any workflow, any AI use case, and any reporting requirement.

A data foundry becomes the quiet architecture behind every future capability, making these capabilities systematic rather than ad hoc. The chart below showcases the progressive build-up of a data foundry, beginning with data intake and harmonization and ending with AI agent orchestration and reusable data products that learn from their deployment.


Unfortunately, organizations are still building increasingly advanced AI decisioning and efficiency solutions on top of an aging and brittle data foundation. The results are predictable: stalled initiatives, compliance exposure, and stakeholder frustration. Additionally, instead of asking why, organizations keep adding more tools: more dashboards, more cloud services, more AI pilots, and more flavors of transformation.

Clearly, enterprises aren’t dealing with an AI problem. They’re dealing with a data alignment problem disguised as progress inside fragmented AI enclosures.

Reinvention starts at the data core

For more than a decade, firms across regulated industries have repeated the same mantra: Data is our most critical asset. When you peel back the layers or when you sit in board review sessions or integration meetings or regulatory remediation audits, however, the evidence does not match the rhetoric.

Reinvention is no longer optional. Instead, it is the starting point for meeting the demands of 2026 and beyond. The institutions that thrive will be those that understand that the data core is not a technical asset; it is the operational backbone of the enterprise. Indeed, the institutions that succeed will be those that recognize the truth early: AI is an output, and the data core is the strategy. And the organizations able to industrialize their data – through a foundry model, through AXTent, through repeatable semantic structures – will be the ones leading innovation, reducing compliance risk, accelerating M&A synergies, and achieving enterprise-wide reinvention.

In the end, the real question isn’t whether AI will transform business; the question is whether the data foundation will allow it. And the answer is rebuilding your data core so AI can actually deliver the outcomes your organization needs – and that work begins now.


You can find more blog posts by this author here

]]>
The professional judgment gap: Tracing AI’s impact from lecture hall to professional services /en-us/posts/corporates/ai-professional-judgment-gap/ Thu, 05 Mar 2026 12:59:12 +0000 https://blogs.thomsonreuters.com/en-us/?p=69771

Key highlights:

      • Universities face pressure over pedagogy – Academic institutions are adopting AI as a reputational marker driven by market pressure rather than educational need, creating a risk for students who can work with AI but not independently of it.

      • Entry-level roles under threat – AI is being deployed most heavily to automate the grunt work of entry-level positions in which foundational professional skills are traditionally built through struggle and feedback.

      • K-shaped cognitive economy emerging – Experienced professionals with existing expertise are gaining efficiency from AI, while entry-level workers are losing access to skill-building experiences.


According to Harvard University’s Professional & Executive Development division, innovation is defined as a “process that guides businesses through developing products or services that deliver value to customers in new and novel ways.” Along this journey, professional judgment is exercised numerous times to determine next steps at key stages.

Notably, the word technology is nowhere to be found in this definition – an absence that Dr. Heinsfeld, Assistant Professor of Learning Technologies at the University of Minnesota, has long found revealing. Instead, innovation is framed as creative problem-solving, contextual intelligence, and the ability to work across perspectives. Interestingly, Dr. Heinsfeld adds, none of these require constant automation. In fact, many of them are undermined by it.

However, AI adoption has the real potential to automate away the very experiences that build these capabilities, from university lecture halls to corporate offices. With notable data already suggesting that , the risk that current approaches to AI use in universities and companies are engineering away innovation and professional judgment skills is real, notes Dr. Tanaka, Group Leader in AI Research at Harvard and NTT Research.

Indeed, some observers view AI as the largest unregulated cognitive engineering experiment in human history. Yet, unlike medical drugs that require years of approval and testing, AI systems are reshaping how millions of students think, learn, and make decisions without a comparable approval process or a shared framework for discussing any potential “side effects,” as Dr. Heinsfeld pointed out.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built.


So, what happens when an entire generation of future employees learns to delegate judgment before developing it? And what actions do universities and companies need to take now to avoid this reality?

Risks of universities adopting AI under pressure

For universities, AI “has become a reputational marker, and not adopting AI is framed as institutional risk, regardless of whether an educational case has been made or not,” says Dr. Heinsfeld, adding that this is being driven, in part, by market pressure rather than pedagogical need.

Already, companies can greatly influence universities as employers of new graduates; and as such, AI systems are currently being optimized for speed, agreeability, and accessibility to stimulate ongoing use. However, as Dr. Heinsfeld contends, as universities race to earn the label AI-ready without a careful, cautious, and detailed understanding of how it may impact students’ cognitive processes, they risk damaging their reputations for pedagogical integrity.

In addition, the “data as truth” paradigm is a complicating factor, she says. Drawing on her research, Dr. Heinsfeld explains how data “is often framed as the idea of being a single source of truth based on the assumption that when collected and analyzed, it can reveal objective, indisputable facts about the world.” Indeed, this ubiquitous mindset across universities and corporations treats data – such as that used to train large and small language models – as objective and indisputable.

Yet this obscures critical decisions about what gets measured, whose perspectives are included, and what forms of knowledge are systematically excluded from AI systems. As Dr. Heinsfeld warns, when data becomes synonymous with truth, “knowledge is what is measurable and optimizable.” This narrows professional judgment to efficiency metrics rather than the interpretive depth, ethical reasoning, and cultural context that are essential for sound decision-making.

Judgment gap widens in workforce downstream

Under the current AI adoption approach, students could leave universities able to work with AI but not independently of it, a distinction emphasized by Dr. Heinsfeld. Like calculators, AI works as a tool only when foundational skills for its use exist first. Without this, graduates enter the workforce with a critical judgment gap that compounds as they move from college campuses into corporate roles.


AI adoption has the real potential to automate away the very experiences that build these capabilities from university lecture halls to corporate offices.


Most worrisome is that AI is being deployed most heavily to automate precisely the entry-level roles where foundational professional skills are built, warns Dr. Tanaka. Indeed, this is exactly the type of grunt work that teaches judgment through struggle and feedback. Over time, overuse of AI will result in quality being sacrificed because critical evaluation skills have atrophied.

Looking into the future, Dr. Tanaka foresees a K-shaped economy of cognitive capacity. Experienced professionals, with expertise and contextual judgment built through years of experience, will gain increasing efficiency from AI. Entry-level workers, however, will lose access to the valuable experiences that build professional judgment. This gap widens between professionals who can independently accelerate their workflows using AI and those whose traditional tasks are merely displaced by it.

Intervention may be able to break the cycle

The pattern is not inevitable, as both Dr. Tanaka and Dr. Heinsfeld explain. Drawing on Dr. Heinsfeld’s emphasis on institutional agency, meaningful intervention will depend on conscious, intentional choices made at every level. Both experts share their guidance for how different organizations can manage this:

Academic institutions – Universities must first recognize that AI adoption is a decision rather than an inevitability and make educational need the North Star for decision-making around AI. In her analysis, Dr. Heinsfeld emphasizes that when vendors set defaults, they quietly redefine academic practice. Defaults shape what is made visible or invisible and what becomes normalized. In AI-driven environments, universities often lose control over how models are trained and updated, what data shapes outputs, how knowledge is filtered and ranked, and how student and faculty data circulate beyond institutional boundaries – especially if decision-making is left to vendors. As a result, the intellectual byproducts of teaching and learning increasingly become inputs into external systems that universities do not govern.

Private entities – For organizations, Dr. Tanaka calls for feedback loops and other mechanisms that will promote more open discussion about AI use without stigma. In addition, companies need to proactively redesign entry-level roles to ensure these positions continue to cultivate judgment and foundational skills in an AI-driven environment. Likewise, Dr. Tanaka suggests that companies explicitly give employees feedback about cognitive trade-offs, fostering an understanding of possible skill atrophy.

Employees – Similarly, individuals working for organizations bear much of the responsibility for making sure critical thinking is enhanced, not eroded, by AI. Indeed, strategic decisions about when to use AI, made with an eye toward preserving cognitive capacity and professional judgment, are key.

Looking ahead

In today’s increasingly AI-driven environment, a new paradigm is needed to combat the current operating assumption that optimization from AI is the sole path to progress. And because the current trajectory sacrifices human development for efficiency, the need for universities and companies to choose a different path is urgent – while they still have the judgment capacity to do so.


You can find out more about how organizations are managing their talent and training issues here

]]>
The AI Law Professor: When AI makes lawyers work more, not less /en-us/posts/technology/ai-law-professor-ai-makes-lawyers-work-more-not-less/ Tue, 03 Mar 2026 14:58:48 +0000 https://blogs.thomsonreuters.com/en-us/?p=69696

Key points:

      • The productivity promise is largely wrong – Emerging research shows that AI doesn’t reduce work; it intensifies it. Lawyers work faster, take on broader responsibilities, and extend their hours without recognizing the expansion. Further, because prompting AI feels like chatting rather than laboring, lawyers slip work into evenings and weekends without registering it as additional effort.

      • Self-reinforcing acceleration is the real risk – AI speeds tasks, which raises expectations, which increases reliance, which expands scope, ultimately creating a cycle that drives burnout in a profession already plagued by it.

      • Purposeful integration is the antidote – Legal organizations need to promote intentional governance structures that account for how people actually behave with AI, not how leadership imagines they will or should.


Welcome back to The AI Law Professor. Last month, I examined how AI is forcing us to rethink training for junior lawyers. This month, I examine a question that affects every lawyer: What happens when the efficiency gains we’ve been promised don’t materialize the way we expected? A recent study out of UC-Berkeley suggests the answer is more troubling than most law firm leaders realize.

If you’ve attended a legal technology conference anytime over the past two years, you’ve heard the pitch: Automate the mundane and elevate the meaningful.

A recent study published in the Harvard Business Review by UC-Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye suggests we should be more skeptical. They tracked how generative AI (GenAI) changed work habits over eight months at a 200-person technology company. Their findings were striking: AI tools didn’t reduce work; rather, they intensified it.

According to the study, the tech employees worked faster, took on broader responsibilities, extended their hours into evenings and weekends, and multitasked more aggressively – all without being asked to do so. The promise of liberation became a reality of acceleration and overwork.

For those of us in the legal profession, this should be a wake-up call.

Three forms of intensification

The researchers identified three patterns that will sound familiar to anyone watching lawyers adopt GenAI in their work processes.

Task expansion

Because AI fills knowledge gaps, professionals stepped into responsibilities that previously belonged to others. Product managers started writing code, and researchers took on engineering tasks. In legal contexts, the parallel is obvious. Associates use AI to attempt tasks once reserved for senior lawyers. Paralegals draft documents that previously required attorney oversight. Solo practitioners take on matters outside their core expertise because their AI tools make it feel manageable. The result isn’t less work distributed more efficiently; it’s more work concentrated in fewer hands, with less institutional knowledge guiding the output.

Blurred boundaries

AI blurred the boundaries between work and non-work. Because prompting an AI feels more like chatting than labor, lawyers (like the tech workers in the study) may slip work into lunch breaks, evenings, and commutes without registering it as additional effort. The conversational interface is seductive precisely because it doesn’t feel like work. It is work, however, and much more of it.

Pervasive multitasking

Workers managed multiple AI threads simultaneously, generating a sense of momentum that masked increasing cognitive load. For lawyers, this means running parallel research queries, drafting multiple documents at once, and constantly monitoring AI outputs, all while believing they’re saving time.

The productivity trap

The most important insight from the research is that these effects are self-reinforcing. AI accelerates tasks, which raises expectations for speed. Higher speed increases reliance on AI, and greater reliance expands the scope of what people attempt. And expanded scope generates even more work. Rinse and repeat.

Parkinson’s law: “Work expands to fill the time available for its completion.”

In a profession already plagued by burnout, this cycle should alarm us. The legal industry’s adoption of AI is being driven largely by the promise of doing the same work in less time. But if the Berkeley research is any guide, what actually happens is that we do more work in the same amount of time, or more work in more time, while telling ourselves we’re being more productive.

And because the extra effort feels voluntary, firm leadership may not see the problem until it manifests as errors, attrition, or ethical lapses. In law, the cost of impaired judgment isn’t just a missed deadline; it’s a client’s liberty, livelihood, or life savings.

From productivity to purposeful practice

The Berkeley researchers propose what they call an AI practice: intentional norms and routines that structure how AI is used, including determining when to stop and how work should and should not expand. I’d go further. For legal organizations, purposeful AI integration requires more than workplace wellness norms. It requires a strategic framework that aligns AI capabilities with organizational mission, ethical obligations, and sustainable human performance.

This means, first off, being honest about what AI actually does to workloads rather than what we hope it will do. If your firm adopted AI expecting to reduce associate hours, audit whether that has actually happened, or whether associates are simply filling reclaimed time with more work.

Second, it means building governance structures that account for how people actually behave with these tools, rather than how leadership imagines they will. The Berkeley study found that workers expanded their workloads voluntarily, without management direction. Top-down AI policies that focus solely on permissible use will miss the intensification that could be happening in plain sight.


The most important insight from the research is that these effects are self-reinforcing. AI accelerates tasks, which raises expectations for speed. Higher speed increases reliance on AI, and greater reliance expands the scope of what people attempt. And expanded scope generates even more work.


Third, it means preserving space for the distinctly human work that AI cannot replicate, such as judgment, empathy, ethical reasoning, and the kind of creative problem-solving that emerges from genuine human dialogue – not from a conversation with a chatbot. The researchers also found that AI-enabled work became increasingly solitary and continuous, a dangerous trajectory.

The narrative that AI will free lawyers for higher-value work isn’t just optimistic. It’s a misunderstanding of how these tools interact with human psychology. AI doesn’t create leisure. It creates capacity – and without intentional structures, that capacity gets filled, not with strategic thinking, but with more of everything.

While it’s clear that AI will change the legal profession, the real challenge is whether law firms will integrate AI with purpose, shaping it to serve their values, their clients, and their professionals’ well-being, or whether they’ll allow the technology to quietly shape us into something we didn’t intend to become.

Tom Martin is CEO & Founder of LawDroid, Adjunct Professor at Suffolk University Law School, and author of a forthcoming book. He is “The AI Law Professor” and writes this eponymous column for the Thomson Reuters Institute.


You can find more about the use of AI and GenAI in the legal industry here

]]>
2026 AI in Professional Services Report: AI adoption has hit critical mass, but now comes the tough business questions /en-us/posts/technology/ai-in-professional-services-report-2026/ Mon, 09 Feb 2026 13:05:35 +0000 https://blogs.thomsonreuters.com/en-us/?p=69356

Key findings:

      • AI adoption accelerates across professional services – Organization-wide use of AI in professional services almost doubled to 40% in 2026, with most individual professionals now using GenAI tools, and many preparing for the next wave of tools such as agentic AI.

      • Strategic integration and measurement lag behind usage – While AI use is widespread, only 18% of respondents say their organization tracks the ROI of AI tools, and even fewer measure AI’s impact on broader business goals such as client satisfaction or revenue generation.

      • Communication around AI use remains inconsistent – While most corporate departments want their outside firms to use AI on client matters, less than one-third are aware of whether their firms are doing so. Meanwhile, firms report receiving conflicting instructions from clients about AI use, highlighting a need for clearer dialogue and shared strategy around AI adoption.


Over the past several years, AI usage within the professional services industries has come into focus. As we enter 2026 in earnest, the early adoption phase of generative AI (GenAI) has come and gone. Today, most professionals have experimented with some form of GenAI, many organizations have integrated GenAI into their workflows, and a number are now preparing for the next wave of technological innovation, such as agentic AI.

Given this, the question for professionals and organizational leaders has now become: What will be AI’s long-term impact on my business?

Jump to ↓

2026 AI in Professional Services Report

To delve into this question further, the Thomson Reuters Institute has released its 2026 AI in Professional Services Report, which takes a broad view of the current usage and planning, sentiment toward, and business impact of AI for legal, tax & accounting, corporate functions, and government agencies. Based on a survey of more than 1,500 respondents across 27 different countries, the report finds a professional services world that has embraced AI’s use but is continuing to evolve its business strategy around implementation.

For instance, the report shows that organization-wide use of AI almost doubled to 40% in 2026, compared to 22% in 2025 – and for the first time, a majority of individual professionals reported using publicly available tools such as ChatGPT. Additionally, a majority of respondents said they feel either excited or hopeful about GenAI’s prospects in their respective industries, and about two-thirds said they felt GenAI should be applied to their work in some manner.

At the same time, however, many are exploring GenAI tools without much guidance as to how that use will be quantified or measured. Only 18% of respondents said they knew their organization was tracking return on investment (ROI) of AI tools in some manner, roughly the same proportion as last year. And even among those tracking AI metrics, most focus mainly on internally oriented, operational measures; only a small proportion analyzed AI's impact on their organization's larger business goals, such as client satisfaction, external revenue generation, and new business won.

[Chart: AI in Professional Services]

This slow move toward strategic thinking also affects client-firm relationships. Although more than half of both corporate legal departments and corporate tax departments want their outside firms to use AI on client matters, less than one-third said they were aware whether their firms were doing so. From the firm standpoint, meanwhile, confusion reigns: 40% of firm respondents said they have received conflicting instructions from different clients, some directing them to use AI on matters and others directing them not to.

Indeed, about three-quarters of corporate respondents and firm respondents agreed that firms should be taking the lead in starting these conversations around proper AI use. Yet these discussions have not yet happened en masse. "Firms are reluctant - they claim it would compromise quality and fidelity," said one U.S.-based corporate chief legal officer. "I think they are threatened by it."

All the while, technological innovation progresses ever more quickly. This year's version of the report measures agentic AI use for the first time, finding that 15% of organizations have already adopted some type of agentic AI tool. Perhaps more interesting, an additional 53% report their organizations are either actively planning for agentic AI tools or considering whether to use them, suggesting an adoption curve that may prove even steeper than GenAI's rapid rise.

[Chart: AI in Professional Services]

Overall, the report makes clear that most professionals understand that AI-driven change in the workplace is undoubtedly here. Even compared with 2025, a higher proportion of professionals said they believe AI will have a major impact on jobs, billing and revenue, and even on the need for legal or tax & accounting professionals as a whole. The percentage of lawyers describing AI as a major risk for the unauthorized practice of law rose to 50% in 2026 from 36% in 2025.

Further, this report paints the picture of a professional services world that has embraced AI, begun to see its impact, and realized that it will have broader business and industry implications than previously imagined. As a result, the time for professionals and organizations to begin planning in earnest for an AI future has already arrived.

As a corporate general counsel from Sweden noted: "We cannot keep up with modern-day corporations' demands unless we also develop and adapt our way of working."

You can download a full copy of the Thomson Reuters Institute's 2026 AI in Professional Services Report here.


Understanding the data core: From legacy debt to enterprise acceleration /en-us/posts/technology/understanding-data-core-enterprise-acceleration/ Tue, 03 Feb 2026 14:47:41 +0000 https://blogs.thomsonreuters.com/en-us/?p=69255

Key takeaways:

      • The real bottleneck for AI is the data core - AI is advancing rapidly, but most organizations' data architectures, governance, and legacy assumptions can't keep up. Without a repeatable, business-aligned data foundation, AI initiatives will struggle to scale and deliver reliable results.

      • AI success relies on explainable, traceable, and reusable data - For AI to be reliable and compliant, organizations must design data environments that emphasize lineage, semantics, and trust; that means compliance and auditability need to be built into the data core, not added on later.

      • Businesses should shift from tool-centric upgrades to business-driven, data-centric reinvention - Efforts focused only on modernizing tools or platforms miss the root issue: legacy data structures. Leaders must prioritize building a cohesive, reusable data core that aligns with business strategy.


This article is the first in a 3-part blog series exploring how organizations can reset and empower their data core.

Across boardrooms, regulatory briefings, and strategic off-sites, leaders are asking with growing urgency some variation of the same question: How do we make AI reliable, scalable, auditable, and economically defensible? The surprising answer is not in the AI technology, nor in the cloud stack, nor in another round of system upgrades.

It is in the data. Not the data we store, not the data we report, and not the data we move across our pipelines. It is in the data that we must now explain, contextualize, trace, validate, and reuse continuously as agentic AI becomes embedded in every workflow, every decision system, and every regulatory outcome.

The stark reality across industries, then, is deciding what to do as AI matures faster than our data cores can support it. For the first time, technology is not the bottleneck; architecture, organizational assumptions, and governance strategies are. More importantly, the lack of a repeatable, business-aligned data foundry has become the strategic inhibitor standing between today's operations and tomorrow's autonomy-ready enterprises.

The realities of 2026

As 2026 gets underway, the pressures of regulation, AI adoption, data lineage requirements, and cross-system consistency have converged into a single strategic reality: We can't keep modernizing data at the edges. The data core itself must be reimagined and compartmentalized.

For leaders across highly regulated industries, the challenge is recognizing that our data architectures were never designed for the world we're moving into. Historically, solutions were built for predictable, siloed data systems, linear programmatic processes, and dashboard reporting. Today's demands are continuous, variable, cross-domain, and machine-interpreted, and they are not bound by traditional methods and techniques of process efficiency and system adaptability. Tomorrow's systems will be comprehensively trained by data. To properly frame these realities, leaders must understand:

      • Agentic AI exposes weak data architecture immediately - Models may scale, but data debt does not. This is a new, priority constraint.
      • Lineage, semantics, and trust scoring (not models) will determine enterprise readiness - AI will only be as reliable as the meaning and traceability of enterprise data.
      • Compliance cannot be retrofitted; it must be designed into the data core - Compliance no longer ends in reporting; it must exist upstream and be addressed continuously.
      • Return on investment in AI is impossible without composable, modular, and reusable data products - Data that cannot be composed, traced, and made consistent cannot be automated.
      • The bottleneck is not talent or tools; it is the absence of a data foundry - Without robust, industrial-grade data production, AI will remain fragmented and experimental.

By delivering a practical, business-first path integrated with a data-centric design, organizations enable reuse, compliance, and measurable ROI. AI is accelerating, but data readiness is not. This mismatch is where many transformation efforts die.

Agentic AI demands a data environment that simply does not exist with most legacy solutions. It requires decision-aligned semantics, federated trust scoring, cross-domain lineage, dynamic compliance overlays, and consistent interpretability. No model, no matter how advanced, can compensate for data environments that have been engineered for static reporting and linear process logic. We are entering a cycle of reinvention in which data becomes the organizing principle.
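To make these requirements a bit more concrete, here is a minimal, hypothetical sketch of what a "data product" carrying decision-aligned semantics, lineage, and a trust score might look like. All names and fields here are illustrative assumptions for discussion, not anything prescribed by this article or any particular platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DataProduct:
    """A reusable data product that carries the metadata an agentic AI
    consumer needs: meaning (semantics), origin (lineage), and a trust
    score supporting automated fitness-for-use decisions."""
    name: str
    semantics: dict       # business meaning of each field, in plain terms
    lineage: list         # ordered upstream sources and transforms
    trust_score: float    # 0.0-1.0, e.g. from validation and freshness checks
    produced_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_fit_for(self, min_trust: float) -> bool:
        # An agent checks trust before consuming, rather than assuming quality.
        return self.trust_score >= min_trust

# Illustrative usage: an accounts-payable dataset published as a product.
invoices = DataProduct(
    name="ap_invoices_v2",
    semantics={"amount": "gross invoice amount, USD"},
    lineage=["erp.invoices", "fx_normalize", "dedupe"],
    trust_score=0.92,
)
assert invoices.is_fit_for(0.8)       # trusted enough for automated use
assert not invoices.is_fit_for(0.95)  # below bar; route to human review
```

The design choice this sketch illustrates is that lineage, semantics, and trust travel *with* the data rather than living in a separate governance silo, which is what allows downstream consumers (human or agentic) to decide fitness for use without bespoke integration work.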

The business need, not the engineering myth

Executives are rightfully fatigued by transformation programs. They have seen modernization initiatives expand scope, escalate cost, and ultimately underdeliver. They have heard the promises of clean data, enterprise data platforms, microservices, cloud migration, and AI-readiness. However, when agentic AI begins interacting with these ecosystems, the fragility of the entire operation becomes instantly visible.

Why? Because most data modernization initiatives have been driven by tool-centric solutions rather than architecture-centric capabilities. Traditional data governance was about oversight, not the enablement and reuse now demanded by emerging AI designs. Often, legacy methods kept audit and lineage contained within siloed processes and bridged them with replicated data warehouses, extract-transform-load (ETL) systems, and application programming interface (API) protocols.

However, this tool-centric, legacy-enabled approach is the problem. We keep optimizing the wrong layers, and we keep modernizing the components.

As a result, we too often see that AI pilots succeed but enterprise scaling fails; that regulatory reporting improves marginally but compliance costs increase; or that M&A integrations appear straightforward but post-close data convergence drags on for years.

The gap between ambition and reality

As a solution, a data foundry approach corrects that imbalance by formalizing the factory-grade patterns required to support agentic AI systems. It becomes the production line for reusable data products, compliant semantics, and decision-aligned datasets. It also eliminates reinvention by institutionalizing repeatable structures; and, most importantly, it restores business leadership over AI outcomes, rather than relegating decision logic to engineering workstreams and emerging technologies.

As illustrated below, AI requirements and realities need to be tempered with business demands, organizational risks, and data agility capabilities (including skill sets) to achieve realistic roadmaps of action - not strategic aspirations.

[Chart: data core]

Today, the question isn't whether organizations understand the importance of data; it's whether leaders know how to build environments in which data becomes reusable, trustworthy, and ready for agentic AI. The issue, however, continues to be that our data cores - the architectural, operational, and standards ecosystems beneath all of this - were not designed for continuous change.

Before they mobilize and execute against AI plans, business leaders need to answer the question: What business decisions are we trying to improve, and what data do those decisions actually require, both today and tomorrow?

The organizations that will lead in the coming decade will do so not because they found the perfect technology stack, but because they built a reusable, continuously improving data foundation that can support AI, regulation, risk, and innovation simultaneously.

The question for leaders then becomes: Are we prepared to reinvent?

The work begins now, quietly and deliberately, across the data core where tomorrow's competitive advantages will be created. The chart below illustrates the business-driven AI elements that must be addressed, and how the old sequence of system provisioning must be replaced, beginning with outcomes and ending with engineered AI tools.

[Chart: data core]

AI is an output: a capability that's unlocked after the underlying data foundation becomes coherent, traceable, explainable, and aligned with business decisions. For leaders, the data core is no longer a back-office concern or a one-off IT initiative. It is a strategic asset that can shape speed, resilience, and trust across the organization.


In the next post in this series, the author will explain how to architect an integrated data core, particularly through the AXTent architectural framework for regulated organizations. You can find more blog posts by this author here.
