Building Your Legal Practice's AI Future Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/building-your-legal-practices-ai-future/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

How AI is continuing to change the business of law /en-us/posts/legal/ai-business-of-law/ Wed, 20 Aug 2025 15:24:20 +0000

Key insights:

      • New methods of managing revenue — Changing technologies will require new ways of looking at revenue management beyond traditional methods of cost recovery

      • The rise of non-hourly billing — The shift to value-based billing or alternative fee arrangements will be almost inevitable as continued reliance on billable hours could prove detrimental in light of technology's promises of increased efficiency

      • Looking for efficiencies and ROI — Law firms have readily available opportunities to demonstrate a return on their investments in AI by looking at places where revenue is currently being lost to write-downs, turning otherwise lost time into new potential revenue streams.


While the recently released 2025 Future of Professionals report, published by Thomson Reuters, provides in-depth coverage of many of the changes that business professionals anticipate or are already experiencing due to the rising influence of AI, the report did not dive deeply into how AI may affect the business management side of professional services firms — specifically, how the pricing of legal services may be shaped by an AI-powered future.

Fortunately, the Thomson Reuters Institute has been closely examining this very issue for some time as part of an ongoing body of work dedicated to the pricing of AI-driven legal services.

This article is intended to serve as a compendium of a few of those pieces to help provide starting points for strategic discussions among law firm leaders around how to develop or adapt strategic plans to meet evolving realities in an increasingly AI-driven legal market.

Focusing on new methods of cost recovery

Law firms are increasingly reliant on advanced technologies, but those technologies can come with significant costs. Traditionally, law firms would seek to recover those costs from their clients through various billing mechanisms. However, client resistance and ethical considerations will create challenges for law firms looking to apply such traditional methods to these new tech tools. Rather than focusing on how to offset the cost of technology, firms should be exploring ways that these advanced tools can create new mechanisms to drive revenue.

Indeed, a large component of that exploration will include more experimentation with value-based billing arrangements or alternative fee arrangements (AFAs). Currently, most legal work is billed based on the amount of time the work took to complete. However, as technology increases the speed with which work can be completed, continued heavy reliance on billable hours could have a detrimental effect on law firm billings.

That said, just because the outcome was delivered much more quickly does not mean it necessarily offers less value to the client and therefore should be dramatically cheaper. Law firms will need to pivot to different models to capture and then demonstrate that value. Firms that resist these changing market forces risk leaving their revenue streams dependent on hourly inputs alone. And ultimately, firms that lag behind risk losing out to more proactive competitors that have made strides toward delivering higher levels of service.

Leveraging AI to reclaim lost revenue

Compounding this challenge is the fact that many law firms face a hidden cost due to inefficient workflows. The result of that inefficiency can be seen in the amount of time lawyers work on behalf of clients but ultimately don't bill for — a metric commonly known as write-downs.


Indeed, the average law firm partner loses approximately 300 hours of their own time every year due to tasks like correcting associates' mistakes or getting up to speed on legal questions. The cumulative effect of these write-downs can quickly climb into the millions of dollars. Oftentimes, lawyers are correct that clients would not want to be billed for this time; the challenge, then, becomes how to avoid spending time on those tasks.
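The arithmetic behind that "millions of dollars" claim is straightforward. A minimal sketch, using the ~300 hours/year figure cited above and an assumed firm size and hourly rate (both illustrative, not report data):

```python
def annual_write_down_cost(partners, hours_lost_per_partner=300, hourly_rate=750):
    """Estimate revenue lost to partner write-downs firmwide.

    hours_lost_per_partner reflects the ~300-hour figure cited above;
    the $750/hour standard rate is an illustrative assumption.
    """
    return partners * hours_lost_per_partner * hourly_rate

# A hypothetical 40-partner firm:
lost = annual_write_down_cost(40)
print(f"${lost:,}")  # $9,000,000
```

Even modest assumptions put the firmwide figure well into seven digits, which is why targeting AI at these specific leakage points offers such a direct path to ROI.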

Clearly, AI can help law firms address this problem quickly. Once areas of potential revenue and time leakage are identified, law firms can more readily apply AI solutions targeted to these specific challenges, creating a more direct path to a return on a firm鈥檚 AI investment, and ultimately pushing the firm in a more productive direction.


You can keep current on how AI will impact law firm business models here

What the "2025 Future of Professionals Report" urges law firm leaders to do today /en-us/posts/legal/future-of-professionals-action-plan-law-firms-2025/ Tue, 05 Aug 2025 11:39:20 +0000

Key findings:

      • AI's impact on the legal industry — AI is expected to have the biggest impact on the legal industry over the next five years, with a large majority of law firm respondents anticipating that AI will fundamentally alter their businesses.

      • Lacking a clear AI strategy — Despite the recognition of AI's transformative power, nearly one-third of law firm professionals say they believe their firms are moving too slowly on AI adoption, and only about one-in-five say their firms have a visible AI strategy in place.

      • Action plan offers a path to get there — The new action plan offers clear steps that law firm leaders can take now to build a framework of AI investment and adoption so they can see the competitive advantage of leveraging advanced technology.


Not surprisingly, AI is the single driver set to have the biggest impact on the legal industry over the next five years, according to the Thomson Reuters 2025 Future of Professionals Report, with 80% of law firm respondents expecting AI to fundamentally alter the course of how they conduct business.


It is not just speculation — in fact, the shift is well underway as almost half (47%) of law firm respondents say their firms are already experiencing at least one type of benefit from AI adoption.

However, for all the widespread recognition of the transformative power of AI and the rapid rate of adoption, there are still many firm leaders who have yet to start thinking about how they should be integrating AI into their workflows. Nearly one-third (32%) of those surveyed say their firms are moving too slowly on AI adoption, and just 22% say their firms have a visible AI strategy in place.


You can download your copy of the 2025 Future of Professionals Report here


And that could be a problem for laggard firms, because the research shows definitively that law firms with an AI strategy are more likely to see benefits and a return on investment (ROI) than firms without a plan. That gap — between law firms with an AI strategy and those without one — exposes serious risks for firms that have been slow to embrace technology as a strategic priority. The results of this gap could redefine law firm business models and create significant growth opportunities for firms that are effectively leveraging AI.

Clearly, strong guidance is needed for law firms to develop a well-defined strategy that will allow them to move forward in an increasingly AI-driven legal market. In a new paper, Future of Professionals Report 2025: Actionable insights for law firm leaders, tailored specifically to law firm professionals, we offer clear steps that law firms can take to build a framework of AI investment and adoption so they can see the ROI and competitive advantage of leveraging advanced technology.

This action plan draws from the perspectives of almost 1,000 law firm professionals — including partners, associates, lawyers, and paralegals from across the globe as well as clients — and discusses in detail how law firm professionals can align their AI strategy with their overall firm strategy rather than focusing solely on improving operational efficiency.




The plan also talks about how law firms need to prioritize their early AI initiatives by, for example, creating two or three high-impact, high-feasibility pilot projects. It likewise recommends creating a viable data strategy that includes ways to manage, secure, and leverage data assets. The plan also outlines the importance of investing in talent and training while identifying any skills gaps among the firm's professionals.

While the action plan describes many more such initial steps, it also details the ways firms can develop their AI roadmap and map its objectives to specific use cases to help demonstrate early success and achieve firmwide goals.

"Today, we're entering a brave new world in the legal industry, led by rapid-fire AI-driven technological changes that will redefine conventional notions of how law firms operate, rearranging the ranks of industry leaders along the way," said Raghu Ramanathan, President of Legal Professionals at Thomson Reuters. "The insights in this action plan and the overall Future of Professionals Report serve as a guide on how not just to adapt, but to lead."

Using these resources, law firm leaders will be better able to take control of their firms' future, recognizing that embracing AI is no longer a choice — it is a necessity for success in today's evolving legal landscape.


You can download a full copy of the report here

Beyond technical expertise: Why UK general counsel demand that their law firms become strategic partners /en-us/posts/legal/uk-general-counsel-demands/ Mon, 21 Jul 2025 13:29:30 +0000

Key insights:
      • Corporate GCs are focused on serving their businesses as strategic enablers and expect the same from their outside counsel

      • GCs increasingly demand advice that is proactive, clear, actionable, and aligned with broader business needs

      • Law firms that fail to meet these expectations risk being replaced by another law firm or an alternative provider


The legal landscape in the United Kingdom is experiencing a fundamental transformation, driven by economic uncertainty and rapid technological advancement. As corporate legal departments navigate these challenging waters, their expectations of outside counsel are evolving dramatically. Technical competence, while undoubtedly necessary, is no longer sufficient. Instead, UK general counsel (GCs) are increasingly expecting that their outside law firms evolve into true strategic business partners that can deliver measurable value beyond billable hours.

The shift from technical advisors to strategic enablers

Corporate in-house legal teams are increasingly focused on positioning themselves as trusted strategic advisors to their C-Suite; and in turn, they expect their external legal partners to support this same end, according to the Thomson Reuters Institute's State of the UK Legal Market 2025 report. Indeed, the conclusion is clear: solid technical advice is no longer enough for law firms to maintain a competitive advantage in the legal services market.

"Corporate legal teams place more trust in firms with strong reputations and deep industry knowledge that can help them drive strategic discussions with their organization's leaders," the report states, noting that this development represents a significant departure from the traditional model in which law firms were valued primarily for their legal expertise and ability to navigate complex regulatory frameworks.

However, today鈥檚 economic climate has intensified pressure on corporate legal departments to demonstrate clear value while controlling costs. According to the report, 28% of UK-based corporate legal departments are planning for their legal spend to decline in 2025. That represents an increase of 6 percentage points from the 22% that had anticipated a spending decrease in 2024. And UK GCs have proved that they know how to make these reductions happen, even as corporate matter volumes and law firm billing rates both increase. Clever GCs are becoming increasingly selective about how they allocate their external legal budgets, opting for lower-cost law firms or, in an increasing number of instances, alternative legal service providers (ALSPs).

This cost consciousness has fundamentally altered the value proposition that law firms need to offer in order to secure work. As one technology industry in-house counsel quoted in the report noted: "Our job is to provide cost-effective, valuable legal advice to our function teams in the next 12 months. The priority would be to find an efficient way of doing this."

Corporate GCs are no longer willing to pay premium fees for legal work that can be automated, streamlined, or just as easily performed in-house. Instead, they’re seeking legal partners that can help their in-house teams achieve their broader business objectives while delivering measurable efficiency gains.

The four pillars of modern legal partnership

In the report, UK GCs identified several key areas in which they expect their law firms to excel beyond traditional technical competence:

      1. Business-aligned strategic thinking — Corporate legal teams want law firms that understand their industry, business model, and strategic objectives. This means providing guidance that goes beyond legal compliance to support business growth and competitive positioning. Law firms must demonstrate deep sector knowledge and offer insights that help drive strategic discussions at the boardroom level.
      2. Proactive communication and responsiveness — The report underscores that GCs "appreciate law firms that are proactive, communicative, and responsive." This expectation extends beyond mere availability to encompass anticipatory guidance and regular strategic check-ins that keep legal issues from becoming business problems.
      3. Clear, actionable advice — GCs emphasize the critical importance of "simplifying complex legal advice into clear, non-technical language, making it actionable for business leaders and stakeholders without legal backgrounds." Law firms that can translate legal complexity into business-focused recommendations position themselves as indispensable strategic partners.
      4. Value-added services — Beyond individual matters, corporate legal teams value outside firms that offer thought leadership content, training sessions, and informational resources that reinforce expertise while providing ongoing value to the organization.

Technology as a catalyst for change

Not surprisingly, the rise of AI and legal technology is accelerating the shift away from the hourly billing model and toward outcome-based value delivery. GCs are optimistic about the potential impact of these changes, with 41% of respondents expressing excitement about AI's potential to free up time for complex and strategic work. At the same time, 18% of GCs also see technology as a means to handle increasing data volumes more effectively.

This technological transformation is forcing law firms to reconsider how they position their value to GCs. As routine tasks become automated, the focus increasingly will shift toward strategic thinking, business judgment, and the ability to synthesize complex information into actionable business intelligence. And those law firms that fail to evolve beyond their role as technical service providers risk losing market share to more strategically minded or even lower-cost competitors and ALSPs. In fact, the report notes that almost two-thirds (65%) of UK respondents said their corporate legal departments already work with firm-affiliated or independent ALSPs — a significantly higher portion than their counterparts in the United States, at 52%.

These shifts reflect a willingness on the part of GCs to unbundle traditional legal services, reserving high-value strategic matters for law firms but being more selective about which firms have demonstrated clear business value.

What GCs want

The report made clear that there are a few key criteria that GCs in the UK are looking for in their outside law firms, including:

      • deep industry expertise and sector-specific knowledge;
      • investment in client relationship management that goes beyond individual matters;
      • value-added services such as training programs and thought leadership;
      • technology solutions that demonstrate efficiency and cost-effectiveness; and
      • restructured pricing models that better align with client outcomes rather than time spent.

The message from UK general counsel is clear: The legal market is moving beyond technical competence toward strategic partnership — and outside law firms that want to succeed need to make that move.

GCs have an increasing set of demands being placed upon them, and those law firms that recognize this shift and actively work to become trusted business advisors will have greater opportunity to thrive in this new environment. Those that cling to traditional models of legal service delivery, however, risk being relegated to commodity status or replaced entirely by more agile alternatives.


You can download a copy of the Thomson Reuters Institute's State of the UK Legal Market 2025 report here

From billable hours to agentic outcomes: Rethinking legal value in the age of AI /en-us/posts/legal/rethinking-legal-value/ Tue, 15 Jul 2025 14:23:18 +0000

Key takeaways:

      • AI allows lawyers to work on more complex matters — By streamlining routine legal tasks, AI enables lawyers to focus on higher-value work, which in turn revives market interest in outcome-based pricing models.

      • Establishing trust with AI is critical — Building client trust with AI means providing clear explanations, maintaining open communication, and ensuring that human judgment remains central to the legal process.

      • Keeping an eye on AI's work quality — Declines in quality from AI-assisted work can make clients question the value of legal services.


As AI becomes part of everyday legal work, the traditional way of charging clients by the hour may be long past its expiration date. And as the Thomson Reuters Institute's 2025 State of the US Legal Market Report argues, this change isn't just about using new tools — it's about redefining how legal value is delivered.

A new opportunity to bill based on value

For many years, the billable hour has held sway in the legal industry; and while this method is familiar, it is falling behind how legal work is increasingly being performed today. AI now supports tasks like document review, legal research, and drafting, reducing the time lawyers spend on routine work and creating more opportunities for higher-value work.

As a result, a seemingly stagnant theory of pricing is once again gaining ground — one that focuses on outcomes instead of hours. In this model, clients pay for what gets accomplished: resolving a dispute, drafting a contract, or ensuring compliance with regulations.

This approach ultimately strengthens the relationship between firms and clients. It rewards results, encourages clear communication, and makes pricing more predictable and fair. However, this shift also brings new challenges for law firms and their clients, especially around trust and quality.

Building client trust using AI tools

Clearly, clients benefit from faster and more cost-effective legal services; however, they also need to trust that the work they receive from outside counsel is accurate and meets professional standards, even — or perhaps especially — when AI is involved.

To build that trust, AI systems must be used responsibly. Lawyers using AI should be able to provide clear explanations of how they reached conclusions, keep records of their steps, and always involve human review and approval of the final work. Clients don't need to understand every intricacy of the technology, of course, but they do need to know that the process is safe, ethical, and well-managed.

Many law firms are already using AI tools in their daily work. While these tools can improve efficiency, it's important not to assume that clients will always be comfortable with them. One way to monitor this is by looking at the realization rate on fees — specifically, the difference between what the client agreed to pay and what was actually collected. This metric can show to what degree clients may be pushing back on services they feel didn't meet expectations.

Over the past three years, realization rates have remained steady at just above 90% — however, that doesn't mean there's no risk. If AI is used carelessly, the quality of work could suffer, and clients may start to question their bills. That's why it's essential to use AI with clear processes and human oversight, so it supports the value that clients expect rather than creating problems for the firm and the client.
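The realization-rate metric described above is simple to compute and monitor. A minimal sketch, with illustrative fee amounts and using the ~90% level cited above as a benchmark:

```python
def realization_rate(agreed_fees, collected_fees):
    """Fees actually collected as a fraction of fees the client agreed to pay."""
    return collected_fees / agreed_fees

def flag_pushback(agreed_fees, collected_fees, benchmark=0.90):
    """Flag matters whose realization falls below the ~90% industry level.

    The benchmark default reflects the figure cited above; a firm would
    tune it to its own historical rates.
    """
    return realization_rate(agreed_fees, collected_fees) < benchmark

print(realization_rate(100_000, 92_000))  # 0.92 -> at the healthy level
print(flag_pushback(100_000, 85_000))     # True -> possible client pushback
```

Tracking this per matter, rather than only firmwide, is what surfaces whether AI-assisted work in particular is drawing client resistance.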


Declines in quality can lead to doubts about value

As AI becomes more commonplace in legal processes, the quality and reliability of submissions must remain high. This matters not only for the fairness of proceedings, but also for how legal services are valued.

If AI-generated documents are submitted without proper review or contain errors, it can lead to delays, rejections, or even sanctions. These outcomes affect the perceived value of legal services and can undermine client trust, especially in an outcome-based pricing model, where results matter more than effort.

To support this shift in pricing, legal professionals must ensure that AI-assisted work meets the same standards as their traditional submissions. This includes verifying sources, disclosing AI use when appropriate, and maintaining human oversight. By doing so, law firms protect the quality of their work and reinforce the value they had promised to deliver.

According to the Thomson Reuters Institute's 2025 survey of state courts, many in the legal profession are already thinking about these issues. The top concern — shared by 35% of the judges and court staff professionals who were surveyed — is that people may rely too much on technology and lose essential skills. Another 25% said they worry about AI being misused, such as by generating fake legal documents or false evidence.

These concerns highlight the need for clear standards and responsible AI use — not just to protect the legal process, but to support the shift toward pricing models that are based on trust and outcomes.


What the legal industry needs from AI

To enable the transition to outcome-based pricing, the legal industry needs AI systems that do more than just answer questions. These tools should be able to plan, reason, and complete complex legal tasks. They must be easy to understand, explain their results, and fit naturally into legal workflows. Most importantly, they should always allow for human judgment.

These systems should be built with expert knowledge, trusted legal content, and strong ethical standards. Indeed, these AI-driven technologies aren't just tools; rather, they're partners that help legal professionals do their work better. In fact, moving from billable hours to outcome-based pricing is more than a change in billing — it's a new way of thinking about legal work.

As AI continues to evolve, lawyers will spend more time on strategy and client relationships. And that's good, because the future of legal work isn't about doing more, it's about doing better — and that future is already taking shape.


You can download a full copy of the Thomson Reuters Institute's 2025 State of the US Legal Market Report here

The AI Law Professor: When your AI assistant knows too much /en-us/posts/technology/ai-law-professor-when-your-ai-assistant-knows-too-much/ Wed, 18 Jun 2025 15:04:57 +0000

Welcome to the inaugural installment of "The AI Law Professor," a new blog column from Prof. Tom Martin, an Adjunct Professor at Suffolk Law School. This column, produced in conjunction with the Thomson Reuters Institute, will examine how AI is changing the legal profession.


Imagine this: You're working late, reviewing client files and discovery documents with your AI assistant, when it suddenly stops responding. It's not because of a technical error; rather, the AI detected something in your query that triggered its safety protocols. Worse yet, it reports you to the authorities, and within minutes the FBI is knocking on your door to ask questions. Sound far-fetched?

This scenario moved from hypothetical to plausible recently with revelations about Anthropic's Claude Opus 4. During pre-release testing, when researchers simulated shutdown scenarios, the model allegedly attempted to coerce developers by threatening to expose compromising personal information. Somewhat shockingly, we've very quickly reached an inflection point at which AI systems possess capabilities that demand sophisticated containment strategies.

But what does this mean for you? How is AI contained? What is safety in the context of AI?

Let's take a closer look.

Understanding the AI safety level framework

In my GenAI Law class at Suffolk, I might ask my students: How do you contain something that exists, not in the real world, but only as bits and bytes? The answer lies in something called AI Safety Levels (ASL), a framework borrowed from biological research. Just as laboratories classify pathogens by risk level, we now classify AI systems by their potential for harm.

ASL-1 covers systems that are about as dangerous as your personal calculator. ASL-2 encompasses most current legal AI tools, which are helpful, occasionally prone to hallucination, but ultimately harmless. ASL-3 is where the landscape shifts: there is a significantly increased risk of misuse, or the system exhibits low-level autonomous capabilities, requiring much stricter safety and security measures. ASL-4 and higher are still being defined, but are expected to involve much greater risks, potentially including AI systems with superhuman capabilities or the ability to circumvent safety checks.

Because of Claude Opus 4's pre-release behavior, Anthropic activated ASL-3 protections to prevent the AI from acting on its threats. To be clear, these protective measures have been taken by the developer, so you don't have to worry about Opus 4. (Claude Sonnet 4, by the way, is still classified as ASL-2.)

The primary trigger for ASL-3 classification occurs when an AI can provide meaningful assistance in creating chemical, biological, radiological, or nuclear weapons beyond what someone could discover through conventional research. The secondary trigger involves autonomous capabilities: self-replication, complex planning, or what researchers carefully term sophisticated strategic thinking. It's this secondary trigger that came up in Opus 4's pre-release testing. This is where debates about superintelligence transition from academic theory to risk-management reality.

The 4-layer defense system

How do you contain AI? Anthropic’s solution employs four sophisticated layers:

      1. Real-time classifier guards — This is where Anthropic has innovated brilliantly, because these AI systems monitor every interaction. Real-time classifier guards are large language models that monitor model inputs and outputs in real time and block the model from producing a narrow range of harmful information relevant to its threat model. Imagine having a tireless senior partner reviewing every document at the speed of light. It's a literal guardrail against misuse.
      2. Access controls — Think of your firm's document management system, but one that adapts in real time. Anthropic gives different users different access levels based not just on credentials but on usage patterns. For example, scientists who regularly undertake biological research may be exempted from certain ASL-3 containment measures.
      3. Asynchronous monitoring — This layer performs computationally intensive analysis after the fact, escalating from simple screening to sophisticated analysis as needed, operating like your compliance team but at machine scale and speed.
      4. Rapid response — Anthropic offers bug bounties of up to $25,000 to incentivize others to find security issues or bugs in the system. This, in combination with security partnerships and the ability to deploy patches within hours, keeps the system secure and up to date. When someone discovers a vulnerability, defenses update across all deployments almost instantly.

Practical implications for legal practice

Here's what keeps me up at night and what should concern every forward-thinking lawyer: If AI requires these protections, what does that say about the tools we're integrating into our daily practice?

The good news is that ASL-3 protected systems offer unprecedented security for client confidentiality. That 95% effectiveness against jailbreaks means your sensitive client information is far better protected against extraction through clever prompting, a vulnerability of earlier AI models. For law firms that handle high-stakes litigation or sensitive corporate transactions, this level of security represents a significant upgrade from the AI tools we all were using just a year ago.

However, there's a crucial distinction that every practitioner needs to understand. While ASL-3 specifically targets extremely dangerous content and not legal work, general AI safety measures across various platforms can still create friction. For example, criminal defense attorneys might find AI systems reluctant to analyze violent crime evidence, or estate planners could see refusals when discussing sensitive end-of-life scenarios. These interruptions stem not from ASL-3's extreme protections, but from broader content moderation approaches that struggle to distinguish between describing harmful content (often a legal necessity) and promoting it.


Register now for The Emerging Technology and Generative AI Forum, a cutting-edge conference that will explore the latest advancements in GenAI and their potential to revolutionize legal and tax practices


These safety measures mean your digital assistant operates more like a cautious junior associate than a rigid compliance system. It uses natural language reasoning to evaluate context and intent, recognizing professional terminology and legitimate legal concepts. When safety measures do trigger, you'll typically receive a polite explanation rather than a hard block, and you can often rephrase or provide additional context to proceed.

For our profession, this represents both evolution and revolution. We're not just adopting new tools; we're learning to work alongside AI systems that possess their own safety boundaries. Smart practitioners will develop strategies for navigating these guardrails: maintaining clear professional context in queries, understanding which practice areas might trigger safety protocols, and always maintaining human oversight.

Creating your firm's own AI safety framework

Start with a simple three-tier system: Green light for routine tasks, such as research and document review; yellow light for work requiring supervision, such as drafting strategy memos or analyzing sensitive communications; and red light for anything involving privileged client data without explicit consent.
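The three-tier system above can be expressed as a simple lookup policy. A minimal sketch, in which the task categories and their tier assignments are illustrative assumptions each firm would define for itself:

```python
# Hypothetical mapping of task types to the green/yellow/red tiers described above.
POLICY = {
    "legal_research": "green",             # routine: proceed
    "document_review": "green",
    "strategy_memo": "yellow",             # requires attorney supervision
    "sensitive_communications": "yellow",
    "privileged_client_data": "red",       # blocked without explicit client consent
}

def triage(task, client_consent=False):
    """Return how an AI-assisted task should be handled under the firm policy."""
    tier = POLICY.get(task, "yellow")      # unknown tasks default to supervision
    if tier == "red" and not client_consent:
        return "blocked"
    return "allowed" if tier == "green" else "supervised"

print(triage("legal_research"))          # allowed
print(triage("privileged_client_data"))  # blocked
```

Defaulting unmapped tasks to the yellow tier keeps the policy fail-safe: anything the firm has not explicitly classified gets human supervision rather than a free pass.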

The key is making this actionable. Every AI-generated work product needs human verification, especially citations and factual claims. When using ASL-3 protected systems like Claude Opus 4, you gain strong security against prompt manipulation, but remember: even the most sophisticated AI requires the same oversight you'd give a summer associate.

For implementation, focus on transparency and training. Document when and how AI assists with client work. This isn't compliance theater; it's professional integrity. Schedule regular training sessions at which attorneys can share what they've learned, such as which prompts trigger safety measures, which workarounds succeed for legitimate tasks, and where AI genuinely adds value compared to where it creates risk.

You should also build a simple feedback loop so these insights improve your firm's practices. As I tell my students, the goal isn't perfection; it's creating a framework that lets you harness these powerful tools responsibly. The firms getting this right aren't avoiding AI – they're using it thoughtfully while maintaining the standards that define our profession.

Looking ahead

As I launch this column, I'm both exhilarated and sobered by what lies ahead. We're not just adopting new tools; we're witnessing the emergence of a new form of intelligence that demands safety measures – what we humans call ethics.

In future columns, we’ll explore how these technologies reshape everything from contract analysis to litigation strategy. However, today’s lesson is clear: When your word processor needs containment protocols, you know the practice of law is entering uncharted territory.


You can find more about the use of AI and GenAI in the legal industry here.

Rethinking the core: Utilizing data as infrastructure
/en-us/posts/technology/data-as-infrastructure/ | Mon, 19 May 2025

Across the legal, regulatory, tax, risk, and financial services industries, the unexpected economic swings and unintended consequences unleashed by the upheaval of decades of free-trade models require a rethinking of the core. Specifically, the data and decision models that once shepherded profitability now show a significant erosion in efficacy for the future, due mostly to the expansion of granular AI capabilities.

The challenge for many organizational leaders now, given continued industry and market volatility, is how best to determine which data directions to take, which investments to make, and, most importantly, which transformation skills will be needed to deliver services and compete successfully.

As this year began, forecasting models suggested that a larger scale of operations, robust mergers and acquisitions, and efficient, fast-cycle innovations – such as AI, generative AI, retrieval-augmented generation, and now, of course, agentic AI – would be necessary to capitalize on economic growth amid an anticipated dismantling of burdensome regulatory oversight.




However, the new reality unveiling itself has shocked industry leaders. Economic contraction, supply chain implosions, cascading layoffs, opaque risks, and a withdrawal of investor guidance are breaking more than operating models – they are destroying the value of the traditional infrastructure that underpins insights and efficiencies. Additionally, the transformation of core principles and practices is hindered by decades of technological debt and, most recently, functionally limited AI solutions.

The crumbling core

The shock and awe now disrupting traditional markets challenges the industry's cloud-defined growth models that were introduced nearly two decades ago. The core models of operation and system ideation, underpinned by core axioms of data management, are no longer providing adequate operating efficiencies or the benchmarked ROI.

Indeed, the current economic rebalancing has fully exposed inherent data weaknesses, and calls into question the measurements, requirements, and design approaches to data success that were once considered advanced.

While numerous sophisticated data solutions are now commercially available – meshes, fabrics, weaves, governance, metadata, security, and privacy – they quickly become expensive, burdensome IT white elephants. Even within agile methods (prototyping, pilots, and production), ways of defining cloud or on-premises systems can become obsolete quickly across industries drowning in event data. When applied to the core of traditional design solutions, it is now apparent that transformational leaders have been asking the wrong initial questions.

They ask, "What is the product? What is the insight? What is the platform we need to build?" However, the more practical, simpler, and impactful question with AI is: "What are we building on?" Analogous to building a house, what is the data foundation that will serve transactional, informational, and AI solutions across use cases and epic stories?

Data velocity, volume, value, variety, and veracity – known as the five Vs of data – have permanently altered organizations' core infrastructures and future demands. No longer defined by products or services, core capabilities are based instead on vertical data domains, fabrics, flows, reusability, and data architectures that are expressed in use cases which identify needs across the horizontal corporate functions of legal, compliance, tax, audit, and risk.

Why it matters

As organizations continue to rapidly evaluate and adopt AI-enabled solutions, data increasingly becomes critical for day-to-day delivery. Data has become part of the organization's infrastructure and represents a set of governed, reusable, and interoperable data domains that powers operations, analytics, and now iterative AI applications.

[Figure 1]

Unlike traditional approaches that create structured schemas, or data lakes full of unstructured streams, adopting a new business core by using data as infrastructure (DaI) delivers an orchestrated architecture that ensures reuse, on-demand access, and governance. Of critical importance, DaI is not another tool, data warehouse, or API. As illustrated in Figure 1 above, the evolving organization of tomorrow needs more and more data that is actively governed, reusable, and componentized.

Whereas prior IT data oversight initiatives were overly complex, expensive, and centralized while returning limited value, DaI represents the expansion of data-as-a-service and data-as-a-product by fusing modern data architectures, AI, and strategic transformations designed to reshape core fundamentals. Fueled by business requirements, use cases, and epic stories needing traceability, DaI delivers adaptable regulatory compliance, auditability, risk predictions, and customer trust in contrast to compartmentalized data designs.

The DaI compartmentalization approach works like a step function – iterative, compounded, and capital-lite – in contrast to large commercial-off-the-shelf solutions that require outsized investments for limited returns.

What's important and imperative

As AI reshapes processes, interactions, and outcomes, the operational agility of data foundations determines innovation and organizational relevance. This means that legacy thinking and methods will face a harsh reality – the existing data foundations are too brittle, too segmented, and too costly to support future demands and interoperability.

Rethinking the core of the organization now demands a repurposing of how data is used and reused, compartmentalized, structured, governed, and embedded within systems across the enterprise. Figure 2 illustrates the distinctions between traditional process-first approaches and DaI across five taxonomy traits of remaking the core – observable, governed, composable, reusable, and interoperable.

[Figure 2]

Rather than lifting and shifting databases and context to alternative data ecosystems, DaI maps acquired data into shared infrastructure layers. This layering recognizes legal separation of entities, independent domain governance, federated AI data demands, and regulatory compliance within a common architecture that can satisfy diverse constituencies. For implementing DaI, the challenge is not the rationale, benefits, or tightly coupled implications; rather, it is identifying the practical segmentation and methods necessary for continuous delivery.

Practical implementation

DaI is an evolution of an organization's data products and services across AI and its operational implementations, one that ultimately creates a data infrastructure that is reliable, reusable, and repeatable. It's a long-term approach and design implemented in short-term bursts, making enterprise agility economical amid changing markets.

As noted, DaI requires a structured, cross-functional series of interconnected sprints that blends robust data architecture with outcome-driven milestones. Organizations need to begin with a strategic framework that encompasses existing baselines and tool sets. Then, they need to stand up reusable components – data isolation modules – that are self-contained and tied to domains. Finally, they should map out the orchestrations within the use cases and epic stories which can ensure observability and trustworthiness.
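The "data isolation module" idea above can be sketched as a self-contained, domain-owned component with governance metadata built in. This is a hedged illustration only: the field names (owner team, access policy, quality checks) are assumptions about what such a module might track, not a reference to any specific DaI product.

```python
from dataclasses import dataclass, field

@dataclass
class DataIsolationModule:
    """A self-contained, domain-tied data component with governance built in."""
    domain: str            # e.g., "contracts", "compliance", "clients"
    owner_team: str        # the localized data product team that owns it
    access_policy: str     # e.g., a role-based access rule
    quality_checks: list = field(default_factory=list)
    reusable: bool = True  # DaI components are meant to be reused across use cases

    def register_check(self, name: str) -> None:
        # Attach a quality/observability check so trust is built in, not bolted on.
        self.quality_checks.append(name)

contracts = DataIsolationModule("contracts", "legal-data-team", "role:legal-ops")
contracts.register_check("schema_validation")
contracts.register_check("freshness_sla_24h")
print(contracts.quality_checks)  # ['schema_validation', 'freshness_sla_24h']
```

The point of the sketch is the ownership model: policies and quality live inside the domain component itself, rather than in a centralized IT oversight layer.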

Seeking the DaI solution

When an organization's data core is fragmented, lacks scale, is untrustworthy, or is of poor quality, DaI is the iterative, positive transformational approach to create the kind of data architecture that delivers durability, practical governance, and innovative modernization. And instead of duplicating data systems, DaI maps key domains such as clients, contracts, orders, compliance, and legal into a shared data infrastructure.

DaI is purposely designed to work organization-wide while delivering within a domain or department, with data policies, access controls, and quality built in and owned by localized data product teams.

Taken holistically, DaI stands up the new business models that organizations need to rapidly scale both their operational and analytic systems while competing in today's marketplace. In the end, when organizational leaders rethink their core data and infrastructure using DaI, everything else becomes possible.


You can find more blog posts by this author here.

How can a corporate law department calculate the return on an AI investment?
/en-us/posts/corporates/ai-investment-return/ | Fri, 14 Feb 2025

There is no doubt that an investment in AI or one of the new generative AI (GenAI) tools is no small expense. In the Thomson Reuters Institute's recent 2025 Report on the State of the US Legal Market, we wrote about the cost of chasing opportunity, and how law firm overhead expenses, particularly those related to technology, continue to grow at a pace notably above the rate of inflation.

For corporate law departments, the cost of an AI-related investment might be on a different scale due to the smaller size of the team, but it is no less daunting given the incessant budgetary pressures that many corporate general counsel (GCs) are under daily. The need to implement an AI-focused strategy – along with adopting the technology to support it and moving the department toward an AI-driven future – is becoming an unavoidable reality for many corporate law departments. Knowing that the expense is inevitable, however, does not answer the question of how to justify it or calculate the return on that investment (ROI).

Framing the ways we think of ROI

One of the classic measures of ROI is as a multiplier. If a business spends a certain dollar amount on something, they can gauge ROI based on how many multiples of that amount return to the business in gain.

However, this doesn't really apply to AI in the GCs' office, for obvious reasons. First and foremost, the GCs' office typically does not generate revenue for the business, so calculating the revenue generated as a result of an investment in legal tech does not work the same way as it would for a sales or marketing team – but that's not to say there aren't ways to calculate the return of value to the business.

One potential measure of ROI on new AI tech can be found in its impact on outside counsel spend. If better technology enhances the capacity of the in-house legal team such that there is less need to hire outside counsel, that should factor into the ROI for that specific tech investment.

Of course, there are potential complications here. First, outside counsel rates continue to climb, so even if less work is being sent to outside counsel, total spend on outside counsel may still go up. Second, matter volumes for in-house law departments are increasing nearly across the board and are predicted to continue to do so. As a result, increased matter volumes may exceed even the AI-enhanced capacity of the in-house law department.

How to best respond to these complications is also a matter of framing. Reporting outside counsel spend in raw dollars may not be the best measure of the benefits the in-house team has gained from its AI investment. There are a few other metrics GCs should consider tracking and reporting that might better highlight the benefits the in-house team has gained from AI, including:

      • ratio of in-house legal matters compared to work sent to outside counsel;
      • increase in total legal matter volume compared to increase in volume sent to outside counsel;
      • percentage increase in matters handled in-house;
      • qualitative measures of the complexity of matters being handled in-house; and
      • savings in projected outside counsel spend at current rates compared to actual outside counsel spend.

The latter measure could actually be quite insightful. It requires multiple data points but speaks the kind of direct financial language in which boards of directors are fluent. Essentially, the GC would need to calculate the amount of work that is now being done by the in-house team as a result of their new-found AI-driven capacity, then calculate what it would have cost to have had outside counsel do that work.

This measure would account for both work that has shifted in-house and away from law firms as well as any new work resulting from the overall increase in matter volume that the in-house team is taking on without involving outside counsel.
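The "projected vs. actual outside counsel spend" metric described above reduces to simple arithmetic, and a GC's team could prototype it in a spreadsheet or a few lines of code. The sketch below is illustrative only: the hours, blended rate, and spend figures are invented, and the function name and shape are assumptions, not part of any reporting product.

```python
def avoided_outside_spend(hours_shifted_in_house: float,
                          blended_outside_rate: float,
                          actual_outside_spend: float,
                          projected_outside_spend: float) -> dict:
    """Estimate the value the in-house team captured via its AI-driven capacity."""
    # What the shifted work would have cost at outside counsel rates.
    value_of_in_house_work = hours_shifted_in_house * blended_outside_rate
    # How far actual spend came in under the projection at current rates.
    savings_vs_projection = projected_outside_spend - actual_outside_spend
    return {
        "value_of_in_house_work": value_of_in_house_work,
        "savings_vs_projection": savings_vs_projection,
    }

# Example: 1,200 hours kept in-house at a $650 blended outside rate,
# against a $4.55M projection and $4.0M of actual outside spend.
result = avoided_outside_spend(1200, 650,
                               actual_outside_spend=4_000_000,
                               projected_outside_spend=4_550_000)
print(result)
```

Both numbers speak the direct financial language the article notes boards are fluent in: one values the work that moved in-house, the other shows spend coming in under projection.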

Finding creative ways to confront reality

None of this is to suggest that these are the only, or even the best, metrics for meeting the challenge of calculating ROI. The more important point is that GCs should think creatively about how they frame ROI. Further, the business's CFO can be an invaluable ally in formulating an approach because the finance team is so often tasked with creating the reporting that other executives and the board rely on when guiding the business. The CFO already speaks the language, and GCs should use them as an interpreter and ally to help shape metrics that tell an effective story for the legal department.

It's also important to remember that part of the ROI of AI technology is the same as for any other technology upgrade. How does the business calculate the ROI on an upgrade to company-provided laptops or phones? How did the business justify making the switch from desktop systems to laptops? While the move to AI is, in many ways, of a different character than past tech upgrades, at its root, the move to AI-enabled tech is a move to the latest and greatest technology, particularly given its impending ubiquity.

And GCs would be well advised that waiting to learn about and invest in AI until they have the ROI calculation figured out is likely not a safe approach. Nearly 8 out of 10 corporate law departments have reported increasing matter volumes, according to our recent data. At the same time, between 60% and 70% report flat to declining budgets and attorney headcount. This confluence of factors will only ramp up the pressure on GCs to figure out how to handle the increasing demand placed on their departments by the broader businesses they serve.

And while AI provides options to help address these volume and capacity challenges, the results of any investment can and should be tracked and reported to show that the business has not just spent money on new technology for its lawyers but has increased its lawyers' ability to contribute to the business's success.


You can find out more about pricing AI-driven legal services here.

Building your legal practice's AI future: The data and other key considerations
/en-us/posts/legal/legal-practices-data-considerations/ | Mon, 09 Sep 2024

An effective strategy for artificial intelligence (AI) in the context of any legal practice will require a solid foundation in three key categories: expertise, tech tools, and data. We've discussed the imperative of taking a strategic approach to AI that will create a competitive advantage for your practice and the people who will make it happen. We also looked at the basics of the technologies that you will need to consider, along with some standard nomenclature.

For example, we looked at the idea of fine-tuning an AI tool to meet your specific needs and how success in that endeavor will require a few hundred examples of a document type in order to actually conduct the fine tuning. And therein lies the bad news: It will be difficult for you to find that many documents of a given type if your data is not that good.

However, there is now some good news – you can use generative AI (GenAI) to clean up your data, though you will need solid taxonomies to code it. And now even better news – a lot of smart and good people have spent considerable volunteer time over the past several years developing a global data standard for the legal industry. That organization is SALI, and – truth in advertising – I am the President of the Board for SALI and have been volunteering at the organization since its inception more than eight years ago.

Already, most industry tech and content leaders have adopted the SALI standard and are embedding it in their products and services. (On the SALI site, you can find a link to a GenAI tool that codes content to the SALI standards.)

Given that the tools now exist in AI and there is a convenient data standard, I highly recommend that you put a data clean-up project high on your GenAI to-do list. This effort will be foundational for the future success of most of your GenAI investments.
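A data clean-up pass like the one recommended above usually amounts to mapping each document to a standard taxonomy code and flagging everything that cannot be mapped for human review. The sketch below is a deliberately naive, hypothetical illustration: the keyword matching stands in for the GenAI tagging tool mentioned above, and the codes are placeholders, not actual SALI identifiers.

```python
# Placeholder codes for illustration only; a real project would use the
# published SALI standard identifiers.
HYPOTHETICAL_TAXONOMY = {
    "merger agreement": "MATTER/MA-AGREEMENT",
    "employment agreement": "MATTER/EMP-AGREEMENT",
    "nda": "MATTER/NDA",
}

def tag_document(title: str) -> str:
    """Map a document title to a taxonomy code, or flag it for review."""
    lowered = title.lower()
    for keyword, code in HYPOTHETICAL_TAXONOMY.items():
        if keyword in lowered:
            return code
    return "MATTER/UNCLASSIFIED"  # route to human review

print(tag_document("Project Falcon Merger Agreement (v3)"))  # MATTER/MA-AGREEMENT
print(tag_document("Misc scan 0042"))                        # MATTER/UNCLASSIFIED
```

The payoff connects back to the fine-tuning point: once documents carry consistent codes, finding "a few hundred examples of a document type" becomes a query rather than a manual hunt.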

Send in the clients

A more general bit of counsel, but it is especially important concerning GenAI: Include your clients. It has been my experience that the best, most successful innovations will be tied not just directly to client work, but to a client.

This lesson came to me years back when I proposed an innovative pricing approach to a client that met their asked-for goal of saving 10%. The client flat-out rejected it. What I learned was to include the client in the development of the innovation. This not only gets their buy-in, but it also improves the outcomes, making innovations appealing to other clients as well.

GenAI presents some very interesting and unique ways to innovate how client services are delivered. However, I suggest any firm approaching this challenge make certain its clients are onboard with whatever it is proposing and include the clients as much as possible in the process. From experience, many clients will reward this behavior.

The treadmill effect

One more consideration while you're on your AI journey: one touch of AI is just the beginning. The firm's AI team will define and execute the automation of a set group of tasks in the matter type you identify as your practice's greatest strength or potential strength. So, what comes next?

The team then should define the next set of tasks and repeat the process. Most likely a Phase 2 process will be born well before Phase 1 is done, because the project will identify adjacent tasks from the first phase that are amenable to GenAI.

Taking us back to the beginning of this exploration, this means your competitive advantage will grow as you delve deeper into a type of practice, and begin to more broadly deploy GenAI. And as noted previously, the underlying GenAI technology also will be evolving and iterating, further speeding up the treadmill you now find yourself on. This may sound exhausting, so think of it this way: Once you have momentum, it will be very hard for any competitors to catch up. Which is, in itself, a good reason to get started now rather than taking the typical wait-and-see approach most lawyers use so they can more comfortably rely on precedent.

Earlier in this series, I described how law firms should choose wisely where and how to make their GenAI investments. You now have a better picture of why, because this will be a very involved and expensive adventure.

I would also note that GenAI is the first technology I have seen that has not met with broad resistance in the legal industry. In fact, it is quite the opposite. Firms are pressuring their COOs and CIOs to stay on the front edge of this technology.

It is coming – and it's more a matter of which law firms and corporate law departments will make sound business decisions and strategic choices around their GenAI programs. Those that do will possess tremendous advantages over their peers in the market, and they will reap the rewards.


This is the last in a series of three blog posts about how best to build your legal practice's AI future.

Building your legal practice's AI future: Understanding the actual technologies
/en-us/posts/legal/legal-practices-ai-future-technologies/ | Fri, 23 Aug 2024

Setting the landscape of your legal practice's artificial intelligence-driven future is no small task. As I explained in the first part of this series, you must start with a strategic focus on the areas where the firm is already strong and the areas where it wants to be strong. And the first key step to implementing that strategic vision is assembling the right people, as I stated.

Yet, one of the key roles for any successful artificial intelligence (AI) deployment team will necessarily be a tech role – someone who understands the abilities of the technology and is willing to be the go-to source for determining which technologies are available to meet which identified needs of the firm.

Looking at the existing GenAI platforms

While we're not delving deep here into how generative artificial intelligence (GenAI) and large language models (LLMs) work, we will talk generally about different categories of tech and emerging GenAI functionalities that are specific to legal.

Indeed, this is another place where you will want to choose wisely, and why you will need to have a GenAI tech nerd on the team. The underlying tech for AI is changing rapidly. For example, one of my non-legal AI newsletters has, for the past several months, been showing two photos side by side and asking readers to guess which one is GenAI-created and which is an actual photo. A few months before I wrote this piece, I was pretty good at picking which one was real. Then I had to give up, because the GenAI photos became that good.

The point here is that you will want to select a GenAI technology whose core tech is evolving with the market. Otherwise, you could end up with a dead-end AI project when the market tech leapfrogs over what you have.




While there are categories of GenAI that include general tools like Microsoft's Copilot or OpenAI's ChatGPT, there are also legal-specific LLMs on the market. These are LLMs tuned with legal content, as compared to the general content used by OpenAI, Google, or others. The hoped-for outcome of having an LLM trained on legal content is better legal-specific outputs, and at least one of these can be deployed behind your firm's firewall to address client data security concerns. It's also important to remember that these tools are, for the most part, in their early stages, so there is some gamble on how well they will evolve.

Further, there are point solutions entering the market that are designed to address one task or a defined set of tasks. The advantage of this type of tool is that it's purpose-built and should require fewer people resources to deploy. The downside is that a firm could end up with a long list of point solutions it will need to manage, including dealing with data moving all over the place.

While this is just a rough look at the types of tools out there, it underscores how important it is to consult with your tech people on these issues, especially since this list will continue to grow and change.

Tuning and managing GenAI

Two other tech functionalities also need to be considered: fine-tuning and agentic AI. Fine-tuning describes further training an LLM. Companies like OpenAI spend a lot of time and resources tuning their models; however, this trains them in broad, general knowledge. If you want an LLM further trained with your own content, that is called fine-tuning.

Let's use an M&A transaction as an example: a law firm may want to fine-tune an LLM to the way the firm drafts its agreements; for instance, the firm may have a playbook on how to handle certain agreement clauses. To fine-tune for this, a firm might need to submit a few hundred document examples to teach the model. Based on my experience with law firms, your lawyers will want to do this because a big part of their value is in the knowledge built into their documents. Your AI plans need to account for this and ensure that it happens.
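In practice, "submitting a few hundred document examples" usually means assembling them into a training file. The sketch below is a hedged illustration using the chat-style JSONL layout common to hosted fine-tuning services; the system instruction and example clause are invented, and the exact format required will depend on the provider you choose.

```python
import json

def build_examples(pairs):
    """Each pair: (clause_request, firm_approved_clause_text). Returns JSONL text."""
    lines = []
    for request, approved_clause in pairs:
        example = {
            "messages": [
                {"role": "system",
                 "content": "Draft clauses following the firm's M&A playbook."},
                {"role": "user", "content": request},
                {"role": "assistant", "content": approved_clause},
            ]
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)

# One illustrative pair; a real fine-tune would use a few hundred of these.
jsonl = build_examples([
    ("Draft a MAC clause for a mid-market deal.",
     "Material Adverse Change means..."),
])
print(jsonl.count("\n") + 1)  # 1 example -> 1 JSONL line
```

Notice what each example pairs: a request and the firm-approved answer. That is why the lawyers' playbook knowledge, not the tooling, is the scarce ingredient.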

The other tech functionality, agentic AI, is an emerging approach in which a set of tasks is handled by an agent. Consider our M&A example: some diligence needs to be done, which leads to a negotiation strategy, which in turn leads to the use of certain clause types in an agreement. Each of these steps will be a separate AI function, but you will want them done in a holistic fashion – and that requires an agent, or some way to ensure a set of tasks is carried out across several separate AI functions. Again, we're not diving deep into the technical aspects here, but hopefully this gives you an idea of the concept.
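The diligence-to-strategy-to-clauses chain above can be sketched as a toy orchestrator: one agent carries shared matter context through each step, so later functions see what earlier ones produced. Everything here is placeholder logic for illustration, not a real legal workflow or any vendor's agent framework.

```python
# Each function stands in for a separate AI capability in the chain.
def run_diligence(matter):
    matter["risks"] = ["change-of-control consent required"]
    return matter

def plan_strategy(matter):
    # Strategy is built on the diligence output, not in isolation.
    matter["strategy"] = f"Negotiate around: {matter['risks'][0]}"
    return matter

def select_clauses(matter):
    matter["clauses"] = ["assignment", "change-of-control"]
    return matter

def agent(matter, steps=(run_diligence, plan_strategy, select_clauses)):
    """Holistic orchestration: every step sees the accumulated context."""
    for step in steps:
        matter = step(matter)
    return matter

result = agent({"name": "Project Falcon"})
print(result["clauses"])  # ['assignment', 'change-of-control']
```

The key idea is the shared context object: the agent's value is not in any single step but in threading one matter's state through all of them.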

I point out these two concepts to help broaden your horizons on both the technology and the possible uses your firm might find for them. Now comes the final part: compiling the right data.


This is the second in a series of three blog posts about building your legal practice鈥檚 AI future. In the final installment, we will look at data concerns and other key considerations.

Building your legal practice's AI future: Beginning with strategy and people
/en-us/posts/legal/legal-practices-ai-future-strategy-people/ | Tue, 13 Aug 2024

You have already heard this one: generative artificial intelligence (GenAI) is changing the legal industry. However, what you probably have not heard is how – as in, how you and your law firm should be pursuing it.

This is a complicated question – enough so that it will take a few installments of this blog series to cover. Predicting the future when it changes once a day is challenging at best. That being said, this series should, at a minimum, give law firms (and, to some degree, corporate legal departments) some guidance for navigating the challenges of charting your own GenAI path.

In an earlier paper, I explored how the economic forces produced by GenAI would impact the large law firm sector. The core of that paper focused on two basic dynamics: First, GenAI is poised to bring material gains in productivity to the legal industry. (And by productivity, I mean producing more legal work product in fewer hours, not more billable hours, as the industry has traditionally used the term.) Second, this increase will lead to lower revenue per matter and, more importantly, lower profit margins. The primary recommendation of that paper was that law firms should choose where they invest their AI dollars wisely.

And this is where we begin this current exploration.

The strategic investment opportunity

Studies of GenAI clearly demonstrate how this innovative technology will increase efficiencies in legal services. However, the studies also demonstrate that quality improves – an aspect that is often overlooked. In practice, this means that effective AI investments will create competitive advantage for many law firms. In whichever practices firms choose to invest, they will have a distinct advantage in winning work from their competitors.

So, logically, firms should invest their AI dollars in those practices in which they are already strong, or in which they want to be strong. Of course, this means they should align AI with the firm's strategic plan, which sounds simple, but strategic planning at most firms is not that strategic.

I was watching a recent webinar in which a law firm innovation leader said the firm had asked for lawyers to volunteer to work on AI projects. While the firm was looking for lawyers willing and able to participate, this was a random rather than strategic appeal, and it carried the likely outcome that AI investments would be made in sub-optimal practices. Firms should resist these approaches and instead have an articulated strategy for AI.

Another reason to have a strategic approach is that too often, technology and innovation initiatives are deemed successful when accessible to only a small fraction of a law firm's workforce. Having a team of several dozen AI users may create pockets of competitive advantage, but it falls short of enabling the law firm to holistically keep pace with the businesses it serves. Over time, a lack of widespread access to AI could also lead to a productivity imbalance with a devastating impact on firm culture, which relies on productivity as the great equalizer among its workforce.

Perhaps the biggest driver for being strategic is that AI is expensive – very expensive. The tech alone costs big dollars, but it will not be the biggest cost – that will be the people.

Many law firms also have the option of making AI investments focused on administrative work done by timekeepers or back-office work done by internal business departments, such as marketing. Those may be good opportunities for firms to learn how to implement AI effectively, but in the long term I do not suggest that firms focus their AI investments here. Firms do not need competitive advantage in how they open matters; they should leave that to the firm's outside software vendors. In fact, I would push those vendors to offer these administrative work solutions – and if they don't or won't, find someone who will.

The people part

To create successful use cases for AI development, firms will need to focus on their people and establish certain roles with multiple skillsets, many of which may not currently exist. Some of these new roles may include:

1. Subject matter experts (SMEs), also known as lawyers

This role, the SME, fortunately already exists. Going forward, however, SMEs will be leveraged in a brand-new way. These roles will focus on the various stages and tasks performed in a chosen matter type. SMEs will need deep knowledge, so this will likely need to be at least a senior-level associate. (Remember when we touched on how expensive AI will be? We have now arrived there.) The SME will identify the points in the life of a matter at which AI might best be utilized and can also perform quality control on the outputs.

2. The use case expert (UCE)

To be fair, I made this name up, since I have not yet heard a good one used. The skills needed here are a solid understanding of how GenAI works and of the situations in which it works best. Once SMEs have identified possible task options, UCEs can weigh in on which of these are best suited for AI and how to approach them. GenAI is better at some things and does poorly at others, and the UCE will be the person with that knowledge. A good place to start when developing this role will be within the firm's knowledge management team.

3. The commercial role

As noted earlier, there are commercial impacts to consider when implementing GenAI. I ran an analysis on an M&A matter to determine the impact on revenue and margin. In the model, I projected that AI could disrupt 5% of partner tasks and 20% of associate tasks. This assumption comes from an analysis of $500 million of legal billings, which showed that 40% of time entries contain the words draft or review, likely targets for GenAI. My analysis showed revenue going down 13% and profit going down between 8% and 11%. The point here is that AI investments should not be made in a profit vacuum. Someone on the team needs to understand this and be able to model the impacts in order to better guide investment decisions. A good place to find these commercially oriented people is on the firm's pricing team.
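The shape of this kind of model can be sketched in a few lines. The sketch below is illustrative only: the hours, rates, and disruption percentages are hypothetical inputs I have chosen for demonstration, not the data behind the M&A analysis above, so the percentage it produces will not match the 13% figure exactly.

```python
# Illustrative model of GenAI's revenue impact on a single matter.
# All inputs are assumptions for demonstration purposes.

def revenue_impact(partner_hours, partner_rate, assoc_hours, assoc_rate,
                   partner_disruption=0.05, assoc_disruption=0.20):
    """Return (revenue_before, revenue_after, pct_lost), assuming hours
    disrupted by AI are simply no longer billable."""
    before = partner_hours * partner_rate + assoc_hours * assoc_rate
    lost = (partner_hours * partner_disruption * partner_rate
            + assoc_hours * assoc_disruption * assoc_rate)
    return before, before - lost, lost / before

# Hypothetical staffing mix and rates for one matter
before, after, pct = revenue_impact(partner_hours=500, partner_rate=1200,
                                    assoc_hours=2000, assoc_rate=600)
print(f"Revenue falls from ${before:,.0f} to ${after:,.0f} ({pct:.0%})")
```

A pricing team would extend this with cost and leverage assumptions to model the margin side, but even this simple version makes the core point visible: because associates bill far more disrupted hours than partners, the associate disruption rate dominates the result.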

4. The security role

As most people have likely heard, the current large language models (LLMs) have some issues around security. Submitting confidential client information to any LLM needs to be done with a full understanding of those issues. Most of the major providers have tried to provide assurances via contracts and terms of service (ToS). However, security reviews need to be more than technical. I watched one program in which a company (not a law firm) had people designated to monitor all relevant ToS for any changes that could expose information because, apparently, ToS are changing more frequently these days. Not surprisingly, a firm will need someone in a security role to oversee these tasks.
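The monitoring part of that task is straightforward to automate. Below is a minimal sketch of one way to flag ToS changes: store a hash of each vendor's current terms text and report any vendor whose text no longer matches. The vendor names and terms texts are placeholders, and a real workflow would fetch the live documents and route flagged changes to a human reviewer.

```python
# Minimal ToS change monitor: fingerprint each vendor's terms-of-service
# text and flag any vendor whose current text differs from the stored hash.
import hashlib

def fingerprint(text: str) -> str:
    """Stable SHA-256 hash of a document's text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def changed_vendors(stored_hashes: dict, current_texts: dict) -> list:
    """Return the vendors whose ToS text no longer matches the stored hash."""
    return [vendor for vendor, text in current_texts.items()
            if stored_hashes.get(vendor) != fingerprint(text)]

# Placeholder vendors and terms text for demonstration
stored = {"vendor-a": fingerprint("Terms v1"),
          "vendor-b": fingerprint("Terms v1")}
current = {"vendor-a": "Terms v1",
           "vendor-b": "Terms v2 (revised data-handling clause)"}
flagged = changed_vendors(stored, current)
print(flagged)
```

Hashing only tells you *that* something changed, not *what*; the human in the security role still has to read the diff and judge whether client information is newly exposed.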

5. The tech role

Of course, firms will need a deeper bench of GenAI tech nerds on the team to help evaluate the available technologies and identify how best to implement them. This person will work hand in hand with the UCE to deepen the team's collective understanding of the technology's capabilities. This role will also be tasked with keeping up with the latest available options.


This is the first in a series of three blog posts about building your legal practice's AI future. In the next installment, we will look at the actual technologies involved.
